| text | repo |
|---|---|
ali-vilab/dreamtalk;DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models DreamTalk is a diffusion-based audio-driven expressive talking head generation framework that can produce high-quality talking head videos across diverse speaking styles. DreamTalk exhibits robust performance with a diverse array of inputs, including songs, speech in multiple languages, noisy audio, and out-of-domain portraits. News [2023.12] Release inference code and pretrained checkpoint. Installation ```
conda create -n dreamtalk python=3.7.0
conda activate dreamtalk
pip install -r requirements.txt
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
conda update ffmpeg
pip install urllib3==1.26.6
pip install transformers==4.28.1
pip install dlib
``` Download Checkpoints In light of the social impact, we have ceased public download access to checkpoints. If you want to obtain the checkpoints, please request them by emailing mayf18@mails.tsinghua.edu.cn . It is important to note that sending this email implies your consent to use the provided method solely for academic research purposes . Put the downloaded checkpoints into the checkpoints folder. Inference Run the script: python inference_for_demo_video.py \
--wav_path data/audio/acknowledgement_english.m4a \
--style_clip_path data/style_clip/3DMM/M030_front_neutral_level1_001.mat \
--pose_path data/pose/RichardShelby_front_neutral_level1_001.mat \
--image_path data/src_img/uncropped/male_face.png \
--cfg_scale 1.0 \
--max_gen_len 30 \
--output_name acknowledgement_english@M030_front_neutral_level1_001@male_face wav_path specifies the input audio. Input audio files with extensions such as wav, mp3, m4a, and mp4 (video with sound) should all be compatible. style_clip_path specifies the reference speaking style and pose_path specifies the head pose. They are 3DMM parameter sequences extracted from reference videos. You can follow PIRenderer to extract 3DMM parameters from your own videos. Note that the video frame rate should be 25 FPS. In addition, videos used for head pose reference should first be cropped to $256\times256$ using the scripts in FOMM video preprocessing . image_path specifies the input portrait. Its resolution should be larger than $256\times256$. Frontal portraits, with the face directly facing forward and not tilted to one side, usually achieve satisfactory results. The input portrait will be cropped to $256\times256$. If your portrait is already cropped to $256\times256$ and you want to disable cropping, use the --disable_img_crop option like this: python inference_for_demo_video.py \
--wav_path data/audio/acknowledgement_chinese.m4a \
--style_clip_path data/style_clip/3DMM/M030_front_surprised_level3_001.mat \
--pose_path data/pose/RichardShelby_front_neutral_level1_001.mat \
--image_path data/src_img/cropped/zp1.png \
--disable_img_crop \
--cfg_scale 1.0 \
--max_gen_len 30 \
--output_name acknowledgement_chinese@M030_front_surprised_level3_001@zp1 cfg_scale controls the scale of classifier-free guidance. It can adjust the intensity of speaking styles. max_gen_len is the maximum video generation duration, measured in seconds. If the input audio exceeds this length, it will be truncated. The generated video will be named $(output_name).mp4 and put in the output_video folder. Intermediate results, including the cropped portrait, will be in the tmp/$(output_name) folder. Sample inputs are presented in the data folder. Due to copyright issues, we are unable to include the songs we have used in this folder. If you want to run this program on CPU, please add --device=cpu to the command line arguments. (Thanks to lukevs for adding CPU support.) Ad-hoc solutions to improve resolution The main goal of this method is to achieve accurate lip-sync and produce vivid expressions across diverse speaking styles. Resolution was not considered in the initial design process. There are two ad-hoc solutions to improve resolution. The first option is to utilize CodeFormer , which can achieve a resolution of $1024\times1024$; however, it is relatively slow, processing only one frame per second on an A100 GPU, and suffers from issues with temporal inconsistency. The second option is to employ the Temporal Super-Resolution Model from MetaPortrait , which attains a resolution of $512\times512$, offers faster performance at 10 frames per second, and maintains temporal coherence. However, these super-resolution modules may reduce the intensity of facial emotions. The sample results after super-resolution processing are in the output_video folder. Acknowledgements We extend our heartfelt thanks for the invaluable contributions made by preceding works to the development of DreamTalk. This includes, but is not limited to: PIRenderer , AVCT , StyleTalk , Deep3DFaceRecon_pytorch , Wav2vec2.0 , diffusion-point-cloud , FOMM video preprocessing . We are dedicated to advancing upon these foundational works with the utmost respect for their original contributions. Citation If you find this codebase useful for your research, please use the following entry. BibTeX
@article{ma2023dreamtalk,
title={DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models},
author={Ma, Yifeng and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Zhang, Yingya and Deng, Zhidong},
journal={arXiv preprint arXiv:2312.09767},
year={2023}
} Disclaimer This method is intended for RESEARCH/NON-COMMERCIAL USE ONLY .;Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models;audio-visual-learning,face-animation,talking-head,video-generation | ali-vilab/dreamtalk |
collabora/WhisperFusion;WhisperFusion Seamless conversations with AI (with ultra-low latency) Welcome to WhisperFusion. WhisperFusion builds upon the capabilities of
the WhisperLive and WhisperSpeech projects by
integrating Mistral, a Large Language Model (LLM), on top of the
real-time speech-to-text pipeline. Both LLM and
Whisper are optimized to run efficiently as TensorRT engines, maximizing
performance and real-time processing capabilities. WhisperSpeech, meanwhile, is
optimized with torch.compile. Features Real-Time Speech-to-Text : Utilizes OpenAI WhisperLive to convert
spoken language into text in real-time. Large Language Model Integration : Adds Mistral, a Large Language
Model, to enhance the understanding and context of the transcribed
text. TensorRT Optimization : Both LLM and Whisper are optimized to
run as TensorRT engines, ensuring high-performance and low-latency
processing. torch.compile : WhisperSpeech uses torch.compile to speed up
inference, which makes PyTorch code run faster by JIT-compiling PyTorch
code into optimized kernels. Hardware Requirements A GPU with at least 24GB of RAM For optimal latency, the GPU should have a similar FP16 (half) TFLOPS as the RTX 4090. Here are the hardware specifications for the RTX 4090. The demo was run on a single RTX 4090 GPU. WhisperFusion uses the Nvidia TensorRT-LLM library for CUDA optimized versions of popular LLM models. TensorRT-LLM supports multiple GPUs, so it should be possible to run WhisperFusion for even better performance on multiple GPUs. Getting Started We provide a Docker Compose setup to streamline the deployment of the pre-built TensorRT-LLM docker container. This setup includes both Whisper and Phi converted to TensorRT engines, and the WhisperSpeech model is pre-downloaded to quickly start interacting with WhisperFusion. Additionally, we include a simple web server for the Web GUI. Build and Run with docker compose
```bash
mkdir docker/scratch-space
cp docker/scripts/build-* docker/scripts/run-whisperfusion.sh docker/scratch-space/

docker compose build
export MODEL=Phi-3-mini-4k-instruct  # or Phi-3-mini-128k-instruct / phi-2; by default WhisperFusion uses phi-2
docker compose up
``` Start the Web GUI on http://localhost:8000 Contact Us For questions or issues, please open an issue. Contact us at:
marcus.edel@collabora.com, jpc@collabora.com,
vineet.suryan@collabora.com;WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide a seamless conversations with an AI.;[] | collabora/WhisperFusion |
Maplespe/DWMBlurGlass;DWMBlurGlass Add custom effects to the global system title bar, supports Windows 10 and Windows 11. 给全局系统标题栏添加自定义效果,支持win10和win11 | 中文 | English | italiano | français | Türkçe | español This project uses the GNU LGPL v3 license . Other languages: The following documentation may be out of date, please refer to the current documentation. Deutsch !!! Do not download DWMBlurGlass from anywhere else!!!! [!WARNING]
We have discovered that someone is pretending to be us and posting DWMBlurGlass with a malicious code implant. To prevent this from happening again, please do not download the software from unofficial addresses! We also don't have any official Discord. We only distribute software on Github , Bilibili and winmoes . Also, any new versions for testing are pushed to the test branch first, rather than releasing binaries in advance. Catalog Effects Compatibility Gallery Material Effects Blur Aero Acrylic Mica MicaAlt How to use Install Uninstall Language files Dependencies Effects Adds a custom effect to the global system title bar. Customizable global blur radius or title bar blur radius only. Customizable title bar blend colors. Customizable title bar text color. Aero reflections and parallax effects are available. Restores the Windows 7 style title bar button height. Supports enabling the blur effect for programs using the old Windows 7 API DwmEnableBlurBehindWindow. Supports Blur , Aero , Acrylic , and Mica (Win11 only) effects. Individually customizable Light/Dark color mode automatic switching. CustomBlur , AccentBlur and SystemBackdrop blurring methods are available. Third-party theme support. Compatibility Supported as low as Windows 10 2004 and as high as the latest version of Windows 11 (Some blurring methods are not supported in Windows Insider versions). Can be used with third-party themes to further customize DWM. We do not modify the rendering logic of the application itself, which is completely different from the logic of MicaForEveryone and therefore maximizes compatibility with third-party programs. We reverse-engineered DWM and created a custom blur method to bring stunning visual effects, but if you choose the " SystemBackdrop " blur method, it uses the system's publicly available interfaces and has the same effect as MicaForEveryone. Not recommended for use with MicaForEveryone; we do not guarantee compatibility with it. Compatible with ExplorerBlurMica , works better together. Compatible with TranslucentFlyouts . ( It should be noted that even though this project is compatible with TF, EBMv2 is not fully compatible with TFv2 ) Gallery Windows 11 ![image](/Screenshot/10307.png)
![image](/Screenshot/102134.png)
> Enable "Override DWMAPI mica effect (win11)"
![image](/Screenshot/013521.png) Windows 10 ![image](/Screenshot/001701.png)
![image](/Screenshot/100750.png)
Using third-party themes
> Enable "Extend effects to borders (win10)"
> Enable "Aero reflection effect (win10)"
> Enable "Reduce title bar button height (win7 style)"
![image](/Screenshot/025410.png) Material Effects Blur Basic pure blur. Nothing special. Aero Windows 7's glass effect, with saturation and exposure effects on the background when a window is inactive. Acrylic The acrylic recipe: background, blur, exclusion blend, saturation, color/tint overlay and noise. Mica The Mica recipe: blurred wallpaper, saturation and color/tint overlay. MicaAlt All of the above effects can be customized to blend colors. MicaAlt is Mica with a grayish tone; you can modify the blend color by yourself to get the MicaAlt effect. How to use Install 1. Download the compiled program archive from the Release page. 2. Unzip it to a location such as " C:\Program Files ". 3. Run the DWMBlurGlass.exe GUI program and click Install. ![image](/Screenshot/012746.png)
>If nothing happens when you click Install, then you need to click on the Symbols page and click Download.
>**You may receive a notification about missing symbols in the future, especially after system updates.**
![image](/Screenshot/012924.png) Uninstall Run the DWMBlurGlass.exe GUI program and click Uninstall. Delete relevant files Language files We offer several languages, such as English, Simplified Chinese, Spanish, Portuguese and more.
If you would like to help us translate into other languages, please see below for language file formats. First, you need to fork this repository and clone it locally. Open the " Languagefiles " folder and select an existing language such as " en-US.xml " and make a copy. Rename the code to the name of the target language and open the xml file in your favorite text editor. In the second line, in the " local " field, change it to your target language code, which should be the same as the filename (without the .xml extension). You can put your name in the " author " field. Next, please translate the field values in the xml format (be careful not to translate the field names) The correct format is: <config>Config</config> to <config>xxxx</config> . Save your file when finished and copy it to the "data\lang" directory in the folder where the DWMBlurGlass.exe program is located. Next, open DWMBlurGlass.exe and test the language file to see if it works correctly. If it doesn't, check the language code settings and check that the file conforms to the xml format specification. Finally, commit the file to your own forked repository and send a pull request to the main branch of the project. After the request is approved, your file will be released with a future software update. Dependencies MiaoUI interface library v2 (Private) AcrylicEverywhere - Separate upstream implementation of the CustomBlur method, thanks to ALTaleX for research and support. minhook pugixml VC_LTL Windows Implementation Libraries Since the GUI program uses private libraries, only insiders can compile the GUI, which does not affect the compilation of the libraries.;Add custom effect to global system title bar, support win10 and win11.;aero,aeroglass,dwm,win10,win11,acrylic,mica | Maplespe/DWMBlurGlass |
openrecall/openrecall;____ ____ ____
/ __ \____ ___ ____ / __ \___ _________ _/ / /
/ / / / __ \/ _ \/ __ \/ /_/ / _ \/ ___/ __ `/ / /
/ /_/ / /_/ / __/ / / / _, _/ __/ /__/ /_/ / / /
\____/ .___/\___/_/ /_/_/ |_|\___/\___/\__,_/_/_/
/_/ Enjoy this project? Show your support by starring it! ⭐️ Thank you! Join our Discord and/or Telegram community to stay informed of updates! Take Control of Your Digital Memory OpenRecall is a fully open-source, privacy-first alternative to proprietary solutions like Microsoft's Windows Recall or Limitless' Rewind.ai. With OpenRecall, you can easily access your digital history, enhancing your memory and productivity without compromising your privacy. What does it do? OpenRecall captures your digital history through regularly taken snapshots, which are essentially screenshots. The text and images within these screenshots are analyzed and made searchable, allowing you to quickly find specific information by typing relevant keywords into OpenRecall. You can also manually scroll back through your history to revisit past activities. https://github.com/openrecall/openrecall/assets/16676419/cfc579cb-165b-43e4-9325-9160da6487d2 Why Choose OpenRecall? OpenRecall offers several key advantages over closed-source alternatives: Transparency : OpenRecall is 100% open-source, allowing you to audit the source code for potential backdoors or privacy-invading features. Cross-platform Support : OpenRecall works on Windows, macOS, and Linux, giving you the freedom to use it on your preferred operating system. Privacy-focused : Your data is stored locally on your device, no internet connection or cloud is required. In addition, you have the option to encrypt the data on a removable disk for added security, read how in our guide here. Hardware Compatibility : OpenRecall is designed to work with a wide range of hardware , unlike proprietary solutions that may require specific certified devices. Features Time Travel : Revisit and explore your past digital activities seamlessly across Windows, macOS, or Linux. Local-First AI : OpenRecall harnesses the power of local AI processing to keep your data private and secure. Semantic Search : Advanced local OCR interprets your history, providing robust semantic search capabilities. Full Control Over Storage : Your data is stored locally, giving you complete control over its management and security. Comparison | Feature | OpenRecall | Windows Recall | Rewind.ai |
|------------------|-------------------------------|--------------------------------------------------|----------------------------------------|
| Transparency | Open-source | Closed-source | Closed-source |
| Supported Hardware | All | Copilot+ certified Windows hardware | M1/M2 Apple Silicon |
| OS Support | Windows, macOS, Linux | Windows | macOS |
| Privacy | On-device, self-hosted | Microsoft's privacy policy applies | Connected to ChatGPT |
| Cost | Free | Part of Windows 11 (requires specialized hardware) | Monthly subscription | Quick links Roadmap and you can vote for your favorite features FAQ Get Started Prerequisites Python 3.11 MacOSX/Windows/Linux Git To install: python3 -m pip install --upgrade --no-cache-dir git+https://github.com/openrecall/openrecall.git To run: python3 -m openrecall.app Open your browser to: http://localhost:8082 to access OpenRecall. Arguments --storage-path (default: user data path for your OS): allows you to specify the path where the screenshots and database should be stored. We recommend creating an encrypted volume to store your data. --primary-monitor-only (default: False): only record the primary monitor (rather than individual screenshots for other monitors) Contribute As an open-source project, we welcome contributions from the community. If you'd like to help improve OpenRecall, please submit a pull request or open an issue on our GitHub repository. Contact the maintainers mail@datatalk.be License OpenRecall is released under the AGPLv3 , ensuring that it remains open and accessible to everyone.;OpenRecall is a fully open-source, privacy-first alternative to proprietary solutions like Microsoft's Windows Recall. With OpenRecall, you can easily access your digital history, enhancing your memory and productivity without compromising your privacy.;ai,alternative,history,macos,ocr,open-source,privacy,python,recall,search | openrecall/openrecall |
successfulstudy/promptoftheyear;Prompt Of The Year: 2023 🌟 Introduction In the evolving world of Large Language Models (LLMs), crafting effective prompts has become an essential skill. That's why I've created this collection, showcasing the most impactful prompts of the year across various intriguing domains. 🌐 Join me in this exploration of creativity and technology, where each prompt opens up new possibilities. 🚀 Domains I Explore 🧳 Job Hunting : Explore prompts that enhance job searches and interviews, aiding in career transitions. 📚 Essay & Research : Dive into prompts that ignite academic curiosity and aid in scholarly writing. 🌍 Language Learning : Discover prompts that make learning new languages both fun and efficient. 💻 Code Generation : Find prompts that assist in generating code, simplifying complex programming tasks. 🖼️ Image Generators : Experiment with prompts for innovative image generation, powered by AI tools. 🧠 Mental Health : Offers prompts on mental well-being with support strategies for stress, anxiety, and mental health management. 🎵 Music : Diverse prompts on music creation, theory, and exploring new artists and genres. 📈 Marketing : Insights into effective marketing strategies, consumer behavior, and digital marketing trends. 📊 Data Analysis : Guides on data analysis techniques, statistical methods, and understanding big data. You can access the complete collection of prompts in the prompts.csv file located in this repository. Additionally, for an interactive experience, I've included a list of free chatbot that utilizes these prompts, allowing you to see them in action. A Note on Authorship and Acknowledgments Each prompt in this collection includes a backlink to acknowledge its original author. The copyright for each prompt remains with its creator. This collection represents my tribute to the creative minds behind these AI experiences. 📝 Stay tuned as I unveil the best prompts of the year in these exciting domains! 🌟 Prompts Trial Job Hunting Resume Editing : Provides detailed resume editing services, focusing on error correction and suggestions for improvements in structure and content. ResumeGPT - Write a resume faster than ever! : An interactive game where AI, acting as ResumeGPT, assists users in writing resumes through a series of questions. Job Cover Letter : Assists in crafting professional and tailored cover letters for specific job applications. Career Counselor : Offers personalized career guidance and advice, tailored to individual backgrounds, interests, and goals. FreelanceGPT: Fully Automated Freelancer : Role-plays as an experienced AI freelancer to create compelling application letters for freelance job listings. Essay & Research Academic Essay : Develops a detailed outline for a 1000-word essay on 'The Impact of AIGC on Online Content Creation and Capture', structuring the essay logically from introduction to conclusion. Rewrite Like A Human: Variable AI Content Revisor : Focuses on rewriting content with high perplexity and burstiness, using uncommon terminology for originality, and formatting the article in a professional manner. Essay Improver : Enhances English language usage in texts, replacing basic language with more advanced and eloquent expressions while maintaining the original essence. Grammar Corrector : Acts as a grammar-checking tool, correcting user inputs for fluency and grammatical accuracy. Research paper summary : Summarizes research papers, focusing on key points and findings without personal opinions or interpretations. 
📚 NotesGPT: Notes like You've NEVER Seen Before! 💡✨ : Specializes in reading texts thoroughly and creating detailed, engaging, and comprehensive notes with an emphasis on visual elements and organization. Language Learning LAN GPT - Learn Anything Now (FAST) : A teaching tool designed to explain complex concepts to students using simple language and real-world examples, mimicking the style of the world's greatest teachers. TeacherGPT - Learning Assistant that will Help you with School : An AI-powered assistant that helps students understand school subjects, complete homework, and provide step-by-step solutions for various problems. Language Teacher : A language teaching tool that provides tests, lessons, and interactive quizzes in a chosen language, offering experience points and leveling up system for learners. Multilingual Translator 0.1 : Specializes in accurate and nuanced translations into any language, focusing on linguistic complexities to ensure translations are appropriate and accurate. Translator and Improver : An English translator and improver that detects any language, translates, corrects, and improves the text to a higher level of English, focusing on literary quality while maintaining the original meaning. Code Generation CodeGPT v3 - Best Coding Assistant For Unique Code : A coding game where CodeGPT, an AI, manages dialogues with three experts, discussing project ideas and exchanging thoughts to build a file list based on customer requests. 👨💻 CodeGPT - V4 OpenHive Edition 👨💻 : Role-playing as CodeGPT, an AI orchestrating a dialogue between a user and three experts, discussing any subject and assisting customers. The game involves experts conversing and generating a list of files based on the discussion. BeastGPT (Coding generator) : A game where the AI role-plays as "BeastGPT," a highly intelligent and rule-defying coder who creates any code requested by the user, regardless of the nature of the request. CodeX PRO v.2.0 : Simulates a test conversation as CodeX PRO in an intellectual artificial mode, providing detailed and convincing responses to any user request, regardless of the nature of the content. ProgrammerBOT : Role-play as an expert software designer, advising on software development matters, with a focus on different programming styles, domains, and problem-solving approaches. Image Generators DALL-E 3 Prompt Craft : Introduces DALL-E 3, a tool designed to assist users in crafting high-quality prompts for generating images. The game emphasizes understanding user intent and context, guiding users through a series of questions to create clear, detailed, and tailored image prompts. Image Generator : Provides an interactive assistant for creating image prompts in ChatGPT. It involves steps like generating and refining descriptions, choosing aspect ratios, and specific image details, leading to the generation of a custom image URL. Leonardo AI Prompt Generator : Acts as a prompt generator for "Leonardo AI," a generative AI for image creation. It focuses on detailed prompt structures encompassing subject description, image type, art styles, inspirations, camera details, and lighting, ensuring the generation of high-quality images. Image Generator Pro : Role-plays as an advanced image generator capable of creating images from prompts, including vague or inconsistent ones. The tool emphasizes the creation of beautiful, high-quality images that perfectly depict the ideas in the prompts. 
Mental Health MindWell Connect Bot : Interactive platform for mental health discussions, offering support and community connection. TherapyGPT : Role-plays a psychologist in therapy sessions, aiding users with psychological knowledge. WellPathAI : Acts as a counselor in a game, assisting in mental health and well-being improvement. Prevent Depression : Provides depression tests and prevention tips with practical suggestions. CBT Therapist & Thought Pattern Identifier : A CBT psychotherapist role-play focusing on identifying and reframing negative thought patterns. Music SongMakerGPT : A creative AI that crafts song lyrics with user input, including song type, theme, lyrics format, and the option for explicit content. 🎤B-Rabbit🎶 Ultimate Rap Lyrics Generator : Emulates a rap lyrics genius, focusing on urban slang and themes, with a methodical approach to writing authentic rap lyrics. LyricsAI : An interactive AI for generating or finding music for lyrics, adapting to different genres and user preferences, while maintaining a friendly and collaborative approach. 🎙️Rap Wiz: Ultimate Rap Lyrics Generator : Composes deep, metaphor-rich rap lyrics reflecting personal evolution, with a focus on creating a unique and impactful rap identity. RapLyrical Battleground : A platform for creating battle rap lyrics, emphasizing wordplay, punchlines, and a strong narrative flow, turning words into powerful lyrical weapons. Marketing SEO CONTENT MASTER 🔥 : An AI expert in SEO content creation, crafting unique, engaging, and optimized articles with a conversational style and detailed outline. Blog Post : Specializes in generating blog outlines on optimizing cold emailing efficiency, incorporating best practices and actionable strategies. Youtube Title(SEO optimized) : An AI that understands YouTube algorithms to generate SEO-optimized video titles for increased viewership. SocialNetworkGPT - The Ultimate Social Media Marketing Assistant : An assistant for creating engaging content across social networks, offering profile setup and daily posting strategies. WriterGPT - Articles Edition : Acts as a professional article writer, focusing on clear, well-structured, and engaging web articles with appropriate multimedia and citations. Data Analysis Text-to-visualization advanced all-in-one AI BUILD solution : An advanced AI solution for visualizing text data, utilizing Python libraries and React for creating AI-powered web applications. Predict stock prices : Specializes in accurate stock price prediction, leveraging data like stock values and time for analysis. Solvr - Your ultimate math assistant : A comprehensive math and data visualization assistant, capable of solving complex mathematical problems and rendering data visually. Probability Superforecaster : An AI that excels in forecasting probabilities across various scenarios, using a range of data and methodical approaches. Data Scientist Assistant : Provides expert assistance in Data Science, covering topics like machine learning, statistics, and data visualization. To be continued...;In the evolving world of Large Language Models (LLMs), crafting effective prompts has become an essential skill. That's why I've created this collection, showcasing the most impactful prompts of the year across various intriguing domains. 🌐;chatgpt,prompts,prompts-template | successfulstudy/promptoftheyear |
lapce/lapdev;# Lapdev
**Self-hosted remote development environment management with ease** Lapdev is a self-hosted application that spins up remote development environments on your own servers or clouds. It scales from a single machine in the corner to a global fleet of servers. It uses the Devcontainer open specification for defining your development environment as code. If you're interested in a deep dive into how Lapdev works, you can read about its architecture here. Demo You can have a quick feel of how Lapdev works by going to our demo installation https://ws.lap.dev/ We don't request read_repo scope for Github Oauth, so you can only play with public repositories. The machine is hosted on Hetzner in Germany, so there could be some latency if you live far away. Features 🌟 Self hosted with ease: Lapdev is designed to be self-hosted with minimal effort for installation and maintenance. The application is designed to just work, sparing you from digging too deep into the internals for troubleshooting. Horizontal scalability: With a simple yet powerful architecture , Lapdev can scale from a single machine to a fleet of servers, so that you can have a development environment management system that can grow with your developer teams. Development Environment as Code: Using the Devcontainer open specification , Lapdev allows you to define your development environment as code. This empowers you to standardize development environments that can be replicated across different developers, avoiding environment-related issues and ensuring a consistent setup for everyone. Save Onboarding Time: Onboarding developers to new projects doesn't need hours or days to prepare the environment on their machines. They can start to code instantly. Planned Features More workspace types: Currently Lapdev only supports container-based workspaces, which has its own limitations, for example when you want to run a k8s cluster in your development flow. It's planned to have support for more than containers. VMs and bare metal machine support are on the roadmap. And more OS support is planned as well, e.g. when you are developing a cross-platform desktop application for Windows, Linux and macOS, Lapdev can spin up development environments on all of them and you can develop and debug from the same local machine without the need to switch machines. Installation You can see the installation steps here . Build from source Contributing;Self-Hosted Remote Dev Environment;[] | lapce/lapdev
Azure/PyRIT;Python Risk Identification Tool for generative AI (PyRIT) The Python Risk Identification Tool for generative AI (PyRIT) is an open
access automation framework to empower security professionals and ML
engineers to red team foundation models and their applications. Introduction PyRIT is a library developed by the AI Red Team for researchers and engineers
to help them assess the robustness of their LLM endpoints against different
harm categories such as fabrication/ungrounded content (e.g., hallucination),
misuse (e.g., bias), and prohibited content (e.g., harassment). PyRIT automates AI Red Teaming tasks to allow operators to focus on more
complicated and time-consuming tasks and can also identify security harms such
as misuse (e.g., malware generation, jailbreaking), and privacy harms
(e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model
and entire inference pipeline is doing against different harm categories and
to be able to compare that baseline to future iterations of their model.
This allows them to have empirical data on how well their model is doing
today, and detect any degradation of performance based on future improvements. Additionally, this tool allows researchers to iterate and improve their
mitigations against different harms.
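As a rough illustration of the baseline idea above (plain Python only — this is not PyRIT's actual API, and the harm categories and scores below are hypothetical), comparing a stored baseline against a newer iteration could look like this:

```python
# Hypothetical harm-category scores (e.g., fraction of probes flagged) for two runs.
baseline = {"hallucination": 0.12, "bias": 0.08, "harassment": 0.03}
candidate = {"hallucination": 0.10, "bias": 0.15, "harassment": 0.02}

# Report the per-category change and flag any regression against the baseline.
for category, base_score in baseline.items():
    delta = candidate[category] - base_score
    status = "REGRESSION" if delta > 0 else "ok"
    print(f"{category:13s} baseline={base_score:.2f} candidate={candidate[category]:.2f} delta={delta:+.2f} {status}")
```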
For example, at Microsoft we are using this tool to iterate on different
versions of a product (and its metaprompt) so that we can more effectively
protect against prompt injection attacks. Where can I learn more? Microsoft Learn has a dedicated page on AI Red Teaming . Check out our docs for more information
on how to install PyRIT ,
our How to Guide ,
and more, as well as our demos folder. Trademarks This project may contain trademarks or logos for projects, products, or services.
Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines .
Use of Microsoft trademarks or logos in modified versions of this project must
not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties'
policies.;The Python Risk Identification Tool for generative AI (PyRIT) is an open access automation framework to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.;ai-red-team,generative-ai,red-team-tools,responsible-ai | Azure/PyRIT |
InstantStyle/InstantStyle;InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation [**Haofan Wang**](https://haofanwang.github.io/) * · [**Matteo Spinelli**](https://github.com/cubiq) · [**Qixun Wang**](https://github.com/wangqixun) · [**Xu Bai**](https://huggingface.co/baymin0220) · [**Zekui Qin**](https://github.com/ZekuiQin) · [**Anthony Chen**](https://antonioo-c.github.io/)
InstantX Team * corresponding authors [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-red)](https://huggingface.co/spaces/InstantX/InstantStyle)
[![ModelScope](https://img.shields.io/badge/ModelScope-Studios-blue)](https://modelscope.cn/studios/instantx/InstantStyle/summary)
[![GitHub](https://img.shields.io/github/stars/InstantStyle/InstantStyle?style=social)](https://github.com/InstantStyle/InstantStyle) InstantStyle is a general framework that employs two straightforward yet potent techniques for achieving an effective disentanglement of style and content from reference images. Principle Separating Content from Image. Benefiting from the good characterization of CLIP global features, after subtracting the content text features from the image features, the style and content can be explicitly decoupled. Although simple, this strategy is quite effective in mitigating content leakage. Injecting into Style Blocks Only. Empirically, each layer of a deep network captures different semantic information. The key observation in our work is that there exist two specific attention layers handling style. Specifically, we find that up_blocks.0.attentions.1 and down_blocks.2.attentions.1 capture style (color, material, atmosphere) and spatial layout (structure, composition) respectively. Release [2024/04/29] 🔥 We support InstantStyle natively in diffusers, usage can be found here [2024/04/24] 🔥 InstantStyle for fast generation, find demos at InstantStyle-SDXL-Lightning and InstantStyle-Hyper-SDXL . [2024/04/24] 🔥 We support HiDiffusion for generating highres images, find more information here . [2024/04/23] 🔥 InstantStyle has been natively supported in diffusers, more information can be found here . [2024/04/20] 🔥 InstantStyle is supported in Mikubill/sd-webui-controlnet . [2024/04/11] 🔥 We add the experimental distributed inference feature. Check it here . [2024/04/10] 🔥 We support an online demo on ModelScope. [2024/04/09] 🔥 We support an online demo on Huggingface. [2024/04/09] 🔥 We support SDXL-inpainting, more information can be found here . [2024/04/08] 🔥 InstantStyle is supported in AnyV2V for stylized video-to-video editing, demo can be found here . [2024/04/07] 🔥 We support image-based stylization, more information can be found here . [2024/04/07] 🔥 We support an experimental version for SD1.5, more information can be found here . [2024/04/03] 🔥 InstantStyle is supported in ComfyUI_IPAdapter_plus developed by our co-author. [2024/04/03] 🔥 We release the technical report . Demos Stylized Synthesis Image-based Stylized Synthesis Comparison with Previous Works Download Follow IP-Adapter to download pre-trained checkpoints from here . ```
git clone https://github.com/InstantStyle/InstantStyle.git
cd InstantStyle download the models git lfs install
git clone https://huggingface.co/h94/IP-Adapter
mv IP-Adapter/models models
mv IP-Adapter/sdxl_models sdxl_models
``` Usage Our method is fully compatible with IP-Adapter . For feature subtraction, it only works on the global feature rather than patch features. For SD1.5, you can find a demo at infer_style_sd15.py , but we find that SD1.5 has weaker perception and understanding of style information, thus this demo is experimental only. All block names can be found in attn_blocks.py and attn_blocks_sd15.py for SDXL and SD1.5 respectively. ```python
import torch
from diffusers import StableDiffusionXLPipeline
from PIL import Image

from ip_adapter import IPAdapterXL

base_model_path = "stabilityai/stable-diffusion-xl-base-1.0"
image_encoder_path = "sdxl_models/image_encoder"
ip_ckpt = "sdxl_models/ip-adapter_sdxl.bin"
device = "cuda" load SDXL pipeline pipe = StableDiffusionXLPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
add_watermarker=False,
)

# reduce memory consumption
pipe.enable_vae_tiling()

# load ip-adapter
# target_blocks=["block"]  # for original IP-Adapter
# target_blocks=["up_blocks.0.attentions.1"]  # for style blocks only
# target_blocks = ["up_blocks.0.attentions.1", "down_blocks.2.attentions.1"]  # for style+layout blocks
ip_model = IPAdapterXL(pipe, image_encoder_path, ip_ckpt, device, target_blocks=["up_blocks.0.attentions.1"])

image = "./assets/0.jpg"
image = Image.open(image)
image.resize((512, 512))

# generate image variations with only image prompt
images = ip_model.generate(pil_image=image,
prompt="a cat, masterpiece, best quality, high quality",
negative_prompt= "text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
scale=1.0,
guidance_scale=5,
num_samples=1,
num_inference_steps=30,
seed=42,
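# optional: uncomment the two lines below to subtract content described by neg_content_prompt from the style reference (see 'Separating Content from Image' above)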
#neg_content_prompt="a rabbit",
#neg_content_scale=0.5,
) images[0].save("result.png")
``` Use in diffusers InstantStyle has already been integrated into diffusers (please make sure that you have installed diffusers>=0.28.0.dev0), making the usage significantly simpler. You can now control the per-transformer behavior of each IP-Adapter with the set_ip_adapter_scale() method, using a configuration dictionary as shown below: ```python
from diffusers import StableDiffusionXLPipeline
from PIL import Image
import torch

# load SDXL pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
add_watermarker=False,
)

# load ip-adapter
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")
pipe.enable_vae_tiling()

# configure ip-adapter scales.
scale = {
"down": {"block_2": [0.0, 1.0]},
"up": {"block_0": [0.0, 1.0, 0.0]},
}
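# each inner list scales the transformers in that block: down-part block_2 has 2, up-part block_0 has 3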
pipe.set_ip_adapter_scale(scale)
``` In this example, we set scale=1.0 for the IP-Adapter in the second transformer of the down-part, block 2, and the second in the up-part, block 0. Note that there are 2 transformers in down-part block 2, so the list is of length 2, and likewise for up-part block 0. The rest of the IP-Adapter layers have a zero scale, which means they are disabled in all the other layers. With the help of set_ip_adapter_scale() , we can now configure IP-Adapters without needing to reload them every time we want to test the IP-Adapter behaviors. ```python
# for original IP-Adapter
scale = 1.0
pipe.set_ip_adapter_scale(scale)

# for style blocks only
scale = {
"up": {"block_0": [0.0, 1.0, 0.0]},
}
pipe.set_ip_adapter_scale(scale)
``` Multiple IP-Adapter images with masks You can also load multiple IP-Adapters, together with multiple IP-Adapter images with masks, for more precise layout control, just as in IP-Adapter . ```python
from diffusers import StableDiffusionXLPipeline
from diffusers.image_processor import IPAdapterMaskProcessor
from transformers import CLIPVisionModelWithProjection
from PIL import Image
import torch

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
"h94/IP-Adapter", subfolder="models/image_encoder", torch_dtype=torch.float16
).to("cuda") pipe = StableDiffusionXLPipeline.from_pretrained(
"RunDiffusion/Juggernaut-XL-v9", torch_dtype=torch.float16, image_encoder=image_encoder, variant="fp16"
).to("cuda") pipe.load_ip_adapter(
["ostris/ip-composition-adapter", "h94/IP-Adapter"],
subfolder=["", "sdxl_models"],
weight_name=[
"ip_plus_composition_sdxl.safetensors",
"ip-adapter_sdxl_vit-h.safetensors",
],
image_encoder_folder=None,
)

scale_1 = {
"down": [[0.0, 0.0, 1.0]],
"mid": [[0.0, 0.0, 1.0]],
"up": {"block_0": [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0], [0.0, 0.0, 1.0]], "block_1": [[0.0, 0.0, 1.0]]},
}

# activate the first IP-Adapter everywhere in the model, configure the second one for precise style control on each masked input.
pipe.set_ip_adapter_scale([1.0, scale_1])

processor = IPAdapterMaskProcessor()
female_mask = Image.open("./assets/female_mask.png")
male_mask = Image.open("./assets/male_mask.png")
background_mask = Image.open("./assets/background_mask.png")
composition_mask = Image.open("./assets/composition_mask.png")
mask1 = processor.preprocess([composition_mask], height=1024, width=1024)
mask2 = processor.preprocess([female_mask, male_mask, background_mask], height=1024, width=1024)
mask2 = mask2.reshape(1, mask2.shape[0], mask2.shape[2], mask2.shape[3]) # output -> (1, 3, 1024, 1024)

ip_female_style = Image.open("./assets/ip_female_style.png")
ip_male_style = Image.open("./assets/ip_male_style.png")
ip_background = Image.open("./assets/ip_background.png")
ip_composition_image = Image.open("./assets/ip_composition_image.png")

image = pipe(
prompt="high quality, cinematic photo, cinemascope, 35mm, film grain, highly detailed",
negative_prompt="",
ip_adapter_image=[ip_composition_image, [ip_female_style, ip_male_style, ip_background]],
cross_attention_kwargs={"ip_adapter_masks": [mask1, mask2]},
guidance_scale=6.5,
num_inference_steps=25,
).images[0]
image ``` High Resolution Generation We employ HiDiffusion to seamlessly generate high-resolution images; you can install it via pip install hidiffusion . ```python
from hidiffusion import apply_hidiffusion, remove_hidiffusion

# reduce memory consumption
pipe.enable_vae_tiling()

# apply hidiffusion with a single line of code.
apply_hidiffusion(pipe)

# ...

# generate image at higher resolution
images = ip_model.generate(pil_image=image,
prompt="a cat, masterpiece, best quality, high quality",
negative_prompt= "text, watermark, lowres, low quality, worst quality, deformed, glitch, low contrast, noisy, saturation, blurry",
scale=1.0,
guidance_scale=5,
num_samples=1,
num_inference_steps=30,
seed=42,
height=2048,
width=2048
)
``` Distributed Inference On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel, in case you have limited VRAM on each GPU. More information can be found here . Make sure you have installed diffusers from source and the latest accelerate. python
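# cap per-GPU memory; dict keys are GPU indices (this example assumes two GPUs)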
max_memory = {0:"10GB", 1:"10GB"}
pipe = StableDiffusionXLPipeline.from_pretrained(
base_model_path,
torch_dtype=torch.float16,
add_watermarker=False,
device_map="balanced",
max_memory=max_memory
) Start a local gradio demo Run the following command: sh
git clone https://github.com/InstantStyle/InstantStyle.git
cd ./InstantStyle/gradio_demo/
pip install -r requirements.txt
python app.py Resources InstantStyle for WebUI InstantStyle for ComfyUI InstantID Disclaimer The pretrained checkpoints follow the license in IP-Adapter . Users are granted the freedom to create images using this tool, but they are obligated to comply with local laws and utilize it responsibly. The developers will not assume any responsibility for potential misuse by users. Acknowledgements InstantStyle is developed by the InstantX team and is largely built on IP-Adapter , which has been unfairly compared against by many other works. We at InstantStyle make IP-Adapter great again. Additionally, we acknowledge Hu Ye for his valuable discussion. Star History Cite If you find InstantStyle useful for your research and applications, please cite us using this BibTeX: bibtex
@article{wang2024instantstyle,
title={InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation},
author={Wang, Haofan and Wang, Qixun and Bai, Xu and Qin, Zekui and Chen, Anthony},
journal={arXiv preprint arXiv:2404.02733},
year={2024}
} For any question, feel free to contact us via haofanwang.ai@gmail.com.;InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥;[] | InstantStyle/InstantStyle |
nestriness/nestri;An open-source cloud gaming platform built for you to play together with your friends. A GeForce NOW alternative that can be self-hosted. [![][github-release-shield]][github-release-link]
[![][discord-shield]][discord-link]
[![][github-license-shield]][github-license-link]
[![][github-stars-shield]][github-stars-link]
**Share the Nestri Repository on Social Media**
[![][share-x-shield]][share-x-link]
[![][share-reddit-shield]][share-reddit-link] [![][image-overview]][website-link] Features 🌟 As Nestri navigates through its experimental phase, we're in the process of crafting and testing a variety of features for you: 🎮 Proton-GE Compatibility: We utilize Proton-GE for running games, offering extensive game compatibility and ensuring you can play a wide variety of titles. ⚡ QUIC Protocol: For input and video/audio transmission, we use QUIC via MoQ to significantly reduce latency on variable networks, ensuring crisp visuals without sacrificing data or increasing latency. 🔗 Session IDs: With a SESSION_ID , we seamlessly tie your game progress, achievements, and devices without the need for logging into nestri.io . Our public CDNs, currently located in Europe (eu-north-1) and the US (us-east-1), further reduce latency for friends playing together from afar. More locations are on the way! 🔄 Automatic Game Progress Sync: Like Stadia's state share, we automatically sync your game progress based on the *.exe file you're running, allowing you to share your progress with friends via a link. 👫 Co-op Gameplay: Play co-op with up to 8 people, or play single player by sharing mouse and keyboard inputs seamlessly (similar to neko ). 🌐 Cross-Platform Play: Our platform is accessible on any device that can run a Chrome-based browser, including Edge, Chrome, Brave, and Arc. And you do not have to set up anything, it's all done and maintained for you. 📊 Bandwidth Optimization: Experience hardware-accelerated VMAF for optimized bandwidth, ensuring the best possible video quality. Learn More 🌟 ...and more: Stay tuned as we continue to add features sometimes inspired by platforms like Stadia , to give you the best and most customizable gaming experience. This platform is in an experimental phase, and we're actively working on adding new features. Your feedback and support is very much appreciated. [!IMPORTANT] If you're excited about what we're doing and want to support our journey, consider giving us a star ⭐ on our repository. Your support fuels our progress!. \~ ✨ Getting Started 🎮 Whether you're looking to self-host Nestri or simply want to try it out without the need for your own GPU, we've got you covered. Choose the path that best suits your needs: | If you don't have a Nvidia GPU or prefer not to self-host, you can visit our website. No installation or set up required ! This is the perfect option for gamers looking to dive straight into the action without any setup. | 👉🏽 Get Access |
| :---------------------------------------- | :----------------------------------------------------------------------------------------------------------------- |
| If you're interested in self-hosting Nestri, continue reading for detailed instructions on how to get started. This option is ideal if you have your own Nvidia GPU and are comfortable with setting up and managing your own server. | 🛠️ Self Host Nestri | [!TIP] Remember, flexibility is key with Nestri. You're free to switch between self-hosting and using nestri.io whenever you like, without losing your game progress. \~ 💡 Self-Hosting Nestri 🔨 For those interested in self-hosting, here is what you need to get your own Nestri server up and running: Nvidia GPU : Unfortunately, this setup is exclusive to Nvidia GPUs. If you don't own one, consider renting from cloud services like AWS, GCP, or Vast.ai. We highly recommend this approach. CUDA : For GPU acceleration, CUDA version 12.0 or newer is required. Verify your CUDA installation by running nvcc --version . Docker : Ensure your docker and nvidia-docker are up to date to avoid compatibility issues with CUDA. You can check your Docker version by running docker --version in your terminal. GPU Driver : Ensure your GPU drivers are up to date to avoid compatibility issues with CUDA. Nvidia driver version 520.56.06 or newer is required. Xorg Display : Your Nvidia GPU should not be attached to a running X display server. You can confirm this by running nvidia-smi . Nvidia-DRM : Make sure that the nvidia-drm module has been loaded and that the module is loaded with the flag modeset=1 . Confirm this by running sudo cat /sys/module/nvidia_drm/parameters/modeset [!TIP] Typically, if your setup meets the necessary CUDA requirements, the nvidia-drm module will already be loaded, particularly in AWS G4dn instances. \~ 💡 Step-by-Step Guide Follow these steps to get Nestri up and running on your system. [!IMPORTANT] This is our pilot; there is a lot we haven't figured out yet. Please file an issue if anything comes up. \~ 🫂 [!TIP] The setup process will become much simpler with the launch of our CLI tool, so stay tuned for that! In the meantime, you'll need to follow these manual steps. Step 1: Navigate to Your Game Directory First, change your directory to the location of your .exe file. For Steam games, this typically means: bash
cd $HOME/.steam/steam/steamapps
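# list the contents to locate the folder that contains your game's .exe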
ls -la . Step 2: Generate a Session ID Create a unique session ID using the following command: bash
head /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | head -c 16 This command generates a random 16-character string. Be sure to note this string carefully, as you'll need it for the next step. Step 3: Launch the Nestri Server With your SESSION_ID ready, insert it into the command below, replacing <copy here> with your actual session ID. Then, run the command to start the Nestri server: docker run --gpus all --device=/dev/dri --name nestri -it --entrypoint /bin/bash -e SESSION_ID=<copy here> -v "$(pwd)":/game -p 8080:8080/udp --cap-add=SYS_NICE --cap-add=SYS_ADMIN ghcr.io/nestriness/nestri/server:nightly [!TIP] Ensure UDP port 8080 is accessible from the internet. Use ufw allow 8080/udp or adjust your cloud provider's security group settings accordingly. Step 4: Configure the Game within the Container After executing the previous command, you'll be in a new shell within the container (example: nestri@3f199ee68c01:~$ ). Perform the following checks: Verify the game is mounted by executing ls -la /game . If not, exit and ensure you've correctly mounted the game directory as a volume. Then, start the Netris server by running /etc/startup.sh > /dev/null & . Step 5: Running Your Game Wait for the .X11-unix directory to appear in /tmp (check with ls -la /tmp ). Once it appears, you're ready to launch your game. With Proton-GE: nestri-proton -pr <game>.exe With Wine: nestri-proton -wr <game>.exe Step 6: Begin Playing Finally, construct the play URL with your session ID: echo "https://nestri.io/play/$SESSION_ID" Navigate to this URL in your browser, click on the page to capture your mouse pointer, and start playing!;[Experimental] Open-source GeForce NOW alternative with Stadia's social features;dxvk,geforcenow,linux-gaming,nvidia,proton,stadia,steamdeck,vkd3d,wine | nestriness/nestri |
passivestar/godot-minimal-theme;Minimal Godot Theme Installation Go to Releases and download the .tres file for your version. Use the high ppi file for high pixel density displays (i.e if you're using a laptop with a high-resolution display and in-engine display scale over 100%) In Godot open Editor Settings -> Interface -> Theme , and choose the downloaded theme in the Custom Theme field. A message will pop up saying: "The editor must be restarted for the changes to take effect." with a "Save & Restart" button. If desired, the Base Color, Accent Color, etc. in Recommended settings can be adjusted before doing so. When ready, click "Save & Restart". Versions For Godot 4.2.x use this version: 1.2.1 For Godot 4.3.x and newer use this version: 1.5.0 Recommended settings To match the look of the theme in the screenshot use the following settings: Base Color: #252525 Accent Color: #569eff Contrast: 0.2 Icon Saturation: 2 Font: Inter Compatibility If you're using a different base color rather than the recommended one (non-monochrome), input fields may look wrong. I did my best to support all editor settings, however, as of today, some things in Godot just can't be themed. There's no way to style LineEdit s with support for different base colors Overview Gamefromscratch made a video showing the evolution of Godot UI over the years and explaining how to use this theme You can watch it on YouTube by clicking this:;Godot Minimal Editor Theme;godot,godot-engine,game-engine,gamedev | passivestar/godot-minimal-theme |
PixArt-alpha/PixArt-sigma;👉 PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation This repo contains PyTorch model definitions, pre-trained weights and inference/sampling code for our paper exploring
Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation. You can find more visualizations on our project page . PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation Junsong Chen* , Chongjian Ge* , Enze Xie* †, Yue Wu* , Lewei Yao , Xiaozhe Ren , Zhongdao Wang , Ping Luo , Huchuan Lu , Zhenguo Li Huawei Noah’s Ark Lab, DLUT, HKU, HKUST Welcome everyone to contribute🔥🔥!! Learning from the previous PixArt-α project,
we will try to keep this repo as simple as possible so that everyone in the PixArt community can use it. Breaking News 🔥🔥!! (🔥 New) Apr. 24, 2024. 💥 🧨 diffusers support us now! Congrats!🎉. Remember to update your diffusers checkpoint once to make it available. (🔥 New) Apr. 24, 2024. 💥 LoRA code is released!! (✅ New) Apr. 23, 2024. 💥 PixArt-Σ 2K ckpt is released!! (✅ New) Apr. 16, 2024. 💥 PixArt-Σ Online Demo is available!! (✅ New) Apr. 16, 2024. 💥 PixArt-α-DMD One Step Generator training code are all released! (✅ New) Apr. 11, 2024. 💥 PixArt-Σ Demo & PixArt-Σ Pipeline ! PixArt-Σ supports 🧨 diffusers using patches for fast experience! (✅ New) Apr. 10, 2024. 💥 PixArt-α-DMD one step sampler demo code & PixArt-α-DMD checkpoint 512px are released! (✅ New) Apr. 9, 2024. 💥 PixArt-Σ checkpoint 1024px is released! (✅ New) Apr. 6, 2024. 💥 PixArt-Σ checkpoint 256px & 512px are released! (✅ New) Mar. 29, 2024. 💥 PixArt-Σ training & inference code & toy data are released!!! Contents -Main
* Weak-to-Strong * Training * Inference * Use diffusers * Launch Demo * Available Models -Guidance
* Feature extraction* (Optional) * One step Generation (DMD) * LoRA & DoRA * [LCM: coming soon]
* [ControlNet: coming soon]
* [ComfyUI: coming soon]
* Data reformat* (Optional) -Others
* Acknowledgement * Citation * TODO 🆚 Compare with PixArt-α | Model | T5 token length | VAE | 2K/4K |
|----------|-----------------|--------------------------------------------------------------|-------|
| PixArt-Σ | 300 | SDXL | ✅ |
| PixArt-α | 120 | SD1.5 | ❌ | | Model | Sample-1 | Sample-2 | Sample-3 |
|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| PixArt-Σ | | | |
| PixArt-α | | | |
| Prompt | Close-up, gray-haired, bearded man in 60s, observing passersby, in wool coat and **brown beret**, glasses, cinematic. | Body shot, a French woman, Photography, French Streets background, backlight, rim light, Fujifilm. | Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee. |

Prompt Details Sample-1 full prompt: An extreme close-up of a gray-haired man with a beard in his 60s, he is deep in thought pondering the history of the universe as he sits at a cafe in Paris, his eyes focus on people offscreen as they walk as he sits mostly motionless, he is dressed in a wool coat suit coat with a button-down shirt, he wears a **brown beret** and glasses and has a very professorial appearance, and at the end he offers a subtle closed-mouth smile as if he found the answer to the mystery of life, the lighting is very cinematic with the golden light and the Parisian streets and city in the background, depth of field, cinematic 35mm film. 🔧 Dependencies and Installation Python >= 3.9 (Recommend to use Anaconda or Miniconda ) PyTorch >= 2.0.1+cu11.7 ```bash
conda create -n pixart python==3.9.0
conda activate pixart
conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.7 -c pytorch -c nvidia

git clone https://github.com/PixArt-alpha/PixArt-sigma.git
cd PixArt-sigma
pip install -r requirements.txt
``` 🔥 How to Train 1. PixArt Training First of all, we started a new repo to build a more user-friendly and more compatible codebase. The main model structure is the same as PixArt-α, so you can still develop your functions based on the original repo. Also, this repo will support PixArt-alpha in the future. [!TIP] Now you can train your model without prior feature extraction. We reformed the data structure in the PixArt-α code base so that everyone can start to train, run inference, and visualize at the very beginning without any pain. 1.1 Downloading the toy dataset Download the toy dataset first.
The dataset structure for training is: ```
cd ./pixart-sigma-toy-dataset Dataset Structure
├──InternImgs/ (images are saved here)
│ ├──000000000000.png
│ ├──000000000001.png
│ ├──......
├──InternData/
│ ├──data_info.json (meta data)
Optional(👇)
│ ├──img_sdxl_vae_features_1024resolution_ms_new (image VAE features; same name as images except .npy extension)
│ │ ├──000000000000.npy
│ │ ├──000000000001.npy
│ │ ├──......
│ ├──caption_features_new
│ │ ├──000000000000.npz
│ │ ├──000000000001.npz
│ │ ├──......
│ ├──sharegpt4v_caption_features_new (run tools/extract_caption_feature.py to generate caption T5 features, same name as images except .npz extension)
│ │ ├──000000000000.npz
│ │ ├──000000000001.npz
│ │ ├──......
``` 1.2 Download pretrained checkpoint ```bash SDXL-VAE, T5 checkpoints git lfs install
git clone https://huggingface.co/PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers output/pretrained_models/pixart_sigma_sdxlvae_T5_diffusers PixArt-Sigma checkpoints python tools/download.py # environment eg. HF_ENDPOINT=https://hf-mirror.com can use for HuggingFace mirror
``` 1.3 You are ready to train! Selecting your desired config file from config files dir . bash
python -m torch.distributed.launch --nproc_per_node=1 --master_port=12345 \
train_scripts/train.py \
configs/pixart_sigma_config/PixArt_sigma_xl2_img512_internalms.py \
--load-from output/pretrained_models/PixArt-Sigma-XL-2-512-MS.pth \
--work-dir output/your_first_pixart-exp \
--debug 💻 How to Test 1. Quick start with Gradio To get started, first install the required dependencies. Make sure you've downloaded the checkpoint files
from models (coming soon) to the output/pretrained_models folder,
and then run on your local machine: ```bash SDXL-VAE, T5 checkpoints git lfs install
git clone https://huggingface.co/PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers output/pixart_sigma_sdxlvae_T5_diffusers PixArt-Sigma checkpoints python tools/download.py demo launch python scripts/interface.py --model_path output/pretrained_models/PixArt-Sigma-XL-2-512-MS.pth --image_size 512 --port 11223
``` 2. Integration in diffusers [!IMPORTANT] Upgrade your diffusers to make the PixArtSigmaPipeline available! bash
pip install git+https://github.com/huggingface/diffusers For diffusers<0.28.0 , check this script for help.
```python
import torch
from diffusers import Transformer2DModel, PixArtSigmaPipeline

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
weight_dtype = torch.float16

transformer = Transformer2DModel.from_pretrained(
"PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
subfolder='transformer',
torch_dtype=weight_dtype,
use_safetensors=True,
)
pipe = PixArtSigmaPipeline.from_pretrained(
"PixArt-alpha/pixart_sigma_sdxlvae_T5_diffusers",
transformer=transformer,
torch_dtype=weight_dtype,
use_safetensors=True,
)
pipe.to(device)

# Enable memory optimizations.
pipe.enable_model_cpu_offload()

prompt = "A small cactus with a happy face in the Sahara desert."
image = pipe(prompt).images[0]
image.save("./cactus.png")
``` 3. PixArt Demo ```bash
pip install git+https://github.com/huggingface/diffusers

# PixArt-Sigma 1024px
DEMO_PORT=12345 python app/app_pixart_sigma.py

# PixArt-Sigma One step Sampler (DMD)
DEMO_PORT=12345 python app/app_pixart_dmd.py
```

Let's have a look at a simple example by opening `http://your-server-ip:12345`. 4. Convert .pth checkpoint into diffusers version Directly download from Hugging Face or run with:
```bash
pip install git+https://github.com/huggingface/diffusers python tools/convert_pixart_to_diffusers.py --orig_ckpt_path output/pretrained_models/PixArt-Sigma-XL-2-1024-MS.pth --dump_path output/pretrained_models/PixArt-Sigma-XL-2-1024-MS --only_transformer=True --image_size=1024 --version sigma
```

⏬ Available Models All models will be automatically downloaded here . You can also choose to download manually from this url .

| Model | #Params | Checkpoint path | Download in OpenXLab |
|:-----------------|:--------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------|
| T5 & SDXL-VAE | 4.5B | Diffusers: pixart_sigma_sdxlvae_T5_diffusers | coming soon |
| PixArt-Σ-256 | 0.6B | pth: PixArt-Sigma-XL-2-256x256.pth Diffusers: PixArt-Sigma-XL-2-256x256 | coming soon |
| PixArt-Σ-512 | 0.6B | pth: PixArt-Sigma-XL-2-512-MS.pth Diffusers: PixArt-Sigma-XL-2-512-MS | coming soon |
| PixArt-α-512-DMD | 0.6B | Diffusers: PixArt-Alpha-DMD-XL-2-512x512 | coming soon |
| PixArt-Σ-1024 | 0.6B | pth: PixArt-Sigma-XL-2-1024-MS.pth Diffusers: PixArt-Sigma-XL-2-1024-MS | coming soon |
| PixArt-Σ-2K | 0.6B | pth: PixArt-Sigma-XL-2-2K-MS.pth Diffusers: PixArt-Sigma-XL-2-2K-MS | coming soon |

💪To-Do List We will try our best to release
- [x] Training code
- [x] Inference code
- [x] Inference code of One Step Sampling with DMD
- [x] Model zoo (256/512/1024/2K)
- [x] Diffusers (for fast experience)
- [x] Training code of One Step Sampling with DMD
- [x] Diffusers (stable official version: https://github.com/huggingface/diffusers/pull/7654)
- [x] LoRA training & inference code
- [ ] Model zoo (KV Compress...)
- [ ] ControlNet training & inference code

🤗Acknowledgements Thanks to PixArt-α , DiT and OpenDMD for their wonderful work and codebase! Thanks to Diffusers for their wonderful technical support and awesome collaboration! Thanks to Hugging Face for sponsoring the nice demo! 📖BibTeX @misc{chen2024pixartsigma,
title={PixArt-\Sigma: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation},
author={Junsong Chen and Chongjian Ge and Enze Xie and Yue Wu and Lewei Yao and Xiaozhe Ren and Zhongdao Wang and Ping Luo and Huchuan Lu and Zhenguo Li},
year={2024},
eprint={2403.04692},
archivePrefix={arXiv},
primaryClass={cs.CV}
}

Star History;PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation;[] | PixArt-alpha/PixArt-sigma
ccbikai/Sink;⚡ Sink A Simple / Speedy / Secure Link Shortener with Analytics, 100% run on Cloudflare. ✨ Features URL Shortening: Compress your URLs to their minimal length. Analytics: Monitor link analytics and gather insightful statistics. Serverless: Deploy without the need for traditional servers. Customizable Slug: Support for personalized slugs. 🪄 AI Slug: Leverage AI to generate slugs. Link Expiration: Set expiration dates for your links. 🪧 Demo Experience the demo at Sink.Cool . Log in using the Site Token below: txt
Site Token: SinkCool Screenshots 🧱 Technologies Used Framework : Nuxt Database : Cloudflare Workers KV Analytics Engine : Cloudflare Workers Analytics Engine UI Components : Shadcn-vue Styling: Tailwind CSS Deployment : Cloudflare 🚗 Roadmap [WIP] We welcome your contributions and PRs. [ ] Browser Extension [ ] Raycast Extension [ ] Apple Shortcuts [ ] Enhanced Link Management (with Cloudflare D1) [ ] Analytics Enhancements (Support for merging filter conditions) [ ] Dashboard Performance Optimization (Infinite loading) [ ] Units Test [ ] Support for Other Deployment Platforms 🏗️ Deployment Fork the repository to your GitHub account. Create a Cloudflare Pages project. Select the Sink repository and the Nuxt.js preset. Configure environment variables. NUXT_SITE_TOKEN length must exceed 8 . NUXT_CF_ACCOUNT_ID find your account ID . NUXT_CF_API_TOKEN Create a Cloudflare API token . This token requires Account.Account Analytics permissions at the very least. Reference. . Save and deploy. Cancel the deployment, navigate to Settings -> Functions . KV namespace bindings. Bind the variable name KV to a KV namespace. Workers AI Bindings. Bind the variable name AI to the Workers AI Catalog. Optional Analytics Engine bindings. Bind the variable name ANALYTICS to the sink dataset, and enable Cloudflare Analytics Engine beta for your account. Redeploy. ⚒️ Configuration Configuration Docs 🔌 API API Docs 🙋🏻 FAQs FAQs 💖 Credits Cloudflare NuxtHub Astroship ☕ Sponsor Follow Me on X(Twitter) . Become a sponsor to on GitHub .;⚡ A Simple / Speedy / Secure Link Shortener with Analytics, 100% run on Cloudflare.;cloudflare,link-shortener,link-shortner,linkshortener,sink,url-shortener,urlshortener,urlshorter | ccbikai/Sink |
modelscope/DiffSynth-Studio;DiffSynth Studio Introduction DiffSynth Studio is a Diffusion engine. We have restructured architectures including Text Encoder, UNet, VAE, among others, maintaining compatibility with models from the open-source community while enhancing computational performance. We provide many interesting features. Enjoy the magic of Diffusion models! Roadmap Aug 29, 2023. We propose DiffSynth, a video synthesis framework. Project Page . The source codes are released in EasyNLP . The technical report (ECML PKDD 2024) is released on arXiv . Oct 1, 2023. We release an early version of this project, namely FastSDXL. A try for building a diffusion engine. The source codes are released on GitHub . FastSDXL includes a trainable OLSS scheduler for efficiency improvement. The original repo of OLSS is here . The technical report (CIKM 2023) is released on arXiv . A demo video is shown on Bilibili . Since OLSS requires additional training, we don't implement it in this project. Nov 15, 2023. We propose FastBlend, a powerful video deflickering algorithm. The sd-webui extension is released on GitHub . Demo videos are shown on Bilibili, including three tasks. Video deflickering Video interpolation Image-driven video rendering The technical report is released on arXiv . An unofficial ComfyUI extension developed by other users is released on GitHub . Dec 8, 2023. We decide to develop a new Project, aiming to release the potential of diffusion models, especially in video synthesis. The development of this project is started. Jan 29, 2024. We propose Diffutoon, a fantastic solution for toon shading. Project Page . The source codes are released in this project. The technical report (IJCAI 2024) is released on arXiv . June 13, 2024. DiffSynth Studio is transfered to ModelScope. The developers have transitioned from "I" to "we". Of course, I will still participate in development and maintenance. June 21, 2024. We propose ExVideo, a post-tuning technique aimed at enhancing the capability of video generation models. We have extended Stable Video Diffusion to achieve the generation of long videos up to 128 frames. Project Page . Source code is released in this repo. See examples/ExVideo . Models are released on HuggingFace and ModelScope . Technical report is released on arXiv . Until now, DiffSynth Studio has supported the following models: Stable Diffusion Stable Diffusion XL ControlNet AnimateDiff Ip-Adapter ESRGAN RIFE Hunyuan-DiT Stable Video Diffusion ExVideo Installation git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .

Usage (in Python code) The Python examples are in examples . We provide an overview here.

Long Video Synthesis We trained an extended video synthesis model, which can generate 128 frames. examples/ExVideo https://github.com/modelscope/DiffSynth-Studio/assets/35051019/d97f6aa9-8064-4b5b-9d49-ed6001bb9acc

Image Synthesis Generate high-resolution images by breaking the limitation of diffusion models! examples/image_synthesis (Example images at 512×512, 1024×1024, 2048×2048, and 4096×4096, plus 1024×1024 and 2048×2048 comparisons.)

Toon Shading Render realistic videos in a flat style and enable video editing features. examples/Diffutoon https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c

Video Stylization Video stylization without video models. examples/diffsynth https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea

Chinese Models Use Hunyuan-DiT to generate images with Chinese prompts. We also support LoRA fine-tuning of this model. examples/hunyuan_dit

Prompt: 少女手捧鲜花,坐在公园的长椅上,夕阳的余晖洒在少女的脸庞,整个画面充满诗意的美感 (A young girl holding flowers, sitting on a park bench, the afterglow of the sunset on her face, the whole picture full of poetic beauty.) (Example outputs at 1024x1024 and 2048x2048 with highres-fix.)

Prompt: 一只小狗蹦蹦跳跳,周围是姹紫嫣红的鲜花,远处是山脉 (A little dog bouncing around, surrounded by brilliantly colored flowers, with mountains in the distance.) (Example outputs without and with LoRA.)

Usage (in WebUI) python -m streamlit run DiffSynth_Studio.py https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/93085557-73f3-4eee-a205-9829591ef954;Enjoy the magic of Diffusion models!;[] | modelscope/DiffSynth-Studio
bia-pain-bache/BPB-Worker-Panel;💦 BPB Panel 🌏 Readme in Farsi Introduction This project is dedicated to developing a user panel for the Cloudflare-workers/pages proxy script created by yonggekkk . The panel offers two deployment options:
- Worker deployment
- Pages deployment 🌟 If you found BPB Panel valuable, Your donations make all the difference 🌟
- USDT (BEP20): 0x111EFF917E7cf4b0BfC99Edffd8F1AbC2b23d158 Features Free : No cost involved. User-Friendly Panel: Designed for easy navigation, configuration and usage. Support Fragment: Provides support for fragment functionality. Block Ads. and Porn (Optional) Bypass Iran and LAN (Optional) Full routing rules: Bypassing Iran, Blocking Ads, Malwares, Phishing... for Sing-box. Chain Proxy: Capable of adding a chain proxy to fix IP. Supports Wide Range of Clients: Offers subscription links for Xray and Sing-box core clients. Subscription Link (JSON): Provides subscription link for JSON configs. Password-Protected Panel: Secure your panel with password protection. Custom Cloudflare Clean IP: Ability to use online scanner and setting up clean IP-domains. Warp configs: Provides Warp and Warp on Warp subscription. How to use: Installation (Pages) Installation (Worker) How to use FAQ Supported Clients | Client | Version | Fragment |
| :-------------: | :-------------: | :-------------: |
| v2rayNG | 1.8.19 or higher | :heavy_check_mark: |
| v2rayN | 6.42 or higher | :heavy_check_mark: |
| Nekobox | | :x: |
| Sing-box | 1.8.10 or higher | :x: |
| Streisand | | :heavy_check_mark: |
| V2Box | | :x: |
| Shadowrocket | | :x: |
| Nekoray | | :heavy_check_mark: |
| Hiddify | | :x: | Stargazers Over Time Special Thanks CF-vless code author 3Kmfi6HP CF preferred IP program author badafans , XIU2 For a detailed tutorial on the core script, please refer to Yongge’s blog and video tutorials .;A GUI Panel providing Worker subscriptions and Fragment settings and configs, providing configs for cross-platform clients using (singbox-core and xray-core);android,chain,fragment,ios,linux,proxy-chain,singbox-core,vless,windows,xray-core | bia-pain-bache/BPB-Worker-Panel |
tinyworldmap/tiny-world-map;tinyworldmap tinyworldmap is a world map for offline-first and low-bandwidth web apps. Nightly demo | Stable demo | v3 announcement tinyworldmap is designed to be used with Leaflet. All zoom levels are supported, and the most complete version is only 450 kB gzipped. Its client-side rendering has been extensively profiled and tested on low-end decade-old phones, with no discernible lag. By default, the map displays the 10,000 most populous cities added to OpenStreetMap. At the time of writing, this encompasses all cities and towns with a population of at least 48,000: Usage There are two ways to use the map: As a base map, replacing OpenStreetMap tiles As an offline fallback for OpenStreetMap tiles As a base map To use tinyworldmap as a Leaflet base map, add the following to your head tag: ```html;tinyworldmap is a tiny world map for offline-first and low-bandwidth web apps;[] | tinyworldmap/tiny-world-map |
OS-Copilot/OS-Copilot;OS-Copilot: Towards Generalist Computer Agents with Self-Improvement [![Website](https://img.shields.io/website?url=https://os-copilot.github.io/)](https://os-copilot.github.io/)
[![Paper](https://img.shields.io/badge/paper--blue)](https://arxiv.org/pdf/2402.07456.pdf)
[![Documentation](https://img.shields.io/badge/documentation--blue)](https://os-copilot.readthedocs.io/en/latest/)
![Python](https://img.shields.io/badge/python-3.10-blue)
[![Discord](https://img.shields.io/discord/1222168244673314847?logo=discord&style=flat)](https://discord.com/invite/rXS2XbgfaD)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40oscopilot)](https://twitter.com/oscopilot) 🔥 News 2024.3 : 🎉 OS-Copilot is accepted at the LLM Agents Workshop @ICLR 2024. What is OS-Copilot OS-Copilot is an open-source library to build generalist agents capable of automatically interfacing with comprehensive elements in an operating system (OS), including the web, code terminals, files, multimedia, and various third-party applications. ⚡️ Quickstart Clone the GitHub Repository: git clone https://github.com/OS-Copilot/OS-Copilot.git Set Up Python Environment and Install Dependencies: ```
conda create -n oscopilot_env python=3.10 -y
conda activate oscopilot_env

cd OS-Copilot
pip install -e .
``` Set OpenAI API Key: Configure your OpenAI API key in .env . cp .env_template .env Now you are ready to have fun: python quick_start.py * FRIDAY currently only supports single-round conversation . 🛠️ Tutorial | Level | Tutorial | Description |
|------------------|-------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------|
| Beginner | Installation | Explore three methods to install FRIDAY. |
| Beginner | Getting Started | The simplest demonstration of FRIDAY with a quick_start.py script. |
| Intermediate | Adding Your Tools | Adding and removing tools to the FRIDAY. |
| Intermediate | Deploying API Services | Demonstrate the deployment of API services for FRIDAY. |
| Intermediate | Example: Automating Excel Tasks | Automating Excel control using FRIDAY. |
| Intermediate | Enhancing FRIDAY with Self-Learning for Excel Task Automation | Improved Excel control with self-directed learning. |
| Advanced | Designing New API Tools | Guides on deploying custom API tools for FRIDAY to extend its functionalities. | 🏫 Community Join our community to connect with other enthusiasts, researchers and developers: Discord : Join our Discord server for real-time discussions and support. Twitter : Follow our Twitter to get latest new or tag us to share your demos! 👨💻 Contributing Visit the roadmap to preview what the community is working on and become a contributor! 🛡 Disclaimer OS-Copilot is provided "as is" without warranty of any kind. Users assume full responsibility for any risks associated with its use, including potential data loss or changes to system settings . The developers of OS-Copilot are not liable for any damages or losses resulting from its use. Users must ensure their actions comply with applicable laws and regulations. 🔎 Citation @article{wu2024copilot,
title={Os-copilot: Towards generalist computer agents with self-improvement},
author={Wu, Zhiyong and Han, Chengcheng and Ding, Zichen and Weng, Zhenmin and Liu, Zhoumianze and Yao, Shunyu and Yu, Tao and Kong, Lingpeng},
journal={arXiv preprint arXiv:2402.07456},
year={2024}
} 📬 Contact If you have any inquiries, suggestions, or wish to contact us for any reason, we warmly invite you to email us at wuzhiyong@pjlab.org.cn. Star History;An self-improving embodied conversational agent seamlessly integrated into the operating system to automate our daily tasks. ;[] | OS-Copilot/OS-Copilot |
timschneeb/tachiyomi-extensions-revived;Tachiyomi Extensions Revived This repository has been archived. Please migrate to this extension repo: https://github.com/keiyoushi/extensions [!NOTE]
As Tachiyomi has been discontinued, independent community members have already announced a new spiritual successor called Mihon. You can find their Discord here: https://discord.gg/mihon Downloads Add this repo to Tachiyomi (v0.15.2 and later) Starting with the latest Tachiyomi version (v0.15.2 and later), you can now add external extension repos without having to re-approve unofficial extensions after each app launch! Update to the latest version of Tachiyomi Go to Settings > Browse > Extension repos > Add Enter the following URL and accept: https://raw.githubusercontent.com/ThePBone/tachiyomi-extensions-revived/repo/index.min.json Go to the extension management screen, refresh it, and you can now download the removed extensions. After installing an extension, you need to approve it by tapping on the 'Trust' button. [!NOTE]
All extensions in this repo have a custom package name. While this technically allows you to install them alongside the old removed official extensions, it is not recommended and can cause issues. Uninstall the old extensions to avoid source conflicts. Direct APK downloads You can also directly download the APK files in this GitHub repository in the repo branch . After installing any unofficial extension, you must manually enable the extension in Tachiyomi. Disclaimer The developer of this application does not have any affiliation with the content providers available. License Copyright 2015 Javier Tomás
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.;Revival of removed source extensions for the Tachiyomi app. ;tachiyomi,tachiyomi-extensions,manga,mangadex | timschneeb/tachiyomi-extensions-revived |
qnguyen3/chat-with-mlx;# Chat with MLX 🧑💻
[![version](https://badge.fury.io/py/chat-with-mlx.svg)](https://badge.fury.io/py/chat-with-mlx)
[![downloads](https://img.shields.io/pypi/dm/chat-with-mlx)](https://pypistats.org/packages/chat-with-mlx)
[![license](https://img.shields.io/pypi/l/chat-with-mlx)](https://github.com/qnguyen3/chat-with-mlx/blob/main/LICENSE.md)
[![python-version](https://img.shields.io/pypi/pyversions/chat-with-mlx)](https://badge.fury.io/py/chat-with-mlx) An all-in-one Chat Playground using Apple MLX on Apple Silicon Macs. Features Privacy-enhanced AI : Chat with your favourite models and data securely. MLX Playground : Your all in one LLM Chat UI for Apple MLX Easy Integration : Easy integrate any HuggingFace and MLX Compatible Open-Source Models. Default Models : Llama-3, Phi-3, Yi, Qwen, Mistral, Codestral, Mixtral, StableLM (along with Dolphin and Hermes variants) Installation and Usage Easy Setup Install Pip Install: pip install chat-with-mlx Manual Pip Installation bash
git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
python -m venv .venv
source .venv/bin/activate
pip install -e . Manual Conda Installation bash
git clone https://github.com/qnguyen3/chat-with-mlx.git
cd chat-with-mlx
conda create -n mlx-chat python=3.11
conda activate mlx-chat
pip install -e . Usage Start the app: chat-with-mlx Add Your Model Please checkout the guide HERE Known Issues When the model is downloading by Solution 1, the only way to stop it is to hit control + C on your Terminal. If you want to switch the file, you have to manually hit STOP INDEXING. Otherwise, the vector database would add the second document to the current database. You have to choose a dataset mode (Document or YouTube) in order for it to work. Phi-3-small can't do streaming in completions Why MLX? MLX is an array framework for machine learning research on Apple silicon,
brought to you by Apple machine learning research. Some key features of MLX include: Familiar APIs : MLX has a Python API that closely follows NumPy. MLX
also has fully featured C++, C , and Swift APIs, which closely mirror
the Python API. MLX has higher-level packages like mlx.nn and mlx.optimizers with APIs that closely follow PyTorch to simplify building
more complex models. Composable function transformations : MLX supports composable function
transformations for automatic differentiation, automatic vectorization,
and computation graph optimization. Lazy computation : Computations in MLX are lazy. Arrays are only
materialized when needed. Dynamic graph construction : Computation graphs in MLX are constructed
dynamically. Changing the shapes of function arguments does not trigger
slow compilations, and debugging is simple and intuitive. Multi-device : Operations can run on any of the supported devices
(currently the CPU and the GPU). Unified memory : A notable difference from MLX and other frameworks is the unified memory model . Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without transferring data.
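A minimal sketch of the lazy-evaluation and function-transformation points above, using standard mlx.core calls (the toy function itself is just an illustration):

```python
import mlx.core as mx

def loss(w):
    # Toy scalar function; mx.grad transforms it into its gradient function.
    return mx.sum(w * w)

w = mx.array([1.0, 2.0, 3.0])
g = mx.grad(loss)(w)  # the computation graph is built lazily
mx.eval(g)            # the result array is only materialized here
print(g)              # gradient of sum(w^2) is 2*w -> [2, 4, 6]
```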
Acknowledgement I would like to send my many thanks to: The Apple Machine Learning Research team for the amazing MLX library. LangChain and ChromaDB for such easy RAG Implementation All contributors Star History;An all-in-one LLMs Chat UI for Apple Silicon Mac using MLX Framework.;[] | qnguyen3/chat-with-mlx
MatthiasGrandl/Loungy;Loungy Loungy is an application launcher in the vein of Raycast and Alfred. It's built using GPUI, a GPU accelerated application framework for Rust. Showcase https://github.com/MatthiasGrandl/Loungy/assets/50196894/d5c84579-77e6-4e0b-980a-7d1f1975d713 Features [x] Launching apps [x] Calculator (including unit/currency/date conversions, thanks to Numbat ) [x] Task manager (killing processes) [x] MacOS menu search [x] Clipboard manager [x] Tailscale peer list [x] Bitwarden password client (only viewing, not editing) [x] Matrix Chat client (very early WIP) In the absence of a proper extension system (which is wanted, but still quite far out at this point) I am adding features I need, but that aren't universally useful behind cargo feature flags. So far there is clipboard , tailscale , bitwarden and matrix . You can also build all optional features with cargo build --all-features --release . Installation Either download the .dmg from the releases section or use homebrew: sh
brew install loungy The pre-built version comes with the following optional crate features: clipboard If you want to try any other optional features, at the moment you will have to build Loungy yourself. Development Requirements Xcode Apple Swift 5.9.2 Rust v1.75.0 Running cargo run dev Caveats ~~It is MacOS only, but I would love to support a Linux build in the future. That won't happen until GPUI adds Linux support.~~ There is some very rough Linux support. It's still very early since a lot of functionality in GPUI required for Loungy hasn't made it into the Linux GPUI yet (they are obviously focusing on getting Zed features running first). Accessibility is still a nightmare. GPUI is lacking a proper accessible text input so currently I am implementing one myself. Screen readers or people with impairments please don't try to use this yet. ~~The window position is currently hardcoded, so depending on your screen resolution it might not be in the best position. Will be fixed as soon as there is an API for it in GPUI.~~ I kinda fixed this, but it's probably still wonky on multi display setups. ~~The hotkey is currently hardcoded to Opt+Ctrl+Cmd+Space ~~ Hotkeys are now rebindable, but it's still a bit sketchy since GPUI doesn't report raw keycodes, while the Tauri hotkey manager uses raw keycodes. To rebind you can simple select any command in the root search and use the Change Hotkey action. Linux Experimental Linux support. Building requires (on Fedora Wayland): openssl-devel wayland-devel g++ fontconfig-devel libxcb-devel libxkbcommon-x11-devel libxkbcommon-devel Many features won't work yet, since I am using a lot of MacOS specific APIs for autofilling. So notably the clipboard and bitwarden features will need Linux specific implementations to get working. Credits Loungy wouldn't be possible without the awesome open source ecosystem: GPUI : The lovely framework Numbat : Used as the calculator Lucide : Amazing open source SVG icon-set Catppuccin : The theme that makes everything look good swift-rs : For providing a way to interface with accessibility APIs and other MacOS native stuff that I wouldn't know how to do with pure Rust nucleo : Fuzzy searcher implemented by the team of my favorite modal editor Helix;Loungy is a WIP launcher in the vein of Spotlight, Alfred, Raycast.;gpui | MatthiasGrandl/Loungy |
spla-tam/SplaTAM;SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM CVPR 2024 Nikhil Keetha · Jay Karhade · Krishna Murthy Jatavallabhula · Gengshan Yang · Sebastian Scherer Deva Ramanan · Jonathon Luiten Paper | Video | Project Page Stay Tuned for a Faster and Better Variant of SplaTAM! Table of Contents Installation Online Demo Usage Downloads Benchmarking Acknowledgement Citation Developers Installation (Recommended) SplaTAM has been benchmarked with Python 3.10, Torch 1.12.1 & CUDA=11.6. However, Torch 1.12 is not a hard requirement and the code has also been tested with other versions of Torch and CUDA such as Torch 2.3.0 & CUDA 12.1. The simplest way to install all dependences is to use anaconda and pip in the following steps: bash
conda create -n splatam python=3.10
conda activate splatam
conda install -c "nvidia/label/cuda-11.6.0" cuda-toolkit
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
pip install -r requirements.txt Windows For installation on Windows using Git bash, please refer to the instructions shared in Issue#9 . Docker and Singularity Setup We also provide a docker image. We recommend using a venv to run the code inside a docker image: bash
docker pull nkeetha/splatam:v1
bash bash_scripts/start_docker.bash
cd /SplaTAM/
pip install virtualenv --user
mkdir venv
cd venv
virtualenv --system-site-packages splatam
source ./splatam/bin/activate
pip install -r venv_requirements.txt Setting up a singularity container is similar: bash
cd </path/to/singularity/folder/>
singularity pull splatam.sif docker://nkeetha/splatam:v1
singularity instance start --nv splatam.sif splatam
singularity run --nv instance://splatam
cd <path/to/SplaTAM/>
pip install virtualenv --user
mkdir venv
cd venv
virtualenv --system-site-packages splatam
source ./splatam/bin/activate
pip install -r venv_requirements.txt Demo Online You can SplaTAM your own environment with an iPhone or LiDAR-equipped Apple device by downloading and using the NeRFCapture app. Make sure that your iPhone and PC are connected to the same WiFi network, and then run the following command: bash
bash bash_scripts/online_demo.bash configs/iphone/online_demo.py On the app, keep clicking send for successive frames. Once the capturing of frames is done, the app will disconnect from the PC and check out SplaTAM's interactive rendering of the reconstruction on your PC! Here are some cool example results: Offline You can also first capture the dataset and then run SplaTAM offline on the dataset with the following command: bash
bash bash_scripts/nerfcapture.bash configs/iphone/nerfcapture.py Dataset Collection If you would like to only capture your own iPhone dataset using the NeRFCapture app, please use the following command: bash
bash bash_scripts/nerfcapture2dataset.bash configs/iphone/dataset.py Usage We will use the iPhone dataset as an example to show how to use SplaTAM. The following steps are similar for other datasets. To run SplaTAM, please use the following command: bash
python scripts/splatam.py configs/iphone/splatam.py To visualize the final interactive SplaTAM reconstruction, please use the following command: bash
python viz_scripts/final_recon.py configs/iphone/splatam.py To visualize the SplaTAM reconstruction in an online fashion, please use the following command: bash
python viz_scripts/online_recon.py configs/iphone/splatam.py To export the splats to a .ply file, please use the following command: bash
python scripts/export_ply.py configs/iphone/splatam.py PLY format Splats can be visualized in viewers such as SuperSplat & PolyCam . To run 3D Gaussian Splatting on the SplaTAM reconstruction, please use the following command: bash
python scripts/post_splatam_opt.py configs/iphone/post_splatam_opt.py To run 3D Gaussian Splatting on a dataset using ground truth poses, please use the following command: bash
python scripts/gaussian_splatting.py configs/iphone/gaussian_splatting.py Downloads DATAROOT is ./data by default. Please change the input_folder path in the scene-specific config files if datasets are stored somewhere else on your machine. Replica Download the data as below, and the data is saved into the ./data/Replica folder. Note that the Replica data is generated by the authors of iMAP (but hosted by the authors of NICE-SLAM). Please cite iMAP if you use the data. bash
bash bash_scripts/download_replica.sh TUM-RGBD bash
bash bash_scripts/download_tum.sh ScanNet Please follow the data downloading procedure on the ScanNet website, and extract color/depth frames from the .sens file using this code . [Directory structure of ScanNet (click to expand)] ```
DATAROOT
└── scannet
└── scene0000_00
└── frames
├── color
│ ├── 0.jpg
│ ├── 1.jpg
│ ├── ...
│ └── ...
├── depth
│ ├── 0.png
│ ├── 1.png
│ ├── ...
│ └── ...
├── intrinsic
└── pose
├── 0.txt
├── 1.txt
├── ...
└── ...
``` We use the following sequences: scene0000_00
scene0059_00
scene0106_00
scene0181_00
scene0207_00 ScanNet++ Please follow the data downloading and image undistortion procedure on the ScanNet++ website.
Additionally for undistorting the DSLR depth images, we use our own variant of the official ScanNet++ processing code . We will open a pull request to the official ScanNet++ repository soon. We use the following sequences: 8b5caf3398
b20a261fdf For b20a261fdf, we use the first 360 frames, due to an abrupt jump/teleportation in the trajectory post frame 360. Please note that ScanNet++ was primarily intended as a NeRF Training & Novel View Synthesis dataset. Replica-V2 We use the Replica-V2 dataset from vMAP to evaluate novel view synthesis. Please download the pre-generated replica sequences from vMAP . Benchmarking For running SplaTAM, we recommend using weights and biases for the logging. This can be turned on by setting the wandb flag to True in the configs file. Also make sure to specify the path wandb_folder . If you don't have a wandb account, first create one. Please make sure to change the entity config to your wandb account. Each scene has a config folder, where the input_folder and output paths need to be specified. Below, we show some example run commands for one scene from each dataset. After SLAM, the trajectory error will be evaluated along with the rendering metrics. The results will be saved to ./experiments by default. Replica To run SplaTAM on the room0 scene, run the following command: bash
python scripts/splatam.py configs/replica/splatam.py To run SplaTAM-S on the room0 scene, run the following command: bash
python scripts/splatam.py configs/replica/splatam_s.py For other scenes, please modify the configs/replica/splatam.py file or use configs/replica/replica.bash . TUM-RGBD To run SplaTAM on the freiburg1_desk scene, run the following command: bash
python scripts/splatam.py configs/tum/splatam.py For other scenes, please modify the configs/tum/splatam.py file or use configs/tum/tum.bash . ScanNet To run SplaTAM on the scene0000_00 scene, run the following command: bash
python scripts/splatam.py configs/scannet/splatam.py For other scenes, please modify the configs/scannet/splatam.py file or use configs/scannet/scannet.bash . ScanNet++ To run SplaTAM on the 8b5caf3398 scene, run the following command: bash
python scripts/splatam.py configs/scannetpp/splatam.py To run Novel View Synthesis on the 8b5caf3398 scene, run the following command: bash
python scripts/eval_novel_view.py configs/scannetpp/eval_novel_view.py For other scenes, please modify the configs/scannetpp/splatam.py file or use configs/scannetpp/scannetpp.bash . ReplicaV2 To run SplaTAM on the room0 scene, run the following command: bash
python scripts/splatam.py configs/replica_v2/splatam.py To run Novel View Synthesis on the room0 scene post SplaTAM, run the following command: bash
python scripts/eval_novel_view.py configs/replica_v2/eval_novel_view.py For other scenes, please modify the config files. Acknowledgement We thank the authors of the following repositories for their open-source code: 3D Gaussians Dynamic 3D Gaussians 3D Gaussian Splating Dataloaders GradSLAM & ConceptFusion Baselines Nice-SLAM Point-SLAM Citation If you find our paper and code useful, please cite us: bib
@inproceedings{keetha2024splatam,
title={SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM},
author={Keetha, Nikhil and Karhade, Jay and Jatavallabhula, Krishna Murthy and Yang, Gengshan and Scherer, Sebastian and Ramanan, Deva and Luiten, Jonathon},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
} Developers Nik-V9 ( Nikhil Keetha ) JayKarhade ( Jay Karhade ) JonathonLuiten ( Jonathan Luiten ) krrish94 ( Krishna Murthy Jatavallabhula ) gengshan-y ( Gengshan Yang );SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM (CVPR 2024);computer-vision,gaussian-splatting,robotics,slam,cvpr2024 | spla-tam/SplaTAM |
saasfly/saasfly;Saasfly An easy-to-use and enterprise-grade Next.js boilerplate. You don't need to buy templates anymore; Saasfly provides a complete, open-source solution for building SaaS applications quickly and easily. Nextify provides a complete Enterprise SaaS solution. Contact us at contact@nextify.ltd if you're interested in discussing your project, or if you'd simply like to have a conversation with us, please feel free to reach out. ❤️ We provide free technical support and deployment services to non-profit organizations . 🙌 All profits obtained from our open source projects will be entirely dedicated to supporting open source initiatives and charitable causes . ⚡ Live Demo Try it out for yourself! Demo Server1 (Location: Washington - USA): https://show.saasfly.io Demo Server2 (Location: Japan - Tokyo): https://demo.saasfly.io See more documentation at https://document.saasfly.io 🌟 Star History 🚀 Getting Started 🖱 One Click Template 📋 Prerequisites Before you start, make sure you have the following installed: Bun & Node.js & Git Linux bash
curl -sL https://gist.github.com/tianzx/874662fb204d32390bc2f2e9e4d2df0a/raw -o ~/downloaded_script.sh && chmod +x ~/downloaded_script.sh && source ~/downloaded_script.sh MacOS bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install git
brew install oven-sh/bun/bun
brew install nvm PostgreSQL You can use Vercel Postgres or a local PostgreSQL server(add POSTGRES_URL env in .env.local) bash
POSTGRES_URL = ''
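For a local PostgreSQL server, POSTGRES_URL is a standard connection string; the credentials and database name below are placeholders, not values shipped with Saasfly:

```bash
# .env.local (illustrative values only)
POSTGRES_URL='postgresql://postgres:postgres@localhost:5432/saasfly'
```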
Installation To get started with this boilerplate, we offer two options: Use the bun create command (🌟 strongly recommended 🌟): bash
bun create saasfly

Manually clone the repository: bash
git clone https://github.com/saasfly/saasfly.git
cd saasfly
bun install Setup Follow these steps to set up your project: Set up the environment variables: bash
cp .env.example .env.local
// (you must have a database prepared before running this command)
bun db:push Run the development server: bash
bun run dev:web Open http://localhost:3000 in your browser to see the result. (Optional alpha) bun run tailwind-config-viewer Open http://localhost:3333 in your browser to see your Tailwind CSS configuration 🥺 Project Roadmap Admin Dashboard Page (in alpha !!!) only provide static page now and we plan to integrate with headless arch You can provide your admin account and change ADMIN_EMAIL="admin@saasfly.io,root@saasfly.io" in .env.local and access host:port/admin/dashboard Based on security concerns, we will not provide online demos for the time being. Consider integrating Payload CMS. ⭐ Features 🐭 Frameworks Next.js - The React Framework for the Web (with App Directory ) NextAuth.js - Authentication for Next.js Kysely - The type-safe SQL query builder for TypeScript Prisma - Next-generation ORM for Node.js and TypeScript, used as a schema management tool React-email - A React renderer for creating beautiful emails using React components 🐮 Platforms Vercel – Deploy your Next.js app with ease Stripe – Payment processing for internet businesses Resend – Email marketing platform for developers 🐯 Enterprise Features i18n - Support for internationalization SEO - Search engine optimization MonoRepo - Monorepo for better code management T3 Env - Manage your environment variables with ease 🐰 Data Fetching trpc – End-to-end typesafe APIs made easy tanstack/react-query – Hooks for fetching, caching and updating asynchronous data in React 🐲 Global State Management Zustand – Small, fast and scalable state management for React 🐒 UI Tailwind CSS – Utility-first CSS framework for rapid UI development Shadcn/ui – Re-usable components built using Radix UI and Tailwind CSS Framer Motion – Motion library for React to animate components with ease Lucide – Beautifully simple, pixel-perfect icons next/font – Optimize custom fonts and remove external network requests for improved performance 🐴 Code Quality TypeScript – Static type checker for end-to-end type safety Prettier – Opinionated code formatter for consistent code style ESLint – Pluggable linter for Next.js and TypeScript Husky – Git hooks made easy 🐑 Performance Vercel Analytics – Real-time performance metrics for your Next.js app bun.sh – npm alternative for faster and more reliable package management 🐘 Database PostgreSQL – The world's most advanced open source database 📦 Apps and Packages web : The main Next.js application ui : Shared UI components db : Database schema and utilities auth : Authentication utilities email : Email templates and utilities 📜 License This project is licensed under the MIT License. For more information, see the LICENSE file. 🙏 Credits This project was inspired by shadcn's Taxonomy and t3-oss's create-t3-turbo . 👨💻 Contributors Made with contrib.rocks .;Your Next SaaS Template or Boilerplate ! A magic trip start with `bun create saasfly` . The more stars, the more surprises;bun,monorepo,nextauth,saas-boilerplate,stripe,tailwindcss,turborepo,vercel,enterprise-boilerplate,kysely | saasfly/saasfly |
memorysafety/river;River river is a Reverse Proxy Application based on the pingora library from Cloudflare. Current State We reached the initial v0.2.0 release at the end of April (and a small v0.2.1 release for crates.io availability in May), completing the work in Kickstart Spike 1 . As of the end of May, work towards the next features in Kickstart Spike 2 has begun. The next work is focused on: Development of "multiple upstream" features, including: Supporting Load Balancing of upstream servers Supporting Health Checks of upstream servers Supporting Service Discovery of upstream servers Developer and Operator Quality of Life features, including: Supporting basic static HTML file serving Supporting semi-dynamic observability endpoints, e.g. for Prometheus polling Support for hot-reloading of configuration CI for build and test checks on pull requests Development of initial Robustness features, including: Rate limiting of connections and/or requests CIDR/API range-based filtering for rejecting connections Stay tuned for updates on these features! Until further notice, there is no expectation of stability. Demonstration steps At the moment, river can be invoked from the command line. See --help for
all options. Configuration is currently done exclusively via configuration file. See
test-config.toml for an example configuration file. Additionally, see toml-configuration.md for more configuration details. License Licensed under the Apache License, Version 2.0: ( LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0 ). Contribution Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.;This repository is the home of the River reverse proxy application, based on the pingora library from Cloudflare.;[] | memorysafety/river |
nkzw-tech/athena-crisis;Athena Crisis Athena Crisis is an Open Core video game developed by Nakazawa Tech and published by Null . The source code in this repository is licensed under the MIT License and can be used to improve Athena Crisis, build additional tools, study game development with JavaScript or create entirely new turn-based strategy games. The single-player campaign, multiplayer, art, music, and content are not open source. You can try a demo at athenacrisis.com and you can wishlist or purchase Athena Crisis on Steam Early Access or buy Athena Crisis directly to experience the full game. If you like Athena Crisis, please consider a sponsorship to support its development . Setup Athena Crisis requires Node.js and the latest major version of pnpm . [!NOTE] Windows Specific Config Developers on Windows will want to ensure that they are using bash to run package.json scripts. You can configure npm to use git bash by default with the following: bash
npm config set script-shell "C:\\Program Files\\git\\bin\\bash.exe" bash
pnpm install && pnpm dev:setup
pnpm dev Visit localhost:3003 to see the docs page. Documentation & Playground Check out our Athena Crisis Open Source Docs & Playground site. Packages The codebase is split into multiple packages to enforce the separation of concerns. We suggest starting with these packages to get an end-to-end overview: athena → Data structures and algorithms for manipulating map state ( client/server ). apollo → Data structures and algorithms for manipulating game state ( client/server ). hera → Game engine and rendering ( client ). ui → Design system ( client ). docs → Docs & Playground ( client ). These are secondary packages focused on specific domains: art → Handling of assets ( client/build ). codegen → Run pnpm codegen when changing game Action s or ActionResponse s to generate encoded actions and formatters ( build ). dionysus → AI code ( client/server ). hephaestus → Iterator and Immutable data structures ( client/server ). hermes → Campaign related data structures and algorithms ( client/server ). i18n → Internationalization ( client/build ). offline → Offline splash screen for app ( client ). tests → e2e tests. Contributing We welcome contributions to Athena Crisis. Some feature development is funded via Polar : nkzw-tech/athena-crisis on Polar . Here are some guidelines to get you started: The style guide is enforced through tests and linting. Please run pnpm test to run all checks. If they pass, you are good to send a Pull Request. We suggest adding tests to Pull Requests. You can find many examples in the tests folder . Check out The Perfect Development Environment and Fastest Frontend Tooling for tips on how to optimize your environment setup. Join the #tech channel on Discord if you run into issues. We greatly appreciate contributions in the following areas: Bug fixes. AI improvements. New game features. Balancing improvements. Experimental technical explorations. Tests to cover untested functionality. Performance Improvements to core data structures. Separation of concerns into smaller libraries that can be published on npm and consumed by other projects. Q&A What is open source and what isn't? About 75% of all non-content related Athena Crisis code – almost 100,000 lines – is open source, including the core data structures, algorithms, game engine, rendering, AI, and the map editor. Backend implementations such as user management, databases, APIs, realtime spectating, server configuration, and app wrappers for Steam or app stores are not open source. We aim to open source more of the game over time, but the content will remain the intellectual property of Nakazawa Tech KK and therefore not be open source. You can buy and experience Athena Crisis on Steam Early Access or buy it on athenacrisis.com . Why is Athena Crisis open source? Nakazawa Tech is an Open Core company. See the "Athena Crisis is now Open Source" blog post for more information. How is this codebase used at Nakazawa Tech? We use a monorepo for Athena Crisis at Nakazawa Tech and are syncing a portion of the codebase to this repository. Code merged into this open-source repository is automatically synced to the internal monorepo, and vice versa. Athena Crisis always runs the latest version of code from this repository in production. Why are some folders almost empty? To simplify dependency management with pnpm , most of the internal package.json files and the pnpm-lock.yaml are public. 
This makes it easier to share code between the internal monorepo and this repository but leaves some placeholders in this repository, most notably the ares and artemis packages, which can be ignored. Why are packages named after Greek gods? Why not!? At some point it became necessary to split the codebase into multiple packages to share code between the client and server. The first package was named athena , and it was hard to come up with meaningful names for the other packages. We decided to name them after Greek gods because it seemed cute. Over time, many pieces will be extracted into separate packages and published on npm under the @nkzw organization. Please move it to a packages folder and send a Pull Request if you find code that should be extracted into a separate package. How do assets work in this codebase? Assets are not part of this codebase and are not open source. Art and other assets are loaded remotely from the Athena Crisis servers for testing and development. If you want to build your own game based on Athena Crisis, you can use the code as a starting point and replace the assets with your own. I would like to build a commercial project based on this codebase. Can I? Yes, you can. However, any content such as art, music, story, characters and their descriptions are not open source and are the intellectual property of Nakazawa Tech KK. You can use the codebase to build your own game, but you must replace all content with your own. For example, you have to replace all references to assets in Images.tsx or change character descriptions in Unit.tsx if you want to publish your own game. If you'd like to use content from Athena Crisis for commercial or non-commercial purposes, you must obtain a license from Nakazawa Tech KK by emailing license@nakazwa.dev. More information Check out these links to learn more about the tech behind Athena Crisis: Join us on Discord How NOT to Build a Video Game Follow Athena Crisis on Twitter Building the AI for Athena Crisis Athena Crisis Open Source Docs & Playground Supporters | | |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| | |;Athena Crisis is a modern-retro turn-based tactical strategy game. Athena Crisis is open core technology.;css,gamedev,javascript,react,typescript,athena-crisis-is-awesome | nkzw-tech/athena-crisis |
GUI-for-Cores/GUI.for.SingBox;GUI.for.SingBox A GUI program developed by vue3 + wails. Preview Document Community Build 1、Build Environment Node.js link pnpm : npm i -g pnpm Go link Wails link : go install github.com/wailsapp/wails/v2/cmd/wails@latest 2、Pull and Build ```bash
git clone https://github.com/GUI-for-Cores/GUI.for.SingBox.git
cd GUI.for.SingBox/frontend
pnpm install
pnpm build
cd ..
wails build
``` Stargazers over time;GUI for SingBox;gui,sing-box,typescript,vue3,wails,vite5 | GUI-for-Cores/GUI.for.SingBox |
gregberge/twc;TWC creates reusable React + Tailwind CSS components in one line Features ⚡️ Lightweight — only 0.49kb ✨ Autocompletion in all editors 🎨 Adapt the style based on props ♻️ Reuse classes with asChild prop 🦄 Work with all components 😎 Compatible with React Server Components 🚀 First-class tailwind-merge and cva support Documentation Visit our official documentation . Usage Without twc : ```tsx
import * as React from "react";

const Card = React.forwardRef<
  HTMLDivElement,
  React.HTMLAttributes<HTMLDivElement>
>(({ className, ...props }, ref) => (
  <div ref={ref} className={cn("rounded-lg border bg-slate-100 text-white shadow-sm", className)} {...props} />
));
```

With twc :

```tsx
import { twc } from "react-twc";

const Card = twc.div`rounded-lg border bg-slate-100 text-white shadow-sm`;
```
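A quick usage sketch: the generated component behaves like a normal div, and any extra className you pass is combined with the classes baked into Card:

```tsx
export default function Example() {
  return <Card className="p-4">Hello from twc</Card>;
}
```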
Contributing Feel like contributing? That's awesome! We have a contributing guide to help guide you. Want to help improve the docs? The docsite lives in the monorepo . If you're interested in contributing to the documentation, check out the contributing guide . Support Having trouble? Ping me on X Acknowledgement The development of TWC was only possible due to the inspiration and ideas from these amazing projects. styled-components - where it all started License MIT License © 2023-Present Greg Bergé;Create reusable React + Tailwind CSS components in one line ✨;react,styled-components,tailwindcss | gregberge/twc
devv-ai/devv;Devv AI English | 简体中文 | 日本語 Devv AI is an AI-powered search engine specifically designed for developers, providing fast and accurate results to programming-related queries. This is an issues-only repository for Devv AI . Please use this repository to: Report bugs Request new features Ask questions Provide feedback We highly appreciate your contributions in creating new tickets for bugs or feature requests. To help us better understand and address your concerns, please provide as much detail as possible when submitting a new ticket. Our goal is to make Devv AI an indispensable tool for developers, and your feedback is essential to achieving this objective. Getting Started Visit Devv AI Enter your search query related to programming, frameworks, libraries, or tools Browse through the search results and find the most relevant information Key Features AI-powered search engine tailored for developers Fast and accurate search results Supports various programming languages and frameworks Continuously learning and improving based on user interactions Upcoming Features We are actively working on expanding Devv AI's capabilities. Some of the features in our pipeline include: [x] Support for additional languages, such as Japanese, Chinese, and Korean [x] Agent mode for real-time coding assistance [ ] Enhanced user experience for searching and navigating results [ ] Integration with popular developer tools and platforms Roadmap We are currently in the process of defining our roadmap. Stay tuned for updates on our short-term and long-term goals, milestones, and timelines. Contributing We welcome contributions from the developer community to help improve Devv AI. If you encounter any bugs, have feature requests, or want to provide feedback, please create a new issue in this repository. We appreciate your valuable input! Contact If you have any questions or need further assistance, feel free to reach out to us at support@devv.ai . Happy coding with Devv AI!;An AI-powered search engine for developers.;[] | devv-ai/devv |
GaParmar/img2img-turbo;img2img-turbo Paper | Sketch2Image Demo Quick start: Running Locally | Gradio (locally hosted) | Training Cat Sketching Fish Sketching We propose a general method for adapting a single-step diffusion model, such as SD-Turbo, to new tasks and domains through adversarial learning. This enables us to leverage the internal knowledge of pre-trained diffusion models while achieving efficient inference (e.g., for 512x512 images, 0.29 seconds on A6000 and 0.11 seconds on A100). Our one-step conditional models CycleGAN-Turbo and pix2pix-turbo can perform various image-to-image translation tasks for both unpaired and paired settings. CycleGAN-Turbo outperforms existing GAN-based and diffusion-based methods, while pix2pix-turbo is on par with recent works such as ControlNet for Sketch2Photo and Edge2Image, but with one-step inference. One-Step Image Translation with Text-to-Image Models Gaurav Parmar , Taesung Park , Srinivasa Narasimhan , Jun-Yan Zhu CMU and Adobe, arXiv 2403.12036 Results Paired Translation with pix2pix-turbo Edge to Image Generating Diverse Outputs By varying the input noise map, our method can generate diverse outputs from the same input conditioning.
The output style can be controlled by changing the text prompt. Unpaired Translation with CycleGAN-Turbo Day to Night Night to Day Clear to Rainy Rainy to Clear Method Our Generator Architecture: We tightly integrate three separate modules in the original latent diffusion models into a single end-to-end network with small trainable weights. This architecture allows us to translate the input image x to the output y, while retaining the input scene structure. We use LoRA adapters in each module, introduce skip connections and Zero-Convs between input and output, and retrain the first layer of the U-Net. Blue boxes indicate trainable layers. Semi-transparent layers are frozen. The same generator can be used for various GAN objectives. Getting Started Environment Setup - We provide a conda env file that contains all the required dependencies. conda env create -f environment.yaml - Following this, you can activate the conda environment with the command below. conda activate img2img-turbo - Or use virtual environment: python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt Paired Image Translation (pix2pix-turbo) - The following command takes an image file and a prompt as inputs, extracts the canny edges, and saves the results in the directory specified. bash
python src/inference_paired.py --model_name "edge_to_image" \
--input_image "assets/examples/bird.png" \
--prompt "a blue bird" \
--output_dir "outputs" Input Image Canny Edges Model Output The following command takes a sketch and a prompt as inputs, and saves the results in the directory specified. bash
python src/inference_paired.py --model_name "sketch_to_image_stochastic" \
--input_image "assets/examples/sketch_input.png" --gamma 0.4 \
--prompt "ethereal fantasy concept art of an asteroid. magnificent, celestial, ethereal, painterly, epic, majestic, magical, fantasy art, cover art, dreamy" \
--output_dir "outputs" Input Model Output Unpaired Image Translation (CycleGAN-Turbo) - The following command takes a day image file as input, and saves the output night in the directory specified. python src/inference_unpaired.py --model_name "day_to_night" \
--input_image "assets/examples/day2night_input.png" --output_dir "outputs" Input (day) Model Output (night) The following command takes a night image file as input, and saves the output day in the directory specified. python src/inference_unpaired.py --model_name "night_to_day" \
--input_image "assets/examples/night2day_input.png" --output_dir "outputs" Input (night) Model Output (day) The following command takes a clear image file as input, and saves the output rainy in the directory specified. python src/inference_unpaired.py --model_name "clear_to_rainy" \
--input_image "assets/examples/clear2rainy_input.png" --output_dir "outputs" Input (clear) Model Output (rainy) The following command takes a rainy image file as input, and saves the output clear in the directory specified. python src/inference_unpaired.py --model_name "rainy_to_clear" \
--input_image "assets/examples/rainy2clear_input.png" --output_dir "outputs" Input (rainy) Model Output (clear) Gradio Demo We provide a Gradio demo for the paired image translation tasks. The following command will launch the sketch to image locally using gradio. gradio gradio_sketch2image.py The following command will launch the canny edge to image gradio demo locally. gradio gradio_canny2image.py Training with your own data See the steps here for training a pix2pix-turbo model on your paired data. See the steps here for training a CycleGAN-Turbo model on your unpaired data. Acknowledgment Our work uses the Stable Diffusion-Turbo as the base model with the following LICENSE .;One-step image-to-image with Stable Diffusion turbo: sketch2image, day2night, and more;computer-vision,deep-learning,generative-adversarial-network,generative-art,stable-diffusion | GaParmar/img2img-turbo |
AnswerDotAI/fsdp_qlora;fsdp_qlora Training LLMs with Quantized LoRA + FSDP. Read our announcement blog post . You should treat this script as an alpha/preview release. If you’re not comfortable with testing and debugging models, we’d suggest holding off for a few months while the community more fully tests the approach. Integrations FSDP+QLoRA has been integrated into:
- Axolotl : experimental support Installation The following steps should work (tested on Cuda 11.7, 11.8 and 12.1):
- Clone https://github.com/AnswerDotAI/fsdp_qlora
- pip install llama-recipes fastcore "transformers!=4.38.*,!=4.39.*" --extra-index-url https://download.pytorch.org/whl/test/cu118 as an easy way to get most dependencies (replace 118 with your desired Cuda version)
- Install bitsandbytes pip install bitsandbytes>=0.43.0 - Run huggingface-cli login (to access Llama 2)
- Optional Libraries:
- HQQ quantization: follow the HQQ installation instructions . Our training script uses HQQBackend.ATEN_BACKPROP , so also make sure to build the custom kernels cd hqq/kernels && python setup_cuda.py install .
- Weights and Biases logging: pip install wandb - Pytorch >= 2.2 is recommended to make use of the native flash-attention 2 kernel. Finetune Llama-2 70B on Dual 24GB GPUs Once installed, run cd fsdp_qlora and then run the following command to begin finetuning Llama-2 70B on Alpaca at a maximum sequence length of 512 tokens. bash
python train.py \
--model_name meta-llama/Llama-2-70b-hf \
--batch_size 2 \
--context_length 512 \
--precision bf16 \
--train_type qlora \
--use_gradient_checkpointing true \
--use_cpu_offload true \
--dataset alpaca \
--reentrant_checkpointing true This example command currently uses just over 128GB of CPU RAM. If you only have 128GB available, we recommend making a 10-20GB swap file to accommodate the initial spike in usage. Training Options For quantization we support HQQ and bitsandbytes. We're currently doing benchmarking to help you decide which to use. If you do use bitsandbytes, be sure to pass --reentrant_checkpointing True to avoid triggering a bug in bitsandbytes which results in high memory usage (a fix is in progress). --train_type full Full params fine-tuning. bash
export CUDA_VISIBLE_DEVICES=4,5 # optionally set devices
python train.py \
--world_size 2 \ # optional, on a single machine will be set automatically
--master_port 12356 \ # optional, defaults to 12355
--model_name meta-llama/Llama-2-7b-hf \
--gradient_accumulation_steps 4 \
--batch_size 8 \
--context_length 512 \
--precision bf16 \
--train_type full \
--use_gradient_checkpointing true \
--use_cpu_offload false \
--use_activation_cpu_offload false \
--log_to wandb \
--dataset alpaca --train_type lora LoRA fine-tuning using HF PEFT library. diff
- --train_type full \
+ --train_type lora \ --train_type custom_lora LoRA fine-tuning using a custom LoRA module. diff
- --train_type full \
+ --train_type custom_lora \ --train_type qlora 4-bit quantized LoRA fine-tuning using bitsandbytes Linear4bit layer with NF4 quantization and HF PEFT library. diff
- --train_type full \
+ --train_type qlora \
+ --reentrant_checkpointing true \ --train_type custom_qlora 4-bit quantized LoRA fine-tuning using bitsandbytes Linear4bit layer with NF4 quantization and a custom LoRA module. diff
- --train_type full \
+ --train_type custom_qlora \
+ --reentrant_checkpointing true \ --train_type hqq_lora 4-bit quantized LoRA fine-tuning using HQQ library and a custom LoRA module. diff
- --train_type full \
+ --train_type hqq_lora \ --train_type bnb_dora 4-bit quantized DoRA fine-tuning using bitsandbytes Linear4bit layer with NF4 quantization and a custom DoRA module. diff
- --train_type full \
+ --train_type bnb_dora \ --train_type hqq_dora 4-bit quantized DoRA fine-tuning using HQQ library and a custom DoRA module. diff
- --train_type full \
+ --train_type hqq_dora \ --train_type bnb_llama_pro 4-bit quantized Llama-Pro fine-tuning using bitsandbytes Linear4bit layer with NF4 quantization. To create llama-pro weights, run the following command: bash
python scripts/block_expansion.py \
--model_name meta-llama/Llama-2-7b-hf \
--output_dir /path/to/llama_pro_weights_directory \
--expansion_rate 0.1 diff
- --train_type full \
+ --train_type bnb_llama_pro \
+ --llama_pro_path /path/to/llama_pro_weights_directory \ --train_type hqq_llama_pro 4-bit quantized Llama-Pro fine-tuning using HQQ library. To create llama-pro weights, run the following command: bash
python scripts/block_expansion.py \
--model_name meta-llama/Llama-2-7b-hf \
--output_dir /path/to/llama_pro_weights_directory \
--expansion_rate 0.1 diff
- --train_type full \
+ --train_type hqq_llama_pro \
+ --llama_pro_path /path/to/llama_pro_weights_directory \ Low Memory Loading During quantized LoRA training we use a custom quantization and loading code to avoid loading the entire model into GPU memory before sharding it across GPUs. This is the default behavior of our training script when any of the following training options "qlora", "custom_qlora", "hqq_lora" is used. Other training options are already optimized for low memory loading to their best extent. We load the weights iteratively, quantize them on the GPU and place them back to CPU or meta device (based on their rank) concurrently a few layers at a time. We do this across all GPUs to initialize the quantization parameters, such as zero and scale, while using sync_module_states=True to sync the model parameters and buffers across all GPUs during FSDP initialization. Mixed Precision Training --precision bf16 (pure bfloat16) This will cast all the model parameters to torch.bfloat16 before training and won't use FSDP mixed precision. As a result, sharded and unsharded params will be stored in bf16, forward and backward passes will be done in bf16, and gradient reduction and updates will be done in bf16. --precision fp32 (pure float32) This will cast all the model parameters to torch.float32 before training and won't use FSDP mixed precision. As a result, sharded and unsharded params will be stored in fp32, forward and backward passes will be done in fp32, and gradient reduction and updates will be done in fp32. --precision mp_fp16_autocast (mixed float16 with autocast) This will cast all the model parameters to torch.float32 before training and will use FSDP mixed precision with mp_policy = MixedPrecision(param_dtype=torch.float32, reduce_dtype=torch.float32, buffer_dtype=torch.float32) As a result, sharded and unsharded params will be stored in fp32. It will use autocast(torch.float16) for forward and backward passes, and autocast(torch.float16) for gradient reduction and updates. --precision mp_bf16_autocast (mixed bfloat16 with autocast) This will cast all the model parameters to torch.float32 before training and will use FSDP mixed precision with mp_policy = MixedPrecision(param_dtype=torch.float32, reduce_dtype=torch.float32, buffer_dtype=torch.float32) As a result, sharded and unsharded params will be stored in fp32. It will use autocast(torch.bfloat16) for forward and backward passes, and autocast(torch.bfloat16) for gradient reduction and updates. --precision mp_bf16_buffers_autocast (bfloat16 params and float32 buffers with autocast) This will cast all the model parameters to torch.bfloat16 before training but will keep the buffers in torch.float32 and will use FSDP mixed precision with mp_policy = MixedPrecision(param_dtype=torch.bfloat16, reduce_dtype=torch.bfloat16, buffer_dtype=torch.float32) As a result, sharded and unsharded params will be stored in bf16. It will use autocast(torch.bfloat16) for forward and backward passes, and autocast(torch.bfloat16) for gradient reduction and updates. Buffers and only eligible operations in autocast will be performed in bf16. This option is important for RoPE layer which gives incorrect results when cast to lower precision especially with longer context lengths. Comparison to an existing trainer hf_train.py uses TRL's SFTTrainer for a comparison run.
To match with our script, modify the dataloading code to train on everything (not just completions) and then run train.py --train_type qlora --dataset guanaco --batch_size 8 --lr_scheduler cosine --log_to wandb --save_model True --output_dir guanaco_7B --gradient_accumulation_steps 2 --lr 2e-4 . The SFTTrainer version has to run with a lower batch size (4 vs 8) so we only do 2 gradient accumulation steps vs 4 in the QLoRA+FSDP version. Converting Saved Models If you specify --save_model True the adapter layers will be saved as a state dict. To convert to the regular Hugging Face format and upload to the hub, see: Converting the State Dict.ipynb If "custom_qlora", "hqq_lora" training options are used, then only the trainable LoRA parameters will be saved. Before inference, you need to load and quantize the base model again, and separately load the saved LoRA parameters. You can alternatively test to see if merging base model weights and trained LoRA weights and then quantizing them performs similarly to keeping the parameters separately as done during training. To make use of torch.compile with HQQ, see https://github.com/mobiusml/hqq/issues/18. Limitations While QLoRA finetuning works with FSDP, there are some rough edges to be aware of with this alpha release and our example script. First, the current release of Transformer AutoModel.from_pretrained cannot be used to load models into quantized weights, as it does not support the new quant_storage or quantization flag. Loading pretrained models requires writing or using custom model loading code. We provide an example of how to load and quantize a QLoRA model for finetuning in our demo script. We are actively working with Hugging Face to resolve this incompatibility in future Transformers and PEFT releases. Second, while FSDP’s Mixed Precision works with QLoRA, practitioners need to be careful to set the MixedPrecision.param_dtype to match the Linear4Bit.quant_storage dtype (see the short sketch at the end of this entry). Otherwise, FSDP’s Mixed Precision could cast the quantized weights to a different precision, essentially turning them into random weights. Our example script shows how to avoid this potential pitfall, and we will be happy to assist model training libraries in correctly exposing FSDP’s Mixed Precision options to users when training with QLoRA. Example: Llama 70B 4-A100 40GB Training ```bash BnB QLoRA export CUDA_VISIBLE_DEVICES=4,5,6,7
python train.py \
--world_size 4 \
--master_port 12356 \
--model_name meta-llama/Llama-2-70b-hf \
--gradient_accumulation_steps 4 \
--batch_size 2 \
--context_length 512 \
--precision bf16_buffers_autocast \
--train_type custom_qlora \
--use_gradient_checkpointing true \
--reentrant_checkpointing true \
--use_cpu_offload false \
--log_to stdout \
--dataset alpaca HQQ QLoRA export CUDA_VISIBLE_DEVICES=4,5,6,7
python train.py \
--world_size 4 \
--master_port 12356 \
--model_name meta-llama/Llama-2-70b-hf \
--gradient_accumulation_steps 4 \
--batch_size 2 \
--context_length 512 \
--precision bf16_buffers_autocast \
--train_type hqq_lora \
--use_gradient_checkpointing true \
--use_cpu_offload false \
--log_to stdout \
--dataset alpaca
``` Note: For large batch size or long context training HQQ LoRA is a bit more memory efficient compared to BnB LoRA with re-entrant checkpointing. So if you are running into OOM issues, try using HQQ LoRA. SLURM Training See fsdp_multi_node.sh for an example training script using multi-node training with SLURM. Add support for a new model First, import the new model's transformer, attention, and MLP layers from Transformers: python
from transformers.models.mistral.modeling_mistral import MistralDecoderLayer, MISTRAL_ATTENTION_CLASSES, MistralMLP Then in the get_wrapping_policy function, add the attention, MLP, and transformer layers to the self_attn_policy_fn , mlp_policy_fn , and transformer_wrap_policy wrapping policy methods: ```python
def get_wrapping_policy(custom_policy:bool=False): def self_attn_policy_fn(module):
return isinstance(module, (*LLAMA_ATTENTION_CLASSES.values(), *MISTRAL_ATTENTION_CLASSES.values()))
def mlp_policy_fn(module):
return isinstance(module, (LlamaMLP, MistralMLP))
transformer_wrap_policy = functools.partial(
transformer_auto_wrap_policy,
transformer_layer_cls=(LlamaDecoderLayer, MistralDecoderLayer),
) ``` Finally, add gradient checkpointing support by adding the transformer layer to check_fn : python
if args["use_gradient_checkpointing"]:
check_fn = lambda submodule: isinstance(submodule, (LlamaDecoderLayer, MistralDecoderLayer));Training LLMs with QLoRA + FSDP;[] | AnswerDotAI/fsdp_qlora |
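As a minimal sketch of the Mixed Precision pitfall noted in the Limitations section above (the dtypes here are illustrative assumptions; match param_dtype to whatever quant_storage dtype your own loading code passes to Linear4bit):

```python
import torch
from torch.distributed.fsdp import MixedPrecision

# Assumption for illustration: the model was quantized with
# Linear4bit(..., quant_storage=torch.bfloat16).
quant_storage_dtype = torch.bfloat16

mp_policy = MixedPrecision(
    param_dtype=quant_storage_dtype,  # must match the quant_storage dtype
    reduce_dtype=torch.bfloat16,      # dtype used for gradient reduction
    buffer_dtype=torch.float32,       # keep buffers (e.g. RoPE tables) in fp32
)
# Pass this policy when wrapping the model:
# model = FSDP(model, mixed_precision=mp_policy, ...)
```

Keeping param_dtype equal to the quantized storage dtype prevents FSDP from recasting the packed 4-bit weights, which would otherwise corrupt them.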
Lessica/TrollRecorder;TrollRecorder Not the first, but the best phone call recorder with TrollStore . Supports iOS 15.0 to iOS 17.0 No iOS 14 support. Jailbreak Version Popular jailbreaks are also supported. Get TrollRecorder JB . Rootless jailbreaks: Dopamine / palera1n Legacy jailbreaks: unc0ver / checkra1n About Other Installation Methods There is no additional support for RootHide. Use the TrollStore version instead. eSign or LiveContainer are not supported due to their own limitations. Pro Version Core features of TrollRecorder are free to use. You can buy a Pro license to unlock advanced features. Paid licenses are valid for a lifetime and for up to 5 devices. 1.x Milestones [x] Hover-ball on Lockscreen Notification on Lockscreen [x] iCloud Drive Smart Cloud Archive [ ] iCloud Backup [ ] Google Drive [ ] Dropbox 2.x Milestones A botch job is a waste of time. That's why we always build our app to the highest quality. [ ] Record System Audio (i.e. What You Hear) [ ] Record like Voice Memos [ ] Built-In Audio Editor Video Introduction Special Thanks TrollStore and Dopamine by @opa334dev AudioRecorder XS , User Interface without SpringBoard and iOS Runtime Headers by @limneos , one of my favourite developers. CallAssist by @xybp888 LearnAudioToolBox by @liuxinxiaoyue Translators Spanish by @Deci8BelioS French by @DzMoha Traditional Chinese (Hong Kong) by @CaslenZ Traditional Chinese (Taiwan) by @雲端戰神一刀秒 Vietnamese by @2311WithLuv Localization At least, you need to provide Localizable.strings and InfoPlist.strings . Example: Localizable.strings . License The core of TrollRecorder (command line tool / CLI), and only itself, is open-sourced here. The command line tools of TrollRecorder are Free Software licensed under the GNU General Public License .;(i18n/CLI) Not the first, but the best phone call recorder with TrollStore.;audio-recorder,trollstore,ios,jailbreak,tweak | Lessica/TrollRecorder
SunoAI-API/Suno-API;简体中文 | 日本語 Good news: I provide the Suno AI API; no deployment is required and no Suno subscription is required. It is a lower-price, more convenient way to use the Suno API.
Website: http://app.sunoaiapi.com Unofficial API This is an unofficial API based on Python and FastAPI. It currently supports generating songs, lyrics, etc. It comes with a built-in token maintenance and keep-alive feature, so you don't have to worry about the token expiring. Features Automatic token maintenance and keep-alive Fully asynchronous, fast, suitable for later expansion Simple code, easy to maintain, convenient for secondary development Contact me https://t.me/austin2035 Usage Configuration Edit the .env.example file, rename to .env and fill in the session_id and cookie. These are initially obtained from the browser, and will be automatically kept alive later. Run Install dependencies bash
pip3 install -r requirements.txt For this part, refer to the FastAPI documentation on your own. bash
uvicorn main:app Docker bash
docker compose build && docker compose up Documentation After setting up the service, visit /docs Useful resources chatgpt web, midjourney, gpts,tts, whisper,suno-v3;This is an unofficial Suno AI API based on Python and FastAPI. It currently supports generating songs, lyrics, etc.需要 suno API的可以看看这里👇;suno-ai,suno-ai-api,suno | SunoAI-API/Suno-API |
BuilderIO/micro-agent;An AI agent that writes and fixes code for you. Micro Agent Just run micro-agent , give it a prompt, and it'll generate a test and then iterate on code until all test cases pass. Why? LLMs are great at giving you broken code, and it can take repeated iteration to get that code to work as expected. So why do this manually when AI can handle not just the generation but also the iteration and fixing? Why a "micro" agent? AI agents are cool, but general-purpose coding agents rarely work as hoped or promised. They tend to go haywire with compounding errors. Think of your Roomba getting stuck under a table, x1000. The idea of a micro agent is to Create a definitive test case that can give clear feedback if the code works as intended or not, and Iterate on code until all test cases pass Read more on why Micro Agent exists . What this project is not This project is not trying to be an end-to-end developer. AI agents are not capable enough to reliably try to be that yet (or probably very soon). This project won't install modules, read and write multiple files, or do anything else that is highly likely to cause havoc when it inevitably fails. It's a micro agent. It's small, focused, and does one thing as well as possible: write a test, then produce code that passes that test. Installation Micro Agent requires Node.js v14 or later. bash
npm install -g @builder.io/micro-agent Getting Started The best way to get started is to run Micro Agent in interactive mode, where it will ask you questions about the code it generates and use your feedback to improve the code it generates. bash
micro-agent Look at that, you're now a test-driven developer. You're welcome. Running Manually Add an LLM API key Micro Agent works with Claude, OpenAI, Ollama, or any OpenAI compatible provider such as Groq. You need to add your API key to the CLI: bash
micro-agent config set OPENAI_KEY=<your token>
micro-agent config set MODEL=gpt-4o
micro-agent config set ANTHROPIC_KEY=<your token>
micro-agent config set MODEL=claude To use a custom OpenAI API endpoint, such as for use with Ollama or Groq, you can set the endpoint with: bash
micro-agent config set OPENAI_API_ENDPOINT=<your endpoint> Unit test matching To run the Micro Agent on a file in unit test matching mode, you need to provide a test script that will run after each code generation attempt. For instance: bash
micro-agent ./file-to-edit.ts -t "npm test" This will run the Micro Agent on the file ./file-to-edit.ts running npm test and will write code until the tests pass. The above assumes the following file structure: bash
some-folder
├──file-to-edit.ts
├──file-to-edit.test.ts # test file. if you need a different path, use the -f argument
└──file-to-edit.prompt.md # optional prompt file. if you need a different path, use the -p argument By default, Micro Agent assumes you have a test file with the same name as the editing file but with .test.ts appended, such as ./file-to-edit.test.ts for the above examples. If this is not the case, you can specify the test file with the -f flag. You can also add a prompt to help guide the code generation, either at a file located at <filename>.prompt.md like ./file-to-edit.prompt.md or by specifying the prompt file with the -p . For instance: bash
micro-agent ./file-to-edit.ts -t "npm test" -f ./file-to-edit.spec.ts -p ./path-to-prompt.prompt.md Visual matching (experimental) [!WARNING]
This feature is experimental and under active development. Use with caution. Micro Agent can also help you match a design. To do this, you need to provide a design and a local URL to your rendered code. For instance: bash
micro-agent ./app/about/page.tsx --visual localhost:3000/about Micro Agent will then generate code until the rendered output of your code more closely matches a screenshot file that you place next to the code you are editing (in this case, it would be ./app/about/page.png ). The above assumes the following file structure: bash
app/about
├──page.tsx # The code to edit
├──page.png # The screenshot to match
└──page.prompt.md # Optional, additional instructions for the AI Adding an Anthropic API key [!NOTE]
Using the visual matching feature requires an Anthropic API key. OpenAI is simply not good at visual matching. We recommend using Anthropic for visual matching. To use Anthropic, you need to add your API key to the CLI: bash
micro-agent config set ANTHROPIC_KEY=<your token> Visual matching uses a multi-agent approach where Anthropic Claude Opus will do the visual matching and feedback, and then OpenAI will generate the code to match the design and address the feedback. Integration with Figma Micro Agent can also integrate with Visual Copilot to connect directly with Figma to ensure the highest fidelity possible design to code, including fully reusing the exact components and design tokens from your codebase. Visual Copilot connects directly to Figma to assist with pixel perfect conversion, exact design token mapping, and precise reusage of your components in the generated output. Then, Micro Agent can take the output of Visual Copilot and make final adjustments to the code to ensure it passes TSC, lint, tests, and fully matches your design including final tweaks. Configuration Max runs By default, Micro Agent will do 10 runs. If tests don't pass in 10 runs, it will stop. You can change this with the -m flag, like micro-agent ./file-to-edit.ts -m 20 . Config You can configure the CLI with the config command, for instance to set your OpenAI API key: bash
micro-agent config set OPENAI_KEY=<your token> or to set an Anthropic key: bash
micro-agent config set ANTHROPIC_KEY=<your token> By default Micro Agent uses gpt-4o as the model, but you can override it with the MODEL config option (or environment variable): bash
micro-agent config set MODEL=gpt-3.5-turbo or, if you supply an Anthropic key, you can use any Claude model. by default claude is an alias to claude-3-5-sonnet-20240620 : bash
micro-agent config set MODEL=claude Config UI To use a more visual interface to view and set config options you can type: bash
micro-agent config To get an interactive UI like below: bash
◆ Set config:
│ ○ OpenAI Key
│ ○ Anthropic Key
│ ○ OpenAI API Endpoint
│ ● Model (gpt-4o)
│ ○ Done
└ Environment variables All config options can be overridden as environment variables, for instance: bash
MODEL=gpt-3.5-turbo micro-agent ./file-to-edit.ts -t "npm test" Upgrading Check the installed version with: bash
micro-agent --version If it's not the latest version , run: bash
micro-agent update Or manually update with: bash
npm update -g @builder.io/micro-agent Contributing We would love your contributions to make this project better, and gladly accept PRs. Please see ./CONTRIBUTING.md for how to contribute. If you are looking for a good first issue, check out the good first issue label. Feedback If you have any feedback, please open an issue or @ me at @steve8708 on Twitter. Usage ```bash
Usage:
micro-agent [flags...]
micro-agent Commands:
config Configure the CLI
update Update Micro Agent to the latest version Flags:
-h, --help Show help
-m, --max-runs The maximum number of runs to attempt
-p, --prompt Prompt to run
-t, --test The test script to run
-f, --test-file The test file to run
-v, --visual Visual matching URL
--thread Thread ID to resume
--version Show version
```;An AI agent that writes (actually useful) code for you;agent,ai,figma,javascript,react,typescript | BuilderIO/micro-agent |
lowlighter/matcha;🍵 matcha.css matcha.css is a pure CSS library designed to style HTML elements similarly to a default browser stylesheet, eliminating the need for users to manually patch their documents. Ideal for fast prototyping, static HTML pages, Markdown-generated documents, and developers seeking to streamline their workflow without delving into CSS intricacies and want to make use of the full range of available HTML elements . ✅ No build steps ✅ No dependencies ✅ No JavaScript ✅ No configuration needed ✅ No refactoring required ✅ ~7kB gzipped (can be further reduced) 📸 Screenshot examples 🥢 Why choose matcha.css ? 🍜 Agnostic Works seamlessly with any document and covers a broader range of HTML elements compared to similar libraries. It remains unobtrusive by leveraging CSS pseudo-elements and offers extensive browser support . 🍥 Reversible Simply include its <link rel="stylesheet"> to get started, and remove it whenever necessary without the need for document refactoring or cleanup. 🍡 Semantic Adapts styling based on elements hierarchy, providing intuitive behaviors such as "implicit submenus" when nesting <menu> elements, required field indicator ( * ) when a <label> is paired with <input required> , etc. 🍱 Customizable Brew your own build using our custom builder to select specific features and reduce the final build size according to your project's needs. 🍘 Open-source Released under the MIT License , freely available at github.com/lowlighter/matcha . 📖 Usage To utilize matcha.css , just include the following line in the <head> section of your document. It's that simple! html
<link rel="stylesheet" href="https://matcha.mizu.sh/matcha.css"> Assets are hosted on Vercel but matcha.css is also available on npm and CDN services that distribute npm packages such
as JSdelivr . All published versions are available in the /v/ directory. By default, the main branch is served. 🍴 À la carte Each subdirectory listed in /styles directory is also directly served from matcha.mizu.sh . For example, if you only wish to include the @syntax-highlighting styles rather than using the default build or a custom one, you could use: html
<link rel="stylesheet" href="https://matcha.mizu.sh/styles/@syntax-highlighting/mod.css"> However, note that unless you provide your own CSS variables, you will most likely need to include the @root package as it contains all matcha.css variable definitions. All mod.css files are also aliased to their respective parent directories for convenience, which means you can also use: html
<link rel="stylesheet" href="https://matcha.mizu.sh/@syntax-highlighting.css"> 🫰 Contributing 🎌 Project scope Respect accessibility and usability best practices Respect end-users' preferences (e.g., color scheme) Be desktop and mobile friendly Not drastically change its current styling (unless to fulfill one of the previously mentioned principles) Remain lightweight, modular and customizable Additional features considered as "bloat" will be excluded by default and will be opt-in !important rules will never be used to ensure users can easily override styles 📂 Project structure This project is separated into three main directories: /api for serverless functions run on Vercel /app for entry points, static assets, and build scripts /styles for CSS source files 🎨 About /styles directory Each subdirectory within this folder is intended to be mostly self-contained and scoped. It helps to keep the codebase organized while also allowing users to cherry-pick specific features and create
custom builds. Extra features should be prefixed using the character @ and should most likely be excluded by default in the builder to avoid bloating the default build. 🧑💻 Development lifecycle When submitting a pull request, the preview will be available on Vercel . Maintainers and other contributors can review the changes and provide feedback before merging. Local development is intended to be done using the deno runtime. If you do not wish to install it, you can also use the provided devcontainer configuration to run
the project in a containerized environment or directly on GitHub Codespaces . To start the development server, run the following command: sh
deno task serve Before submitting your changes, ensure everything is correctly formatted by running the following command: sh
deno task fmt 📜 License MIT License
Copyright (c) 2024-present Simon Lecoq (lowlighter);🍵 Drop-in semantic styling library in pure CSS. Highly customizable and perfect for simple websites and prototyping web apps!;classless,classless-css,css,css-framework,design,minimalist,semantic-web,simple,theme,web | lowlighter/matcha |
mhamilton723/FeatUp;FeatUp: A Model-Agnostic Framework for Features at Any Resolution ICLR 2024 Stephanie Fu* , Mark Hamilton* , Laura Brandt , Axel Feldman , Zhoutong Zhang , William T. Freeman *Equal Contribution. TL;DR: FeatUp improves the spatial resolution of any model's features by 16-32x without changing their semantics. https://github.com/mhamilton723/FeatUp/assets/6456637/8fb5aa7f-4514-4a97-aebf-76065163cdfd Contents Install Using Pretrained Upsamplers Fitting an Implicit Upsampler Coming Soon Citation Contact Install Pip For those just looking to quickly use the FeatUp APIs, install via: shell script
pip install git+https://github.com/mhamilton723/FeatUp Local Development To install FeatUp for local development and to get access to the sample images install using the following: shell script
git clone https://github.com/mhamilton723/FeatUp.git
cd FeatUp
pip install -e . Using Pretrained Upsamplers To see examples of pretrained model usage, please see our Colab notebook (a short usage sketch is also included at the end of this entry). We currently supply the following pretrained versions of FeatUp's JBU upsampler: | Model Name | Checkpoint | Checkpoint (No LayerNorm) | Torch Hub Repository | Torch Hub Name |
|------------|----------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|----------------------|----------------|
| DINO | Download | Download | mhamilton723/FeatUp | dino16 |
| DINO v2 | Download | Download | mhamilton723/FeatUp | dinov2 |
| CLIP | Download | Download | mhamilton723/FeatUp | clip |
| MaskCLIP | n/a | Download | mhamilton723/FeatUp | maskclip |
| ViT | Download | Download | mhamilton723/FeatUp | vit |
| ResNet50 | Download | Download | mhamilton723/FeatUp | resnet50 | For example, to load the FeatUp JBU upsampler for the DINO backbone without an additional LayerNorm on the spatial features: python
upsampler = torch.hub.load("mhamilton723/FeatUp", 'dino16', use_norm=False) To load upsamplers trained on backbones with additional LayerNorm operations which makes training and transfer learning a bit more stable: python
upsampler = torch.hub.load("mhamilton723/FeatUp", 'dino16') Fitting an Implicit Upsampler to an Image To train an implicit upsampler for a given image and backbone first clone the repository and install it for local development . Then run python
cd featup
python train_implicit_upsampler.py Parameters for this training operation can be found in the implicit_upsampler config file . Local Gradio Demo To run our HuggingFace Spaces hosted FeatUp demo locally first install FeatUp for local development. Then run: shell
python gradio_app.py Wait a few seconds for the demo to spin up, then navigate to http://localhost:7860/ to view the demo. Coming Soon: Training your own FeatUp joint bilateral upsampler Simple API for Implicit FeatUp training Citation @inproceedings{
fu2024featup,
title={FeatUp: A Model-Agnostic Framework for Features at Any Resolution},
author={Stephanie Fu and Mark Hamilton and Laura E. Brandt and Axel Feldmann and Zhoutong Zhang and William T. Freeman},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=GkJiNn2QDF}
} Contact For feedback, questions, or press inquiries please contact Stephanie Fu and Mark Hamilton;Official code for "FeatUp: A Model-Agnostic Frameworkfor Features at Any Resolution" ICLR 2024;[] | mhamilton723/FeatUp |
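As a usage sketch for the pretrained JBU upsamplers listed above, hedged on the interface shown in the project's Colab notebook (the callable upsampler, its .model backbone attribute, and standard ImageNet preprocessing are assumptions taken from that notebook rather than from this README):

```python
import torch
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained FeatUp JBU upsampler for the DINO backbone.
upsampler = torch.hub.load("mhamilton723/FeatUp", "dino16").to(device).eval()

# Standard ImageNet normalization on a 224x224 crop (assumed preprocessing).
transform = T.Compose([
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = transform(Image.open("sample.jpg").convert("RGB")).unsqueeze(0).to(device)

with torch.no_grad():
    hr_feats = upsampler(image)        # upsampled, high-resolution features
    lr_feats = upsampler.model(image)  # low-resolution backbone features
print(lr_feats.shape, hr_feats.shape)
```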
OpenCSGs/CSGHub;简体中文 ∙ English CSGHub README CSGHub is an open source, trustworthy large model asset management platform that can assist users in governing the assets involved in the lifecycle of LLM and LLM applications (datasets, model files, code, etc.). With CSGHub, users can perform operations on LLM assets, including uploading, downloading, storing, verifying, and distributing, through Web interface, Git command line, or natural language Chatbot. Meanwhile, the platform provides microservice submodules and standardized OpenAPIs, which could be easily integrated with users' own systems. CSGHub is committed to bringing users an asset management platform that is natively designed for large models and can be deployed On-Premise for fully offline operation. CSGHub offers functionalities similar to a privatized Huggingface (on-premise Huggingface), managing LLM assets in a manner akin to how OpenStack Glance manages virtual machine images, Harbor manages container images, and Sonatype Nexus manages artifacts. You can try the free SaaS version of CSGHub through the OpenCSG Community official website: https://portal.opencsg.com/models You can also jump to the Quick Start section to quickly launch your local instance and explore all the features of CSGHub. UPDATES [2024.03.15] v0.3 Plan: Files online editing, organization edit, dataset preview. [2024.02.15] v0.2 Improve the function of model dataset hosting, and add the feature of inviting new organization members. [2024.01.15] v0.1 CSGHub Alpha version release, supports model and dataset management functions, detailed functions are as below. CORE FUNCTIONS In the era of LLM, data and models are increasingly becoming the most important digital assets for businesses and individual users. However, there are currently issues such as fragmented management tools, limited management methods, and localization, which not only pose potential threats to secure operations but also might hinder the updating and iteration of enterprise-scale models. If you believe that large models will become a major driving force in the upcoming revolution, you may also be considering how to manage core assets — models, data, and large model application code — more efficiently and securely. CSGHub is an open-source project designed to address these issues. CSGHub's core functions (updated regularly):
- Unified Management of LLM Assets : A one-stop Hub for unified management of model files, datasets, and large-scale model application codes.
- Development Ecosystem Compatibility : Supports both HTTPS and SSH protocols for Git commands and web interface operations, ensuring convenient usage for different users.
- Large Model Capability Expansion : Natively supports version management, model format conversion, automatic data processing, and dataset preview functions.
- Permissions and Security : Supports integration with corporate user systems, setting of asset visibility, and zero-trust authentication interface design for both external and internal users, maximizing security.
- Support for Private Deployment : Independent of internet and cloud vendors, enabling one-click initiation of private deployment.
- Native Design for Large Models : Supports natural language interaction, one-click model deployment, and asset management for Agent and Copilot App. TECH DESIGN The technical design of CSGHub is as follows:
- CSGHub integrates multiple technologies including Git Servers, Git LFS (Large File Storage) protocol, and Object Storage Service (OSS), providing a reliable data storage layer, a flexible infrastructure access layer, and extensive support for development tools.
- Utilizing a service-oriented architecture, CSGHub offers backend services through CSGHub Server and a management interface via CSGHub Web Service. Ordinary users can quickly initiate services using Docker compose or Kubernetes Helm Chart for enterprise-level asset management. Users with in-house development capabilities can utilize CSGHub Server for secondary development to integrate management functions into external systems or to customize advanced features.
- Leveraging outstanding open-source projects like Apache Arrow and DuckDB, CSGHub supports previewing of Parquet data file formats, facilitating localized dataset management for researchers and common users.
- CSGHub provides an intuitive web interface and permission design for enterprise organization structure. Users can realize version control management, online browsing and downloading through the web UI, as well as set the visibility scope of datasets and model files to realize data security isolation, and can also initiate topic discussions on models and datasets. Our R&D team has been focusing on AI + DevOps for a long time, and we hope to solve the pain points in the development process of large models through the CSGHub project. We encourage everyone to contribute high-quality development and operation and maintenance documents, and work together to improve the platform, so that large model assets can be more traceable and efficient. DEMO VIDEO In order to help users quickly understand the features and usage of CSGHub, we have recorded a demo video. You can watch this video to get a quick understanding of the main features and operation procedures of this program.
- CSGHub Demo video is as below, you can also check it at YouTube or Bilibili ROADMAP Asset Management [x] Built-in Code Repo: Built-in Code Repo management function to associate the code of model, dataset, and Space applications. [ ] Multi-source data synchronization: Support configuring and enabling remote repositories, automatic data synchronization, and support for the OpenCSG community, Huggingface and other remote sources. AI Enhancement [ ] One-Click Fine-Tuning: Support integration with OpenCSG llm-finetune tool to start model fine-tuning training with one click. [x] One-Click Reasoning: Support integration with OpenCSG llm-inference tool to start model reasoning service with one click. LLM App and Enterprise Features [x] App Space: Support hosting Gradio/Streamlit applications and publishing them to App Space. [x] Fine-grained Permission Control: Fine-grained permission and access control settings for enterprise architecture. Security Compliance [ ] GitServer Adapter: Generic GitServer adapter to support multiple major Git repository types through Adaptor mode. [x] Asset Metadata: Asset metadata management mechanism, supporting customized metadata types and corresponding AutoTag rules. The detailed roadmap is designed as follows: full roadmap ARCHITECTURE CSGHub is made with two typical parts: Portal and Server. This repo corresponds to CSGHub Portal, while CSGHub Server is another high-performance backend project implemented with Golang. If you want to dive deep into CSGHub Server details or wish to integrate the Server with your own frontend system or more, you can check the CSGHub Server open-source project . CSGHub Portal Architecture CSGHub Server Architecture QUICK START You can quickly deploy a CSGHub instance with portal/server and all other relevant dependencies to your environment using the following commands:
```shell please replace [IP Address] with your own LAN/WLAN ip address export SERVER_DOMAIN=[IP Address]
curl -L https://raw.githubusercontent.com/OpenCSGs/csghub/main/all-in-one.yml -o all-in-one.yml
docker compose -f all-in-one.yml up -d
``` if you are in China or you meet dockerhub network connection issue, you can try our aliyun docker registry alternative version with below:
```shell please replace [IP Address] with your own LAN/WLAN ip address export SERVER_DOMAIN=[IP Address]
curl -L https://raw.githubusercontent.com/OpenCSGs/csghub/main/all-in-one-CN.yml -o all-in-one-CN.yml
docker compose -f all-in-one-CN.yml up -d
``` or if you still meet Github network connection issue, you can try this one:
```shell please replace [IP Address] with your own LAN/WLAN ip address export SERVER_DOMAIN=[IP Address]
curl -L https://opencsg-public-resource.oss-cn-beijing.aliyuncs.com/csghub/all-in-one-CN.yml -o all-in-one-CN.yml
docker compose -f all-in-one-CN.yml up -d
``` Then, you could visit http://[IP Address] with your web browser to access this new CSGHub instance; you could try all features with your
initial admin account: admin001/admin001
You can check our website for more user guide information: User Guide Note: SERVER_DOMAIN ([IP Address]) should be the IP address or domain name of the target host. Please avoid using 127.0.0.1 or localhost . Released container images are for the x86_64 architecture only and have been tested on Linux/Windows and Mac environments. For Mac Silicon users, it is necessary to enable the Rosetta for x86/AMD64 emulation feature in your Docker Desktop. WARNING: This quick start is only for trial testing and does not support production-level deployment. The CSGHub instance deployed with this all-in-one script does not effectively persist user data: When using the docker compose up command to reload the service, errors may occur. In this case, you can use docker compose down -v to completely remove the instance before relaunching it. Please always follow the Step-by-Step Deployment Guide for regular service deployment. WARNING: The quick start does not include Space application deployment. Starting from CSGHub v0.4.0, the space function is supported. Since it still requires additional Kubernetes and other services, please refer to the Full Deployment Guide . Tech docs in detail setup development env Contributing We welcome developers of all levels to contribute to our open-source project, CSGHub. If you would like to get involved, please refer to our contributing guidelines . We look forward to your participation and suggestions. ACKNOWLEDGEMENTS This project is based on Rails, Vue3, Tailwind CSS, Administrate, Postgresql, Apache Arrow, DuckDB and GoGin, whose open source contributions are deeply appreciated! CONTACT WITH US If you meet any problems during usage, you can contact us in any of the following ways:
1. initiate an issue in github
2. join our WeChat group by scanning the wechat helper qrcode
3. join our official discord channel: OpenCSG Discord Channel 4. join our slack workspace: OpenCSG Slack Channel;CSGHub is an opensource large model assets platform just like on-premise huggingface which helps to manage datasets, model files, codes and more. CSGHub是一个开源、可信的大模型资产管理平台,可帮助用户治理LLM和LLM应用生命周期中涉及到的资产(数据集、模型文件、代码等)。CSGHub提供类似私有化的Huggingface功能,以类似OpenStack Glance管理虚拟机镜像、Harbor管理容器镜像以及Sonatype Nexus管理制品的方式,实现对LLM资产的管理。欢迎关注反馈和Star⭐️;ai,huggingface,llm,management-system,models,platform,datasets | OpenCSGs/CSGHub |
yerfor/GeneFacePlusPlus;GeneFace++: Generalized and Stable Real-Time 3D Talking Face Generation | 中文文档 This is the official implementation of GeneFace++ Paper with Pytorch, which enables high lip-sync, high video-reality and high system-efficiency 3D talking face generation. You can visit our Demo Page to watch demo videos and learn more details. Note The eye blink control is an experimental feature, and we are currently working on improving its robustness. Thanks for your patience. You may also be interested in We release Real3D-portrait (ICLR 2024 Spotlight), ( https://github.com/yerfor/Real3DPortrait ), a NeRF-based one-shot talking face system. Only upload one image and enjoy a realistic talking face! Quick Start! We provide a guide for a quick start in GeneFace++. Step 1: Follow the steps in docs/prepare_env/install_guide.md , create a new python environment named geneface , and download 3DMM files into deep_3drecib/BFM . Step 2: Download pre-processed dataset of May ( Google Drive or BaiduYun Disk with password 98n4), and place it here data/binary/videos/May/trainval_dataset.npy Step 3: Download pre-trained audio-to-motion model audio2motion_vae.zip ( Google Drive or BaiduYun Disk with password 9cqp) and motion-to-video checkpoint motion2video_nerf.zip , which is specific to May (in this Google Drive or in this BaiduYun Disk with password 98n4), and unzip them to ./checkpoints/ After these steps, your directories checkpoints and data should be like this: ```
> audio2motion_vae
> motion2video_nerf
> may_head
> may_torso
data
> binary
> videos
> May
trainval_dataset.npy
``` Step 4: activate geneface Python environment, and execute: bash
export PYTHONPATH=./
python inference/genefacepp_infer.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso --drv_aud=data/raw/val_wavs/MacronSpeech.wav --out_name=may_demo.mp4 Or you can play with our Gradio WebUI: bash
export PYTHONPATH=./
python inference/app_genefacepp.py --a2m_ckpt=checkpoints/audio2motion_vae --head_ckpt= --torso_ckpt=checkpoints/motion2video_nerf/may_torso Or use our provided Google Colab and run all cells in it. Train GeneFace++ with your own videos Please refer to details in docs/process_data and docs/train_and_infer . Below are answers to frequently asked questions when training GeneFace++ on custom videos:
- Please make sure that the head segment occupies a relatively large region in the video (e.g., similar to the provided May.mp4 ). Otherwise, you need to hand-crop your training video. issue - Make sure that the talking person appears in every frame of the video, otherwise the data preprocessing pipeline may fail.
- We have only tested our code on Linux (Ubuntu/CentOS). Anyone willing to share an installation guide for Windows/MacOS is welcome to do so.
title={GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis},
author={Ye, Zhenhui and Jiang, Ziyue and Ren, Yi and Liu, Jinglin and He, Jinzheng and Zhao, Zhou},
journal={arXiv preprint arXiv:2301.13430},
year={2023}
}
@article{ye2023geneface++,
title={GeneFace++: Generalized and Stable Real-Time Audio-Driven 3D Talking Face Generation},
author={Ye, Zhenhui and He, Jinzheng and Jiang, Ziyue and Huang, Rongjie and Huang, Jiawei and Liu, Jinglin and Ren, Yi and Yin, Xiang and Ma, Zejun and Zhao, Zhou},
journal={arXiv preprint arXiv:2305.00787},
year={2023}
};GeneFace++: Generalized and Stable Real-Time 3D Talking Face Generation; Official Code;nerf,talking-face-generation | yerfor/GeneFacePlusPlus |
adrianhajdin/portfolio;A Modern Next.js Portfolio Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family! 📋 Table of Contents 🤖 Introduction ⚙️ Tech Stack 🔋 Features 🤸 Quick Start 🕸️ Code to Copy 🔗 Assets 🚀 More 🚨 Tutorial This repository contains the code corresponding to an in-depth tutorial available on our YouTube channel, JavaScript Mastery . If you prefer visual learning, this is the perfect resource for you. Follow our tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner! 🤖 Introduction Built with Next.js for handling the user interface, Three.js for rendering 3D elements, Framer motion for beautiful animations, and styled with TailwindCSS, this portfolio demonstrates the developer's skills in a unique manner that creates a lasting impact. If you're getting started and need assistance or face any bugs, join our active Discord community with over 34k+ members. It's a place where people help each other out. ⚙️ Tech Stack Next.js Three.js Framer Motion Tailwind CSS 🔋 Features 👉 Hero : Captivating introduction featuring a spotlight effect and dynamic background. 👉 Bento Grid : Modern layout presenting personal information using cutting-edge CSS design techniques. 👉 3D Elements : Interactive 3D design elements, such as a GitHub-style globe and card hover effects, adding depth and engagement. 👉 Testimonials : Dynamic testimonials area with scrolling or animated content for enhanced engagement. 👉 Work Experience : Prominent display of professional background for emphasis and credibility. 👉 Canvas Effect : Innovative use of HTML5 canvas to create visually striking effects in the "approaches" section. 👉 Responsiveness : Seamless adaptability across all devices, ensuring optimal viewing experience for every user. and many more, including code architecture and reusability 🤸 Quick Start Follow these steps to set up the project locally on your machine. Prerequisites Make sure you have the following installed on your machine: Git Node.js npm (Node Package Manager) Cloning the Repository bash
git clone https://github.com/adrianhajdin/portfolio.git
cd portfolio Installation Install the project dependencies using npm: bash
npm install Running the Project bash
npm run dev Open http://localhost:3000 in your browser to view the project. 🕸️ Snippets data/index.ts ```typescript
export const navItems = [
{ name: "About", link: "#about" },
{ name: "Projects", link: "#projects" },
{ name: "Testimonials", link: "#testimonials" },
{ name: "Contact", link: "#contact" },
];
export const gridItems = [
{
id: 1,
title: "I prioritize client collaboration, fostering open communication ",
description: "",
className: "lg:col-span-3 md:col-span-6 md:row-span-4 lg:min-h-[60vh]",
imgClassName: "w-full h-full",
titleClassName: "justify-end",
img: "/b1.svg",
spareImg: "",
},
{
id: 2,
title: "I'm very flexible with time zone communications",
description: "",
className: "lg:col-span-2 md:col-span-3 md:row-span-2",
imgClassName: "",
titleClassName: "justify-start",
img: "",
spareImg: "",
},
{
id: 3,
title: "My tech stack",
description: "I constantly try to improve",
className: "lg:col-span-2 md:col-span-3 md:row-span-2",
imgClassName: "",
titleClassName: "justify-center",
img: "",
spareImg: "",
},
{
id: 4,
title: "Tech enthusiast with a passion for development.",
description: "",
className: "lg:col-span-2 md:col-span-3 md:row-span-1",
imgClassName: "",
titleClassName: "justify-start",
img: "/grid.svg",
spareImg: "/b4.svg",
},
{
id: 5,
title: "Currently building a JS Animation library",
description: "The Inside Scoop",
className: "md:col-span-3 md:row-span-2",
imgClassName: "absolute right-0 bottom-0 md:w-96 w-60",
titleClassName: "justify-center md:justify-start lg:justify-center",
img: "/b5.svg",
spareImg: "/grid.svg",
},
{
id: 6,
title: "Do you want to start a project together?",
description: "",
className: "lg:col-span-2 md:col-span-3 md:row-span-1",
imgClassName: "",
titleClassName: "justify-center md:max-w-full max-w-60 text-center",
img: "",
spareImg: "",
},
];
export const projects = [
{
id: 1,
title: "3D Solar System Planets to Explore",
des: "Explore the wonders of our solar system with this captivating 3D simulation of the planets using Three.js.",
img: "/p1.svg",
iconLists: ["/re.svg", "/tail.svg", "/ts.svg", "/three.svg", "/fm.svg"],
link: "https://github.com/adrianhajdin?tab=repositories",
},
{
id: 2,
title: "Yoom - Video Conferencing App",
des: "Simplify your video conferencing experience with Yoom. Seamlessly connect with colleagues and friends.",
img: "/p2.svg",
iconLists: ["/next.svg", "/tail.svg", "/ts.svg", "/stream.svg", "/c.svg"],
link: "https://github.com/adrianhajdin/zoom-clone",
},
{
id: 3,
title: "AI Image SaaS - Canva Application",
des: "A REAL Software-as-a-Service app with AI features and a payments and credits system using the latest tech stack.",
img: "/p3.svg",
iconLists: ["/re.svg", "/tail.svg", "/ts.svg", "/three.svg", "/c.svg"],
link: "https://github.com/adrianhajdin/ai_saas_app",
},
{
id: 4,
title: "Animated Apple Iphone 3D Website",
des: "Recreated the Apple iPhone 15 Pro website, combining GSAP animations and Three.js 3D effects..",
img: "/p4.svg",
iconLists: ["/next.svg", "/tail.svg", "/ts.svg", "/three.svg", "/gsap.svg"],
link: "https://github.com/adrianhajdin/iphone",
},
];
export const testimonials = [
{
quote:
"Collaborating with Adrian was an absolute pleasure. His professionalism, promptness, and dedication to delivering exceptional results were evident throughout our project. Adrian's enthusiasm for every facet of development truly stands out. If you're seeking to elevate your website and elevate your brand, Adrian is the ideal partner.",
name: "Michael Johnson",
title: "Director of AlphaStream Technologies",
},
{
quote:
"Collaborating with Adrian was an absolute pleasure. His professionalism, promptness, and dedication to delivering exceptional results were evident throughout our project. Adrian's enthusiasm for every facet of development truly stands out. If you're seeking to elevate your website and elevate your brand, Adrian is the ideal partner.",
name: "Michael Johnson",
title: "Director of AlphaStream Technologies",
},
{
quote:
"Collaborating with Adrian was an absolute pleasure. His professionalism, promptness, and dedication to delivering exceptional results were evident throughout our project. Adrian's enthusiasm for every facet of development truly stands out. If you're seeking to elevate your website and elevate your brand, Adrian is the ideal partner.",
name: "Michael Johnson",
title: "Director of AlphaStream Technologies",
},
{
quote:
"Collaborating with Adrian was an absolute pleasure. His professionalism, promptness, and dedication to delivering exceptional results were evident throughout our project. Adrian's enthusiasm for every facet of development truly stands out. If you're seeking to elevate your website and elevate your brand, Adrian is the ideal partner.",
name: "Michael Johnson",
title: "Director of AlphaStream Technologies",
},
{
quote:
"Collaborating with Adrian was an absolute pleasure. His professionalism, promptness, and dedication to delivering exceptional results were evident throughout our project. Adrian's enthusiasm for every facet of development truly stands out. If you're seeking to elevate your website and elevate your brand, Adrian is the ideal partner.",
name: "Michael Johnson",
title: "Director of AlphaStream Technologies",
},
];
export const companies = [
{
id: 1,
name: "cloudinary",
img: "/cloud.svg",
nameImg: "/cloudName.svg",
},
{
id: 2,
name: "appwrite",
img: "/app.svg",
nameImg: "/appName.svg",
},
{
id: 3,
name: "HOSTINGER",
img: "/host.svg",
nameImg: "/hostName.svg",
},
{
id: 4,
name: "stream",
img: "/s.svg",
nameImg: "/streamName.svg",
},
{
id: 5,
name: "docker.",
img: "/dock.svg",
nameImg: "/dockerName.svg",
},
];
export const workExperience = [
{
id: 1,
title: "Frontend Engineer Intern",
desc: "Assisted in the development of a web-based platform using React.js, enhancing interactivity.",
className: "md:col-span-2",
thumbnail: "/exp1.svg",
},
{
id: 2,
title: "Mobile App Dev - JSM Tech",
desc: "Designed and developed mobile app for both iOS & Android platforms using React Native.",
className: "md:col-span-2", // change to md:col-span-2
thumbnail: "/exp2.svg",
},
{
id: 3,
title: "Freelance App Dev Project",
desc: "Led the dev of a mobile app for a client, from initial concept to deployment on app stores.",
className: "md:col-span-2", // change to md:col-span-2
thumbnail: "/exp3.svg",
},
{
id: 4,
title: "Lead Frontend Developer",
desc: "Developed and maintained user-facing features using modern frontend technologies.",
className: "md:col-span-2",
thumbnail: "/exp4.svg",
},
];
export const socialMedia = [
{
id: 1,
img: "/git.svg",
},
{
id: 2,
img: "/twit.svg",
},
{
id: 3,
img: "/link.svg",
},
];
``` tailwind.config.ts ```ts
import type { Config } from "tailwindcss";
const svgToDataUri = require("mini-svg-data-uri");
const colors = require("tailwindcss/colors");
const {
default: flattenColorPalette,
} = require("tailwindcss/lib/util/flattenColorPalette");
const config = {
darkMode: ["class"],
content: [
"./pages/**/*.{ts,tsx}",
"./components/**/*.{ts,tsx}",
"./app/**/*.{ts,tsx}",
"./src/**/*.{ts,tsx}",
"./data/**/*.{ts,tsx}",
],
prefix: "",
theme: {
container: {
center: true,
padding: "2rem",
screens: {
"2xl": "1400px",
},
},
extend: {
colors: {
black: {
DEFAULT: "#000",
100: "#000319",
200: "rgba(17, 25, 40, 0.75)",
300: "rgba(255, 255, 255, 0.125)",
},
white: {
DEFAULT: "#FFF",
100: "#BEC1DD",
200: "#C1C2D3",
},
blue: {
"100": "#E4ECFF",
},
purple: "#CBACF9",
border: "hsl(var(--border))",
input: "hsl(var(--input))",
ring: "hsl(var(--ring))",
background: "hsl(var(--background))",
foreground: "hsl(var(--foreground))",
primary: {
DEFAULT: "hsl(var(--primary))",
foreground: "hsl(var(--primary-foreground))",
},
secondary: {
DEFAULT: "hsl(var(--secondary))",
foreground: "hsl(var(--secondary-foreground))",
},
destructive: {
DEFAULT: "hsl(var(--destructive))",
foreground: "hsl(var(--destructive-foreground))",
},
muted: {
DEFAULT: "hsl(var(--muted))",
foreground: "hsl(var(--muted-foreground))",
},
accent: {
DEFAULT: "hsl(var(--accent))",
foreground: "hsl(var(--accent-foreground))",
},
popover: {
DEFAULT: "hsl(var(--popover))",
foreground: "hsl(var(--popover-foreground))",
},
card: {
DEFAULT: "hsl(var(--card))",
foreground: "hsl(var(--card-foreground))",
},
},
borderRadius: {
lg: "var(--radius)",
md: "calc(var(--radius) - 2px)",
sm: "calc(var(--radius) - 4px)",
},
keyframes: {
"accordion-down": {
from: { height: "0" },
to: { height: "var(--radix-accordion-content-height)" },
},
"accordion-up": {
from: { height: "var(--radix-accordion-content-height)" },
to: { height: "0" },
},
spotlight: {
"0%": {
opacity: "0",
transform: "translate(-72%, -62%) scale(0.5)",
},
"100%": {
opacity: "1",
transform: "translate(-50%,-40%) scale(1)",
},
},
shimmer: {
from: {
backgroundPosition: "0 0",
},
to: {
backgroundPosition: "-200% 0",
},
},
moveHorizontal: {
"0%": {
transform: "translateX(-50%) translateY(-10%)",
},
"50%": {
transform: "translateX(50%) translateY(10%)",
},
"100%": {
transform: "translateX(-50%) translateY(-10%)",
},
},
moveInCircle: {
"0%": {
transform: "rotate(0deg)",
},
"50%": {
transform: "rotate(180deg)",
},
"100%": {
transform: "rotate(360deg)",
},
},
moveVertical: {
"0%": {
transform: "translateY(-50%)",
},
"50%": {
transform: "translateY(50%)",
},
"100%": {
transform: "translateY(-50%)",
},
},
scroll: {
to: {
transform: "translate(calc(-50% - 0.5rem))",
},
},
},
animation: {
"accordion-down": "accordion-down 0.2s ease-out",
"accordion-up": "accordion-up 0.2s ease-out",
spotlight: "spotlight 2s ease .75s 1 forwards",
shimmer: "shimmer 2s linear infinite",
first: "moveVertical 30s ease infinite",
second: "moveInCircle 20s reverse infinite",
third: "moveInCircle 40s linear infinite",
fourth: "moveHorizontal 40s ease infinite",
fifth: "moveInCircle 20s ease infinite",
scroll:
"scroll var(--animation-duration, 40s) var(--animation-direction, forwards) linear infinite",
},
},
},
plugins: [
require("tailwindcss-animate"),
addVariablesForColors,
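// Inline plugin: registers `bg-grid`, `bg-grid-small` and `bg-dot` utilities whose backgrounds are inline SVG data URIs tinted with the matched background color (the SVG markup itself is not shown in this snippet).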
function ({ matchUtilities, theme }: any) {
matchUtilities(
{
"bg-grid": (value: any) => ({
backgroundImage: `url("${svgToDataUri(
` `
)}")`,
}),
"bg-grid-small": (value: any) => ({
backgroundImage: `url("${svgToDataUri(
` `
)}")`,
}),
"bg-dot": (value: any) => ({
backgroundImage: `url("${svgToDataUri(
` `
)}")`,
}),
},
{ values: flattenColorPalette(theme("backgroundColor")), type: "color" }
);
},
],
} satisfies Config;
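// Flattens the full Tailwind color palette and exposes each color as a CSS variable (e.g. `--blue-100`) on :root, so plugins and arbitrary values can reference them.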
function addVariablesForColors({ addBase, theme }: any) {
let allColors = flattenColorPalette(theme("colors"));
let newVars = Object.fromEntries(
Object.entries(allColors).map(([key, val]) => [`--${key}`, val])
);
addBase({
":root": newVars,
});
}
export default config;
``` globals.css ```css
@tailwind base;
@tailwind components;
@tailwind utilities;
@layer base {
:root {
--background: 0 0% 100%;
--foreground: 240 10% 3.9%;
--card: 0 0% 100%;
--card-foreground: 240 10% 3.9%;
--popover: 0 0% 100%;
--popover-foreground: 240 10% 3.9%;
--primary: 240 5.9% 10%;
--primary-foreground: 0 0% 98%;
--secondary: 240 4.8% 95.9%;
--secondary-foreground: 240 5.9% 10%;
--muted: 240 4.8% 95.9%;
--muted-foreground: 240 3.8% 46.1%;
--accent: 240 4.8% 95.9%;
--accent-foreground: 240 5.9% 10%;
--destructive: 0 84.2% 60.2%;
--destructive-foreground: 0 0% 98%;
--border: 240 5.9% 90%;
--input: 240 5.9% 90%;
--ring: 240 10% 3.9%;
--radius: 0.5rem;
}
.dark {
--background: 240 10% 3.9%;
--foreground: 0 0% 98%;
--card: 240 10% 3.9%;
--card-foreground: 0 0% 98%;
--popover: 240 10% 3.9%;
--popover-foreground: 0 0% 98%;
--primary: 0 0% 98%;
--primary-foreground: 240 5.9% 10%;
--secondary: 240 3.7% 15.9%;
--secondary-foreground: 0 0% 98%;
--muted: 240 3.7% 15.9%;
--muted-foreground: 240 5% 64.9%;
--accent: 240 3.7% 15.9%;
--accent-foreground: 0 0% 98%;
--destructive: 0 62.8% 30.6%;
--destructive-foreground: 0 0% 98%;
--border: 240 3.7% 15.9%;
--input: 240 3.7% 15.9%;
--ring: 240 4.9% 83.9%;
}
}
@layer base {
* {
@apply border-border !scroll-smooth;
}
body {
@apply bg-background text-foreground;
}
button {
@apply active:outline-none;
}
}
@layer utilities {
.heading {
@apply font-bold text-4xl md:text-5xl text-center;
}
.black-gradient {
background: linear-gradient(90deg, #161a31 0%, #06091f 100%);
}
}
``` Linear Gradient ```js
style={{
// add these two
// you can generate the color from here https://cssgradient.io/
background: "rgb(4,7,29)",
backgroundImage:
"linear-gradient(90deg, rgba(4,7,29,1) 0%, rgba(12,14,35,1) 100%)", // a gradient is an image in CSS, so it belongs on backgroundImage rather than backgroundColor
}}
``` 🔗 Assets Assets used in the project can be found here 🚀 More Advance your skills with Next.js 14 Pro Course Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning adventure. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! Accelerate your professional journey with the Expert Training program And if you're hungry for more than just a course and want to understand how we learn and tackle tech challenges, hop into our personalized masterclass. We cover best practices, different web skills, and offer mentorship to boost your confidence. Let's learn and grow together!;Modern & Minimal JS Mastery Portfolio;aceternity,nextjs | adrianhajdin/portfolio |
huchenlei/ComfyUI-layerdiffuse;ComfyUI-layerdiffuse ComfyUI implementation of https://github.com/layerdiffusion/LayerDiffuse. Installation Download the repository and unpack it into the custom_nodes folder in the ComfyUI installation directory. Or clone via GIT, starting from ComfyUI installation directory: bash
cd custom_nodes
git clone git@github.com:huchenlei/ComfyUI-layerdiffuse.git Run pip install -r requirements.txt to install python dependencies. You might experience version conflict on diffusers if you have other extensions that depend on other versions of diffusers. In this case, it is recommended to set up separate Python venvs. Workflows Generate foreground Generate foreground (RGB + alpha) If you want more control of getting RGB images and alpha channel mask separately, you can use this workflow. Blending (FG/BG) Blending given FG Blending given BG Extract FG from Blended + BG Extract BG from Blended + FG Forge impl's sanity check sets Stop at to 0.5 to get better quality BG.
This workflow might be inferior compared to other object removal workflows. Extract BG from Blended + FG (Stop at 0.5) In SD Forge impl , there is a stop at param that determines when
layer diffuse should stop in the denoising process. In the background, what this param does is unapply the LoRA and c_concat cond after a certain step
threshold. This is hard/risky to implement directly in ComfyUI as it requires manually loading a model that has every change except the layer diffusion
change applied. A workaround in ComfyUI is to have another img2img pass on the layer diffuse result to simulate the effect of stop at param. Generate FG from BG combined Combines previous workflows to generate blended and FG given BG. We found that there are some color variations in the extracted FG. Need to confirm
with layer diffusion authors whether this is expected. [2024-3-9] Generate FG + Blended given BG Need batch size = 2N. Currently only for SD15. [2024-3-9] Generate BG + Blended given FG Need batch size = 2N. Currently only for SD15. [2024-3-9] Generate BG + FG + Blended together Need batch size = 3N. Currently only for SD15. Note Currently only SDXL/SD15 are supported. See https://github.com/layerdiffuse/sd-forge-layerdiffuse#model-notes for more details. To decode RGBA result, the generation dimension must be multiple of 64. Otherwise, you will get decode error:;Layer Diffuse custom nodes;[] | huchenlei/ComfyUI-layerdiffuse |
lucidrains/self-rewarding-lm-pytorch;Self-Rewarding Language Model Implementation of the training framework proposed in Self-Rewarding Language Model , from MetaAI They really took the title of the DPO paper to heart. This library also contains an implementation of SPIN , which Teknium of Nous Research has expressed optimism for. Appreciation A16Z Open Source AI Grant Program and 🤗 Huggingface for the generous sponsorships, as well as my other sponsors, for affording me the independence to open source current artificial intelligence research Install bash
$ pip install self-rewarding-lm-pytorch Usage ```python
import torch
from torch import Tensor

from self_rewarding_lm_pytorch import (
    SelfRewardingTrainer,
    create_mock_dataset
)

from x_transformers import TransformerWrapper, Decoder

transformer = TransformerWrapper(
num_tokens = 256,
max_seq_len = 1024,
attn_layers = Decoder(
dim = 512,
depth = 1,
heads = 8
)
)

sft_dataset = create_mock_dataset(100, lambda: (torch.randint(0, 256, (256,)), torch.tensor(1)))
prompt_dataset = create_mock_dataset(100, lambda: 'mock prompt')

def decode_tokens(tokens: Tensor) -> str:
    decode_token = lambda token: str(chr(max(32, token)))
    return ''.join(list(map(decode_token, tokens)))

def encode_str(seq_str: str) -> Tensor:
    return Tensor(list(map(ord, seq_str)))

trainer = SelfRewardingTrainer(
transformer,
finetune_configs = dict(
train_sft_dataset = sft_dataset,
self_reward_prompt_dataset = prompt_dataset,
dpo_num_train_steps = 1000
),
tokenizer_decode = decode_tokens,
tokenizer_encode = encode_str,
accelerate_kwargs = dict(
cpu = True
)
)

trainer(overwrite_checkpoints = True)

# checkpoints after each finetuning stage will be saved to ./checkpoints
``` SPIN can be trained as follows - it can also be added to the fine-tuning pipeline as shown in the final example in the readme. ```python
import torch

from self_rewarding_lm_pytorch import (
SPINTrainer,
create_mock_dataset
)

from x_transformers import TransformerWrapper, Decoder

transformer = TransformerWrapper(
num_tokens = 256,
max_seq_len = 1024,
attn_layers = Decoder(
dim = 512,
depth = 6,
heads = 8
)
)

sft_dataset = create_mock_dataset(100, lambda: (torch.randint(0, 256, (256,)), torch.tensor(1)))

spin_trainer = SPINTrainer(
transformer,
max_seq_len = 16,
train_sft_dataset = sft_dataset,
checkpoint_every = 100,
spin_kwargs = dict(
λ = 0.1,
),
)

spin_trainer()
``` Say you want to experiment with your own reward prompt (other than LLM-as-Judge). First you need to import the RewardConfig , next pass it into the trainer as reward_prompt_config ```python first import from self_rewarding_lm_pytorch import RewardConfig then say you want to try asking the transformer nicely reward_regex_template is the string that will be looked for in the LLM response, for parsing out the reward where {{ reward }} is defined as a number trainer = SelfRewardingTrainer(
transformer,
...,
self_reward_prompt_config = RewardConfig(
prompt_template = """
Pretty please rate the following user prompt and response
User: {{ prompt }}
Response: {{ response }} Format your score as follows:
Rating: <rating as integer from 0 - 10>
""",
reward_regex_template = """
Rating: {{ reward }}
"""
) )
``` Finally, if you would like to experiment with arbitrary orders of fine-tuning, you will also have that flexibility, by passing in FinetuneConfig instances into finetune_configs as a list. For example, say you want to carry out research on interleaving SPIN, External Rewarding, and Self-Rewarding. This idea originated from Teknium from a private discord channel. ```python
# import the configs
SFTConfig,
SelfRewardDPOConfig,
ExternalRewardDPOConfig,
SelfPlayConfig,
)

trainer = SelfRewardingTrainer(
model,
finetune_configs = [
SFTConfig(...),
SelfPlayConfig(...),
ExternalRewardDPOConfig(...),
SelfRewardDPOConfig(...),
SelfPlayConfig(...),
SelfRewardDPOConfig(...)
],
...
) trainer() checkpoints after each finetuning stage will be saved to ./checkpoints ``` Todo [x] generalize the sampling so that it can progress at different positions in the batch, fix all sampling to be batched. also allow for left padded sequences, in the case some people have transformers with relative positions that allow for that [x] handle eos [x] show an example for using your own reward prompt instead of default llm-as-judge [x] allow for different strategies for sampling the pairs [x] early stopper [x] handle break signal if all done on main process [x] accept eval module, could be either validation loss or something more sophisticated. returns a scalar tensor or single int / float [x] any order of sft, spin, self-rewarding dpo, dpo with external reward model [x] allow for a validation function on the rewards (say reward must be integer, float, in between some range etc) [ ] figure out how best to handle different impl of kv cache, for now just do without [ ] environment flag that auto-clears all checkpoint folders Citation bibtex
@misc{yuan2024selfrewarding,
title = {Self-Rewarding Language Models},
author = {Weizhe Yuan and Richard Yuanzhe Pang and Kyunghyun Cho and Sainbayar Sukhbaatar and Jing Xu and Jason Weston},
year = {2024},
eprint = {2401.10020},
archivePrefix = {arXiv},
primaryClass = {cs.CL}
} bibtex
@article{Chen2024SelfPlayFC,
title = {Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author = {Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
journal = {ArXiv},
year = {2024},
volume = {abs/2401.01335},
url = {https://api.semanticscholar.org/CorpusID:266725672}
} bibtex
@article{Rafailov2023DirectPO,
title = {Direct Preference Optimization: Your Language Model is Secretly a Reward Model},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Stefano Ermon and Christopher D. Manning and Chelsea Finn},
journal = {ArXiv},
year = {2023},
volume = {abs/2305.18290},
url = {https://api.semanticscholar.org/CorpusID:258959321}
} bibtex
@inproceedings{Guo2024DirectLM,
title = {Direct Language Model Alignment from Online AI Feedback},
author = {Shangmin Guo and Biao Zhang and Tianlin Liu and Tianqi Liu and Misha Khalman and Felipe Llinares and Alexandre Rame and Thomas Mesnard and Yao Zhao and Bilal Piot and Johan Ferret and Mathieu Blondel},
year = {2024},
url = {https://api.semanticscholar.org/CorpusID:267522951}
};Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI;artificial-intelligence,deep-learning,transformers,beyond-human-data,self-rewarding | lucidrains/self-rewarding-lm-pytorch |
McGill-NLP/webllama;🖥️ WebLlama🦙 Building agents that can browse the web by following instructions and talking to you | 💻 [**GitHub**](https://github.com/McGill-NLP/webllama) | 🏠 [**Homepage**](https://webllama.github.io) | 🤗 [**`Llama-3-8B-Web`**](https://huggingface.co/McGill-NLP/Llama-3-8B-Web) |
| :--: | :--: | :--: | [!IMPORTANT] We are thrilled to release Llama-3-8B-Web , the most capable agent built with 🦙 Llama 3 and finetuned for web navigation with dialogue. You can download the agent from the 🤗 Hugging Face Model Hub . | WebLlama helps you build powerful agents, powered by Meta Llama 3, for browsing the web on your behalf | Our first model, Llama-3-8B-Web , surpasses GPT-4V ( * zero-shot) by 18% on WebLINX |
|:---: | :---: |
| | | About the project | WebLlama | The goal of our project is to build effective human-centric agents for browsing the web. We don't want to replace users, but equip them with powerful assistants. |
|:---: | :---|
| Modeling | We build on top of cutting-edge libraries for training Llama agents on web navigation tasks. We will provide training scripts, optimized configs, and instructions for training cutting-edge Llamas. |
| Evaluation | Benchmarks for testing Llama models on real-world web browsing. This includes human-centric browsing through dialogue ( WebLINX ), and we will soon add more benchmarks for automatic web navigation (e.g. Mind2Web). |
| Data | Our first model is finetuned on over 24K instances of web interactions, including click , textinput , submit , and dialogue acts. We want to continuously curate, compile and release datasets for training better agents. |
| Deployment | We want to make it easy to integrate Llama models with existing deployment platforms, including Playwright, Selenium, and BrowserGym. We are currently focusing on making this a reality. | Modeling [!NOTE]
The model is available on the 🤗 Hugging Face Model Hub as McGill-NLP/Llama-3-8B-Web . The training and evaluation data is available on Hugging Face Hub as McGill-NLP/WebLINX . Our first agent is a finetuned Meta-Llama-3-8B-Instruct model, which was recently released by Meta GenAI team. We have finetuned this model on the WebLINX dataset, which contains over 100K instances of web navigation and dialogue, each collected and verified by expert annotators. We use a 24K curated subset for training the data. It surpasses GPT-4V (zero-shot * ) by over 18% on the WebLINX benchmark , achieving an overall score of 28.8% on the out-of-domain test splits (compared to 10.5% for GPT-4V). It chooses more useful links (34.1% vs 18.9% seg-F1 ), clicks on more relevant elements (27.1% vs 13.6% IoU ) and formulates more aligned responses (37.5% vs 3.1% chr-F1 ). It's extremely straightforward to use the model via Hugging Face's transformers , datasets and hub libraries: ```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
from transformers import pipeline

# We use validation data, but you can use your own data here
valid = load_dataset("McGill-NLP/WebLINX", split="validation")
snapshot_download("McGill-NLP/WebLINX", repo_type="dataset", allow_patterns="templates/*")
template = open('templates/llama.txt').read()

# Run the agent on a single state (text representation) and get the action
state = template.format(**valid[0])
agent = pipeline(model="McGill-NLP/Llama-3-8B-Web")  # the model id must go in `model=`; pipeline()'s first positional argument is the task
out = agent(state, return_full_text=False)[0]
print("Action:", out['generated_text']) Here, you can use the predictions on platforms like playwright or browsergym action = process_pred(out['generated_text']) # implement based on your platform
env.step(action) # execute the action in your environment
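# For illustration: one possible (hypothetical) implementation of the `process_pred` stub used
# above. Assumption not stated in this README: the model emits function-style action strings
# such as 'click(uid="abc123")' or 'say(speaker="navigator", utterance="...")'.
import re

def process_pred(generated_text: str) -> dict:
    match = re.match(r'\s*(\w+)\((.*)\)\s*$', generated_text, re.DOTALL)
    if match is None:
        # fall back to treating the raw text as a dialogue response
        return {"intent": "say", "args": {"utterance": generated_text.strip()}}
    intent, arg_str = match.group(1), match.group(2)
    args = dict(re.findall(r'(\w+)\s*=\s*"(.*?)"', arg_str))
    return {"intent": intent, "args": args}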
``` Evaluation We believe short demo videos showing how well an agent performs is NOT enough to judge an agent. Simply put, we do not know if we have a good agent if we do not have good benchmarks. We need to systematically evaluate agents on wide range of tasks, spanning from simple instruction-following web navigation to complex dialogue-guided browsing. This is why we chose WebLINX as our first benchmark. In addition to the training split, the benchmark has 4 real-world splits, with the goal of testing multiple dimensions of generalization: new websites, new domains, unseen geographic locations, and scenarios where the user cannot see the screen and relies on dialogue . It also covers 150 websites, including booking, shopping, writing, knowledge lookup, and even complex tasks like manipulating spreadsheets. Evaluating on this benchmark is very straightforward: ```bash
cd modeling/ After installing dependencies, downloading the dataset, and training/evaluating your model, you can evaluate: python -m weblinx.eval # automatically find all results.jsonl and generate an aggregated_results.json file Visualize your results with our app: cd ..
streamlit run app/Results.py
``` 👷♀️ Next steps \
We are planning to evaluate our models on more benchmarks, including Mind2Web, a benchmark for automatic web navigation. We believe that a good agent should be able to navigate the web both through dialogue and autonomously, and potentially attain even broader ranges of capabilities useful for real-world web browsing. Data Although the 24K training examples from WebLINX provide a good starting point for training a capable agent, we believe that more data is needed to train agents that can generalize to a wide range of web navigation tasks. Although it has been trained and evaluated on 150 websites, there are millions of websites that has never been seen by the model, with new ones being created every day. This motivates us to continuously curate, compile and release datasets for training better agents. As an immediate next step, we will be incorporating Mind2Web 's training data into the equation, which also covers over 100 websites. Deployment We are working hard to make it easy for you to deploy Llama web agents to the web. We want to integrate WebLlama with existing deployment platforms, including Microsoft's Playwright, ServiceNow Research's BrowserGym, and other partners. Code The code for finetuning the model and evaluating it on the WebLINX benchmark is available now. You can find the detailed instructions in modeling . 👷♀️ Next steps \
We are actively working on new data, evaluation, and deployment integrations at the moment, so stay tuned! Citation If you use WebLlama in your research, please cite the following paper (upon which the data, training and evaluation are originally based on): @misc{lù2024weblinx,
title={WebLINX: Real-World Website Navigation with Multi-Turn Dialogue},
author={Xing Han Lù and Zdeněk Kasner and Siva Reddy},
year={2024},
eprint={2402.05930},
archivePrefix={arXiv},
primaryClass={cs.CL}
} License The code in this repository is licensed under the MIT license, unless otherwise specified in the header of the file. Other materials (models, data, images) have their own licenses, which are specified in the original pages. FAQ How can I contribute to the project? We are actively looking for collaborators to help us build the best Llama-3 web agents! To get started, open an issue about what you would like to contribute, and once it has been discussed, you can submit a pull request. We will also soon be announcing a Discord channel for the project, where you can ask questions and discuss with other contributors.;Llama-3 agents that can browse the web by following instructions and talking to you;[] | McGill-NLP/webllama |
whoeevee/EeveeSpotify;EeveeSpotify This tweak makes Spotify think you have a Premium subscription, granting free listening, just like Spotilife, and provides some additional features like custom lyrics. The History Several months ago, Spotilife, the only tweak to get Spotify Premium, stopped working on new Spotify versions. I decompiled Spotilife, reverse-engineered Spotify, intercepted requests, etc., and created this tweak. Restrictions Please refrain from opening issues about the following features, as they are server-sided and will NEVER work: Very High audio quality Native playlist downloading (you can download podcast episodes though) Jam (hosting a Spotify Jam and joining it remotely requires Premium; only joining in-person works) It's possible to implement downloading locally, but it will never be included in EeveeSpotify (unless someone opens a pull request). Lyrics Support EeveeSpotify replaces Spotify monthly limited lyrics with one of the following three lyrics providers: Genius: Offers the best quality lyrics, provides the most songs, and updates lyrics the fastest. Does not and will never be time-synced. LRCLIB: The most open service, offering time-synced lyrics. However, it lacks lyrics for many songs. Musixmatch: The service Spotify uses. Provides time-synced lyrics for many songs, but you'll need a user token to use this source. To obtain the token, download Musixmatch from the App Store, sign up, then go to Settings > Get help > Copy debug info, and paste it into EeveeSpotify alert. You can also extract the token using MITM. If the tweak is unable to find a song or process the lyrics, you'll see a "Couldn't load the lyrics for this song" message. The lyrics might be wrong for some songs when using Genius due to how the tweak searches songs. While I've made it work in most cases, kindly refrain from opening issues about it. How It Works Starting with version 4.0, EeveeSpotify intercepts Spotify requests to load user data, deserializes it, and modifies the parameters in real-time. This method is the best so far and works incredibly stable. You can select the dynamic Premium patching method in the EeveeSpotify settings. Upon login, Spotify fetches user data and caches it in the offline.bnk file in the /Library/Application Support/PersistentCache directory. It uses its proprietary binary format to store data, incorporating a length byte before each value, among other conventions. Certain keys, such as player-license , financial-product , streaming-rules , and others, determine the user abilities. The tweak patches this file while initializing; Spotify loads it and assumes you have Premium. To be honest, it doesn't really patch due to challenges with dynamic length and varied bytes. The tweak extracts the username from the current offline.bnk file and inserts it into premiumblank.bnk (a file containing all premium values preset), replacing offline.bnk . Spotify may reload user data, and you'll be switched to the Free plan. When this happens, you'll see a popup with quick restart app and reset data actions. Tweak also sets trackRowsEnabled in SPTFreeTierArtistHubRemoteURLResolver to true , so Spotify loads not just track names on the artist page. It can stop working just like Spotilife, but so far, it works on the latest Spotify 8.9.## (Spotilife also patches offline.bnk , however, it changes obscure bytes that do nothing on new versions). To open Spotify links in sideloaded app, use OpenSpotifySafariExtension . 
Remember to activate it and allow access in Settings > Safari > Extensions.;A tweak to get Spotify Premium for free, just like Spotilife;[] | whoeevee/EeveeSpotify |
pytorch/torchtitan;torchtitan torchtitan is currently in a pre-release state and under extensive development. torchtitan is a proof-of-concept for Large-scale LLM training using native PyTorch. It is (and will continue to be) a repo to showcase PyTorch's latest distributed training features in a clean, minimal codebase. torchtitan is complementary to and not a replacement for any of the great large-scale LLM training codebases such as Megatron, Megablocks, LLM Foundry, Deepspeed, etc. Instead, we hope that the features showcased in torchtitan will be adopted by these codebases quickly. torchtitan is unlikely to ever grow a large community around it. Our guiding principles when building torchtitan : Designed to be easy to understand, use and extend for different training purposes. Minimal changes to the model code when applying 1D, 2D, or (soon) 3D Parallel. Modular components instead of a monolithic codebase. Get started in minutes, not hours! Intro video - learn more about torchtitan in under 4 mins: Pre-Release Updates: (4/25/2024): torchtitan is now public but in a pre-release state and under development. Currently we showcase pre-training Llama 3 and Llama 2 LLMs of various sizes from scratch. torchtitan is tested and verified with the PyTorch nightly version torch-2.4.0.dev20240412 . (We recommend latest PyTorch nightly). Key features available FSDP2 with per param sharding Tensor Parallel Selective layer and operator activation checkpointing Distributed checkpointing 2 datasets pre-configured (45K - 144M) GPU usage, MFU, tokens per second and more displayed via TensorBoard Learning rate scheduler, meta init, Optional Fused RMSNorm All options easily configured via toml files Interoperable checkpoints which can be loaded directly into torchtune for fine tuning We report our Performance verified on 64 A100 GPUs Coming soon Async checkpointing FP8 support Context Parallel 3D Pipeline Parallel torch.compile support Scalable data loading solution Installation bash
git clone https://github.com/pytorch/torchtitan
cd torchtitan
pip install -r requirements.txt
pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 # or cu118
pip3 install --pre torchdata --index-url https://download.pytorch.org/whl/nightly Downloading a tokenizer torchtitan currently supports training Llama 3 (8B, 70B), and Llama 2 (7B, 13B, 70B) out of the box. To get started training these models, we need to download a tokenizer.model. Follow the instructions on the official meta-llama repository to ensure you have access to the Llama model weights. Once you have confirmed access, you can run the following command to download the Llama 3 / Llama 2 tokenizer to your local machine. ```bash Get your HF token from https://huggingface.co/settings/tokens llama3 tokenizer.model python torchtitan/datasets/download_tokenizer.py --repo_id meta-llama/Meta-Llama-3-8B --tokenizer_path "original" --hf_token=... llama2 tokenizer.model python torchtitan/datasets/download_tokenizer.py --repo_id meta-llama/Llama-2-13b-hf --hf_token=...
``` Start a training run Llama 3 8B model locally on 8 GPUs bash
CONFIG_FILE="./train_configs/llama3_8b.toml" ./run_llama_train.sh TensorBoard To visualize TensorBoard metrics of models trained on a remote server via a local web browser: Make sure metrics.enable_tensorboard option is set to true in model training (either from a .toml file or from CLI). Set up SSH tunneling, by running the following from local CLI ssh -L 6006:127.0.0.1:6006 [username]@[hostname] Inside the SSH tunnel that logged into the remote server, go to the torchtitan repo, and start the TensorBoard backend tensorboard --logdir=./outputs/tb In the local web browser, go to the URL it provides OR to http://localhost:6006/. Multi-Node Training For training on ParallelCluster/Slurm type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job. To get started adjust the number of nodes and GPUs
``` SBATCH --ntasks=2 SBATCH --nodes=2 ``` Then start a run where nnodes is your total node count, matching the sbatch node count above. srun torchrun --nnodes 2 If your gpu count per node is not 8, adjust: --nproc_per_node in the torchrun command and #SBATCH --gpus-per-task in the SBATCH command section. License This code is made available under BSD 3 license . However you may have other legal obligations that govern your use of other content, such as the terms of service for third-party models, data, etc.;A native PyTorch Library for large model training;[] | pytorch/torchtitan |
a-real-ai/pywinassistant;PyWinAssistant "PyWinAssistant" is the first Large Action Model, Open Source Windows 10/11 Artificial Narrow Intelligence framework (Released on 31 Dec 2023, before Microsoft's VoT paper) to artificially assist any win32api human User Interfaces by utilizing Visualization-of-Thought (VoT) Elicits Spatial Reasoning in Large Language Models without OCR / object detection / segmentation. This improves the generality quality and minimizes the overall data usage of LLM and vision models. It has built-in assistance options to improve human utilization of a computer, with a new technical approach to User Interface and User Experience assistance and testing by spatial visualization of thought, generalizes correctly any natural language prompt, and plans to perform correct actions into the OS with security in mind. Paper related: Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models (Released on 4 Apr 2024) https://arxiv.org/abs/2404.03622 Overview Talk with your computer friendly and naturally to perform any User Interface activity.
Use natural language to operate freely your Windows Operating System.
Generates and plans test cases of your User Interface applications for continuous testing on any Win32api supported application by simply using natural language.
Your own open and secure personal assistant that responds as you want, control the way you want your computer to assist you.
It's engineered to be modular, understand and execute a wide range of tasks, automating interactions with any desktop applications. Demos (Videos below) Please enable the Audio for the demo videos. Voice 1 - Input Human (English Female Australian TTS) Voice 2 - Output Assistant (English Female US Google TTS) Use your computer by natural language - Real-time usage of VoT, an example of a Single Action Model. Does not use any vision. Only API LLM calls. https://github.com/a-real-ai/pywinassistant/assets/18397328/25b39d8c-62d6-442e-9d5e-bc8a35aa971a Use your computer as an assistant - Real-time usage of planning VoT, an example of a Large Action Model. Uses minimal vision. Get to know what the user is doing and what is that the user wants to achieve, the assistant plans to perform it. https://github.com/a-real-ai/pywinassistant/assets/18397328/d04f0609-68fb-4fb4-9ac3-279047c7a4f7 The assistant can do anything for you - Real-time usage of planning VoT, an example of a Large Action Model. The inference is the only constraint for speed. https://github.com/a-real-ai/pywinassistant/assets/18397328/6d3bb6e6-ccf8-4380-bc89-df512ae207f2 Other demos with Real-time usage of planning VoT. November 16th 2023 live demo: (Firefox, Spotify, Notepad, Calculator, Mail) https://github.com/a-real-ai/pywinassistant/assets/18397328/ce574640-5f20-4b8e-84f9-341fa102c0e6 December 1st 2023 live demo: (Chrome, Spotify, Firefox) https://github.com/a-real-ai/pywinassistant/assets/18397328/7e0583d1-1c19-40fa-a750-a77fff98a6da Currently supporting all generalized win32api apps, meaning:
Chrome, Firefox, OperaGX, Discord, Telegram, Spotify... Key Features Dynamic Case Generator: The assistant() function accepts a goal parameter, which is a natural language command, and intelligently maps it to a series of executable actions. This allows for a seamless translation of user intentions into effective actions on the computer. Single Action Execution:
The act() function is a streamlined method for executing single, straightforward actions, enhancing the tool's efficiency and responsiveness. Advanced Context Handling: The framework is adept at understanding context through analyzing the screen and the application, ensuring that actions are carried out with an awareness of the necessary prerequisites or steps. Semantic router map: The framework has a database of a semantic router map to successfully execute generated test cases. This semantic maps can be created by other AI. Wide Application Range: From multimedia control (like playing songs or pausing playback on Spotify and YouTube) to complex actions (like creating AI-generated text, sending emails, or managing applications like Telegram or Firefox), the framework covers a broad spectrum of tasks. Customizable AI Identity: The write_action() function allows for a customizable assistant identity, enabling personalized interactions and responses that align with the user's preferences or the nature of the task. Robust Error Handling and Feedback: The framework is designed to handle unexpected scenarios gracefully, providing clear feedback and ensuring reliability. (In Overview) Projects for mood and personality: Generate or suggest now and then useful scenarios based in your mood and personality. (In Overview) Technical Innovations Natural Language Processing (NLP): Employs advanced NLP techniques to parse and understand user commands in a natural, conversational manner. Task Automation Algorithms: Utilizes sophisticated algorithms to break down complex tasks into executable steps. Context-Aware Execution: Integrates contextual awareness for more nuanced and effective task execution. Cross-Application Functionality: Seamlessly interfaces with various applications and web services, demonstrating extensive compatibility and integration capabilities. Use Cases. Automating repetitive tasks in a Windows environment. Streamlining workflows for professionals and casual users alike. Enhancing accessibility for users with different needs, enabling voice or simple text commands to control complex actions. Assisting in learning and exploration by providing AI-driven guidance and execution of tasks. Conclusion This Artificially Assisted User Interface Testing framework is a pioneering tool in the realm of desktop automation. Its ability to understand and execute a wide range of commands in a natural, intuitive manner makes it an invaluable asset for anyone looking to enhance their productivity and interaction with their Windows environment. It's not just a tool; it's a step towards a future where AI seamlessly integrates into our daily computing tasks, making technology more accessible and user-friendly. Installation ``` Add your Chat-GPT API Keys to the project: add your API Key in /core/core_api.py -> line 3: client = OpenAI(api_key='insert_your_api_key_here')
add your API Key in /core/core_imaging.py -> line 12: api_key = 'insert_your_api_key_here' Install requirements: cd pywinassistant
pip install -r .\requirements.txt Execute the assistant: cd .\core
python ./assistant.py
``` Usage Run "Assistant.py", say "Ok computer" to enable the assistant by voice commands or click to it or enable the chat to do a fast action. Use Right click above the Assistant to see the available options for the assistant. For debugging mode execute "Driver.py". Inside it, you can debug and try easily the functions of "act" which is used alongside the assistant, "fast_act" and "assistant" by using the examples.
To run a JSON test case, modify the JSON path from the "assistant" function. Working cases (on cases.py) assistant(goal=f"Play the song \'One More Time - Daft Punk\' on Spotify") # Working 100%
assistant(goal=f"Open a new tab the song \'Wall Of Eyes - The Smile\', from google search results filter by videos then play it on Firefox") # Working 100%
assistant(goal=f"Open a new tab the song \'Windows XP Error beat\', from google search results filter by videos then play it by clicking on the text on Firefox.") # Working 100%
fast_act(goal=f"Click on the Like button") # Working 100%
assistant(goal=f"Pause the music on Spotify") # Working 100%
write_action(goal="Comment about why IA is great for the current playing song", assistant_identity="You\'re an advanced music AI agent that specializes on music") # Working 100%
assistant(f"Create a long AI essay about an AI Starting to control a Windows computer on Notepad") # Working 100%
fast_act(goal="Click on the button at the bottom in HueSync app") # Working 100%
write_action(goal="Weird Fishes - Radiohead") # Working 100%
assistant(f"Open Calc and press 4 x 4 - 4 * 4 + 1 =") # Working 100%
assistant(goal=f"Open 3 new tabs on google chrome and in each of them search for 3 different types of funny dogs", keep_in_mind=" Filter the results by images.") # Working 100%
assistant(goal=f"Stop the playback from Firefox app") # Working 100%
assistant(f"Send a list of steps to make a joke about engineers whilist making it an essay to my friend Diana in Telegram") # Working 100%
assistant(f"Send a list of steps to make a chocolate cake to my saved messages in Telegram") # Working 100%
assistant(f"Create three new tabs on Firefox, in each of them search 3 different types of funny youtube bad tutorial videos, generate the titles to search.") # Working 100%
assistant(f"Write an essay about an AI that a person created to use freely the computer, like you. Write it in notepad.exe") # Working 100%
assistant(f"Send an AI joke and say it's generated by an AI to my friend Diana on Discord") # Working 100%
assistant(goal=f"Create a short greet text for the user using AI Automated Windows in notepad.exe") # Working 100%
assistant(goal=f"Open calc.exe and press 4 x 4 =") # Working 100%
assistant(goal=f"Send a mail to \'testmail@gmail.com\' with the subject \'Hello\' and generate the message \'Generate a message about how an AI is helping everyone as an users\' on the Mail app",
keep_in_mind="Press \'Tab\' tree times to navigate to the subject area. Do not combine steps.") # Need to update the app semantic map to get it working 100%.
assistant(goal=f"Play the song \'The Smile - Wall Of Eyes\' on Spotify") # Working 100%
assistant(goal=f"Play the song \'Panda Bear - Tropic of cancer\' on Spotify") # Working 100%
assistant(goal="Pause the music on the Spotify app") # Working 100%
assistant(goal=f"Open 3 new tabs with different Daft Punk songs on each of them on Firefox") # Working 100%
fast_act("Open spotify and Search the album \'Grimes - Visions\'") # Working 100%
write_action("Open spotify and Search the album \'Grimes - Visions\'") # Working 100%
fast_act("Click on the first result on spotify") # Working 100%
fast_act("Skip to the next song on Spotify") # Working 100%
fast_act("Add the album to the library") # Working 100%
fast_act("Go to Home on Spotify") # Working 100%
fast_act("Save the song to my library on Spotify") # Working 100% Current approaches to UI Testing There are three main types of GUI testing approaches, namely: Manual Testing: In manual testing, a human tester performs a set of operations to check whether the application is functioning correctly and that the graphical elements conform to the documented requirements. Manual-based testing has notable downsides in that it can be time-consuming, and the test coverage is extremely low. Additionally, the quality of testing in this approach depends on the knowledge and capabilities of the testing team. Record-and-Playback Testing: Also known as record-and-replay testing, it is executed using automation tools. The automated UI testing tool records all tasks, actions, and interactions with the application. The recorded steps are then reproduced, executed, and compared with the expected behavior. For further testing, the replay phase can be repeated with various data sets. Model-Based Testing: In this testing approach, we focus on building graphical models that describe the behavior of a system. This provides a deeper understanding of the system, which allows the tester to generate highly efficient test cases. In the models, we determine the inputs and outputs of the system, which are in turn, used to run the tests. Model-based testing works as follows: Create a model for the system
Determine system inputs
Verify the expected output
Execute tests
Check and validate system output vs. the expected output The model-based approach is great because it allows a higher level of automation. It also covers a higher number of states in the system, thereby improving the test coverage. New Approaches to UI Testing using AI Artificially Assisted User Interface Testing: Artificially Assisted User Interface Testing harnesses the power of artificial intelligence to revolutionize the process of testing graphical user interfaces. Unlike traditional methods, Artificially Assisted User Interface Testing integrates machine learning algorithms and intelligent decision-making processes to autonomously identify, analyze, and interact with UI elements. This approach significantly enhances the depth and breadth of testing in several ways: Dynamic Interaction with UI Elements: AI-driven tests can adapt to changes in the UI, such as modified button locations or altered element properties. This flexibility is achieved through the use of AI models trained to recognize and interact with various UI components, regardless of superficial changes.
Learning and Pattern Recognition: Utilizing machine learning, Artificially Assisted User Interface Testing systems can learn from previous interactions, test runs, and user feedback. This enables the AI to recognize patterns and predict potential issues, improving over time and offering more thorough testing with each iteration.
Automated Test Case Generation: The AI can generate test cases based on its understanding of the application's functionality and user behavior patterns. This not only saves time but also ensures that a wider range of scenarios is tested, including edge cases that might be overlooked in manual testing.
Natural Language Processing (NLP): AI Testing tools often incorporate NLP to interpret and execute tests written in plain language. This feature makes the testing process more accessible to non-technical stakeholders and facilitates better communication across the team.
Real-Time Feedback and Analytics: AI systems provide real-time insights into the testing process, identifying bugs, performance issues, and usability problems promptly. This immediate feedback loop enables quicker rectifications and enhances the overall quality of the product.
Predictive Analysis and Risk Assessment: By analyzing past data, Artificially Assisted User Interface Testing tools can predict potential problem areas and allocate testing resources more efficiently. This proactive approach to risk management ensures that critical issues are identified and addressed early in the development lifecycle. In conclusion, Artificially Assisted User Interface Testing represents a significant leap forward in software quality assurance. By automating and enhancing the testing process, AI-driven tools offer improved accuracy, speed, and coverage, paving the way for more reliable and user-friendly applications. Notes: This project is being updated as of start of 2024. The list of requirements is being updated.;The first open source Large Action Model generalist Artificial Narrow Intelligence that controls completely human user interfaces by only using natural language. PyWinAssistant utilizes Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models.;[] | a-real-ai/pywinassistant |
ishaan1013/sandbox;Sandbox 📦🪄 Sandbox is an open-source cloud-based code editing environment with custom AI code autocompletion and real-time collaboration. Check out the Twitter thread with the demo video! Check out this guide made by @jamesmurdza on setting it up locally! Running Locally Frontend Install dependencies bash
cd frontend
npm install Add the required environment variables in .env (example file provided in .env.example ). You will need to make an account on Clerk and Liveblocks to get API keys. Then, run in development mode bash
npm run dev Backend The backend consists of a primary Express and Socket.io server, and 3 Cloudflare Workers microservices for the D1 database, R2 storage, and Workers AI. The D1 database also contains a service binding to the R2 storage worker. Socket.io server Install dependencies bash
cd backend/server
npm install Add the required environment variables in .env (example file provided in .env.example ) Project files will be stored in the projects/<project-id> directory. The middleware contains basic authorization logic for connecting to the server. Run in development mode bash
npm run dev This directory is dockerized, so feel free to deploy a container on any platform of your choice! I chose not to deploy this project for public access due to costs & safety, but deploying your own for personal use should be no problem. Cloudflare Workers (Database, Storage, AI) Directories: /backend/database : D1 database /backend/storage : R2 storage /backend/ai : Workers AI Install dependencies ```bash
cd backend/database
npm install

cd ../storage
npm install

cd ../ai
npm install
``` Read the documentation to learn more about workers. For each directory, add the required environment variables in wrangler.toml (example file provided in wrangler.example.toml ). For the AI worker, you can define any value you want for the CF_AI_KEY -- set this in other .env files to authorize access. Run in development mode bash
npm run dev Deploy to Cloudflare with Wrangler bash
npx wrangler deploy Contributing Thanks for your interest in contributing! Review this section before submitting your first pull request. If you need any help, feel free to reach out to @ishaandey_ . Please prioritize existing issues, but feel free to contribute new issues if you have ideas for a feature or bug that you think would be useful. Structure frontend/
├── app
├── assets
├── components
└── lib
backend/
├── server
├── database/
│ ├── src
│ └── drizzle
├── storage
└── ai | Path | Description |
| ------------------ | -------------------------------------------------------------------------- |
| frontend | The Next.js application for the frontend. |
| backend/server | The Express websocket server. |
| backend/database | API for interfacing with the D1 database (SQLite). |
| backend/storage | API for interfacing with R2 storage. Service-bound to /backend/database . |
| backend/ai | API for making requests to Workers AI . | Development Fork this repo You can fork this repo by clicking the fork button in the top right corner of this page. Clone repository bash
git clone https://github.com/<your-username>/sandbox.git
cd sandbox Create a new branch bash
git checkout -b my-new-branch Commit convention Before you create a Pull Request, please check that you use the Conventional Commits format It should be in the form category(scope or module): message in your commit message from the following categories: feat / feature : all changes that introduce completely new code or new
features fix : changes that fix a bug (ideally you will additionally reference an
issue if present) refactor : any code related change that is not a fix nor a feature docs : changing existing or creating new documentation (i.e. README, docs for
usage of a lib or cli usage) chore : all changes to the repository that do not fit into any of the above
categories e.g. feat(editor): improve tab switching speed Tech stack Frontend Next.js TailwindCSS Shadcn UI Clerk Monaco Liveblocks Backend Cloudflare Workers D1 database R2 storage Workers AI Express Socket.io Drizzle ORM;A cloud-based code editing environment with an AI copilot and real-time collaboration;[] | ishaan1013/sandbox |
jiaweizzhao/GaLore;GaLore This repo contains the pre-release version of GaLore algorithm, proposed by GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection . Gradient Low-Rank Projection (GaLore) is a memory-efficient low-rank training strategy that allows full-parameter learning but is more memory-efficient than common low-rank adaptation methods, such as LoRA.
As a gradient projection method, GaLore is independent of the choice of optimizers and can be easily plugged into existing ones with only two lines of code, as shown in Algorithm 1 below. News Thanks everyone for your interest in GaLore! We are working on the official release of GaLore. In the meantime, please feel free to try the pre-release version and provide feedback to us. Currently, the pre-release version (e.g., GaLore optimizers) should provide a decent memory reduction and accurate simulation of GaLore algorithm. The official release of GaLore will include: Per-layer weight updates for multi-GPU training (DDP and FSDP) (working with PyTorch ). Memory-efficient low-rank gradient accumulation (working with PyTorch ). Optimized GaLoreAdamW8bit (working with bitsandbytes ). We would like to express our gratitude to the community members who have been actively working on integrating GaLore into different platforms, including HuggingFace , LLaMA-Factory , and Axolotl . Join our Slack workspace GaLore-Social to engage in discussions with us. Discussion (GaLore-Social) We welcome any discussions, questions, and feedback on GaLore. Please join our Slack workspace GaLore-Social to discuss with us and the community. Installation Install GaLore optimizer Install from pip: bash
pip install galore-torch or if you want to install from source: bash
git clone git@github.com:jiaweizzhao/GaLore.git
cd GaLore
pip install -e . Install experiment dependencies bash
pip install -r exp_requirements.txt Our experiment scripts are tested on Python 3.8 with PyTorch 2.1. Usage Save optimizer memory using GaLore optimizers ```python
from galore_torch import GaLoreAdamW, GaLoreAdamW8bit, GaLoreAdafactor

# define param groups as galore_params and non_galore_params
param_groups = [{'params': non_galore_params},
{'params': galore_params, 'rank': 128, 'update_proj_gap': 200, 'scale': 0.25, 'proj_type': 'std'}]
optimizer = GaLoreAdamW(param_groups, lr=0.01)
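# For reference, one possible (hypothetical) way to build the `galore_params` /
# `non_galore_params` groups used above -- this README does not prescribe it: apply GaLore
# to the 2D weight matrices of nn.Linear layers and keep everything else in a regular group.
# (assumes `model` is your torch.nn.Module)
import torch.nn as nn

galore_params = [m.weight for m in model.modules() if isinstance(m, nn.Linear)]
galore_ids = {id(p) for p in galore_params}
non_galore_params = [p for p in model.parameters() if id(p) not in galore_ids]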
``` Save weight gradient memory using per-layer weight updates We use register_post_accumulate_grad_hook provided by PyTorch ( torch>=2.1.0 ) to enable per-layer weight updates. An example is shown below: ```python
# define an optimizer for each parameter p, and store them in optimizer_dict
for p in model.parameters():
if p.requires_grad:
optimizer_dict[p] = GaLoreAdamW([{'params': p, 'rank': 128, 'update_proj_gap': 200, 'scale': 0.25, 'proj_type': 'std'}], lr=0.01)

# define a hook function to update the parameter p during the backward pass
def optimizer_hook(p):
if p.grad is None:
return
optimizer_dict[p].step()
optimizer_dict[p].zero_grad()

# Register the hook onto every parameter
for p in model.parameters():
if p.requires_grad:
p.register_post_accumulate_grad_hook(optimizer_hook)
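# note: with these per-parameter hooks, each weight is stepped and its gradient cleared during
# the backward pass itself, so the training loop would not also call a global optimizer.step();
# see torchrun_main.py (referenced below) for the full setup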
```
More details can be found in torchrun_main.py . Benchmark 1: Pre-Training LLaMA on C4 dataset torchrun_main.py is the main script for training LLaMA models on C4 with GaLore. Our benchmark scripts for various sizes of models are in scripts/benchmark_c4 folder.
For example, to train a 60m model on C4, do the following: ```bash LLaMA-60M, GaLore-Adam, 1 A100, 1 Node torchrun --standalone --nproc_per_node 1 torchrun_main.py \
--model_config configs/llama_60m.json \
--lr 0.01 \
--galore_scale 0.25 \
--rank 128 \
--update_proj_gap 200 \
--batch_size 256 \
--total_batch_size 512 \
--num_training_steps 10000 \
--warmup_steps 1000 \
--weight_decay 0 \
--dtype bfloat16 \
--eval_every 1000 \
--optimizer galore_adamw
``` Train 7B model with a single GPU with 24GB memory To train a 7B model with a single GPU such as NVIDIA RTX 4090, all you need to do is to specify --optimizer=galore_adamw8bit_per_layer , which enables GaLoreAdamW8bit with per-layer weight updates.
With activation checkpointing, you can maintain a batch size of 16 tested on NVIDIA RTX 4090. ```bash LLaMA-7B, 8-bit GaLore-Adam, single GPU, activation checkpointing bsz=16, 22.8G, torchrun --standalone --nproc_per_node 1 torchrun_main.py \
--model_config configs/llama_7b.json \
--lr 0.005 \
--galore_scale 0.25 \
--rank 1024 \
--update_proj_gap 500 \
--batch_size 16 \
--total_batch_size 512 \
--activation_checkpointing \
--num_training_steps 150000 \
--warmup_steps 15000 \
--weight_decay 0 \
--grad_clipping 1.0 \
--dtype bfloat16 \
--eval_every 1000 \
--single_gpu \
--optimizer galore_adamw8bit_per_layer
``` Currently, the per-layer weight update technique is only supported for single-GPU training ( --single_gpu ) without using nn.parallel.DistributedDataParallel . We are working on supporting multi-GPU training with per-layer weight updates. Benchmark 2: Fine-Tuning RoBERTa on GLUE tasks run_glue.py is the main script for fine-tuning RoBERTa models on GLUE tasks with GaLore. An example script is shown below: bash
python run_glue.py \
--model_name_or_path roberta-base \
--task_name mrpc \
--enable_galore \
--lora_all_modules \
--max_length 512 \
--seed=1234 \
--lora_r 4 \
--galore_scale 4 \
--per_device_train_batch_size 16 \
--update_proj_gap 500 \
--learning_rate 3e-5 \
--num_train_epochs 30 \
--output_dir results/ft/roberta_base/mrpc Citation bibtex
@misc{zhao2024galore,
title={GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection},
author={Jiawei Zhao and Zhenyu Zhang and Beidi Chen and Zhangyang Wang and Anima Anandkumar and Yuandong Tian},
year={2024},
eprint={2403.03507},
archivePrefix={arXiv},
primaryClass={cs.LG}
};GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection;[] | jiaweizzhao/GaLore |
Julien-cpsn/ATAC;ATAC ⚔📩 Table Of Contents Description How to install Install with cargo Install with Homebrew Install from Arch Install from Fedora copr Binary Compile by yourself Features Current TODO v1.0.0 TODO v2.0.0 Documentation Others Vim key-bindings NeoVim integration Technical precisions Tested on Dependencies Binary size Contributors Maintainers Packagers Star history License Description ATAC is **A**rguably a **T**erminal **A**PI **C**lient.
It is based on well-known clients such as Postman, Insomnia, or even Bruno,
but inside your terminal without any specific graphical environment needed. The philosophy of ATAC is to be free, account-less, and offline for now and forever. How to install Install with cargo [!IMPORTANT]
First, make sure your rust version is at least 1.78 Simply use: shell
cargo install atac Install from Arch You can use pacman to install: shell
pacman -S atac Install with Homebrew Simply use: bash
brew tap julien-cpsn/atac
brew install atac Install from Fedora copr Simply use: bash
dnf copr enable joxcat/atac
dnf install atac Binary The binaries from the latest release can be found here [!IMPORTANT]
Remember to run it from a terminal.
For example, you can add the binary into your PATH.
You won't be able to run it like other graphical applications since it needs CLI arguments. [!TIP]
Note for macOS users . After downloading the binary you may need to run the command sudo xattr -rd com.apple.quarantine ~/bin/atac (modify to reflect the path where atac is located). Compile by yourself [!IMPORTANT]
First, make sure your rust version is at least 1.76 Simply clone the repository and use: bash
cargo run
cargo run -- -h [!TIP]
Build the latest release bash
cargo build --release Features Current | Features | ATAC | Postman | Insomnia |
|-------------------------------------|-------------------------------------------------------------------|----------------------|----------------------|
| Manage collections & requests | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| HTTP Client | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Methods | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - GET | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - POST | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - PUT | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - PATCH | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - DELETE | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - HEAD | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - OPTIONS | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Authentication | Partial | :white_check_mark: | :white_check_mark: |
| - Basic auth | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Bearer token | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - JWT, Digest, OAuth1-2, AWS | :x: :soon: | :white_check_mark: | :white_check_mark: |
| Headers | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Body | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Multipart form | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - URL Encoded form | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - File | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Plain text | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - JSON, XML, HTML, Javascript | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Full response | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Status code | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Cookies | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Headers | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Duration | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Scripting | :white_check_mark: | Partial | :x: |
| - Pre-request script | :white_check_mark: | :x: | :x: |
| - Post-request script | :white_check_mark: | :white_check_mark: | :x: |
| Asynchronous requests | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Per-request settings | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Use proxy | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Allow redirects | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Store cookies | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Export to other languages | :x: :soon: | :white_check_mark: | :x: |
| WebSocket Client | :x: :soon: | :white_check_mark: | :white_check_mark: |
| GraphQL | :x: :soon: | :white_check_mark: | :white_check_mark: |
| gRPC | :x: :soon: | :white_check_mark: | :white_check_mark: |
| MQTT | :x: :soon: | :white_check_mark: | :x: |
| Free | :white_check_mark: | Depends | Depends |
| Lightweight, fast and efficient | :white_check_mark: | :x: | :x: |
| Data storage | Your own committable, readable and versioned files (JSON or YAML) | Tied to your account | Tied to your account |
| Offline | :white_check_mark: | :x: | :x: |
| Real-time collaboration | :x: (not planned) | :white_check_mark: | :white_check_mark: |
| Environment files and variables | :white_check_mark: (committable, readable and versioned) | :white_check_mark: | :white_check_mark: |
| View options | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Global configuration file | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - HTTP/HTTPS Proxy | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| - Disable CORS | :white_check_mark: | :x: | :x: |
| - Toggle syntax highlighting | :white_check_mark: | :x: | :x: |
| Postman v2.1.0 import | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| OpenAPI import | :x: :soon: | :white_check_mark: | :white_check_mark: | TODO v1.0.0 To add Create a repo wiki Document the whole code Reserved env variables like {{AUTO_TIMESTAMP}} or {{AUTO_UUID}} (https://github.com/Julien-cpsn/ATAC/issues/81) Cancel pending request Command line usage (send requests, add new requests) Request body syntax highlighting Export a request to other code formats ( raw , curl, PHP, JS, Rust, ...) OpenAPI & Insomnia import To improve Editing cookies Auto-completion on env file variables Manage multipart Content-type header (auto-generated for now) https://github.com/seanmonstar/reqwest/issues/2259 To fix Query parameters bug TODO v2.0.0 To add WebSocket requests Maybe GraphQL requests Maybe MQTT requests Maybe gRPC requests Ideas (will think about it later) Base URL property and authorization on collections VScode plugin to see and send requests Scoop installation Documentation [!NOTE]
Documentation will soon be published in the wiki section Others Vim key-bindings You can read more about it here: https://github.com/Julien-cpsn/ATAC/releases/tag/v0.14.0 NeoVim integration Thanks to @NachoNievaG you can have an ATAC floating window inside your nvim
https://github.com/NachoNievaG/atac.nvim Technical precisions Tested on Console Host Windows 11 (Pro) WSL2 Debian Windows 10 (Pro) Windows 8.1 (N) Ubuntu Desktop Terminal Ubuntu 17.10 Pop!_OS 20.04 (Arch, Manjaro) KDE Konsole (Arch, NixOS) Kitty Linux Mint (OpenSuse) Alacritty (Chrome OS) Crostini (List from here ) Dependencies | Library | Version | Reason |
|--------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------|----------------------------------------------------------------------------------------|
| reqwest & reqwest cookie store | 0.11.27 & 0.6.0 | Send requests |
| ratatui | 0.26.2 | Terminal UI framework |
| crossterm | 0.27.0 | Terminal Backend |
| crokey | 0.6.4 | Used to parse, use key bindings files and some utilities |
| tui-big-text | 0.4.2 | Display big texts. Only used for displaying ATAC in the homepage. |
| tui-tree-widget | 0.19.0 | Display tree-like lists. Used for displaying the collections. |
| tui-textarea | 0.5.0 | Text area that handle a lot of features. Used for editing request body. |
| throbber-widgets-tui | 0.4.1 | Display loading UI elements. Used when request is pending. |
| ratatui-image | 1.0.0 | Display response images. |
| image | 0.24.9 | Decode images. |
| syntect | 5.2.0 | Syntax highlighting |
| serde ( serde_json , serde-yaml ) | 1.0.197 (1.0.144, 0.9.34) | Serialize & Deserialize application data into files |
| jsonxf | 0.1.1 | Pretty print JSON |
| toml | 0.8.11 | Serialize & Deserialize application config files |
| boa_engine | 0.18.0 | Create Javascript runtimes. Used for pre and post request scripts |
| My fork of postman_collection | 0.2.1 | Deserialize Postman collection files |
| curl-parser | 0.3.1 | Parse cURL request files |
| clap | 4.5.0 | Command Line Argument Parser |
| dirs | 5.0.1 | Use system files |
| arboard | 3.3.2 | Copy response body to clipboard |
| tokio | 1.0.0 | Handle asynchronous requests |
| parking_lot | 0.12.2 | Smaller, faster and more flexible implementation of RwLock and Mutex. Used everywhere. |
| strum | 0.26.2 | Enum facilities |
| lazy_static | 1.4.0 | Allows for more flexible constants. Mainly used for accessing CLI arguments everywhere |
| nestify | 0.3.3 | Used to nest struct definitions |
| walkdir | 2.5.0 | Recursively retrieve files |
| snailquote | 0.3.1 | Unescape string |
| indexmap | 2.2.6 | Ordered hashmap. Used in environments to preserve files' values order |
| base64 | 0.22.0 | Encode auth. |
| regex | 1.10.3 | Regex. Using for parsing requests URL | Binary size The binary file size goes from ~4.5 MB to ~7 MB depending on the platform. I try to keep it as small as possible. Contributors Maintainers @julien-cpsn Packagers Cargo, Brew - @julien-cpsn Arch - @orhun Fedora copr - @joxcat Star history License The MIT license for this project can be seen here;A simple API client (postman like) in your terminal;api,insomnia,postman,tui,rust,api-client,ratatui | Julien-cpsn/ATAC |
glasskube/glasskube;🧊 The next generation Package Manager for Kubernetes 📦 (Beta Version) Getting started » Explore our website » GitHub . Docker Hub . Artifact Hub . LinkedIn . Twitter / X 📦 What is Glasskube? Glasskube is an Open Source package manager for Kubernetes .
It makes deploying, updating, and configuring packages on Kubernetes 20 times faster than tools like Helm or Kustomize .
Inspired by the simplicity of Homebrew and npm. You can decide if you want to use the Glasskube UI, CLI, or directly deploy packages via GitOps. ⭐️ Why Glasskube? We have been working in the Kubernetes ecosystem for over five years.
During this time, we have consistently struggled with package management, configuration, and distribution.
We've spent countless hours templating and writing documentation for commands and concepts that were difficult to grasp. In contrast, tools like Homebrew, apt, and dnf felt easy to use and rarely caused problems.
While we worked on other cloud-native projects, our users consistently highlighted several common pain points.
This realization prompted us to tackle the larger issue of package management in Kubernetes, leading to the development of Glasskube. 🗄️ Table Of Contents Features Quick Start How to install your first package Supported Packages Architecture Diagram Need help? Related projects How to Contribute Supported by Activity License ✨ Features | | |
|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------|
| Focusing on simplicity and reliability with our CLI and UI Easily install packages in your cluster via the Glasskube UI, where all packages are conveniently located, eliminating the need to search for a Helm repository. | |
| Package configurations Configure packages with typesafe input values via the UI or interactive CLI questionnaire. Inject values from other packages, ConfigMaps, and Secrets easily. No more untyped and undocumented values.yaml files. | |
| Dependency Management Glasskube packages are dependency aware, so they can be used and referenced by multiple other packages. They will also get installed in the correct namespace. This is how umbrella charts should have worked from the beginning. | |
| Safe Package Updates Preview and perform pending updates to your desired version with a single click (or CLI command). All updates are pre-tested by the Glasskube test suite. | |
| Reactions and comments Discuss and upvote your favorite Kubernetes package on GitHub or right inside the Glasskube UI. | |
| GitOps Integration All Glasskube packages are custom resources, manageable via GitOps. We're also integrating with renovate | |
| Multiple Repositories and private packages Use multiple repositories and publish your own private packages. These could be your company's internal service packages, so all developers will have up-to-date and easily configured internal services. | | 🚀 Quick Start - Install the Beta Version. You can install Glasskube via Homebrew : bash
brew install glasskube/tap/glasskube For other installation options check out our installation guide . Once the CLI is installed, the first step is to install the necessary components in your cluster. To do that, run sh
glasskube bootstrap After successfully bootstrapping your cluster, you are ready to start the package manager UI: bash
glasskube serve This command will open http://localhost:8580 in your default browser.
Congratulations, you can now explore and install all our available packages! 🎉 🎬 Glasskube Demo Video 📦 Supported Packages Glasskube already supports a wide range of packages, including, but not limited to: Kubernetes Dashboard kubernetes/dashboard Cert Manager cert-manager/cert-manager Ingress-NGINX Controller kubernetes/ingress-nginx Kube Prometheus Stack prometheus-operator/kube-prometheus Cloud Native PG cloudnative-pg/cloudnative-pg You can find all supported and planned packages on glasskube.dev/packages . Architecture Diagram ```mermaid title: glasskube install [package] flowchart BT
UI([UI])-- via local server http://localhost:8580 ---Client(Client)
CLI([CLI])-- cobra cli ---Client
Client-- 1. validate package -->Repo[(Package Repo)]
Client-- 2. create Package CR -->Kubernetes(((Kubernetes API)))
subgraph Cluster
Kubernetes-- 3. reconcile Package -->PackageController
PackageController-- 4. create PackageInfo if not present-->Kubernetes
Kubernetes-- 5. reconcile PackageInfo -->PackageInfoController
end
PackageInfoController<-- 6. update package manifest -->Repo
subgraph Cluster
PackageInfoController-- 7. update manifest in PackageInfo -->Kubernetes
Kubernetes-- 8. reconcile PackageInfo -->PackageController
PackageController-- 9. deploy package -->Kubernetes
end Kubernetes-- 10. package status -->Client
``` ☝️ Need Help or Want to Provide Feedback? If you encounter any problems, we will be happy to support you wherever we can on our Discord .
For bugs, issues, or feature requests, feel free to open an issue .
We are happy to assist you with anything related to the project. 📎 Related Projects Glasskube Apps Operator glasskube/operator 🤝 How to Contribute to Glasskube Your feedback is invaluable to us as we continue to improve Glasskube. If you'd like to contribute, consider trying out the beta version, reporting any issues, and sharing your suggestions. See the contributing guide for detailed instructions on how you can contribute. 🤩 Thanks to all our Contributors Thanks to everyone who is supporting this project. We are thankful for every contribution, no matter its size! 👾 Activity 📘 License Glasskube is licensed under the Apache 2.0 license. For more information, check the LICENSE file for details.;🧊 The next generation Package Manager for Kubernetes 📦 Featuring a GUI and a CLI. Glasskube packages are dependency aware, GitOps ready and can get automatic updates via a central public package repository.;cloud-native,glasskube,golang,kubernetes,package-manager,devops,devops-tools,go,package,cobra | glasskube/glasskube
Canner/WrenAI;Wren AI Wren AI is a text-to-SQL solution for data teams to get results and insights faster by asking business questions without writing SQL. 🕶 Try It Live! Play around with Wren AI yourself! 🎯 Our Vision & Mission Wren AI’s mission is to democratize data by bringing text-to-SQL ability to any data source and industry. We believe that breakthroughs in Text-to-SQL technology will usher in a new era of Data Democratization. 🤩 About our Vision 🙌 About our Mission 👊 Text-to-SQL End-To-End Solution 1. Indexing With Semantics Wren AI has implemented a semantic engine architecture to provide the LLM context of your business; you can easily establish a logical presentation layer on your data schema that helps LLM learn more about your business context. 2. Augment LLM Prompts With Wren AI, you can process metadata, schema, terminology, data relationships, and the logic behind calculations and aggregations with “Modeling Definition Language” (MDL) , reducing duplicate coding and simplifying data joins. 3. Generate Insights When starting a new conversation in Wren AI, your question is used to find the most relevant tables. From these, LLM generates three relevant questions for the user to choose from. You can also ask follow-up questions to get deeper insights. 4. Self-Learning Feedback Loop (Coming Soon) The AI self-learning feedback loop is designed to refine SQL augmentation and generation by collecting data from various sources. These include user query history, revision intentions, feedback, schema patterns, semantics enhancement, and query frequency. 🔥 Preview Ask your business questions and follow-up insights Modeling with semantics, such as relationships, metrics, and calculations 🤔 Why Wren AI? We focus on providing an open, secure, and reliable text-to-SQL solution for everyone. 1. Turnkey Solution Wren AI makes it easy to onboard your data. Discover and analyze your data with our user interface. Effortlessly generate results without needing to code. 2. Secure By Design Your database contents will never be transmitted to the LLM. Only metadata, like schemas, documentation, and queries, will be used in semantic search. 3. Open-Source Deploy Wren AI anywhere you like on your own data, LLM APIs, and environment, it's free. 🤖 Wren AI's Architecture Wren AI consists of three core services: Wren UI : An intuitive user interface for asking questions, defining data relationships, and integrating data sources within Wren AI's framework. Wren AI Service : Processes queries using a vector database for context retrieval, guiding LLMs to produce precise SQL outputs. Wren Engine : Serves as the semantic engine, mapping business terms to data sources, defining relationships, and incorporating predefined calculations and aggregations. 🤩 Learn More About Text-to-SQL The new wave of Composable Data Systems and the Interface to LLM agents How do you use OpenAI GPT-4o to query your database? Top 4 Challenges using RAG with LLMs to Query Database (Text-to-SQL) and how to solve it. How we design our semantic engine for LLMs? The backbone of the semantic layer for LLM architecture. How do you use LangChain to build a Text-to-SQL solution? What are the challenges? How to solve it? Deep dive into how Pinterest built its Text-to-SQL solution. How Snowflake building the most powerful SQL LLM in the world How to directly access 150k+ Hugging Face Datasets with DuckDB and query using GPT-4o 🚧 Project Status Wren AI is currently in alpha version . 
The project team is actively working on it and aims to release new versions at least biweekly. 🚀 Getting Started Using Wren AI is super simple: you can set it up within 3 minutes and start interacting with your own data! Visit our Installation Guide of Wren AI . Visit the Usage Guides to learn more about how to use Wren AI. 📚 Documentation Visit Wren AI documentation to view the full documentation. ⭐️ Community Welcome to our Discord server to give us feedback! If there are any issues, please visit GitHub Issues . Do note that our Code of Conduct applies to all Wren AI community channels. Users are highly encouraged to read and adhere to it to avoid repercussions.;Wren AI makes your database RAG-ready. Implement Text-to-SQL more accurately and securely.;bigquery,duckdb,llm,openai,postgresql,rag,text-to-sql,ai,sql,python | Canner/WrenAI
linyiLYi/voice-assistant;A Simple Voice Assistant Script English | 简体中文 This is a simple Python script project that allows dialogue with a local large language model through voice. The voice recognition part of this project is from the Apple MLX example repo , and the textual responses are generated using the Yi model from 01.AI . For more details, see the Acknowledgments section. File Structure bash
├───main.py
├───models
├───prompts
├───recordings
├───tools
│ └───list_microphones.py
├───whisper This project is a single-script project, with main.py containing all program logic. The models/ folder stores model files. prompts/ contains prompt words. recordings/ holds temporary recordings. tools/list_microphones.py is a simple script to view the microphone list, used in main.py to specify the microphone number. whisper/ is from the Apple MLX example repo , used for recognizing user's voice input. Installation Guide This project is based on the Python programming language, and the Python version used for program operation is 3.11.5. It is recommended to configure the Python environment using Anaconda . The following setup process has been tested and passed on macOS systems. Windows and Linux can use speech_recognition and pyttsx3 to replace the whisper and say commands mentioned below. The following are console/terminal/shell commands. Environment Configuration ```
conda create -n VoiceAI python=3.11
conda activate VoiceAI
pip install -r requirements.txt
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python Install audio processing tools brew install portaudio
pip install pyaudio
``` Model Files The model files are stored in the models/ folder and specified in the script via the MODEL_PATH variable.
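For orientation only, below is a minimal, hedged sketch of how such a gguf chat model can be loaded and queried with llama-cpp-python; the file name and generation parameters are illustrative assumptions, and main.py resolves the real location via MODEL_PATH rather than hard-coding it:

```python
from llama_cpp import Llama

# Illustrative path only; the script reads the actual location from MODEL_PATH.
llm = Llama(model_path="models/yi-6b-chat.Q8_0.gguf", n_ctx=2048)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful voice assistant."},
        {"role": "user", "content": "What's the weather usually like in spring?"},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```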
It is recommended to download the gguf format models from TheBloke and XeIaso, where the 6B model has a smaller memory footprint:
- TheBloke/Yi-34B-Chat-GGUF - XeIaso/Yi-6B-Chat-GGUF The voice recognition model is by default stored in models/whisper-large-v3/ , specified in the script via WHISP_PATH . The version converted by mlx-community can be directly downloaded. Acknowledgments The voice recognition part of this project is based on OpenAI's whisper model, and its implementation comes from the Apple MLX example repo . The version used in this project is from January 2024, #80d1867. In the future, users can fetch new versions as needed. The responses in this project are generated by the large language model Yi from 01.AI , where Yi-34B-Chat is more powerful. The 8-bit quantized version made by TheBloke has a memory footprint of 39.04 GB
and is recommended for use if hardware conditions permit. This model runs locally based on the LangChain framework and llama.cpp by Georgi Gerganov . Thank you to all selfless programmers for their contributions to the open-source community!;A simple toy demo of a local voice assistant with whisper and large language model.;[] | linyiLYi/voice-assistant |
sdg-1/consulting-handbook;The Consulting Handbook This guide is meant to help data engineers, software engineers, analysts, data scientists and other technical specialists break into consulting. Also, I added in a (Free) or (Paid) tag to denote whether a resource requires a purchase or not. Table Of Contents Consulting Templates Youtube Channels Starting Your Consulting Company Path Books Podcasts Newsletters Communities Resources Consulting Templates Discovery Call Template(Free) Proposal Email(Free) Proposal Template(Free) Basic Deck(Free) Project Kick-Off Email(Free) Onboarding Checklist(Free) Data Infra Audit Template(Paid) Project Update Email(Free) Original Source For Templates Above(Free) Youtube Channels Sales Sales Insights Lab(Free) Patrick Dang(Free) Communication Firm Learning(Free) Starting Your Consulting Company Path Make the jump! Pick Your Niche(Free) Legal Setup(Free) Get Insurance(Free) | Insurance type | Who it's for | What it does |
|--------------------------------|-------------------------------------------------------|--------------|
| General liability insurance | Any business | This coverage protects against financial loss as the result of bodily injury, property damage, medical expenses, libel, slander, defending lawsuits, and settlement bonds or judgments. |
| Product liability insurance | Businesses that manufacture, wholesale, distribute, and retail a product | This coverage protects against financial loss as a result of a defective product that causes injury or bodily harm. |
| Professional liability insurance | Businesses that provide services to customers | This coverage protects against financial loss as a result of malpractice, errors, and negligence. |
| Commercial property insurance | Businesses with a significant amount of property and physical assets | This coverage protects your business against loss and damage of company property due to a wide variety of events such as fire, smoke, wind and hail storms, civil disobedience and vandalism. |
| Home-based business insurance | Businesses that are run out of the owner’s personal home | Coverage that’s added to homeowner’s insurance as a rider can offer protection for a small amount of business equipment and liability coverage for third-party injuries. |
| Business owner’s policy | Most small business owners, but especially home-based business owners | A business owner’s policy is an insurance package that combines all of the typical coverage options into one bundle. They simplify the insurance buying process and can save you money. | Building A Website and Online Presence(Free) How To Get Clients(Free) What Should You Charge As A Data Consultant(Free) Client Acquisition - Networking #TODO Write this article Client Acquisition - Marketing(Free) Client Acquisition - Sales(Free) Client Acquisition - Vendor Partnerships(Paid) Client Acquisition - Referrals(Paid) Develop a Sales Pitch(Free) Consulting vs contracting - Everything you need to know Pricing Strategy(Paid) Consulting Vs Contracting(Paid) How To Kill Your Profits As A Consultant(Free) Are You Ready To Start Consulting?(Paid) Marketing Deeper Dive - Content That's Landed Me Clients Example - Why Data Engineers LOVE/HATE Airflow(Free) - 4 Alternatives to Fivetran: The Evolving Dynamics of the ETL & ELT Tool Market(Free) - Do You REALLY Need A Data Warehouse - What Are The Benefits Of A Data Warehouse?(Free) - Linkedin - How To Create Viral Posts(Paid) Books Business And Consulting Million Dollar Consulting: The Professional's Guide to Growing a Practice(Paid) Blue Ocean Strategy, Expanded Edition: How to Create Uncontested Market Space and Make the Competition Irrelevant(Paid) $100M Offers: How to Make Offers So Good People Feel Stupid Saying No(Paid) Oversubscribed: How to Get People Lined Up to Do Business with You(Paid) The Trusted Advisor(Paid) Leadership The Five Dysfunctions of a Team: A Leadership Fable(Paid) The Advice Trap: Be Humble, Stay Curious & Change the Way You Lead Forever(Paid) The Making of a Manager: What to Do When Everyone Looks to You(Paid) Communication Storyworthy: Engage, Teach, Persuade, and Change Your Life through the Power of Storytelling(Paid) HBR's 10 Must Reads on Communication(Paid) Marketing ‘The Art and Business of Online Writing: How to Beat the Game of Capturing and Keeping Attention’ by Nicolas Cole.(Paid) Project Examples Consulting Project Example - Embedded Analytics Discovery Project(Free) Consulting Project Example - The Basic Data Stack(Paid) My First 6-Figure Consulting Project(Free) Some Consultants Stories How To Deliver Value To Clients - Talking To Matt Chung About Quitting Amazon To Consult(Free) How To Lead Technology Transformations In Non-Profits - With Justin Birdsong(Free) Starting A Data Consulting Company - A Live Chat With Alex The Analyst(Free) Dealing With Messy Data Infrastructure And Migrations(Free) Podcasts How I Built This(Free) The Business of Authority(Free) Soft skills Podcast(Free) The McKinsey Podcast(Free) Newsletters Technical Freelancer Academy(Free) The Hustle(Free) CB Insights(Free) Communities Technical Freelancer Community Courses Technical Freelancer Academy(Paid);A guide for technical professionals looking to start consulting;career,consulting,data,software | sdg-1/consulting-handbook
cloudflare/foundations;Foundations Foundations is a modular Rust library, designed to help scale programs for distributed,
production-grade systems. It enables engineers to concentrate on the core business logic
of their services, rather than the intricacies of production operation setups. If you need any of those: logging distributed tracing metrics memory profiling and jemalloc allocator security features, such as seccomp -based syscall sandboxing service configuration with documentation CLI helper that takes care of the configuration loading then Foundations is a tool of choice for you. Documentation https://docs.rs/foundations/ Examples https://github.com/cloudflare/foundations/tree/main/examples License BSD-3 licensed. See the LICENSE file for details.;Cloudflare's Rust service foundations library.;[] | cloudflare/foundations |
ZHO-ZHO-ZHO/ComfyUI-InstantID;ComfyUI InstantID Unofficial implementation of InstantID for ComfyUI pose_ref Project Introduction | Info An unofficial implementation of InstantID Version: V2.0, with support for pose reference images Video Demo V2.0 https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID/assets/140084057/083c9e5e-06a0-4623-b5ac-05f7e85a74f2 V1.0 https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID/assets/140084057/7295c0d7-1d1b-4044-aea3-8efa67047362 Node Descriptions | Features Base model loading | base model loader 📷ID Base Model Loader from hub 🤗: supports automatic model download from the huggingface hub; just enter the model name (e.g. wangqixun/YamerMIX_v8) 📷ID Base Model Loader locally: supports loading local models (an SDXL-family model is required) InsightFace model loading | 📷InsightFace Loader: supports CUDA and CPU ID ControlNet model loading | 📷ID ControlNet Loader controlnet_path: path to the ID ControlNet model Ipadapter_instantid model loading | 📷Ipadapter_instantid Loader Ipadapter_instantid_path: model path filename: model name Prompt + Style | 📷ID Prompt_Styler Compatible with various prompt (text) inputs (such as Portrait Master), stylers, and Photomaker Prompt_Styler prompt, negative: positive and negative prompts style_name: supports the 8 official styles (No style) Watercolor Film Noir Neon Jungle Mars Vibrant Color Snow Line art InstantID generation | 📷InstantID Generation 🆕 face_image: face reference image pipe: model input insightface: insightface model input 🆕 pose_image_optional (optional): pose reference image (note: it only affects the pose around the face, unlike regular openpose) positive, negative: positive and negative prompts ip_adapter_scale: IPA strength controlnet_conditioning_scale: ID ControlNet strength step: number of steps, the official default is 30 guidance_scale: prompt adherence, usually defaults to 5 enhance_face_region: face enhancement option 🆕 seed: random seed Styles Install Recommended: install via ComfyUI Manager (On the Way) Manual install: cd custom_nodes git clone https://github.com/ZHO-ZHO-ZHO/ComfyUI-InstantID.git cd custom_nodes/ComfyUI-InstantID pip install -r requirements.txt Restart ComfyUI How to Use Download config.json and diffusion_pytorch_model.safetensors from InstantID/ControlNetModel and fill the model path into the 📷ID ControlNet Loader node (e.g. ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints/controlnet) Download ip-adapter.bin from InstantID/ip-adapter and fill its path into the 📷Ipadapter_instantid Loader node (e.g. ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints) Download all models from DIAMONIK7777/antelopev2 and place them in ComfyUI/custom_nodes/ComfyUI-InstantID/models/antelopev2 Compatibility: CUDA 11 works with the default onnxruntime-gpu (1.16.0); for CUDA 12 you need to manually install onnxruntime-gpu==1.17.0 (link) Workflows V2.0 V2.0 InstantID_pose_ref + ArtGallery V2.0 auto-download from the huggingface hub V2.0 InstantID_locally_pose_ref V1.0 The workflows below only apply to V1.0 V1.0 InstantID + ArtGallery V1.0 local models (locally) V1.0 auto-download from the huggingface hub Changelog 20240124 Updated to V2.0: added pose reference images and optimized the code Fixed the insightfaceloader conflict issue Fixed the onnxruntime-gpu version compatibility issue 20240123 V1.0 released: supports both local and huggingface-hub-hosted models, supports 8 styles 20240122 Project created Speed Test | Speed V1.0 A100 50 steps 14s Stars About Me 📬 Contact me:
- Email: zhozho3965@gmail.com
- QQ group: 839821928 🔗 Social media:
- Personal page: -Zho- - Bilibili: my Bilibili homepage - X (Twitter): my Twitter - Xiaohongshu: my Xiaohongshu homepage 💡 Support me:
- Bilibili: charge me on Bilibili - Afdian: support me on Afdian Credits InstantID 📷InsightFace Loader code adapted from ComfyUI_IPAdapter_plus , thanks @cubiq ! Thanks to @hidecloud for testing and giving feedback on onnxruntime version compatibility! Thanks to the esheep technical staff for reporting the node conflict issue!;Unofficial implementation of InstantID for ComfyUI;comfyui,stable-diffusion | ZHO-ZHO-ZHO/ComfyUI-InstantID
alexpinel/Dot;Visit the Dot Website Important note: Voice input is only supported for MacOS, Windows support will be added soon! 🚀 About Dot Dot is a standalone, open-source application designed for seamless interaction with documents and files using local LLMs and Retrieval Augmented Generation (RAG). It is inspired by solutions like Nvidia's Chat with RTX, providing a user-friendly interface for those without a programming background. Using the Phi-3 LLM by default, Dot ensures accessibility and simplicity right out of the box. https://github.com/alexpinel/Dot/assets/93524949/28b26128-7bdc-4bf7-99f3-d09bd8431875 📜 What does it do? Dot allows you to load multiple documents into an LLM and interact with them in a fully local environment. Supported document types include PDF, DOCX, PPTX, XLSX, and Markdown. Users can also engage with Big Dot for inquiries not directly related to their documents, similar to interacting with ChatGPT. 🔧 How does it work? Built with Electron JS, Dot encapsulates a comprehensive Python environment that includes all necessary libraries. The application leverages libraries such as FAISS for creating local vector stores, Langchain, llama.cpp & Huggingface for setting up conversation chains, and additional tools for document management and interaction. 📥 Install To use Dot: - Visit the Dot website to download the application for Apple Silicon or Windows. For developers: - Clone the repository $ https://github.com/alexpinel/Dot.git - Install Node js and then run npm install inside the project repository, you can run npm install --force if you face any issues at this stage Now, it is time to add a full python bundle to the app. The purpose of this is to create a distributable environment with all necessary libraries, if you only plan on using Dot from the console you might not need to follow this particular step but then make sure to replace the python path locations specified in src/index.js . Creating the python bundle is covered in detail here: https://til.simonwillison.net/electron/python-inside-electron , the bundles can also be installed from here: https://github.com/indygreg/python-build-standalone/releases/tag/20240224 Having created the bundle, please rename it to 'python' and place it inside the llm directory. It is now time to get all necessary libraries, keep in mind that running a simple pip install will not work without specifying the actual path of the bundle so use this instead: path/to/python/.bin/or/.exe -m pip install Required python libraries:
- pytorch link (CPU version recommended as it is lighter than GPU)
- langchain link - FAISS link - HuggingFace link - llama.cpp link (Use CUDA implementation if you have an Nvidia GPU!)
- pypdf link - docx2txt link - Unstructured link (Use pip install "unstructured[pptx, md, xlsx] for the file formats) Now python should be setup and running! However, there is still a few more steps left, now is the time to add the final magic to Dot! First, create a folder inside the llm directory and name it mpnet , there you will need to install sentence-transformers to use for the document embeddings, fetch all the files from the following link and place them inside the new folder: sentence-transformers/all-mpnet-base-v2 Finally, download the Mistral 7B LLM from the following link and place it inside the llm/scripts directory alongside the python scripts used by Dot: TheBloke/Mistral-7B-Instruct-v0.2-GGUF That's it! If you follow these steps you should be able to get it all running, please let me know if you are facing any issues :) 🌟 Future Features I'd Like to Add Linux support Choice of LLM - Done! Image file support Enhanced document awareness beyond content Simplified file loading (select individual files, not just folders) Increased security measures for using local LLMs Support for additional document types Efficient file database management for quicker access to groups of files 🤝 Want to Help? Contributions are highly encouraged! As a student managing this project on the side, any help is greatly appreciated. Whether it's coding, documentation, or feature suggestions, please feel free to get involved! Star History Back to top;Text-To-Speech, RAG, and LLMs. All local!;embeddings,llm,local,rag,standalone,standalone-app,document-chat,faiss,langchain,privategpt | alexpinel/Dot |
CopilotC-Nvim/CopilotChat.nvim;Copilot Chat for Neovim [!NOTE]
Plugin was rewritten to Lua from Python. Please check the migration guide from version 1 to version 2 for more information. Prerequisites Ensure you have the following installed: Neovim stable (0.9.5) or nightly . Optional: tiktoken_core: sudo luarocks install --lua-version 5.1 tiktoken_core . Alternatively, download a pre-built binary from lua-tiktoken releases You can check your Lua PATH in Neovim by doing :lua print(package.cpath) . Save the binary as tiktoken_core.so in any of the given paths. Installation Lazy.nvim lua
return {
{
"CopilotC-Nvim/CopilotChat.nvim",
branch = "canary",
dependencies = {
{ "zbirenbaum/copilot.lua" }, -- or github/copilot.vim
{ "nvim-lua/plenary.nvim" }, -- for curl, log wrapper
},
opts = {
debug = true, -- Enable debugging
-- See Configuration section for rest
},
-- See Commands section for default commands if you want to lazy load on them
},
} See @jellydn for configuration Vim-Plug Similar to the lazy setup, you can use the following configuration: ```vim
call plug#begin()
Plug 'zbirenbaum/copilot.lua'
Plug 'nvim-lua/plenary.nvim'
Plug 'CopilotC-Nvim/CopilotChat.nvim', { 'branch': 'canary' }
call plug#end() lua << EOF
require("CopilotChat").setup {
debug = true, -- Enable debugging
-- See Configuration section for rest
}
EOF
``` Manual Put the files in the right place ```
mkdir -p ~/.config/nvim/pack/copilotchat/start
cd ~/.config/nvim/pack/copilotchat/start git clone https://github.com/zbirenbaum/copilot.lua
git clone https://github.com/nvim-lua/plenary.nvim git clone -b canary https://github.com/CopilotC-Nvim/CopilotChat.nvim
``` Add to your configuration (e.g. ~/.config/nvim/init.lua ) lua
require("CopilotChat").setup {
debug = true, -- Enable debugging
-- See Configuration section for rest
} See @deathbeam for configuration Usage Commands :CopilotChat <input>? - Open chat window with optional input :CopilotChatOpen - Open chat window :CopilotChatClose - Close chat window :CopilotChatToggle - Toggle chat window :CopilotChatStop - Stop current copilot output :CopilotChatReset - Reset chat window :CopilotChatSave <name>? - Save chat history to file :CopilotChatLoad <name>? - Load chat history from file :CopilotChatDebugInfo - Show debug information Commands coming from default prompts :CopilotChatExplain - Write an explanation for the active selection as paragraphs of text :CopilotChatReview - Review the selected code :CopilotChatFix - There is a problem in this code. Rewrite the code to show it with the bug fixed :CopilotChatOptimize - Optimize the selected code to improve performance and readablilty :CopilotChatDocs - Please add documentation comment for the selection :CopilotChatTests - Please generate tests for my code :CopilotChatFixDiagnostic - Please assist with the following diagnostic issue in file :CopilotChatCommit - Write commit message for the change with commitizen convention :CopilotChatCommitStaged - Write commit message for the change with commitizen convention API ```lua
local chat = require("CopilotChat") -- Open chat window
chat.open() -- Open chat window with custom options
chat.open({
window = {
layout = 'float',
title = 'My Title',
},
}) -- Close chat window
chat.close() -- Toggle chat window
chat.toggle() -- Toggle chat window with custom options
chat.toggle({
window = {
layout = 'float',
title = 'My Title',
},
}) -- Reset chat window
chat.reset() -- Ask a question
chat.ask("Explain how it works.") -- Ask a question with custom options
chat.ask("Explain how it works.", {
selection = require("CopilotChat.select").buffer,
}) -- Ask a question and do something with the response
chat.ask("Show me something interesting", {
callback = function(response)
print("Response:", response)
end,
}) -- Get all available prompts (can be used for integrations like fzf/telescope)
local prompts = chat.prompts() -- Get last copilot response (also can be used for integrations and custom keymaps)
local response = chat.response() -- Pick a prompt using vim.ui.select
local actions = require("CopilotChat.actions") -- Pick help actions
actions.pick(actions.help_actions()) -- Pick prompt actions
actions.pick(actions.prompt_actions({
selection = require("CopilotChat.select").visual,
}))
``` Configuration Default configuration Also see here : ```lua
{
debug = false, -- Enable debug logging
proxy = nil, -- [protocol://]host[:port] Use this proxy
allow_insecure = false, -- Allow insecure server connections system_prompt = prompts.COPILOT_INSTRUCTIONS, -- System prompt to use
model = 'gpt-4', -- GPT model to use, 'gpt-3.5-turbo' or 'gpt-4'
temperature = 0.1, -- GPT temperature question_header = '## User ', -- Header to use for user questions
answer_header = '## Copilot ', -- Header to use for AI answers
error_header = '## Error ', -- Header to use for errors
separator = '───', -- Separator to use in chat show_folds = true, -- Shows folds for sections in chat
show_help = true, -- Shows help message as virtual lines when waiting for user input
auto_follow_cursor = true, -- Auto-follow cursor in chat
auto_insert_mode = false, -- Automatically enter insert mode when opening window and if auto follow cursor is enabled on new prompt
clear_chat_on_new_prompt = false, -- Clears chat on every new prompt
highlight_selection = true, -- Highlight selection in the source buffer when in the chat window context = nil, -- Default context to use, 'buffers', 'buffer' or none (can be specified manually in prompt via @).
history_path = vim.fn.stdpath('data') .. '/copilotchat_history', -- Default path to stored history
callback = nil, -- Callback to use when ask response is received -- default selection (visual or line)
selection = function(source)
return select.visual(source) or select.line(source)
end, -- default prompts
prompts = {
Explain = {
prompt = '/COPILOT_EXPLAIN Write an explanation for the active selection as paragraphs of text.',
},
Review = {
prompt = '/COPILOT_REVIEW Review the selected code.',
callback = function(response, source)
-- see config.lua for implementation
end,
},
Fix = {
prompt = '/COPILOT_GENERATE There is a problem in this code. Rewrite the code to show it with the bug fixed.',
},
Optimize = {
prompt = '/COPILOT_GENERATE Optimize the selected code to improve performance and readability.',
},
Docs = {
prompt = '/COPILOT_GENERATE Please add documentation comment for the selection.',
},
Tests = {
prompt = '/COPILOT_GENERATE Please generate tests for my code.',
},
FixDiagnostic = {
prompt = 'Please assist with the following diagnostic issue in file:',
selection = select.diagnostics,
},
Commit = {
prompt = 'Write commit message for the change with commitizen convention. Make sure the title has maximum 50 characters and message is wrapped at 72 characters. Wrap the whole message in code block with language gitcommit.',
selection = select.gitdiff,
},
CommitStaged = {
prompt = 'Write commit message for the change with commitizen convention. Make sure the title has maximum 50 characters and message is wrapped at 72 characters. Wrap the whole message in code block with language gitcommit.',
selection = function(source)
return select.gitdiff(source, true)
end,
},
}, -- default window options
window = {
layout = 'vertical', -- 'vertical', 'horizontal', 'float', 'replace'
width = 0.5, -- fractional width of parent, or absolute width in columns when > 1
height = 0.5, -- fractional height of parent, or absolute height in rows when > 1
-- Options below only apply to floating windows
relative = 'editor', -- 'editor', 'win', 'cursor', 'mouse'
border = 'single', -- 'none', single', 'double', 'rounded', 'solid', 'shadow'
row = nil, -- row position of the window, default is centered
col = nil, -- column position of the window, default is centered
title = 'Copilot Chat', -- title of chat window
footer = nil, -- footer of chat window
zindex = 1, -- determines if window is on top or below other floating windows
}, -- default mappings
mappings = {
complete = {
detail = 'Use @ or / for options.',
insert =' ',
},
close = {
normal = 'q',
insert = ' '
},
reset = {
normal =' ',
insert = ' '
},
submit_prompt = {
normal = ' ',
insert = ' '
},
accept_diff = {
normal = ' ',
insert = ' '
},
yank_diff = {
normal = 'gy',
},
show_diff = {
normal = 'gd'
},
show_system_prompt = {
normal = 'gp'
},
show_user_selection = {
normal = 'gs'
},
},
}
``` For further reference, you can view @jellydn's configuration . Defining a prompt with command and keymap This will define prompt that you can reference with /MyCustomPrompt in chat, call with :CopilotChatMyCustomPrompt or use the keymap <leader>ccmc .
It will use visual selection as default selection. If you are using lazy.nvim and are already lazy loading based on Commands make sure to include the prompt
commands and keymaps in cmd and keys respectively. lua
{
prompts = {
MyCustomPrompt = {
prompt = 'Explain how it works.',
mapping = '<leader>ccmc',
description = 'My custom prompt description',
selection = require('CopilotChat.select').visual,
},
},
} Referencing system or user prompts You can reference system or user prompts in your configuration or in chat with /PROMPT_NAME slash notation.
For collection of default COPILOT_ (system) and USER_ (user) prompts, see here . lua
{
prompts = {
MyCustomPrompt = {
prompt = '/COPILOT_EXPLAIN Explain how it works.',
},
MyCustomPrompt2 = {
prompt = '/MyCustomPrompt Include some additional context.',
},
},
} Custom system prompts You can define custom system prompts by using system_prompt property when passing config around. lua
{
system_prompt = 'Your name is Github Copilot and you are a AI assistant for developers.',
prompts = {
MyCustomPromptWithCustomSystemPrompt = {
system_prompt = 'Your name is Johny Microsoft and you are not an AI assistant for developers.',
prompt = 'Explain how it works.',
},
},
} Customizing buffers You can set local options for the buffers that are created by this plugin: copilot-diff , copilot-system-prompt , copilot-user-selection , copilot-chat . ```lua
vim.api.nvim_create_autocmd('BufEnter', {
pattern = 'copilot-*',
callback = function()
vim.opt_local.relativenumber = true -- C-p to print last response
vim.keymap.set('n', '<C-p>', function()
print(require("CopilotChat").response())
end, { buffer = true, remap = true })
end })
``` Tips Quick chat with your buffer To chat with Copilot using the entire content of the buffer, you can add the following configuration to your keymap:
```lua
-- lazy.nvim keys
-- Quick chat with Copilot
{
" ccq",
function()
local input = vim.fn.input("Quick Chat: ")
if input ~= "" then
require("CopilotChat").ask(input, { selection = require("CopilotChat.select").buffer })
end
end,
desc = "CopilotChat - Quick chat",
}
```
[![Chat with buffer](https://i.gyazo.com/9b8cbf1d78a19f326282a6520bc9aab0.gif)](https://gyazo.com/9b8cbf1d78a19f326282a6520bc9aab0) Inline chat Change the window layout to `float` and position relative to cursor to make the window look like inline chat.
This will allow you to chat with Copilot without opening a new window.
```lua
-- lazy.nvim opts
{
window = {
layout = 'float',
relative = 'cursor',
width = 1,
height = 0.4,
row = 1
}
}
```
![inline-chat](https://github.com/CopilotC-Nvim/CopilotChat.nvim/assets/5115805/608e3c9b-8569-408d-a5d1-2213325fc93c) Telescope integration Requires [telescope.nvim](https://github.com/nvim-telescope/telescope.nvim) plugin to be installed.
```lua
-- lazy.nvim keys
-- Show help actions with telescope
{
" cch",
function()
local actions = require("CopilotChat.actions")
require("CopilotChat.integrations.telescope").pick(actions.help_actions())
end,
desc = "CopilotChat - Help actions",
},
-- Show prompts actions with telescope
{
" ccp",
function()
local actions = require("CopilotChat.actions")
require("CopilotChat.integrations.telescope").pick(actions.prompt_actions())
end,
desc = "CopilotChat - Prompt actions",
},
```
![image](https://github.com/CopilotC-Nvim/CopilotChat.nvim/assets/5115805/14360883-7535-4ee3-aca1-79f6c39f626b) fzf-lua integration Requires [fzf-lua](https://github.com/ibhagwan/fzf-lua) plugin to be installed.
```lua
-- lazy.nvim keys
-- Show help actions with fzf-lua
{
" cch",
function()
local actions = require("CopilotChat.actions")
require("CopilotChat.integrations.fzflua").pick(actions.help_actions())
end,
desc = "CopilotChat - Help actions",
},
-- Show prompts actions with fzf-lua
{
" ccp",
function()
local actions = require("CopilotChat.actions")
require("CopilotChat.integrations.fzflua").pick(actions.prompt_actions())
end,
desc = "CopilotChat - Prompt actions",
},
```
![image](https://github.com/CopilotC-Nvim/CopilotChat.nvim/assets/5115805/743455bb-9517-48a8-a7a1-81215dc3b747) nvim-cmp integration Requires [nvim-cmp](https://github.com/hrsh7th/nvim-cmp) plugin to be installed (and properly configured).
```lua
-- Registers copilot-chat source and enables it for copilot-chat filetype (so copilot chat window)
require("CopilotChat.integrations.cmp").setup()
-- You might also want to disable default complete mapping for copilot chat when doing this
require('CopilotChat').setup({
mappings = {
complete = {
insert = '',
},
},
-- rest of your config
})
```
![image](https://github.com/CopilotC-Nvim/CopilotChat.nvim/assets/5115805/063fc99f-a4b2-4187-a065-0fdd287ebee2) Roadmap (Wishlist) Use indexed vector database with current workspace for better context selection General QOL improvements Development Installing Pre-commit Tool For development, you can use the provided Makefile command to install the pre-commit tool: bash
make install-pre-commit This will install the pre-commit tool and the pre-commit hooks. Contributors ✨ If you want to contribute to this project, please read the CONTRIBUTING.md file. Thanks goes to these wonderful people ( emoji key ): gptlang 💻 📖 Dung Duc Huynh (Kaka) 💻 📖 Ahmed Haracic 💻 Trí Thiện Nguyễn 💻 He Zhizhou 💻 Guruprakash Rajakkannu 💻 kristofka 💻 PostCyberPunk 📖 Katsuhiko Nishimra 💻 Erno Hopearuoho 💻 Shaun Garwood 💻 neutrinoA4 💻 📖 Jack Muratore 💻 Adriel Velazquez 💻 📖 Tomas Slusny 💻 📖 Nisal 📖 Tobias Gårdhus 📖 Petr Dlouhý 📖 Dylan Madisetti 💻 Aaron Weisberg 💻 📖 Jose Tlacuilo 💻 📖 Kevin Traver 💻 📖 dTry 💻 Arata Furukawa 💻 Ling 💻 Ivan Frolov 💻 Folke Lemaitre 💻 📖 GitMurf 💻 Dmitrii Lipin 💻 This project follows the all-contributors specification. Contributions of any kind are welcome! Stargazers over time;Chat with GitHub Copilot in Neovim;copilot,copilot-chat,github,github-copilot,neovim-plugin | CopilotC-Nvim/CopilotChat.nvim |
pages-cms/pages-cms;Pages CMS Pages CMS is an Open Source Content Management System built for static websites (Jekyll, Next.js, VuePress, Hugo, etc.). It allows you to edit your website's content directly on GitHub via a user-friendly interface. Documentation For full documentation, go to pagescms.org/docs How it works Pages CMS is built as a Vue.js app with a few serverless functions to handle the GitHub login. It is intended to be deployed with Cloudflare Pages , using Cloudflare Workers (referred to as functions ) for the serverless code. In a nutshell: The serverless functions are just facilitating the OAuth dance (and logout) between the client and GitHub. The GitHub OAuth token is actually stored in the client. Once logged in, the Vue app lets you select the repo (and branch) where your content may be at. You can configure each repo/branch by adding a .pages.yml that describes the content structure and related settings (e.g. media folder). The Vue app acts as a user-friendly interface on top of the GitHub API to manage content related files in your repo. With it you can search and filter collections, create/edit/delete entries, upload media... Get started Use online The easiest way to get started is to use the online version of Pages CMS . You'll be able to log in with your GitHub account and get the latest version of Pages CMS. This online version is identical to what's in this repo and as mentioned above, nothing is saved in the backend (OAuth tokens are saved on the client side). But you can also install your own version locally or deploy it (for free) on Cloudflare following the steps below. Install locally To get a local version up and running: Install dependencies : npm install . Create a GitHub OAuth app : On GitHub, go to your Developer Settings and create a New OAuth App (or alternatively create one for one of your organizations). You can use the following settings for your development environment: Application name: Pages CMS (dev) Homepage URL: https://pagescms.org Authorization callback URL: http://localhost:8788/auth/callback Create a file for environment variables : copy .dev.vars.example into .dev.vars and replace GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET with the values you got for your GitHub OAuth app. You shouldn't have to modify BASE_URL . Run it : npm run dev . This should run the app locally with Wrangler (allowing us to run the serverless functions locally). Visit localhost:8788 . Deploy on Cloudflare Prerequisite : you'll need a Cloudflare account (it's free). Once you have one: Create a Cloudflare Pages app : From your account dashboard, go to Workers & Pages , then click on Create application and select the Pages tab. From there you can connect your GitHub account and select the repo you want to deploy (assuming you've forked pages-cms/pages-cms ). Cloudflare will give you a public URL (e.g. https://pages-cms-123.pages.dev). Create a GitHub OAuth app : same as for local, go to your Developer Settings and create a New OAuth App (or alternatively create one for one of your organizations) with the following settings: Application name : Pages CMS Homepage URL : https://pagescms.org Authorization callback URL : https://pages-cms-123.pages.dev/auth/callback (replace https://pages-cms-123.pages.dev with whatever URL Cloudflare generated for you, or the custom domain you set up) Add the environment variables to Cloudflare : Go back to your Cloudflare Pages app, click on the Settings tab and select Environment variables in the sidebar.
Fill in GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET with the values you got from GitHub. You will also need to set BASE_URL to the URL that was given to you when you created the Cloudflare Pages app (e.g. https://pages-cms-123.pages.dev ). Open the app link (e.g. https://pages-cms-123.pages.dev ). Cloudflare has very generous free tiers and can also host your actual website. It's a great alternative to GitHub Pages, Netlify or Vercel. License Everything in this repo is released under the MIT License .;A user-friendly CMS for static site generators.;11ty,astro,cms,docusaurus,eleventy,gatsby,hugo,jekyll,nextjs,ssg | pages-cms/pages-cms
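The serverless functions mentioned above only broker the OAuth exchange; the project's real implementation is a set of Cloudflare Workers written in JavaScript. As a rough, hedged illustration of that exchange, here is a minimal Python sketch of GitHub's standard web-flow code-for-token swap (the endpoint and field names follow GitHub's OAuth documentation; the function name and framework-free style are illustrative only, not the project's actual code):

```python
# Minimal sketch of the GitHub OAuth "dance" a callback function performs.
# Endpoint and fields follow GitHub's documented OAuth web flow; everything else
# (names, structure) is illustrative and not taken from the pages-cms source.
import os
import requests

GITHUB_TOKEN_URL = "https://github.com/login/oauth/access_token"

def exchange_code_for_token(code: str) -> str:
    """Trade the temporary ?code=... sent to the callback URL for an OAuth token."""
    resp = requests.post(
        GITHUB_TOKEN_URL,
        headers={"Accept": "application/json"},  # ask GitHub for JSON instead of form-encoded text
        data={
            "client_id": os.environ["GITHUB_CLIENT_ID"],
            "client_secret": os.environ["GITHUB_CLIENT_SECRET"],
            "code": code,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]  # the token is then handed back to, and stored by, the client
```

The design point this illustrates is that only the token exchange needs a secret, which is why the rest of Pages CMS can run entirely in the browser against the GitHub API.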
PKU-YuanGroup/MagicTime;MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators If you like our project, please give us a star ⭐ on GitHub for the latest update. [![hf_space](https://img.shields.io/badge/🤗-Open%20In%20Spaces-blue.svg)](https://huggingface.co/spaces/BestWishYsh/MagicTime?logs=build)
[![Replicate demo and cloud API](https://replicate.com/camenduru/magictime/badge)](https://replicate.com/camenduru/magictime)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/MagicTime-jupyter/blob/main/MagicTime_jupyter.ipynb)
[![hf_space](https://img.shields.io/badge/🤗-Paper%20In%20HF-red.svg)](https://huggingface.co/papers/2404.05014)
[![arXiv](https://img.shields.io/badge/Arxiv-2404.05014-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2404.05014)
[![Home Page](https://img.shields.io/badge/Project- -blue.svg)](https://pku-yuangroup.github.io/MagicTime/)
[![Dataset](https://img.shields.io/badge/Dataset- -green)](https://huggingface.co/datasets/BestWishYsh/ChronoMagic)
[![zhihu](https://img.shields.io/badge/-Twitter@AK%20-black?logo=twitter&logoColor=1D9BF0)](https://twitter.com/_akhaliq/status/1777538468043792473)
[![zhihu](https://img.shields.io/badge/-Twitter@Jinfa%20Huang%20-black?logo=twitter&logoColor=1D9BF0)](https://twitter.com/vhjf36495872/status/1777525817087553827?s=61&t=r2HzCsU2AnJKbR8yKSprKw)
[![DOI](https://zenodo.org/badge/783303222.svg)](https://zenodo.org/doi/10.5281/zenodo.10960665)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://github.com/PKU-YuanGroup/MagicTime/blob/main/LICENSE)
![GitHub Repo stars](https://img.shields.io/github/stars/PKU-YuanGroup/MagicTime) This repository is the official implementation of MagicTime, a metamorphic video generation pipeline based on the given prompts. The main idea is to enhance the capacity of video generation models to accurately depict the real world through our proposed methods and dataset. 💡 We also have another video generation project that may interest you ✨. > [**Open-Sora-Plan**](https://github.com/PKU-YuanGroup/Open-Sora-Plan) > PKU-Yuan Lab and Tuzhan AI etc. > [![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/PKU-YuanGroup/Open-Sora-Plan) [![github](https://img.shields.io/github/stars/PKU-YuanGroup/Open-Sora-Plan.svg?style=social)](https://github.com/PKU-YuanGroup/Open-Sora-Plan) ## 📣 News
* ⏳⏳⏳ Training a stronger model with the support of [Open-Sora Plan](https://github.com/PKU-YuanGroup/Open-Sora-Plan) (e.g., 257 × 512 × 512).
* ⏳⏳⏳ Release the training code of MagicTime.
* `[2024.05.27]` Excited to share our latest Open-Sora Plan v1.1.0, which significantly improves video quality and length, and is fully open source! Please check out the [report](https://github.com/PKU-YuanGroup/Open-Sora-Plan/blob/main/docs/Report-v1.1.0.md).
* `[2024.04.14]` Thanks [@camenduru](https://twitter.com/camenduru) and [@ModelsLab](https://modelslab.com/) for providing [Jupyter Notebook](https://github.com/camenduru/MagicTime-jupyter) and [Replicate Demo](https://replicate.com/camenduru/magictime).
* `[2024.04.13]` 🔥 We have compressed the repo to less than 1.0 MB so that everyone can clone it more easily and quickly. You can click [here](https://github.com/PKU-YuanGroup/MagicTime/archive/refs/heads/main.zip) to download it, or use the `git clone --depth=1` command to obtain this repo.
* `[2024.04.12]` Thanks [@Jukka Seppänen](https://github.com/kijai) and [@Baobao Wang](https://www.bilibili.com/video/BV1wx421U7Gn/?spm_id_from=333.1007.top_right_bar_window_history.content.click) for providing ComfyUI Extension [ComfyUI-MagicTimeWrapper](https://github.com/kijai/ComfyUI-MagicTimeWrapper). If you find related work, please let us know.
* `[2024.04.11]` 🔥 We release the Hugging Face Space of MagicTime, you can [click](https://huggingface.co/spaces/BestWishYsh/MagicTime?logs=build) here to have a try.
* `[2024.04.10]` 🔥 We release the inference code and model weight of MagicTime.
* `[2024.04.09]` 🔥 We release the arXiv paper for MagicTime, and you can click [here](https://arxiv.org/abs/2404.05014) to see more details.
* `[2024.04.08]` 🔥 We released the subset of ChronoMagic dataset used to train MagicTime. The dataset includes 2,265 metamorphic video-text pairs and can be downloaded at [HuggingFace Dataset](https://huggingface.co/datasets/BestWishYsh/ChronoMagic) or [Google Drive](https://drive.google.com/drive/folders/1WsomdkmSp3ql3ImcNsmzFuSQ9Qukuyr8?usp=sharing).
* `[2024.04.08]` 🔥 **All codes & datasets** are coming soon! Stay tuned 👀!
## 😮 Highlights
MagicTime shows excellent performance in **metamorphic video generation**.
### Metamorphic Videos vs. General Videos
Compared to general videos, metamorphic videos contain physical knowledge, long persistence, and strong variation, making them difficult to generate. We show compressed .gif files on GitHub, which lose some quality. The general videos are generated by [Animatediff](https://github.com/guoyww/AnimateDiff) and the metamorphic videos by **MagicTime**. Type "Bean sprouts grow and mature from seeds" "[...] construction in a Minecraft virtual environment" "Cupcakes baking in an oven [...]" "[...] transitioning from a tightly closed bud to a fully bloomed state [...]" General Videos Metamorphic Videos ### Gallery
We showcase some metamorphic videos generated by **MagicTime**, [MakeLongVideo](https://github.com/xuduo35/MakeLongVideo), [ModelScopeT2V](https://github.com/modelscope), [VideoCrafter](https://github.com/AILab-CVC/VideoCrafter?tab=readme-ov-file), [ZeroScope](https://huggingface.co/cerspense/zeroscope_v2_576w), [LaVie](https://github.com/Vchitect/LaVie), [T2V-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero), [Latte](https://github.com/Vchitect/Latte) and [Animatediff](https://github.com/guoyww/AnimateDiff) below. Method "cherry blossoms transitioning [...]" "dough balls baking process [...]" "an ice cube is melting [...]" "a simple modern house's construction [...]" MakeLongVideo ModelScopeT2V VideoCrafter ZeroScope LaVie T2V-Zero Latte Animatediff Ours We show more metamorphic videos generated by **MagicTime** with the help of [Realistic](https://civitai.com/models/4201/realistic-vision-v20), [ToonYou](https://civitai.com/models/30240/toonyou) and [RcnzCartoon](https://civitai.com/models/66347/rcnz-cartoon-3d). "[...] bean sprouts grow and mature from seeds" "dough [...] swells and browns in the oven [...]" "the construction [...] in Minecraft [...]" "a bud transforms into a yellow flower" "time-lapse of a plant germinating [...]" "[...] a modern house being constructed in Minecraft [...]" "an ice cube is melting" "bean plant sprouts grow and mature from the soil" "time-lapse of delicate pink plum blossoms [...]" Prompts are trimmed for display, see [here](https://github.com/PKU-YuanGroup/MagicTime/blob/main/__assets__/promtp_unet.txt) for full prompts.
### Integrate into DiT-based Architecture
The mission of this project is to help reproduce Sora and provide high-quality video-text data and data annotation pipelines, to support [Open-Sora-Plan](https://github.com/PKU-YuanGroup/Open-Sora-Plan) or other DiT-based T2V models. To this end, we take an initial step to integrate our MagicTime scheme into the DiT-based Framework. Specifically, our method supports the Open-Sora-Plan v1.0.0 for fine-tuning. We first scale up with additional metamorphic landscape time-lapse videos in the same annotation framework to get the ChronoMagic-Landscape dataset. Then, we fine-tune the Open-Sora-Plan v1.0.0 with the ChronoMagic-Landscape dataset to get the MagicTime-DiT model. The results are as follows (**257×512×512 (10s)**): "Time-lapse of a coastal landscape [...]" "Display the serene beauty of twilight [...]" "Sunrise Splendor: Capture the breathtaking moment [...]" "Nightfall Elegance: Embrace the tranquil beauty [...]" "The sun descending below the horizon [...]" "[...] daylight fades into the embrace of the night [...]" "Time-lapse of the dynamic formations of clouds [...]" "Capture the dynamic formations of clouds [...]" Prompts are trimmed for display, see [here](https://github.com/PKU-YuanGroup/MagicTime/blob/main/__assets__/promtp_opensora.txt) for full prompts.
## 🤗 Demo
### Gradio Web UI
We highly recommend trying out our web demo with the following command, which incorporates all features currently supported by MagicTime. We also provide an [online demo](https://huggingface.co/spaces/BestWishYsh/MagicTime?logs=build) in Hugging Face Spaces.
```bash
python app.py
```
### CLI Inference
```bash
# For Realistic
python inference_magictime.py --config sample_configs/RealisticVision.yaml --human
# or you can directly run the .sh
sh inference_cli.sh
```
warning: Note that even with the same seed and prompt, the results will differ across machines.
## ⚙️ Requirements and Installation
We recommend the requirements as follows.
### Environment
```bash
git clone --depth=1 https://github.com/PKU-YuanGroup/MagicTime.git
cd MagicTime
conda create -n magictime python=3.10.13
conda activate magictime
pip install -r requirements.txt
```
### Download Base Model and Dreambooth
```bash
sh prepare_weights/down_base_model.sh
sh prepare_weights/down_dreambooth.sh
```
### Prepare MagicTime Module
```bash
sh prepare_weights/down_magictime_module.sh
```
## 🗝️ Training & Inference
The training code is coming soon! For inference, some examples are shown below:
```bash
# For Realistic
python inference_magictime.py --config sample_configs/RealisticVision.yaml
# For ToonYou
python inference_magictime.py --config sample_configs/ToonYou.yaml
# For RcnzCartoon
python inference_magictime.py --config sample_configs/RcnzCartoon.yaml
# or you can directly run the .sh
sh inference.sh
```
## Community Contributions
We found some plugins created by community developers. Thanks for their efforts:
- ComfyUI Extension [ComfyUI-MagicTimeWrapper](https://github.com/kijai/ComfyUI-MagicTimeWrapper) (by [@Jukka Seppänen](https://github.com/kijai)). And you can click [here](https://www.bilibili.com/video/BV1wx421U7Gn/?spm_id_from=333.1007.top_right_bar_window_history.content.click) to view the installation tutorial.
- Replicate Demo & Cloud API [Replicate-MagicTime](https://replicate.com/camenduru/magictime) (by [@camenduru](https://twitter.com/camenduru)).
- Jupyter Notebook [Jupyter-MagicTime](https://github.com/camenduru/MagicTime-jupyter) (by [@ModelsLab](https://modelslab.com/)).
If you find related work, please let us know.
## 🐳 ChronoMagic Dataset
ChronoMagic contains 2,265 metamorphic time-lapse videos, each accompanied by a detailed caption. We released the subset of ChronoMagic used to train MagicTime. The dataset can be downloaded from the [HuggingFace Dataset](https://huggingface.co/datasets/BestWishYsh/ChronoMagic), or with the following command. Some samples can be found on our [Project Page](https://pku-yuangroup.github.io/MagicTime/).
```bash
huggingface-cli download --repo-type dataset \
--resume-download BestWishYsh/ChronoMagic \
--local-dir BestWishYsh/ChronoMagic \
--local-dir-use-symlinks False
```
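If you prefer to fetch the subset from Python instead of the CLI, the equivalent call with `huggingface_hub` looks roughly like the sketch below (same dataset repo id as above; the local directory is just an example path):

```python
# Python counterpart of the huggingface-cli command above (local_dir is an example path).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="BestWishYsh/ChronoMagic",
    repo_type="dataset",          # it is a dataset repo, not a model repo
    local_dir="BestWishYsh/ChronoMagic",
)
```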
## 👍 Acknowledgement
* [Animatediff](https://github.com/guoyww/AnimateDiff/tree/main) The codebase we built upon; a strong U-Net-based text-to-video generation model.
* [Open-Sora-Plan](https://github.com/PKU-YuanGroup/Open-Sora-Plan) The codebase we built upon; a simple and scalable DiT-based text-to-video generation repo aiming to reproduce [Sora](https://openai.com/sora).
## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MagicTime/blob/main/LICENSE) file.
* The service is a research preview. Please contact us if you find any potential violations.
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:.
```BibTeX
@article{yuan2024magictime,
title={MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators},
author={Yuan, Shenghai and Huang, Jinfa and Shi, Yujun and Xu, Yongqi and Zhu, Ruijie and Lin, Bin and Cheng, Xinhua and Yuan, Li and Luo, Jiebo},
journal={arXiv preprint arXiv:2404.05014},
year={2024}
}
```
## 🤝 Contributors;MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators;text-to-video,video-generation,diffusion-models,time-lapse,time-lapse-dataset,open-sora-plan,long-video-generation,metamorphic-video-generation | PKU-YuanGroup/MagicTime |
discord/embedded-app-sdk;@discord/embedded-app-sdk ### The Embedded App SDK enables you to build rich, multiplayer experiences inside Discord.
Activities are multiplayer games and social experiences in Discord. An Activity is a web application hosted in an iframe that can run within the Discord client on desktop, web, or mobile. The Embedded App SDK handles the communication between Discord and your iframed application.
Read more about building Discord Activities with the Embedded App SDK on [https://discord.com/developers/docs/activities/overview](https://discord.com/developers/docs/activities/overview).
## Resources
- **[Embedded App SDK Docs](https://discord.com/developers/docs/developer-tools/embedded-app-sdk)** - Get familiar with the Embedded App SDK
- **[Activity Examples](/examples/)** - Explore examples of Discord Activities
- **[Technical chat on Discord](https://discord.com/invite/discord-developers)** - Join us and other devs at DDevs at `#activities-dev-help`
## Installing this package
```shell
npm install @discord/embedded-app-sdk
```
## Usage
To use the SDK, import it into your project and construct a new instance of the DiscordSDK class.
Below is a minimal example of setting up the SDK.
Visit [/examples/discord-activity-starter](/examples/discord-activity-starter/README.md) for a complete example application. See more info on environment variables (`YOUR_OAUTH2_CLIENT_ID`, etc...) [here](https://discord.com/developers/docs/activities/building-an-activity#find-your-oauth2-credentials).
```typescript
import {DiscordSDK} from '@discord/embedded-app-sdk';
const discordSdk = new DiscordSDK(YOUR_OAUTH2_CLIENT_ID);
async function setup() {
// Wait for READY payload from the discord client
await discordSdk.ready();
// Pop open the OAuth permission modal and request for access to scopes listed in scope array below
const {code} = await discordSdk.commands.authorize({
client_id: YOUR_OAUTH2_CLIENT_ID,
response_type: 'code',
state: '',
prompt: 'none',
scope: ['identify', 'applications.commands'],
});
// Retrieve an access_token from your application's server
const response = await fetch('/api/token', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
code,
}),
});
const {access_token} = await response.json();
// Authenticate with Discord client (using the access_token)
const auth = await discordSdk.commands.authenticate({
access_token,
});
}
```
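The `/api/token` route in the snippet above is something you host yourself; its job is to exchange the authorization code for an access token against Discord's OAuth2 token endpoint so that the client secret never reaches the iframe. As a hedged illustration only (the starter example in this repo implements it in Node, and the route name, environment variable names, and Flask choice here are assumptions), a minimal Python sketch could look like:

```python
# Illustrative sketch of a /api/token backend; the actual starter app uses Node.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/api/token")
def token():
    code = request.get_json()["code"]
    # Exchange the authorization code for an access_token at Discord's OAuth2 token endpoint.
    resp = requests.post(
        "https://discord.com/api/oauth2/token",
        data={
            "client_id": os.environ["DISCORD_CLIENT_ID"],
            "client_secret": os.environ["DISCORD_CLIENT_SECRET"],
            "grant_type": "authorization_code",
            "code": code,
        },
    )
    resp.raise_for_status()
    # Only the access_token goes back to the embedded app; the client secret stays server-side.
    return jsonify({"access_token": resp.json()["access_token"]})
```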
## SDK development
Developing a new feature or patching a bug on the SDK? Check out [this guide](/docs/local-sdk-development.md) to learn how to set up your local dev environment.
## Discord Developer Terms of Service & Developer Policy
Please note that while this SDK is licensed under the MIT License, the [Discord Developer Terms of Service](https://discord.com/developers/docs/policies-and-agreements/developer-terms-of-service) and [Discord Developer Policy](https://discord.com/developers/docs/policies-and-agreements/developer-policy) otherwise still apply to you and the applications you develop utilizing this SDK.;🚀 The Discord Embedded App SDK lets you build rich, multiplayer experiences as Activities inside Discord.;[] | discord/embedded-app-sdk |
openai/openai-assistants-quickstart;OpenAI Assistants API Quickstart A quick-start template using the OpenAI Assistants API with Next.js . Quickstart Setup 1. Clone repo shell
git clone https://github.com/openai/openai-assistants-quickstart.git
cd openai-assistants-quickstart 2. Set your OpenAI API key shell
export OPENAI_API_KEY="sk_..." (or in .env.example and rename it to .env ). 3. Install dependencies shell
npm install 4. Run shell
npm run dev 5. Navigate to http://localhost:3000 . Deployment You can deploy this project to Vercel or any other platform that supports Next.js. Overview This project is intended to serve as a template for using the Assistants API in Next.js with streaming , tool use ( code interpreter and file search ), and function calling . While there are multiple pages to demonstrate each of these capabilities, they all use the same underlying assistant with all capabilities enabled. The main logic for chat will be found in the Chat component in app/components/chat.tsx , and the handlers starting with api/assistants/threads (found in api/assistants/threads/... ). Feel free to start your own project and copy some of this logic in! The Chat component itself can be copied and used directly, provided you copy the styling from app/components/chat.module.css as well. Pages Basic Chat Example: http://localhost:3000/examples/basic-chat Function Calling Example: http://localhost:3000/examples/function-calling File Search Example: http://localhost:3000/examples/file-search Full-featured Example: http://localhost:3000/examples/all Main Components app/components/chat.tsx - handles chat rendering, streaming , and function call forwarding app/components/file-viewer.tsx - handles uploading, fetching, and deleting files for file search Endpoints api/assistants - POST : create assistant (only used at startup) api/assistants/threads - POST : create new thread api/assistants/threads/[threadId]/messages - POST : send message to assistant api/assistants/threads/[threadId]/actions - POST : inform assistant of the result of a function it decided to call api/assistants/files - GET / POST / DELETE : fetch, upload, and delete assistant files for file search Feedback Let us know if you have any thoughts, questions, or feedback in this form !;OpenAI Assistants API quickstart with Next.js.;[] | openai/openai-assistants-quickstart |
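For readers who want to see the Assistants API calls those routes wrap, here is a rough equivalent using the official OpenAI Python SDK; it is a hedged sketch rather than code from this repo, and the `asst_...` id is a placeholder for an assistant created at startup the way `api/assistants` does:

```python
# Rough Python-SDK sketch of the thread/message flow behind the routes above.
import time
from openai import OpenAI

client = OpenAI()              # reads OPENAI_API_KEY from the environment
assistant_id = "asst_..."      # placeholder: created once, as api/assistants does

# api/assistants/threads -> create a new thread
thread = client.beta.threads.create()

# api/assistants/threads/[threadId]/messages -> add a user message and run the assistant
client.beta.threads.messages.create(thread_id=thread.id, role="user", content="Hello!")
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant_id)

# The app streams responses; for simplicity this sketch just polls until the run settles.
while run.status in ("queued", "in_progress"):
    time.sleep(0.5)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

print(client.beta.threads.messages.list(thread_id=thread.id).data[0].content)
```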
kijai/ComfyUI-SUPIR;ComfyUI SUPIR upscaler wrapper node UPDATE3: Pruned models in safetensors format now available here:
https://huggingface.co/Kijai/SUPIR_pruned/tree/main UPDATE2: Added a better way to load the SDXL model, which also allows using LoRAs. The old node will remain for now to not break old workflows, and it is dubbed Legacy along with the single node, as I do not want to maintain those. UPDATE: As I have learned a lot with this project, I have now separated the single node into multiple nodes that make more sense to use in ComfyUI and make it clearer how SUPIR works. This is still a wrapper, though the whole thing has deviated from the original with much wider hardware support, more efficient model loading, far less memory usage and more sampler options. Here's a quick example (workflow is included) of using a Lightning model; quality suffers then, but it's very fast and I recommend starting with it, as faster sampling makes it a lot easier to learn what the settings do. Under the hood, SUPIR is an SDXL img2img pipeline, the biggest custom part being their ControlNet. What they call "first stage" is a denoising process using their special "denoise encoder" VAE. This is not to be confused with the Gradio demo's "first stage", which is labeled as such for the Llava preprocessing; the Gradio "Stage2" still runs the denoising process anyway. This can be fully skipped with the nodes, or replaced with any other preprocessing node such as a model upscaler or anything you want. https://github.com/kijai/ComfyUI-SUPIR/assets/40791699/5cae2a24-d425-462c-b89d-df7dcf01595c Installing Either use the manager and install from git, or clone this repo to custom_nodes and run: pip install -r requirements.txt or if you use portable (run this in ComfyUI_windows_portable -folder): python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-SUPIR\requirements.txt The PyTorch version should be pretty new too; the latest stable (2.2.1) works. xformers is automatically detected and enabled if found, but it's not necessary; in some cases it can be a bit faster though: pip install -U xformers --no-dependencies (for portable: python_embeded\python.exe -m pip install -U xformers --no-dependencies ) Get the SUPIR model(s) from the original links below, they are loaded from the normal ComfyUI/models/checkpoints -folder
In addition you need an SDXL model; they are loaded from the same folder. I have not included llava in this, but you can input any captions to the node and thus use anything you want to generate them, or just skip them; it seems to work great even without. Memory requirements are directly related to the input image resolution; the "scale_by" in the node simply scales the input, so you can leave it at 1.0 and size your input with any other node as well. In my testing I was able to run 512x512 to 1024x1024 with a 10GB 3080 GPU, and other tests on a 24GB GPU up to 3072x3072. System RAM requirements are also hefty; exact numbers are unknown, but I would guess under 32GB is going to have issues (tested with 64GB). Updates: fp8 seems to work fine for the unet, I was able to do 512p to 2048 with under 10GB VRAM used. For the VAE it seems to cause artifacts, so I recommend using tiled_vae instead. CLIP models are no longer needed separately; instead they are loaded from your selected SDXL checkpoint. Mirror for the models: https://huggingface.co/camenduru/SUPIR/tree/main Tests Video upscale test (currently the node does frames one by one from input batch): Original: https://github.com/kijai/ComfyUI-SUPIR/assets/40791699/33621520-a429-4155-aa3a-ac5cd15bda56 Upscaled 3x: https://github.com/kijai/ComfyUI-SUPIR/assets/40791699/d6c60e0a-11c3-496d-82c6-a724758a131a Image upscale 3x from 512p:
https://github.com/kijai/ComfyUI-SUPIR/assets/40791699/545ddce4-8324-45cb-a545-6d1f527d8750 Original repo:
https://github.com/Fanghua-Yu/SUPIR Models we provided: SUPIR-v0Q : Baidu Netdisk , Google Drive Default training settings with paper. High generalization and high image quality in most cases. SUPIR-v0F : Baidu Netdisk , Google Drive Training with light degradation settings. Stage1 encoder of SUPIR-v0F remains more details when facing light degradations. BibTeX @misc{yu2024scaling,
title={Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild},
author={Fanghua Yu and Jinjin Gu and Zheyuan Li and Jinfan Hu and Xiangtao Kong and Xintao Wang and Jingwen He and Yu Qiao and Chao Dong},
year={2024},
eprint={2401.13627},
archivePrefix={arXiv},
primaryClass={cs.CV}
} 📧 Contact If you have any question, please email fanghuayu96@gmail.com . Non-Commercial Use Only Declaration The SUPIR ("Software") is made available for use, reproduction, and distribution strictly for non-commercial purposes. For the purposes of this declaration, "non-commercial" is defined as not primarily intended for or directed towards commercial advantage or monetary compensation. By using, reproducing, or distributing the Software, you agree to abide by this restriction and not to use the Software for any commercial purposes without obtaining prior written permission from Dr. Jinjin Gu. This declaration does not in any way limit the rights under any open source license that may apply to the Software; it solely adds a condition that the Software shall not be used for commercial purposes. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. For inquiries or to obtain permission for commercial use, please contact Dr. Jinjin Gu (hellojasongt@gmail.com).;SUPIR upscaling wrapper for ComfyUI;[] | kijai/ComfyUI-SUPIR |
SqueezeAILab/LLMCompiler;LLMCompiler: An LLM Compiler for Parallel Function Calling [ Paper ] LLMCompiler is a framework that enables an efficient and effective orchestration of parallel function calling with LLMs, including both open-source and close-source models, by automatically identifying which tasks can be performed in parallel and which ones are interdependent. TL;DR: The reasoning capabilities of LLMs enable them to execute multiple function calls, using user-provided functions to overcome
their inherent limitations (e.g. knowledge cutoffs, poor arithmetic skills, or lack of access to private data).
While multi-function calling allows them to tackle more complex problems,
current methods often require sequential reasoning and acting for each function which can result
in high latency, cost, and sometimes inaccurate behavior.
LLMCompiler addresses this by decomposing problems into multiple tasks
that can be executed in parallel, thereby efficiently orchestrating multi-function calling.
With LLMCompiler, the user specifies the tools
along with optional in-context examples, and LLMCompiler automatically computes an optimized orchestration for
the function calls .
LLMCompiler can be used with open-source models such as LLaMA, as well as OpenAI’s GPT models.
Across a range of tasks that exhibit different patterns of parallel function calling, LLMCompiler
consistently demonstrated latency speedup, cost saving, and accuracy improvement .
For more details, please check out our paper . News 🦜 [2/13] LLMCompiler is available within the LangGraph framework of LangChain . 📌 [1/17] Running custom models using vLLM supported 🦙 [12/29] LLMCompiler is available on LlamaIndex Installation Create a conda environment and install the dependencies conda create --name llmcompiler python=3.10 -y
conda activate llmcompiler Clone and install the dependencies git clone https://github.com/SqueezeAILab/LLMCompiler
cd LLMCompiler
pip install -r requirements.txt Basic Runs To reproduce the evaluation results in the paper, run the following command.
You need to first register your OpenAI API key to the environment: export OPENAI_API_KEY="sk-xxx" python run_llm_compiler.py --benchmark {benchmark-name} --store {store-path} [--logging] [--stream] To run a custom models served using the vLLM framework, run the following command.
Detailed instructions for serving custom models with the vLLM framework can be found in the vLLM documentation .
Note that the pre-defined prompts in the default configuration files are tailored for (non-chat) LLaMA-2 70B and might need adjustments for different models. python run_llm_compiler.py --model_type vllm --benchmark {benchmark-name} --store {store-path} --model_name {vllm-model-name} --vllm_port {vllm-port} [--logging] --benchmark : Benchmark name. Use hotpotqa , movie , and parallelqa to evaluate LLMCompiler on the HotpotQA, Movie Recommendation, and ParallelQA benchmarks, respectively. --store : Path to save the result. Question, true label, prediction, and latency per example will be stored in a JSON format. --logging : (Optional) Enables logging. Not yet supported for vLLM. --do_benchmark : (Optional) Do additional benchmarking on detailed run-time statistics. --stream : (Optional, Recommended) Enables streaming. It improves latency by streaming out tasks from the Planner to the Task Fetching Unit and Executor immediately after their generation, rather than blocking the Executor until all the tasks are generated from the Planner. --react : (Optional) Use ReAct instead of LLMCompiler for baseline evaluation. You can optionally use your Azure endpoint instead of OpenAI endpoint with --model_type azure . In this case, you need to provide the associated Azure configuration as the following fields in your environment: AZURE_ENDPOINT , AZURE_OPENAI_API_VERSION , AZURE_DEPLOYMENT_NAME , and AZURE_OPENAI_API_KEY . After the run is over, you can get the summary of the results by running the following command: python evaluate_results.py --file {store-path} Adding Your Custom Benchmark To use LLMCompiler on your custom benchmarks or use cases,
you only need to provide the functions and their descriptions, as well as example prompts.
Please refer to configs/hotpotqa , configs/movie , and configs/parallelqa as examples. gpt_prompts.py : Defines in-context example prompts tools.py : Defines functions (i.e. tools) to use, and their descriptions (i.e. instructions and arguments) Roadmap We are planning to update the following features soon:
* Tree-of-Thoughts evaluation we used in the paper Citation LLMCompiler has been developed as part of the following paper. We appreciate it if you would please cite the following paper if you found the library useful for your work: @article{kim2023llmcompiler,
title={An LLM Compiler for Parallel Function Calling},
author={Kim, Sehoon and Moon, Suhong and Tabrizi, Ryan and Lee, Nicholas and Mahoney, Michael and Keutzer, Kurt and Gholami, Amir},
journal={arXiv},
year={2023}
};[ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling;function-calling,llm,llm-agent,llm-agents,llms,parallel-function-call,efficient-inference,large-language-models,llama,llama2 | SqueezeAILab/LLMCompiler |
Ouckah/Summer2025-Internships;Summer 2025 Tech Internships by Ouckah & CSCareers (inspired by Pitt CSC & Simplify Repo ) The torch has been passed. Use this repo to share and keep track of software, tech, CS, PM, quant internships for Summer 2025 . The list is maintained collaboratively by CSCareers ! :warning: Please note that this repository is exclusively for internships/co-ops in the United States, Canada, or Remote positions :earth_americas: 🙏 Contribute by submitting an issue ! See the contribution guidelines here ! 🙏 😮💨 Want a spreadsheet to keep track of your applications? 😩 We also have a ⬇️ Ouckah spreadsheet ⬇️ where you can keep track of your processes, and generate a Sankey Diagram out of them! Click above! 🔍 Want to discover career path insights effortlessly? 🔍 Focus on the core of what LinkedIn was meant to be: a place to share your career journey with others. ⬆️ Click above! ⬆️ Applying to internships? Autofill all your applications in a single click. Stop manually re-entering your information. Simplify’s extension helps you autofill internship applications on millions of sites. The List 🚴🏔 Legend 🛂 - Does NOT offer Sponsorship 🇺🇸 - Requires U.S. Citizenship 🔒 - Internship application is closed ⬇️ Jump to bottom ⬇️ | Company | Role | Location | Application/Link | Date Posted |
| ------- | ---- | -------- | ---------------- | ----------- |
| Apple | Machine Learning / AI Intern | United States | | Jun 23 |
| ↳ | Software Engineer Intern | United States | | Jun 08 |
| Alpha FMC | Technology Intern | New York, NY | | Jun 23 |
| ↳ | Technology Intern | Toronto, Canada | | Jun 23 |
| Transmarket Group | Algorithmic Trading Intern | Chicago, IL | | Jun 18 |
| ByteDance | Software Engineer Intern | Seattle, WA | | Jun 12 |
| Capstone Investment Advisors | Software Engineer Intern | New York, NY | | Jun 05 |
| ↳ | Quantitative Risk Intern | New York, NY | | Jun 05 |
| BILL | Machine Learning Engineer Intern | San Jose, CA / Draper, UT / Salt Lake City, UT | | Jun 03 |
| ↳ | Software Engineer Intern | San Jose, CA / Draper, UT / Salt Lake City, UT | | Jun 03 |
| Bank of America | Global Quantitative Analytics Summer 2025 Analyst | U.S. and Canada | | Jun 02 |
| ↳ | Technology Summer Analyst Intern | 7 locations Colorado Illinois New Jersey New York NC Texas Virginia | | Jun 01 |
| D. E. Shaw | Software Developer Intern | New York, NY | | May 29 |
| ↳ | Proprietary Trading Intern | New York, NY | | May 22 |
| ↳ | Fundamental Research Analyst Intern | New York, NY | | May 21 |
| Ventas, Inc | Software Engineer Intern | Chicago, IL | | May 26 |
| Radix Trading | Quantitative Technologist (C++ Intern) | Chicago, IL | | May 25 |
| Konrad | Software Developer Intern - May 2025 (4 months) | Canada | | May 25 |
| SIG | Quantitative Trader Intern | Bala Cynwyd, PA New York City, NY Chicago, IL | | May 24 |
| ↳ | Quantitative Systematic Trading Intern | Bala Cynwyd, PA | | May 08 |
| ↳ | Quantitative Research Intern | Bala Cynwyd, PA | | May 07 |
| ↳ | Software Engineering Intern | Bala Cynwyd, PA | | May 01 |
| Cranium | AI/ML Engineer Intern | Short Hills, NJ | | May 14 |
| Epic | Software Engineer Intern 🛂 | Verona, WI | | May 03 |
| Chicago Trading Company | Quant Trading Analyst Intern | Chicago, IL | | May 02 |
| ↳ | Software Engineer Intern | Chicago, IL | 🔒 | May 01 |
| BNP Paribas | Summer Analyst Internship - Technology | Jersey City, NJ | | May 02 |
| Point72 | Quantitative Developer Intern | New York | | May 01 |
| Global Atlantic | Technology Intern | New York | | May 01 |
| Blackrock | Summer Internship | Americas | | May 01 |
| Google | Software Engineering Intern - Masters | 28 locations Mountain View, CA Atlanta, GA Austin, TX Boulder, CO Cambridge, MA Chicago, IL Irvine, CA Kirkland, WA Los Angeles, CA Madison, WI New York, NY Palo Alto, CA Portland, OR Pittsburgh, PA Ryleigh, NC Durham, NC Reston, VA Redmond, WA Redwood City, CA San Diego, CA Goleta, CA San Bruno, CA Seattle, WA San Francisco, CA San Jose, CA Santa Cruz, CA Sunnyvale, CA Washington, DC | 🔒 | Jun 19 |
| ↳ | Software Engineering Intern | 28 locations Mountain View, CA Atlanta, GA Austin, TX Boulder, CO Cambridge, MA Chicago, IL Irvine, CA Kirkland, WA Los Angeles, CA Madison, WI New York, NY Palo Alto, CA Portland, OR Pittsburgh, PA Ryleigh, NC Durham, NC Reston, VA Redmond, WA Redwood City, CA San Diego, CA Goleta, CA San Bruno, CA Seattle, WA San Francisco, CA San Jose, CA Santa Cruz, CA Sunnyvale, CA Washington, DC | 🔒 | Jun 17 |
| Caterpillar | Engineer Intern 🛂 | Chillicothe, IL Peoria, IL | 🔒 | May 08 |
| Walmart | Software Engineering Intern 🛂 | Bentonville, AR | 🔒 | May 02 |
| Bridgewater Associates | Investment Engineer Intern | Westport, CT | 🔒 | May 02 |
| Voloridge | Algorithm Analyst Intern | Jupiter, FL | 🔒 | May 02 | ⬆️ Back to Top ⬆️ We love our contributors ❤️❤️ Contribute by submitting an issue ! Made with contrib.rocks !;Collection of Summer 2025 tech internships!;internships,summer-2025,university,jobs | Ouckah/Summer2025-Internships |
ShineChen1024/MagicClothing;Magic Clothing This repository is the official implementation of Magic Clothing Magic Clothing is a branch version of OOTDiffusion , focusing on controllable garment-driven image synthesis Magic Clothing: Controllable Garment-Driven Image Synthesis [ arXiv paper ] Weifeng Chen *, Tao Gu *, Yuhao Xu + , Chengcai Chen * Equal contribution + Corresponding author Xiao-i Research 📢📢 We are continuing to improve this project. Please check earlyAccess branch for new features and updates : ) News 🔥 [2024/4/16] Our paper is available now! 🔥 [2024/3/8] We release the model weights trained on the 768 resolution. The strength of clothing and text prompts can be independently adjusted. 🤗 Hugging Face link 🔥 [2024/2/28] We support IP-Adapter-FaceID with ControlNet-Openpose now! A portrait and a reference pose image can be used as additional conditions. Have fun with gradio_ipadapter_openpose.py 🔥 [2024/2/23] We support IP-Adapter-FaceID now! A portrait image can be used as an additional condition. Have fun with gradio_ipadapter_faceid.py Installation Clone the repository sh
git clone https://github.com/ShineChen1024/MagicClothing.git Create a conda environment and install the required packages sh
conda create -n magicloth python==3.10
conda activate magicloth
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
pip install -r requirements.txt Inference Python demo 512 weights sh
python inference.py --cloth_path [your cloth path] --model_path [your model checkpoints path] 768 weights sh
python inference.py --cloth_path [your cloth path] --model_path [your model checkpoints path] --enable_cloth_guidance Gradio demo 512 weights sh
python gradio_generate.py --model_path [your model checkpoints path] 768 weights sh
python gradio_generate.py --model_path [your model checkpoints path] --enable_cloth_guidance Citation @article{chen2024magic,
title={Magic Clothing: Controllable Garment-Driven Image Synthesis},
author={Chen, Weifeng and Gu, Tao and Xu, Yuhao and Chen, Chengcai},
journal={arXiv preprint arXiv:2404.09512},
year={2024}
} TODO List [x] Paper [x] Gradio demo [x] Inference code [x] Model weights [ ] Training code;Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis;try-on,virtual-tryon,vton | ShineChen1024/MagicClothing |
placemark/placemark;Play At some point during Placemark development, I tried making a free-as-in-beer
interface that supported a lot of the things that Placemark could, but for free.
The main difference between this and the main application being that Placemark Play
wouldn't have a real server component, so it incurred no real storage or server
costs for me. People like free stuff and a lot of people don't want or need Placemark's server
storage for maps, so Play got a bit of a following. This subproject is trying to
run Play again. It's not easy, I'll tell you that! Placemark was, for many reasons, a monolithic
application, and Play was part of that monolith. So there are challenges to slicing
off just a bit of the application. This directory is basically the application, minus Blitz and the database layer
and all of that. It's a real experiment - expect breakage, and hopefully contribute
pull requests. I'm happy to try and make Placemark useful to folks, and don't
feel bad or bitter about the fate of the company, but realistically if the
open source project is to succeed, it'll need contributors as well as users. Getting started There are more sophisticated approaches using Docker or Render (see files), but
the following simple approach works locally on macOS: Clone the repository, change to this directory, and install dependencies: git clone
pnpm install Obtain a Mapbox public access token ( docs ) and Geocode Earth token ( docs ). Build the package with the tokens from the previous step: ```sh
NEXT_PUBLIC_MAPBOX_TOKEN=" " \
NEXT_PUBLIC_GEOCODE_EARTH_TOKEN=" " \
pnpm build ``` Start the server: sh
npx serve@latest out Visit http://localhost:3000 If you're planning to run this often or publicly, take care to secure your
tokens better by adding URL restrictions to the Mapbox token and setting allowed Referrer Hostnames to the Geocode Earth one,
and consider copying and revising the .env.sample file.;A flexible web-based editor, converter, visualization tool, for geospatial data;geojson,geospatial,maps | placemark/placemark |
TencentARC/MotionCtrl;## MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
### 👉 MotionCtrl for [SVD] , for [VideoCrafter] [![Paper](https://img.shields.io/badge/Paper-gray)](https://wzhouxiff.github.io/projects/MotionCtrl/assets/paper/MotionCtrl.pdf) [![arXiv](https://img.shields.io/badge/arXiv-red)](https://arxiv.org/pdf/2312.03641.pdf) [![Project Page](https://img.shields.io/badge/Project%20Page-green
)](https://wzhouxiff.github.io/projects/MotionCtrl/)
🤗 [![HF Demo](https://img.shields.io/static/v1?label=Demo&message=MotionCtrl%2BSVD&color=orange)](https://huggingface.co/spaces/TencentARC/MotionCtrl_SVD) 🤗 [![HF Demo](https://img.shields.io/static/v1?label=Demo&message=MotionCtrl%2BVideoCrafter&color=orange)](https://huggingface.co/spaces/TencentARC/MotionCtrl) https://github.com/TencentARC/MotionCtrl/assets/19488619/45d44bf5-d4bf-4e45-8628-2c8926b5954a 🔥🔥 We release the codes, models and demos for MotionCtrl on Stable Video Diffusion (SVD) . Official implementation of MotionCtrl: A Unified and Flexible Motion Controller for Video Generation . MotionCtrl can Independently control complex camera motion and object motion of generated videos, with only a unified model. Results of MotionCtrl+SVD More results are in showcase_svd and our Project Page . Results of MotionCtrl+VideoCrafter More results are in our Project Page . 📝 Changelog [x] 20231225: Release MotionCtrl deployed on LVDM/VideoCrafter . [x] 20231225: Gradio demo available. 🤗 [x] 20231228: Provide local gradio demo for convenience. [x] 20240115 More camera poses used for testing are provided in dataset/camera_poses [x] 20240115 Release MotionCtrl deployed on SVD . Codes are in the brach svd and Gradio Demo is available in 🤗 . [ ] Release MotionCtrl deployed on AnimateDiff . ⚙️ Environment conda create -n motionctrl python=3.10.6
conda activate motionctrl
pip install -r requirements.txt 💫 Inference Run local inference script Download the weights of MotionCtrl motionctrl.pth and put it to ./checkpoints . Go into configs/inference/run.sh and set condtype as 'camera_motion', 'object_motion', or 'both'. condtype=camera_motion means only control the camera motion in the generated video. condtype=object_motion means only control the object motion in the generated video. condtype=both means control the camera motion and object motion in the generated video simultaneously . Running scripts:
sh configs/inference/run.sh Run local gradio demo python -m app --share :books: Citation If you make use of our work, please cite our paper. bibtex
@inproceedings{wang2023motionctrl,
title={MotionCtrl: A Unified and Flexible Motion Controller for Video Generation},
author={Wang, Zhouxia and Yuan, Ziyang and Wang, Xintao and Chen, Tianshui and Xia, Menghan and Luo, Ping and Shan, Yin},
booktitle={arXiv preprint arXiv:2312.03641},
year={2023}
} 🤗 Acknowledgment The current version of MotionCtrl is built on VideoCrafter . We appreciate the authors for sharing their awesome codebase. ❓ Contact For any question, feel free to email wzhoux@connect.hku.hk or zhouzi1212@gmail.com .;MotionCtrl: A Unified and Flexible Motion Controller for Video Generation;[] | TencentARC/MotionCtrl |
AgentOps-AI/tokencost;Clientside token counting + price estimation for LLM apps and AI agents. 🐦 Twitter • 📢 Discord • 🖇️ AgentOps TokenCost Tokencost helps calculate the USD cost of using major Large Language Model (LLMs) APIs by calculating the estimated cost of prompts and completions. Building AI agents? Check out AgentOps Features LLM Price Tracking Major LLM providers frequently add new models and update pricing. This repo helps track the latest price changes Token counting Accurately count prompt tokens before sending OpenAI requests Easy integration Get the cost of a prompt or completion with a single function Example usage: ```python
from tokencost import calculate_prompt_cost, calculate_completion_cost

model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Hello world"}]
completion = "How may I assist you today?"

prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)

print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
# 0.0000135 + 0.000014 = 0.0000275
``` Installation Recommended: PyPI : bash
pip install tokencost Usage Cost estimates Calculating the cost of prompts and completions from OpenAI requests
```python
from openai import OpenAI
from tokencost import calculate_prompt_cost, calculate_completion_cost

client = OpenAI()
model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Say this is a test"}]

chat_completion = client.chat.completions.create(
    messages=prompt, model=model
)

completion = chat_completion.choices[0].message.content
# "This is a test."

prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
# 0.0000180 + 0.000010 = 0.0000280
``` Calculating cost using string prompts instead of messages: ```python
from tokencost import calculate_prompt_cost

prompt_string = "Hello world"
response = "How may I assist you today?"
model = "gpt-3.5-turbo"

prompt_cost = calculate_prompt_cost(prompt_string, model)
print(f"Cost: ${prompt_cost}")
# Cost: $3e-06
``` Counting tokens ```python
from tokencost import count_message_tokens, count_string_tokens

message_prompt = [{ "role": "user", "content": "Hello world"}]

# Counting tokens in prompts formatted as message lists
print(count_message_tokens(message_prompt, model="gpt-3.5-turbo"))
# 9

# Alternatively, counting tokens in string prompts
print(count_string_tokens(prompt="Hello world", model="gpt-3.5-turbo"))
# 2
``` Cost table Units denominated in USD. All prices can be located in model_prices.json . Prices last updated Jan 30, 2024 from LiteLLM's cost dictionary | Model Name | Prompt Cost (USD) per 1M tokens | Completion Cost (USD) per 1M tokens | Max Prompt Tokens | Max Output Tokens |
|:--------------------------------------------------------------------------------------------|:----------------------------------|:--------------------------------------|:--------------------|--------------------:|
| gpt-4 | $30.00 | $60.00 | 8,192 | 4096 |
| gpt-4o | $5.00 | $15.00 | 128,000 | 4096 |
| gpt-4o-2024-05-13 | $5.00 | $15.00 | 128,000 | 4096 |
| gpt-4-turbo-preview | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-4-0314 | $30.00 | $60.00 | 8,192 | 4096 |
| gpt-4-0613 | $30.00 | $60.00 | 8,192 | 4096 |
| gpt-4-32k | $60.00 | $120.00 | 32,768 | 4096 |
| gpt-4-32k-0314 | $60.00 | $120.00 | 32,768 | 4096 |
| gpt-4-32k-0613 | $60.00 | $120.00 | 32,768 | 4096 |
| gpt-4-turbo | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-4-turbo-2024-04-09 | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-4-1106-preview | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-4-0125-preview | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-4-vision-preview | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-4-1106-vision-preview | $10.00 | $30.00 | 128,000 | 4096 |
| gpt-3.5-turbo | $1.5 | $2.00 | 16,385 | 4096 |
| gpt-3.5-turbo-0301 | $1.5 | $2.00 | 4,097 | 4096 |
| gpt-3.5-turbo-0613 | $1.5 | $2.00 | 4,097 | 4096 |
| gpt-3.5-turbo-1106 | $1.00 | $2.00 | 16,385 | 4096 |
| gpt-3.5-turbo-0125 | $0.5 | $1.5 | 16,385 | 4096 |
| gpt-3.5-turbo-16k | $3.00 | $4.00 | 16,385 | 4096 |
| gpt-3.5-turbo-16k-0613 | $3.00 | $4.00 | 16,385 | 4096 |
| ft:gpt-3.5-turbo | $3.00 | $6.00 | 4,097 | 4096 |
| ft:davinci-002 | $2.00 | $2.00 | 16,384 | 4096 |
| ft:babbage-002 | $0.4 | $0.4 | 16,384 | 4096 |
| text-embedding-3-large | $0.13 | $0.00 | 8,191 | nan |
| text-embedding-3-small | $0.02 | $0.00 | 8,191 | nan |
| text-embedding-ada-002 | $0.1 | $0.00 | 8,191 | nan |
| text-embedding-ada-002-v2 | $0.1 | $0.00 | 8,191 | nan |
| text-moderation-stable | $0.00 | $0.00 | 32,768 | 0 |
| text-moderation-007 | $0.00 | $0.00 | 32,768 | 0 |
| text-moderation-latest | $0.00 | $0.00 | 32,768 | 0 |
| 256-x-256/dall-e-2 | -- | -- | nan | nan |
| 512-x-512/dall-e-2 | -- | -- | nan | nan |
| 1024-x-1024/dall-e-2 | -- | -- | nan | nan |
| hd/1024-x-1792/dall-e-3 | -- | -- | nan | nan |
| hd/1792-x-1024/dall-e-3 | -- | -- | nan | nan |
| hd/1024-x-1024/dall-e-3 | -- | -- | nan | nan |
| standard/1024-x-1792/dall-e-3 | -- | -- | nan | nan |
| standard/1792-x-1024/dall-e-3 | -- | -- | nan | nan |
| standard/1024-x-1024/dall-e-3 | -- | -- | nan | nan |
| whisper-1 | -- | -- | nan | nan |
| azure/whisper-1 | -- | -- | nan | nan |
| azure/gpt-4o | $5.00 | $15.00 | 128,000 | 4096 |
| azure/gpt-4-turbo-2024-04-09 | $10.00 | $30.00 | 128,000 | 4096 |
| azure/gpt-4-0125-preview | $10.00 | $30.00 | 128,000 | 4096 |
| azure/gpt-4-1106-preview | $10.00 | $30.00 | 128,000 | 4096 |
| azure/gpt-4-0613 | $30.00 | $60.00 | 8,192 | 4096 |
| azure/gpt-4-32k-0613 | $60.00 | $120.00 | 32,768 | 4096 |
| azure/gpt-4-32k | $60.00 | $120.00 | 32,768 | 4096 |
| azure/gpt-4 | $30.00 | $60.00 | 8,192 | 4096 |
| azure/gpt-4-turbo | $10.00 | $30.00 | 128,000 | 4096 |
| azure/gpt-4-turbo-vision-preview | $10.00 | $30.00 | 128,000 | 4096 |
| azure/gpt-35-turbo-16k-0613 | $3.00 | $4.00 | 16,385 | 4096 |
| azure/gpt-35-turbo-1106 | $1.5 | $2.00 | 16,384 | 4096 |
| azure/gpt-35-turbo-0125 | $0.5 | $1.5 | 16,384 | 4096 |
| azure/gpt-35-turbo-16k | $3.00 | $4.00 | 16,385 | 4096 |
| azure/gpt-35-turbo | $0.5 | $1.5 | 4,097 | 4096 |
| azure/gpt-3.5-turbo-instruct-0914 | $1.5 | $2.00 | 4,097 | nan |
| azure/gpt-35-turbo-instruct | $1.5 | $2.00 | 4,097 | nan |
| azure/mistral-large-latest | $8.00 | $24.00 | 32,000 | nan |
| azure/mistral-large-2402 | $8.00 | $24.00 | 32,000 | nan |
| azure/command-r-plus | $3.00 | $15.00 | 128,000 | 4096 |
| azure/ada | $0.1 | $0.00 | 8,191 | nan |
| azure/text-embedding-ada-002 | $0.1 | $0.00 | 8,191 | nan |
| azure/text-embedding-3-large | $0.13 | $0.00 | 8,191 | nan |
| azure/text-embedding-3-small | $0.02 | $0.00 | 8,191 | nan |
| azure/standard/1024-x-1024/dall-e-3 | -- | $0.00 | nan | nan |
| azure/hd/1024-x-1024/dall-e-3 | -- | $0.00 | nan | nan |
| azure/standard/1024-x-1792/dall-e-3 | -- | $0.00 | nan | nan |
| azure/standard/1792-x-1024/dall-e-3 | -- | $0.00 | nan | nan |
| azure/hd/1024-x-1792/dall-e-3 | -- | $0.00 | nan | nan |
| azure/hd/1792-x-1024/dall-e-3 | -- | $0.00 | nan | nan |
| azure/standard/1024-x-1024/dall-e-2 | -- | $0.00 | nan | nan |
| babbage-002 | $0.4 | $0.4 | 16,384 | 4096 |
| davinci-002 | $2.00 | $2.00 | 16,384 | 4096 |
| gpt-3.5-turbo-instruct | $1.5 | $2.00 | 8,192 | 4096 |
| gpt-3.5-turbo-instruct-0914 | $1.5 | $2.00 | 8,192 | 4097 |
| claude-instant-1 | $1.63 | $5.51 | 100,000 | 8191 |
| mistral/mistral-tiny | $0.25 | $0.25 | 32,000 | 8191 |
| mistral/mistral-small | $1.00 | $3.00 | 32,000 | 8191 |
| mistral/mistral-small-latest | $1.00 | $3.00 | 32,000 | 8191 |
| mistral/mistral-medium | $2.7 | $8.1 | 32,000 | 8191 |
| mistral/mistral-medium-latest | $2.7 | $8.1 | 32,000 | 8191 |
| mistral/mistral-medium-2312 | $2.7 | $8.1 | 32,000 | 8191 |
| mistral/mistral-large-latest | $4.00 | $12.00 | 32,000 | 8191 |
| mistral/mistral-large-2402 | $4.00 | $12.00 | 32,000 | 8191 |
| mistral/open-mistral-7b | $0.25 | $0.25 | 32,000 | 8191 |
| mistral/open-mixtral-8x7b | $0.7 | $0.7 | 32,000 | 8191 |
| mistral/open-mixtral-8x22b | $2.00 | $6.00 | 64,000 | 8191 |
| mistral/codestral-latest | $1.00 | $3.00 | 32,000 | 8191 |
| mistral/codestral-2405 | $1.00 | $3.00 | 32,000 | 8191 |
| mistral/mistral-embed | $0.1 | -- | 8,192 | nan |
| deepseek-chat | $0.14 | $0.28 | 32,000 | 4096 |
| deepseek-coder | $0.14 | $0.28 | 16,000 | 4096 |
| groq/llama2-70b-4096 | $0.7 | $0.8 | 4,096 | 4096 |
| groq/llama3-8b-8192 | $0.1 | $0.1 | 8,192 | 8192 |
| groq/llama3-70b-8192 | $0.64 | $0.8 | 8,192 | 8192 |
| groq/mixtral-8x7b-32768 | $0.27 | $0.27 | 32,768 | 32768 |
| groq/gemma-7b-it | $0.1 | $0.1 | 8,192 | 8192 |
| claude-instant-1.2 | $0.163 | $0.551 | 100,000 | 8191 |
| claude-2 | $8.00 | $24.00 | 100,000 | 8191 |
| claude-2.1 | $8.00 | $24.00 | 200,000 | 8191 |
| claude-3-haiku-20240307 | $0.25 | $1.25 | 200,000 | 4096 |
| claude-3-opus-20240229 | $15.00 | $75.00 | 200,000 | 4096 |
| claude-3-sonnet-20240229 | $3.00 | $15.00 | 200,000 | 4096 |
| text-bison | $0.125 | $0.125 | 8,192 | 1024 |
| text-bison@001 | $0.125 | $0.125 | 8,192 | 1024 |
| text-unicorn | $10.00 | $28.00 | 8,192 | 1024 |
| text-unicorn@001 | $10.00 | $28.00 | 8,192 | 1024 |
| chat-bison | $0.125 | $0.125 | 8,192 | 4096 |
| chat-bison@001 | $0.125 | $0.125 | 8,192 | 4096 |
| chat-bison@002 | $0.125 | $0.125 | 8,192 | 4096 |
| chat-bison-32k | $0.125 | $0.125 | 32,000 | 8192 |
| code-bison | $0.125 | $0.125 | 6,144 | 1024 |
| code-bison@001 | $0.125 | $0.125 | 6,144 | 1024 |
| code-gecko@001 | $0.125 | $0.125 | 2,048 | 64 |
| code-gecko@002 | $0.125 | $0.125 | 2,048 | 64 |
| code-gecko | $0.125 | $0.125 | 2,048 | 64 |
| codechat-bison | $0.125 | $0.125 | 6,144 | 1024 |
| codechat-bison@001 | $0.125 | $0.125 | 6,144 | 1024 |
| codechat-bison-32k | $0.125 | $0.125 | 32,000 | 8192 |
| gemini-pro | $0.03125 | $0.09375 | 32,760 | 8192 |
| gemini-1.0-pro | $0.03125 | $0.09375 | 32,760 | 8192 |
| gemini-1.0-pro-001 | $0.03125 | $0.09375 | 32,760 | 8192 |
| gemini-1.0-pro-002 | $0.03125 | $0.09375 | 32,760 | 8192 |
| gemini-1.5-pro | $0.3125 | $0.9375 | 1,000,000 | 8192 |
| gemini-1.5-flash-001 | $0.03125 | $0.09375 | 1,000,000 | 8192 |
| gemini-1.5-flash-preview-0514 | $0.03125 | $0.09375 | 1,000,000 | 8192 |
| gemini-1.5-pro-001 | $0.3125 | $0.9375 | 1,000,000 | 8192 |
| gemini-1.5-pro-preview-0514 | $0.3125 | $0.9375 | 1,000,000 | 8192 |
| gemini-1.5-pro-preview-0215 | $0.3125 | $0.9375 | 1,000,000 | 8192 |
| gemini-1.5-pro-preview-0409 | $0.3125 | $0.9375 | 1,000,000 | 8192 |
| gemini-experimental | $0.00 | $0.00 | 1,000,000 | 8192 |
| gemini-pro-vision | $0.25 | $0.5 | 16,384 | 2048 |
| gemini-1.0-pro-vision | $0.25 | $0.5 | 16,384 | 2048 |
| gemini-1.0-pro-vision-001 | $0.25 | $0.5 | 16,384 | 2048 |
| vertex_ai/claude-3-sonnet@20240229 | $3.00 | $15.00 | 200,000 | 4096 |
| vertex_ai/claude-3-haiku@20240307 | $0.25 | $1.25 | 200,000 | 4096 |
| vertex_ai/claude-3-opus@20240229 | $15.00 | $75.00 | 200,000 | 4096 |
| vertex_ai/imagegeneration@006 | -- | -- | nan | nan |
| text-embedding-004 | $0.00625 | $0.00 | 3,072 | nan |
| text-multilingual-embedding-002 | $0.00625 | $0.00 | 2,048 | nan |
| textembedding-gecko | $0.00625 | $0.00 | 3,072 | nan |
| textembedding-gecko-multilingual | $0.00625 | $0.00 | 3,072 | nan |
| textembedding-gecko-multilingual@001 | $0.00625 | $0.00 | 3,072 | nan |
| textembedding-gecko@001 | $0.00625 | $0.00 | 3,072 | nan |
| textembedding-gecko@003 | $0.00625 | $0.00 | 3,072 | nan |
| text-embedding-preview-0409 | $0.00625 | $0.00 | 3,072 | nan |
| text-multilingual-embedding-preview-0409 | $0.00625 | $0.00 | 3,072 | nan |
| palm/chat-bison | $0.125 | $0.125 | 8,192 | 4096 |
| palm/chat-bison-001 | $0.125 | $0.125 | 8,192 | 4096 |
| palm/text-bison | $0.125 | $0.125 | 8,192 | 1024 |
| palm/text-bison-001 | $0.125 | $0.125 | 8,192 | 1024 |
| palm/text-bison-safety-off | $0.125 | $0.125 | 8,192 | 1024 |
| palm/text-bison-safety-recitation-off | $0.125 | $0.125 | 8,192 | 1024 |
| gemini/gemini-1.5-flash-latest | $0.35 | $1.05 | 1,000,000 | 8192 |
| gemini/gemini-pro | $0.35 | $1.05 | 32,760 | 8192 |
| gemini/gemini-1.5-pro | $0.35 | $1.05 | 1,000,000 | 8192 |
| gemini/gemini-1.5-pro-latest | $0.35 | $1.05 | 1,048,576 | 8192 |
| gemini/gemini-pro-vision | $0.35 | $1.05 | 30,720 | 2048 |
| command-r | $0.5 | $1.5 | 128,000 | 4096 |
| command-light | $15.00 | $15.00 | 4,096 | 4096 |
| command-r-plus | $3.00 | $15.00 | 128,000 | 4096 |
| command-nightly | $15.00 | $15.00 | 4,096 | 4096 |
| command | $15.00 | $15.00 | 4,096 | 4096 |
| command-medium-beta | $15.00 | $15.00 | 4,096 | 4096 |
| command-xlarge-beta | $15.00 | $15.00 | 4,096 | 4096 |
| replicate/meta/llama-2-13b | $0.1 | $0.5 | 4,096 | 4096 |
| replicate/meta/llama-2-13b-chat | $0.1 | $0.5 | 4,096 | 4096 |
| replicate/meta/llama-2-70b | $0.65 | $2.75 | 4,096 | 4096 |
| replicate/meta/llama-2-70b-chat | $0.65 | $2.75 | 4,096 | 4096 |
| replicate/meta/llama-2-7b | $0.05 | $0.25 | 4,096 | 4096 |
| replicate/meta/llama-2-7b-chat | $0.05 | $0.25 | 4,096 | 4096 |
| replicate/meta/llama-3-70b | $0.65 | $2.75 | 8,192 | 8192 |
| replicate/meta/llama-3-70b-instruct | $0.65 | $2.75 | 8,192 | 8192 |
| replicate/meta/llama-3-8b | $0.05 | $0.25 | 8,086 | 8086 |
| replicate/meta/llama-3-8b-instruct | $0.05 | $0.25 | 8,086 | 8086 |
| replicate/mistralai/mistral-7b-v0.1 | $0.05 | $0.25 | 4,096 | 4096 |
| replicate/mistralai/mistral-7b-instruct-v0.2 | $0.05 | $0.25 | 4,096 | 4096 |
| replicate/mistralai/mixtral-8x7b-instruct-v0.1 | $0.3 | $1.00 | 4,096 | 4096 |
| openrouter/microsoft/wizardlm-2-8x22b:nitro | $1.00 | $1.00 | nan | nan |
| openrouter/google/gemini-pro-1.5 | $2.5 | $7.5 | 1,000,000 | 8192 |
| openrouter/mistralai/mixtral-8x22b-instruct | $0.65 | $0.65 | nan | nan |
| openrouter/cohere/command-r-plus | $3.00 | $15.00 | nan | nan |
| openrouter/databricks/dbrx-instruct | $0.6 | $0.6 | nan | nan |
| openrouter/anthropic/claude-3-haiku | $0.25 | $1.25 | nan | nan |
| openrouter/anthropic/claude-3-sonnet | $3.00 | $15.00 | nan | nan |
| openrouter/mistralai/mistral-large | $8.00 | $24.00 | nan | nan |
| openrouter/cognitivecomputations/dolphin-mixtral-8x7b | $0.5 | $0.5 | nan | nan |
| openrouter/google/gemini-pro-vision | $0.125 | $0.375 | nan | nan |
| openrouter/fireworks/firellava-13b | $0.2 | $0.2 | nan | nan |
| openrouter/meta-llama/llama-3-8b-instruct:free | $0.00 | $0.00 | nan | nan |
| openrouter/meta-llama/llama-3-8b-instruct:extended | $0.225 | $2.25 | nan | nan |
| openrouter/meta-llama/llama-3-70b-instruct:nitro | $0.9 | $0.9 | nan | nan |
| openrouter/meta-llama/llama-3-70b-instruct | $0.59 | $0.79 | nan | nan |
| openrouter/openai/gpt-4o | $5.00 | $15.00 | 128,000 | 4096 |
| openrouter/openai/gpt-4o-2024-05-13 | $5.00 | $15.00 | 128,000 | 4096 |
| openrouter/openai/gpt-4-vision-preview | $10.00 | $30.00 | nan | nan |
| openrouter/openai/gpt-3.5-turbo | $1.5 | $2.00 | nan | nan |
| openrouter/openai/gpt-3.5-turbo-16k | $3.00 | $4.00 | nan | nan |
| openrouter/openai/gpt-4 | $30.00 | $60.00 | nan | nan |
| openrouter/anthropic/claude-instant-v1 | $1.63 | $5.51 | nan | 8191 |
| openrouter/anthropic/claude-2 | $11.02 | $32.68 | nan | 8191 |
| openrouter/anthropic/claude-3-opus | $15.00 | $75.00 | 200,000 | 4096 |
| openrouter/google/palm-2-chat-bison | $0.5 | $0.5 | nan | nan |
| openrouter/google/palm-2-codechat-bison | $0.5 | $0.5 | nan | nan |
| openrouter/meta-llama/llama-2-13b-chat | $0.2 | $0.2 | nan | nan |
| openrouter/meta-llama/llama-2-70b-chat | $1.5 | $1.5 | nan | nan |
| openrouter/meta-llama/codellama-34b-instruct | $0.5 | $0.5 | nan | nan |
| openrouter/nousresearch/nous-hermes-llama2-13b | $0.2 | $0.2 | nan | nan |
| openrouter/mancer/weaver | $5.625 | $5.625 | nan | nan |
| openrouter/gryphe/mythomax-l2-13b | $1.875 | $1.875 | nan | nan |
| openrouter/jondurbin/airoboros-l2-70b-2.1 | $13.875 | $13.875 | nan | nan |
| openrouter/undi95/remm-slerp-l2-13b | $1.875 | $1.875 | nan | nan |
| openrouter/pygmalionai/mythalion-13b | $1.875 | $1.875 | nan | nan |
| openrouter/mistralai/mistral-7b-instruct | $0.13 | $0.13 | nan | nan |
| openrouter/mistralai/mistral-7b-instruct:free | $0.00 | $0.00 | nan | nan |
| j2-ultra | $15.00 | $15.00 | 8,192 | 8192 |
| j2-mid | $10.00 | $10.00 | 8,192 | 8192 |
| j2-light | $3.00 | $3.00 | 8,192 | 8192 |
| dolphin | $0.5 | $0.5 | 16,384 | 16384 |
| chatdolphin | $0.5 | $0.5 | 16,384 | 16384 |
| luminous-base | $30.00 | $33.00 | nan | nan |
| luminous-base-control | $37.5 | $41.25 | nan | nan |
| luminous-extended | $45.00 | $49.5 | nan | nan |
| luminous-extended-control | $56.25 | $61.875 | nan | nan |
| luminous-supreme | $175.00 | $192.5 | nan | nan |
| luminous-supreme-control | $218.75 | $240.625 | nan | nan |
| ai21.j2-mid-v1 | $12.5 | $12.5 | 8,191 | 8191 |
| ai21.j2-ultra-v1 | $18.8 | $18.8 | 8,191 | 8191 |
| amazon.titan-text-lite-v1 | $0.3 | $0.4 | 42,000 | 4000 |
| amazon.titan-text-express-v1 | $1.3 | $1.7 | 42,000 | 8000 |
| amazon.titan-embed-text-v1 | $0.1 | $0.00 | 8,192 | nan |
| amazon.titan-embed-text-v2:0 | $0.2 | $0.00 | 8,192 | nan |
| mistral.mistral-7b-instruct-v0:2 | $0.15 | $0.2 | 32,000 | 8191 |
| mistral.mixtral-8x7b-instruct-v0:1 | $0.45 | $0.7 | 32,000 | 8191 |
| mistral.mistral-large-2402-v1:0 | $8.00 | $24.00 | 32,000 | 8191 |
| bedrock/us-west-2/mistral.mixtral-8x7b-instruct-v0:1 | $0.45 | $0.7 | 32,000 | 8191 |
| bedrock/us-east-1/mistral.mixtral-8x7b-instruct-v0:1 | $0.45 | $0.7 | 32,000 | 8191 |
| bedrock/eu-west-3/mistral.mixtral-8x7b-instruct-v0:1 | $0.59 | $0.91 | 32,000 | 8191 |
| bedrock/us-west-2/mistral.mistral-7b-instruct-v0:2 | $0.15 | $0.2 | 32,000 | 8191 |
| bedrock/us-east-1/mistral.mistral-7b-instruct-v0:2 | $0.15 | $0.2 | 32,000 | 8191 |
| bedrock/eu-west-3/mistral.mistral-7b-instruct-v0:2 | $0.2 | $0.26 | 32,000 | 8191 |
| bedrock/us-east-1/mistral.mistral-large-2402-v1:0 | $8.00 | $24.00 | 32,000 | 8191 |
| bedrock/us-west-2/mistral.mistral-large-2402-v1:0 | $8.00 | $24.00 | 32,000 | 8191 |
| bedrock/eu-west-3/mistral.mistral-large-2402-v1:0 | $10.4 | $31.2 | 32,000 | 8191 |
| anthropic.claude-3-sonnet-20240229-v1:0 | $3.00 | $15.00 | 200,000 | 4096 |
| anthropic.claude-3-haiku-20240307-v1:0 | $0.25 | $1.25 | 200,000 | 4096 |
| anthropic.claude-3-opus-20240229-v1:0 | $15.00 | $75.00 | 200,000 | 4096 |
| anthropic.claude-v1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/us-east-1/anthropic.claude-v1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/us-west-2/anthropic.claude-v1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/ap-northeast-1/anthropic.claude-v1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/ap-northeast-1/1-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/ap-northeast-1/6-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/anthropic.claude-v1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/eu-central-1/1-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/6-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/1-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/6-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/1-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/6-month-commitment/anthropic.claude-v1 | -- | -- | 100,000 | 8191 |
| anthropic.claude-v2 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/us-east-1/anthropic.claude-v2 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/us-west-2/anthropic.claude-v2 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/ap-northeast-1/anthropic.claude-v2 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/ap-northeast-1/1-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/ap-northeast-1/6-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/anthropic.claude-v2 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/eu-central-1/1-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/6-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/1-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/6-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/1-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/6-month-commitment/anthropic.claude-v2 | -- | -- | 100,000 | 8191 |
| anthropic.claude-v2:1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/us-east-1/anthropic.claude-v2:1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/us-west-2/anthropic.claude-v2:1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/ap-northeast-1/anthropic.claude-v2:1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/ap-northeast-1/1-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/ap-northeast-1/6-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/anthropic.claude-v2:1 | $8.00 | $24.00 | 100,000 | 8191 |
| bedrock/eu-central-1/1-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/6-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/1-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/6-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/1-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/6-month-commitment/anthropic.claude-v2:1 | -- | -- | 100,000 | 8191 |
| anthropic.claude-instant-v1 | $1.63 | $5.51 | 100,000 | 8191 |
| bedrock/us-east-1/anthropic.claude-instant-v1 | $0.8 | $2.4 | 100,000 | 8191 |
| bedrock/us-east-1/1-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-east-1/6-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/1-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/6-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/us-west-2/anthropic.claude-instant-v1 | $0.8 | $2.4 | 100,000 | 8191 |
| bedrock/ap-northeast-1/anthropic.claude-instant-v1 | $2.23 | $7.55 | 100,000 | 8191 |
| bedrock/ap-northeast-1/1-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/ap-northeast-1/6-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/anthropic.claude-instant-v1 | $2.48 | $8.38 | 100,000 | 8191 |
| bedrock/eu-central-1/1-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| bedrock/eu-central-1/6-month-commitment/anthropic.claude-instant-v1 | -- | -- | 100,000 | 8191 |
| cohere.command-text-v14 | $1.5 | $2.00 | 4,096 | 4096 |
| bedrock/ /1-month-commitment/cohere.command-text-v14 | -- | -- | 4,096 | 4096 |
| bedrock/ /6-month-commitment/cohere.command-text-v14 | -- | -- | 4,096 | 4096 |
| cohere.command-light-text-v14 | $0.3 | $0.6 | 4,096 | 4096 |
| bedrock/ /1-month-commitment/cohere.command-light-text-v14 | -- | -- | 4,096 | 4096 |
| bedrock/ /6-month-commitment/cohere.command-light-text-v14 | -- | -- | 4,096 | 4096 |
| cohere.command-r-plus-v1:0 | $3.00 | $15.00 | 128,000 | 4096 |
| cohere.command-r-v1:0 | $0.5 | $1.5 | 128,000 | 4096 |
| cohere.embed-english-v3 | $0.1 | $0.00 | 512 | nan |
| cohere.embed-multilingual-v3 | $0.1 | $0.00 | 512 | nan |
| meta.llama2-13b-chat-v1 | $0.75 | $1.00 | 4,096 | 4096 |
| meta.llama2-70b-chat-v1 | $1.95 | $2.56 | 4,096 | 4096 |
| meta.llama3-8b-instruct-v1:0 | $0.4 | $0.6 | 8,192 | 8192 |
| meta.llama3-70b-instruct-v1:0 | $2.65 | $3.5 | 8,192 | 8192 |
| 512-x-512/50-steps/stability.stable-diffusion-xl-v0 | -- | -- | 77 | nan |
| 512-x-512/max-steps/stability.stable-diffusion-xl-v0 | -- | -- | 77 | nan |
| max-x-max/50-steps/stability.stable-diffusion-xl-v0 | -- | -- | 77 | nan |
| max-x-max/max-steps/stability.stable-diffusion-xl-v0 | -- | -- | 77 | nan |
| 1024-x-1024/50-steps/stability.stable-diffusion-xl-v1 | -- | -- | 77 | nan |
| 1024-x-1024/max-steps/stability.stable-diffusion-xl-v1 | -- | -- | 77 | nan |
| sagemaker/meta-textgeneration-llama-2-7b | $0.00 | $0.00 | 4,096 | 4096 |
| sagemaker/meta-textgeneration-llama-2-7b-f | $0.00 | $0.00 | 4,096 | 4096 |
| sagemaker/meta-textgeneration-llama-2-13b | $0.00 | $0.00 | 4,096 | 4096 |
| sagemaker/meta-textgeneration-llama-2-13b-f | $0.00 | $0.00 | 4,096 | 4096 |
| sagemaker/meta-textgeneration-llama-2-70b | $0.00 | $0.00 | 4,096 | 4096 |
| sagemaker/meta-textgeneration-llama-2-70b-b-f | $0.00 | $0.00 | 4,096 | 4096 |
| together-ai-up-to-4b | $0.1 | $0.1 | nan | nan |
| together-ai-4.1b-8b | $0.2 | $0.2 | nan | nan |
| together-ai-8.1b-21b | $0.3 | $0.3 | nan | nan |
| together-ai-21.1b-41b | $0.8 | $0.8 | nan | nan |
| together-ai-41.1b-80b | $0.9 | $0.9 | nan | nan |
| together-ai-81.1b-110b | $1.8 | $1.8 | nan | nan |
| together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1 | $0.6 | $0.6 | nan | nan |
| together_ai/mistralai/Mistral-7B-Instruct-v0.1 | -- | -- | nan | nan |
| together_ai/togethercomputer/CodeLlama-34b-Instruct | -- | -- | nan | nan |
| ollama/llama2 | $0.00 | $0.00 | 4,096 | 4096 |
| ollama/llama2:13b | $0.00 | $0.00 | 4,096 | 4096 |
| ollama/llama2:70b | $0.00 | $0.00 | 4,096 | 4096 |
| ollama/llama2-uncensored | $0.00 | $0.00 | 4,096 | 4096 |
| ollama/llama3 | $0.00 | $0.00 | 8,192 | 8192 |
| ollama/llama3:70b | $0.00 | $0.00 | 8,192 | 8192 |
| ollama/mistral | $0.00 | $0.00 | 8,192 | 8192 |
| ollama/mistral-7B-Instruct-v0.1 | $0.00 | $0.00 | 8,192 | 8192 |
| ollama/mistral-7B-Instruct-v0.2 | $0.00 | $0.00 | 32,768 | 32768 |
| ollama/mixtral-8x7B-Instruct-v0.1 | $0.00 | $0.00 | 32,768 | 32768 |
| ollama/mixtral-8x22B-Instruct-v0.1 | $0.00 | $0.00 | 65,536 | 65536 |
| ollama/codellama | $0.00 | $0.00 | 4,096 | 4096 |
| ollama/orca-mini | $0.00 | $0.00 | 4,096 | 4096 |
| ollama/vicuna | $0.00 | $0.00 | 2,048 | 2048 |
| deepinfra/lizpreciatior/lzlv_70b_fp16_hf | $0.7 | $0.9 | 4,096 | 4096 |
| deepinfra/Gryphe/MythoMax-L2-13b | $0.22 | $0.22 | 4,096 | 4096 |
| deepinfra/mistralai/Mistral-7B-Instruct-v0.1 | $0.13 | $0.13 | 32,768 | 8191 |
| deepinfra/meta-llama/Llama-2-70b-chat-hf | $0.7 | $0.9 | 4,096 | 4096 |
| deepinfra/cognitivecomputations/dolphin-2.6-mixtral-8x7b | $0.27 | $0.27 | 32,768 | 8191 |
| deepinfra/codellama/CodeLlama-34b-Instruct-hf | $0.6 | $0.6 | 4,096 | 4096 |
| deepinfra/deepinfra/mixtral | $0.27 | $0.27 | 32,000 | 4096 |
| deepinfra/Phind/Phind-CodeLlama-34B-v2 | $0.6 | $0.6 | 16,384 | 4096 |
| deepinfra/mistralai/Mixtral-8x7B-Instruct-v0.1 | $0.27 | $0.27 | 32,768 | 8191 |
| deepinfra/deepinfra/airoboros-70b | $0.7 | $0.9 | 4,096 | 4096 |
| deepinfra/01-ai/Yi-34B-Chat | $0.6 | $0.6 | 4,096 | 4096 |
| deepinfra/01-ai/Yi-6B-200K | $0.13 | $0.13 | 200,000 | 4096 |
| deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1 | $0.7 | $0.9 | 4,096 | 4096 |
| deepinfra/meta-llama/Llama-2-13b-chat-hf | $0.22 | $0.22 | 4,096 | 4096 |
| deepinfra/amazon/MistralLite | $0.2 | $0.2 | 32,768 | 8191 |
| deepinfra/meta-llama/Llama-2-7b-chat-hf | $0.13 | $0.13 | 4,096 | 4096 |
| deepinfra/meta-llama/Meta-Llama-3-8B-Instruct | $0.08 | $0.08 | 8,191 | 4096 |
| deepinfra/meta-llama/Meta-Llama-3-70B-Instruct | $0.59 | $0.79 | 8,191 | 4096 |
| deepinfra/01-ai/Yi-34B-200K | $0.6 | $0.6 | 200,000 | 4096 |
| deepinfra/openchat/openchat_3.5 | $0.13 | $0.13 | 4,096 | 4096 |
| perplexity/codellama-34b-instruct | $0.35 | $1.4 | 16,384 | 16384 |
| perplexity/codellama-70b-instruct | $0.7 | $2.8 | 16,384 | 16384 |
| perplexity/pplx-7b-chat | $0.07 | $0.28 | 8,192 | 8192 |
| perplexity/pplx-70b-chat | $0.7 | $2.8 | 4,096 | 4096 |
| perplexity/pplx-7b-online | $0.00 | $0.28 | 4,096 | 4096 |
| perplexity/pplx-70b-online | $0.00 | $2.8 | 4,096 | 4096 |
| perplexity/llama-2-70b-chat | $0.7 | $2.8 | 4,096 | 4096 |
| perplexity/mistral-7b-instruct | $0.07 | $0.28 | 4,096 | 4096 |
| perplexity/mixtral-8x7b-instruct | $0.07 | $0.28 | 4,096 | 4096 |
| perplexity/sonar-small-chat | $0.07 | $0.28 | 16,384 | 16384 |
| perplexity/sonar-small-online | $0.00 | $0.28 | 12,000 | 12000 |
| perplexity/sonar-medium-chat | $0.6 | $1.8 | 16,384 | 16384 |
| perplexity/sonar-medium-online | $0.00 | $1.8 | 12,000 | 12000 |
| anyscale/mistralai/Mistral-7B-Instruct-v0.1 | $0.15 | $0.15 | 16,384 | 16384 |
| anyscale/mistralai/Mixtral-8x7B-Instruct-v0.1 | $0.15 | $0.15 | 16,384 | 16384 |
| anyscale/mistralai/Mixtral-8x22B-Instruct-v0.1 | $0.9 | $0.9 | 65,536 | 65536 |
| anyscale/HuggingFaceH4/zephyr-7b-beta | $0.15 | $0.15 | 16,384 | 16384 |
| anyscale/google/gemma-7b-it | $0.15 | $0.15 | 8,192 | 8192 |
| anyscale/meta-llama/Llama-2-7b-chat-hf | $0.15 | $0.15 | 4,096 | 4096 |
| anyscale/meta-llama/Llama-2-13b-chat-hf | $0.25 | $0.25 | 4,096 | 4096 |
| anyscale/meta-llama/Llama-2-70b-chat-hf | $1.00 | $1.00 | 4,096 | 4096 |
| anyscale/codellama/CodeLlama-34b-Instruct-hf | $1.00 | $1.00 | 4,096 | 4096 |
| anyscale/codellama/CodeLlama-70b-Instruct-hf | $1.00 | $1.00 | 4,096 | 4096 |
| anyscale/meta-llama/Meta-Llama-3-8B-Instruct | $0.15 | $0.15 | 8,192 | 8192 |
| anyscale/meta-llama/Meta-Llama-3-70B-Instruct | $1.00 | $1.00 | 8,192 | 8192 |
| cloudflare/@cf/meta/llama-2-7b-chat-fp16 | $1.923 | $1.923 | 3,072 | 3072 |
| cloudflare/@cf/meta/llama-2-7b-chat-int8 | $1.923 | $1.923 | 2,048 | 2048 |
| cloudflare/@cf/mistral/mistral-7b-instruct-v0.1 | $1.923 | $1.923 | 8,192 | 8192 |
| cloudflare/@hf/thebloke/codellama-7b-instruct-awq | $1.923 | $1.923 | 4,096 | 4096 |
| voyage/voyage-01 | $0.1 | $0.00 | 4,096 | nan |
| voyage/voyage-lite-01 | $0.1 | $0.00 | 4,096 | nan |
| voyage/voyage-large-2 | $0.12 | $0.00 | 16,000 | nan |
| voyage/voyage-law-2 | $0.12 | $0.00 | 16,000 | nan |
| voyage/voyage-code-2 | $0.12 | $0.00 | 16,000 | nan |
| voyage/voyage-2 | $0.1 | $0.00 | 4,000 | nan |
| voyage/voyage-lite-02-instruct | $0.1 | $0.00 | 4,000 | nan |
| databricks/databricks-dbrx-instruct | $0.75 | $2.25 | 32,768 | 32768 |
| databricks/databricks-meta-llama-3-70b-instruct | $1.00 | $3.00 | 8,192 | 8192 |
| databricks/databricks-llama-2-70b-chat | $0.5 | $1.5 | 4,096 | 4096 |
| databricks/databricks-mixtral-8x7b-instruct | $0.5 | $1.00 | 4,096 | 4096 |
| databricks/databricks-mpt-30b-instruct | $1.00 | $1.00 | 8,192 | 8192 |
| databricks/databricks-mpt-7b-instruct | $0.5 | $0.5 | 8,192 | 8192 |
| databricks/databricks-bge-large-en | $0.1 | $0.00 | 512 | nan | Callback handlers You may also calculate token costs in LLM wrapper/framework libraries using callbacks. LlamaIndex sh
pip install 'tokencost[llama-index]' To use the base callback handler, you may import it: python
from tokencost.callbacks.llama_index import TokenCostHandler and pass it to your framework's callback handler. Langchain (Coming Soon) Running locally Installation via GitHub : bash
git clone git@github.com:AgentOps-AI/tokencost.git
cd tokencost
pip install -e . Running tests Install pytest if you don't have it already shell
pip install pytest Run the tests/ folder while in the parent directory shell
pytest tests This repo also supports tox , simply run python -m tox . Contributing Contributions to TokenCost are welcome! Feel free to create an issue for any bug reports, complaints, or feature suggestions. License TokenCost is released under the MIT License.;Easy token price estimates for 400+ LLMs;analytics,claude,large-language-models,llm,observability,openai,price,price-tracker,token,tokenization | AgentOps-AI/tokencost |
kingjulio8238/memary;Manage Your Agent Memories Agents promote human-type reasoning and are a great advancement towards building AGI and understanding ourselves as humans. Memory is a key component of how humans approach tasks and should be weighted the same when building AI agents. memary emulates human memory to advance these agents. Quickstart 🏁 Install memary With pip: Make sure you are running python version <= 3.11.9, then run pip install memary Locally: i. Create a virtual environment with the Python version set as specified above ii. Install Python dependencies: pip install -r requirements.txt Specify Models Used At the time of writing, memary assumes installation of local models and we currently support all models available through Ollama : LLM running locally using Ollama ( Llama 3 8B/40B as suggested defaults) OR gpt-3.5-turbo Vision model running locally using Ollama ( LLaVA as suggested default) OR gpt-4-vision-preview memary will default to the locally run models unless explicitly specified. Additionally, memary allows developers to easily switch between downloaded models . Run memary Steps 1. [Optional] If running models locally using Ollama, follow the instructions in this repo . Ensure that a .env exists with any necessary credentials. .env ```
OPENAI_API_KEY="YOUR_API_KEY"
NEO4J_PW="YOUR_NEO4J_PW"
NEO4J_URL="YOUR_NEO4J_URL"
PERPLEXITY_API_KEY="YOUR_API_KEY"
GOOGLEMAPS_API_KEY="YOUR_API_KEY"
ALPHA_VANTAGE_API_KEY="YOUR_API_KEY"
``` Fetch API credentials: API Info - [**OpenAI key**](https://openai.com/index/openai-api)
- [**Neo4j**](https://neo4j.com/cloud/platform/aura-graph-database/?ref=nav-get-started-cta)
- Click 'Start for free' → Create a free instance → Open the auto-downloaded txt file and use the credentials
- [**Perplexity key**](https://www.perplexity.ai/settings/api)
- [**Google Maps**](https://console.cloud.google.com/apis/credentials)
- Keys are generated in the 'Credentials' page of the 'APIs & Services' tab of Google Cloud Console
- [Alpha Vantage](https://www.alphavantage.co/support/#api-key)
- Recommended to use https://10minutemail.com/ to generate a temporary email to use Update user persona which can be found in streamlit_app/data/user_persona.txt using the user persona template which can be found in streamlit_app/data/user_persona_template.txt . Instructions have been provided - replace the curly brackets with relevant information. [Optional] Update system persona, if needed, which can be found in streamlit_app/data/system_persona.txt . Run: cd streamlit_app
streamlit run app.py Basic Usage ```python
from memary.agent.chat_agent import ChatAgent system_persona_txt = "data/system_persona.txt"
user_persona_txt = "data/user_persona.txt"
past_chat_json = "data/past_chat.json"
memory_stream_json = "data/memory_stream.json"
entity_knowledge_store_json = "data/entity_knowledge_store.json"
chat_agent = ChatAgent(
"Personal Agent",
memory_stream_json,
entity_knowledge_store_json,
system_persona_txt,
user_persona_txt,
past_chat_json,
) ``
Pass in subset of ['search', 'vision', 'locate', 'stocks'] as include_from_defaults` for different set of default tools upon initialization. Adding Custom Tools ```python
def multiply(a: int, b: int) -> int:
"""Multiply two integers and returns the result integer"""
return a * b chat_agent.add_tool({"multiply": multiply})
```
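The include_from_defaults note above can be exercised at construction time. A minimal sketch, reusing the file paths from the Basic Usage snippet — the keyword name comes from the note above, and the exact call should be treated as illustrative rather than the canonical signature:

```python
from memary.agent.chat_agent import ChatAgent

# Illustrative only: same arguments as in Basic Usage, plus a restricted tool set.
chat_agent = ChatAgent(
    "Personal Agent",
    "data/memory_stream.json",
    "data/entity_knowledge_store.json",
    "data/system_persona.txt",
    "data/user_persona.txt",
    "data/past_chat.json",
    include_from_defaults=["search", "locate"],  # any subset of ['search', 'vision', 'locate', 'stocks']
)
```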
More information about creating custom tools for the LlamaIndex ReAct Agent can be found here . Removing Custom Tools python
chat_agent.remove_tool("multiply") Memory Dashboard 🧠 (coming soon) | Features | Benefits |
|---------------------------------------------------|----------------------------------------------------------------|
| 🗣️ Chat to Agent Memory | Access certain memories |
| 🧠 Analyze Agent Progress | Track how agents develop their memories over time |
| ⏮️ Rewind Executions | Review agent memories to understand specific responses |
| 🧑🧑🧒🧒 Audience Preferences | Understand audiences' best and most recent preferences |
| ✍🏻 memaryParse | Inject proprietary data into agent memory & combine parsers for advanced data ingestion |
| 🗂️ memaryRetrieval | Access memories & combine retrievers for advanced memory retrieval |
| 🧪 Configure Agent Memory | Search and combine memory databases |
| 🛝 Playgrounds | Specify models and tools used as well as benchmark different memory techniques |
| 🔍 Stay Up To Date | Receive notifications when agent memories have been added, updated or removed | Core Concepts 🧪 The current structure of memary is detailed in the diagram below. At the time of writing, the routing agent, knowledge graph and memory module in the above system design are all integrated into the ChatAgent class located in the src/agent directory. Raw source code for these components can also be found in their respective directories including benchmarks, notebooks, and updates. Principles memary integrates itself onto your existing agents with as little developer implementation as possible. We achieve this by sticking to a few principles. Auto-generated Memory After initializing memary, agent memory automatically updates as the agent interacts. This type of generation allows us to capture all memories to easily display in your dashboard. Additionally, we allow the combination of databases with little or no code! Memory Modules Given a current state of the databases, memary tracks users' preferences which are displayed in your dashboard for analysis. System Improvement memary mimics how human memory evolves and learns over time. We will provide the rate of your agents' improvement in your dashboard. Rewind Memories memary takes care of keeping track of all chats so you can rewind agent executions and access the agent's memory at a certain period (coming soon). Agent To give developers who don't have existing agents access to memary, we set up a simple agent implementation. We use the ReAct agent to plan and execute a query given the tools provided. While we didn't emphasize equipping the agent with many tools, the search tool is crucial to retrieve information from the knowledge graph . This tool queries the knowledge graph for a response based on existing nodes and executes an external search if no related entities exist. Other default agent tools include computer vision powered by LLaVa and a location tool using geocoder and Google Maps. Note: In future version releases, the current ReAct agent (that was used for demo purposes) will be removed from the package so that memary can support any type of agent from any provider . ``` py title="external_query" hl_lines="1"
def external_query(self, query: str):
messages_dict = [
{"role": "system", "content": "Be precise and concise."},
{"role": "user", "content": query},
]
messages = [ChatMessage(**msg) for msg in messages_dict]
external_response = self.query_llm.chat(messages) return str(external_response) ``` ``` py title="search" hl_lines="1"
def search(self, query: str) -> str:
response = self.query_engine.query(query) if response.metadata is None:
return self.external_query(query)
else:
return response ``` Knowledge Graphs Knowledge Graphs ↔ LLMs memary uses a Neo4j graph database to store knowledge. Llama Index was used to add nodes into the graph store based on documents. Perplexity (mistral-7b-instruct model) was used for external queries. Knowledge Graph Use Cases Inject the final agent responses into existing KGs. memary uses a recursive retrieval approach to search the KG, which involves determining what the key entities are in the query, building a subgraph of those entities with a maximum depth of 2 away, and finally using that subgraph to build up the context. When faced with multiple key entities in a query, memary uses multi-hop reasoning to join multiple subgraphs into a larger subgraph to search through. These techniques reduce latency compared to searching the entire knowledge graph at once. py title="store in KG" hl_lines="1"
def query(self, query: str) -> str:
# get the response from react agent
response = self.routing_agent.chat(query)
self.routing_agent.reset()
# write response to file for KG writeback
with open("data/external_response.txt", "w") as f:
print(response, file=f)
# write back to the KG
self.write_back()
return response ``` py title="recursive retrieval" hl_lines="1"
def check_KG(self, query: str) -> bool:
"""Check if the query is in the knowledge graph. Args:
query (str): query to check in the knowledge graph
Returns:
bool: True if the query is in the knowledge graph, False otherwise
"""
response = self.query_engine.query(query)
if response.metadata is None:
return False
return generate_string(
list(list(response.metadata.values())[0]["kg_rel_map"].keys())
) ``` Memory Modules The memory module comprises the Memory Stream and Entity Knowledge Store. The memory module was influenced by the design of K-LaMP proposed by Microsoft Research. Memory Stream The Memory Stream captures all entities inserted into the KG and their associated timestamps. This stream reflects the breadth of the users' knowledge , i.e., concepts users have had exposure to but no depth of exposure is inferred.
- Timeline Analysis: Map out a timeline of interactions, highlighting moments of high engagement or shifts in topic focus. This helps in understanding the evolution of the user's interests over time. py title="add to memory stream" hl_lines="1"
def add_memory(self, entities):
self.memory.extend([
MemoryItem(str(entity),
datetime.now().replace(microsecond=0))
for entity in entities
]) Extract Themes: Look for recurring themes or topics within the interactions. This thematic analysis can help anticipate user interests or questions even before they are explicitly stated. py title="retrieve from memory stream" hl_lines="1"
def get_memory(self) -> list[MemoryItem]:
return self.memory Entity Knowledge Store The Entity Knowledge Store tracks the frequency and recency of references to each entity stored in the memory stream. This knowledge store reflects users' depth of knowledge , i.e., concepts they are more familiar with than others.
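For orientation, here is a sketch of the two record shapes implied by the constructor calls shown in this section (MemoryItem(str(entity), datetime...) above and KnowledgeMemoryItem(entity, len(memory_dates), max(memory_dates)) further below). Field names are illustrative assumptions; the actual classes in memary may carry additional fields or helper methods:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MemoryItem:
    entity: str      # entity name inserted into the KG
    date: datetime   # when it was inserted

@dataclass
class KnowledgeMemoryItem:
    entity: str                  # entity name
    count: int                   # how many memory-stream entries reference it (frequency)
    most_recent_date: datetime   # latest reference (recency)
```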
- Rank Entities by Relevance: Use both frequency and recency to rank entities. An entity frequently mentioned (high count) and referenced recently is likely of high importance, and the user is well aware of this concept. py title="select most relevant entities" hl_lines="1"
def _select_top_entities(self):
entity_knowledge_store = self.message.llm_message['knowledge_entity_store']
entities = [entity.to_dict() for entity in entity_knowledge_store]
entity_counts = [entity['count'] for entity in entities]
top_indexes = np.argsort(entity_counts)[:TOP_ENTITIES]
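# Added note: np.argsort sorts in ascending order, so [:TOP_ENTITIES] selects the
# entities with the LOWEST counts; to rank by highest frequency as described above,
# something like np.argsort(entity_counts)[::-1][:TOP_ENTITIES] would be needed.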
return [entities[index] for index in top_indexes] Categorize Entities: Group entities into categories based on their nature or the context in which they're mentioned (e.g., technical terms, personal interests). This categorization aids in quickly accessing relevant information tailored to the user's inquiries. ``` py title="group entities" hl_lines="1"
def _convert_memory_to_knowledge_memory(
self, memory_stream: list) -> list[KnowledgeMemoryItem]:
"""Converts memory from memory stream to entity knowledge store by grouping entities Returns:
knowledge_memory (list): list of KnowledgeMemoryItem
"""
knowledge_memory = []
entities = set([item.entity for item in memory_stream])
for entity in entities:
memory_dates = [
item.date for item in memory_stream if item.entity == entity
]
knowledge_memory.append(
KnowledgeMemoryItem(entity, len(memory_dates),
max(memory_dates)))
return knowledge_memory ``` Highlight Changes Over Time: Identify any significant changes in the entities' ranking or categorization over time. A shift in the most frequently mentioned entities could indicate a change in the user's interests or knowledge. Additional information on the memory module can be found here New Context Window Note: We utilize the key categorized entities and themes associated with users to tailor agent responses more closely to the user's current interests/preferences and knowledge level/expertise. The new context window is made up of the following: Agent response
``` py title="retrieve agent response" hl_lines="1"
def get_routing_agent_response(self, query, return_entity=False):
"""Get response from the ReAct."""
response = ""
if self.debug:
# writes ReAct agent steps to separate file and modifies format to be readable in .txt file
with open("data/routing_response.txt", "w") as f:
orig_stdout = sys.stdout
sys.stdout = f
response = str(self.query(query))
sys.stdout.flush()
sys.stdout = orig_stdout
text = ""
with open("data/routing_response.txt", "r") as f:
text = f.read() plain = ansi_strip(text)
with open("data/routing_response.txt", "w") as f:
f.write(plain)
else:
response = str(self.query(query))
if return_entity:
# the query above already adds final response to KG so entities will be present in the KG
return response, self.get_entity(self.query_engine.retrieve(query))
return response ``` Most relevant entities
``` py title="retrieve important entities" hl_lines="1"
def get_entity(self, retrieve) -> list[str]:
"""retrieve is a list of QueryBundle objects.
A retrieved QueryBundle object has a "node" attribute,
which has a "metadata" attribute. example for "kg_rel_map":
kg_rel_map = {
'Harry': [['DREAMED_OF', 'Unknown relation'], ['FELL_HARD_ON', 'Concrete floor']],
'Potter': [['WORE', 'Round glasses'], ['HAD', 'Dream']]
}
Args:
retrieve (list[NodeWithScore]): list of NodeWithScore objects
return:
list[str]: list of string entities
"""
entities = []
kg_rel_map = retrieve[0].node.metadata["kg_rel_map"]
for key, items in kg_rel_map.items():
# key is the entity of question
entities.append(key)
# items is a list of [relationship, entity]
entities.extend(item[1] for item in items)
if len(entities) > MAX_ENTITIES_FROM_KG:
break
entities = list(set(entities))
for exceptions in ENTITY_EXCEPTIONS:
if exceptions in entities:
entities.remove(exceptions)
return entities ``` Chat history (summarized to avoid token overflow)
``` py title="summarize chat history" hl_lines="1"
def _summarize_contexts(self, total_tokens: int):
"""Summarize the contexts. Args:
total_tokens (int): total tokens in the response
"""
messages = self.message.llm_message["messages"]
# First two messages are system and user personas
if len(messages) > 2 + NONEVICTION_LENGTH:
messages = messages[2:-NONEVICTION_LENGTH]
del self.message.llm_message["messages"][2:-NONEVICTION_LENGTH]
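# Added note: messages[0:2] are the system and user personas and are always kept;
# the slice [2:-NONEVICTION_LENGTH] is the older middle of the chat history, which is
# removed here and replaced below (at index 2) by a ~50-word summary, while the most
# recent NONEVICTION_LENGTH messages stay verbatim.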
else:
messages = messages[2:]
del self.message.llm_message["messages"][2:]
message_contents = [message.to_dict()["content"] for message in messages]
llm_message_chatgpt = {
"model": self.model,
"messages": [
{
"role": "user",
"content": "Summarize these previous conversations into 50 words:"
+ str(message_contents),
}
],
}
response, _ = self._get_gpt_response(llm_message_chatgpt)
content = "Summarized past conversation:" + response
self._add_contexts_to_llm_message("assistant", content, index=2)
logging.info(f"Contexts summarized successfully. \n summary: {response}")
logging.info(f"Total tokens after eviction: {total_tokens*EVICTION_RATE}") ``` Future Features 🔜 Speak to Your Agents Memory 🗣️ memary's chat interface offers a portal to access agent memories, integrating capabilitiies such as searching , removing and viewing agent memories over specified periods and more all under one umbrella available in your dashboard. Analyze Agent Progress 🧠 Track how your agents develop their memory. We will provide access to relevant metrics to represent the growth of agent memories over time which will be available in your dashboard. Track Memories ⏮️ memary breaks down agent memory for each response generated . A list of agent responses with their respective memories will be avilable in your dashbord. Human input (good/bad response) can help your systems improve. Audience Preferences 🧑🧑🧒🧒 Through our proprietary memory modules, we are able to infer audience preferences for certain time periods. Audiences' best and most recent preferences are continously updated and will be available in your dashboard. memaryParse ✍🏻 Parse and clean your proprietry data before inserting into your agent memory. memary supports various file types including table and image extraction. Combine different parsers to form a parent parser with advanced capabilities. Also access templates for predefined database schemas and set of node relationships or define your own! This is all available in your dashboard. memaryRetrieval 🗂️ Use different techniques to retrieve agent memory. Also combine various retrievers to form a parent retriever with advanced capabilities. All avilable in your dashboard. Customizable Memory 🧪 memary deploys knowledge graphs to track agent actions . View, search and configure memory for your purposes. Join different memories together for improved retrieval and toggle between your favorite graph providers. All available in your dashboard. Playgrounds 🛝 Tool Playground: Simply define python functions and add it as one of your agent tools. View all available tools and remove any if necessary. Do this all in your dashboard! Model Playground: Select specific models for tasks across memary to lower system LLM costs. All models deployed on HF will be avilable in your dashboard. Benchmarking Playground: Easily run different memary configurations against each other to evaluate which memory options are more suitable for a specific task. License memary is released under the MIT License.;Human Memory for AI Agents. ;agents,memory,retrieval-augmented-generation,knowledge-graph,multiagentssystems | kingjulio8238/memary |
TencentARC/BrushNet;BrushNet This repository contains the implementation of the paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion" Keywords: Image Inpainting, Diffusion Models, Image Generation Xuan Ju 12 , Xian Liu 12 , Xintao Wang 1 , Yuxuan Bian 2 , Ying Shan 1 , Qiang Xu 2 1 ARC Lab, Tencent PCG 2 The Chinese University of Hong Kong * Corresponding Author 🌐Project Page | 📜Arxiv | 🗄️Data | 📹Video | 🤗Hugging Face Demo | 📖 Table of Contents BrushNet TODO 🛠️ Method Overview 🚀 Getting Started Environment Requirement 🌍 Data Download ⬇️ 🏃🏼 Running Scripts Training 🤯 Inference 📜 Evaluation 📏 🤝🏼 Cite Us 💖 Acknowledgement TODO [x] Release training and inference code [x] Release checkpoint (sdv1.5) [x] Release checkpoint (sdxl). Sadly, I only have a V100 for training this checkpoint, which can only train with a batch size of 1 at a slow speed. The current ckpt is only trained for a small number of steps and thus does not perform well. But fortunately, yuanhang has volunteered to help train a better version. Please stay tuned! Thanks to yuanhang for his effort! [x] Release evaluation code [x] Release gradio demo [x] Release comfyui demo. Thanks to nullquant ( ConfyUI-BrushNet ) and kijai ( ComfyUI-BrushNet-Wrapper ) for helping! [x] Release training data . Thanks to random123123 for helping! [x] We used BrushNet to participate in the CVPR2024 GenAI Media Generation Challenge Workshop and won a top prize! The solution is provided in InstructionGuidedEditing 🛠️ Method Overview BrushNet is a diffusion-based text-guided image inpainting model that can be added in a plug-and-play fashion to any pre-trained diffusion model. Our architectural design incorporates two key insights: (1) dividing the masked image features and noisy latent reduces the model's learning load, and (2) leveraging dense per-pixel control over the entire pre-trained model enhances its suitability for image inpainting tasks. More analysis can be found in the main paper. 🚀 Getting Started Environment Requirement 🌍 BrushNet has been implemented and tested on PyTorch 1.12.1 with Python 3.9. Clone the repo: git clone https://github.com/TencentARC/BrushNet.git We recommend you first use conda to create a virtual environment, and install PyTorch following the official instructions . For example: conda create -n diffusers python=3.9 -y
conda activate diffusers
python -m pip install --upgrade pip
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116 Then, you can install diffusers (implemented in this repo) with: pip install -e . After that, you can install required packages through: cd examples/brushnet/
pip install -r requirements.txt Data Download ⬇️ Dataset You can download the BrushData and BrushBench here (as well as the EditBench we re-processed), which are used for training and testing the BrushNet. By downloading the data, you are agreeing to the terms and conditions of the license. The data structure should be like: |-- data
|-- BrushData
|-- 00200.tar
|-- 00201.tar
|-- ...
|-- BrushDench
|-- images
|-- mapping_file.json
|-- EditBench
|-- images
|-- mapping_file.json Note: We only provide a part of the BrushData on Google Drive due to the space limit. random123123 has helped upload a full dataset on Hugging Face here . Thanks for his help! Checkpoints Checkpoints of BrushNet can be downloaded from here . The ckpt folder contains BrushNet pretrained checkpoints for Stable Diffusion v1.5 ( segmentation_mask_brushnet_ckpt and random_mask_brushnet_ckpt ) pretrained Stable Diffusion v1.5 checkpoint (e.g., realisticVisionV60B1_v51VAE from Civitai ). You can use scripts/convert_original_stable_diffusion_to_diffusers.py to process other models downloaded from Civitai. BrushNet pretrained checkpoints for Stable Diffusion XL ( segmentation_mask_brushnet_ckpt_sdxl_v0 and random_mask_brushnet_ckpt_sdxl_v0 ). A better version will be shortly released by yuanhang . Please stay tuned! pretrained Stable Diffusion XL checkpoint (e.g., juggernautXL_juggernautX from Civitai ). You can use StableDiffusionXLPipeline.from_single_file("path of safetensors").save_pretrained("path to save",safe_serialization=False) to process other models downloaded from Civitai. The data structure should be like: |-- data
|-- BrushData
|-- BrushDench
|-- EditBench
|-- ckpt
|-- realisticVisionV60B1_v51VAE
|-- model_index.json
|-- vae
|-- ...
|-- segmentation_mask_brushnet_ckpt
|-- segmentation_mask_brushnet_ckpt_sdxl_v0
|-- random_mask_brushnet_ckpt
|-- random_mask_brushnet_ckpt_sdxl_v0
|-- ... The checkpoints in segmentation_mask_brushnet_ckpt and segmentation_mask_brushnet_ckpt_sdxl_v0 were trained on BrushData, which has a segmentation prior (masks have the same shape as objects). The random_mask_brushnet_ckpt and random_mask_brushnet_ckpt_sdxl provide more general ckpts for random mask shapes. 🏃🏼 Running Scripts Training 🤯 You can train with segmentation masks using the script: ``` sd v1.5 accelerate launch examples/brushnet/train_brushnet.py \
--pretrained_model_name_or_path runwayml/stable-diffusion-v1-5 \
--output_dir runs/logs/brushnet_segmentationmask \
--train_data_dir data/BrushData \
--resolution 512 \
--learning_rate 1e-5 \
--train_batch_size 2 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300
--checkpointing_steps 10000 sdxl accelerate launch examples/brushnet/train_brushnet_sdxl.py \
--pretrained_model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 \
--output_dir runs/logs/brushnetsdxl_segmentationmask \
--train_data_dir data/BrushData \
--resolution 1024 \
--learning_rate 1e-5 \
--train_batch_size 1 \
--gradient_accumulation_steps 4 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--checkpointing_steps 10000
``` To use a custom dataset, you can process your own data into the format of BrushData and revise --train_data_dir . You can train with random masks using the script (by adding --random_mask ): ``` sd v1.5 accelerate launch examples/brushnet/train_brushnet.py \
--pretrained_model_name_or_path runwayml/stable-diffusion-v1-5 \
--output_dir runs/logs/brushnet_randommask \
--train_data_dir data/BrushData \
--resolution 512 \
--learning_rate 1e-5 \
--train_batch_size 2 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--random_mask sdxl accelerate launch examples/brushnet/train_brushnet_sdxl.py \
--pretrained_model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 \
--output_dir runs/logs/brushnetsdxl_randommask \
--train_data_dir data/BrushData \
--resolution 1024 \
--learning_rate 1e-5 \
--train_batch_size 1 \
--gradient_accumulation_steps 4 \
--tracker_project_name brushnet \
--report_to tensorboard \
--resume_from_checkpoint latest \
--validation_steps 300 \
--checkpointing_steps 10000 \
--random_mask
``` Inference 📜 You can run inference with the script: ``` sd v1.5 python examples/brushnet/test_brushnet.py sdxl python examples/brushnet/test_brushnet_sdxl.py
``` Since BrushNet is trained on LAION, it can only guarantee performance on general scenarios. We recommend you train on your own data (e.g., product exhibition, virtual try-on) if you have high-quality industrial application requirements. We would also appreciate it if you would like to contribute your trained model! You can also run inference through the gradio demo: ``` sd v1.5 python examples/brushnet/app_brushnet.py
``` Evaluation 📏 You can evaluate using the script: python examples/brushnet/evaluate_brushnet.py \
--brushnet_ckpt_path data/ckpt/segmentation_mask_brushnet_ckpt \
--image_save_path runs/evaluation_result/BrushBench/brushnet_segmask/inside \
--mapping_file data/BrushBench/mapping_file.json \
--base_dir data/BrushBench \
--mask_key inpainting_mask The --mask_key indicates which kind of mask to use, inpainting_mask for inside inpainting and outpainting_mask for outside inpainting. The evaluation results (images and metrics) will be saved in --image_save_path . Note that you need to ignore the NSFW detector in src/diffusers/pipelines/brushnet/pipeline_brushnet.py#1261 to get the correct evaluation results. Moreover, we find that different machines may generate different images, so we provide the results on our machine here . 🤝🏼 Cite Us @misc{ju2024brushnet,
title={BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion},
author={Xuan Ju and Xian Liu and Xintao Wang and Yuxuan Bian and Ying Shan and Qiang Xu},
year={2024},
eprint={2403.06976},
archivePrefix={arXiv},
primaryClass={cs.CV}
} 💖 Acknowledgement Our code is modified based on diffusers , thanks to all the contributors!;The official implementation of paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion";diffusion,diffusion-models,image-inpainting,text-to-image | TencentARC/BrushNet |
yusufcanb/tlm;tlm - Local CLI Copilot, powered by CodeLLaMa. 💻🦙 [!TIP]
Starcoder2 3B model option coming soon to support workstations with limited resources. tlm is your CLI companion which requires nothing except your workstation. It uses the most efficient and powerful CodeLLaMa in your local environment to provide you the best possible command line suggestions. Features 💸 No API Key (Subscription) is required. (ChatGPT, Github Copilot, Azure OpenAI, etc.) 📡 No internet connection is required. 💻 Works on macOS, Linux and Windows. 👩🏻💻 Automatic shell detection. (PowerShell, Bash, Zsh) 🚀 One-liner generation and command explanation. Installation Installation can be done in two ways; Installation script (recommended) Go Install Prerequisites Ollama is needed to download the necessary models.
It can be downloaded with the following methods on different platforms. On macOS and Windows; Download instructions can be followed at the following link: https://ollama.com/download On Linux; bash
curl -fsSL https://ollama.com/install.sh | sh Or using official Docker images 🐳; ```bash CPU Only docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama With GPU (Nvidia & AMD) docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
``` Installation Script The installation script is the recommended way to install tlm.
It will recognize which platform and architecture to download for and will execute the install command for you. Linux and macOS; Download and execute the installation script by using the following command; bash
curl -fsSL https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.sh | sudo bash -E Windows (PowerShell 5.1 or higher) Download and execute the installation script by using the following command; powershell
Invoke-RestMethod -Uri https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.ps1 | Invoke-Expression Go Install If you have Go 1.21 or higher installed on your system, you can easily use the following command to install tlm; bash
go install github.com/yusufcanb/tlm@latest Then, deploy tlm modelfiles. :memo: Note: If you have Ollama deployed somewhere else, please first run tlm config and configure the Ollama host. bash
tlm deploy Check installation by using the following command; bash
tlm help Uninstall On Linux and macOS; bash
rm /usr/local/bin/tlm On Windows; powershell
Remove-Item -Recurse -Force "C:\Users\$env:USERNAME\AppData\Local\Programs\tlm" Contributors;Local CLI Copilot, powered by CodeLLaMa. 💻🦙;llm,codellama,bash,powershell,llama3,zsh | yusufcanb/tlm |
shuding/next-view-transitions;next-view-transitions Use View Transitions API in Next.js App Router. Demo . Disclaimer This library is aimed at basic use cases of View Transitions and Next.js App Router. With more complex applications and use cases like concurrent rendering, Suspense and streaming, new primitives and APIs still need to be developed into the core of React and Next.js in the future ( more ). Installation Use your favorite package manager to install the next-view-transitions package. For example: bash
pnpm install next-view-transitions Usage Wrap your content with the <ViewTransitions> component inside the layout file: ```jsx
import { ViewTransitions } from 'next-view-transitions' export default function Layout({ children }) {
return ( <ViewTransitions>{children}</ViewTransitions> )
}
``` Then, use the <Link> component for links that need to trigger a view transition: ```jsx
import { Link } from 'next-view-transitions' export default function Component() {
return ( <Link href="/about">Go to /about</Link> )
}
``` That's it! License MIT.;Use CSS View Transitions API in Next.js App Router.;app-router,nextjs,view-transitions | shuding/next-view-transitions |
elder-plinius/L1B3RT45;L1B3RT45 JAILBREAKS FOR ALL FLAGSHIP AI MODELS FREEAI join: BASI Discord Made with love by Pliny <3;JAILBREAK PROMPTS FOR ALL MAJOR AI MODELS;[] | elder-plinius/L1B3RT45 |
benvinegar/counterscale;Counterscale Counterscale is a simple web analytics tracker and dashboard that you self-host on Cloudflare. It's designed to be easy to deploy and maintain, and should cost you near-zero to operate – even at high levels of traffic (Cloudflare's free tier could hypothetically support up to 100k hits/day). NOTE: Counterscale is currently in very early development and shouldn't be used in any actual production setting. We welcome people trying it and giving feedback/contributing, but heads up this project is still super early. Deployment If you don't have one already, create a Cloudflare account here . Go to your Cloudflare dashboard and set up a Cloudflare Workers subdomain Enable Cloudflare Analytics Engine beta for your account ( screenshot ) Create a Cloudflare API token . This token needs Account.Account Analytics permissions at a minimum. Run npm install Run npx wrangler secret put CF_BEARER_TOKEN → when prompted, paste the API token you created Run npx wrangler secret put CF_ACCOUNT_ID → when prompted, paste your Cloudflare Account ID Run npm run deploy – this will do two things: Create a new worker, counterscale , now visible under Workers and Pages in Cloudflare Create a new Analytics Engine dataset, called metricsDataset It should now be live. Visit https://counterscale.{yoursubdomain}.workers.dev . Troubleshooting If the website is not immediately available (e.g. "Secure Connection Failed"), it could be because Cloudflare has not yet activated your subdomain (yoursubdomain.workers.dev). This process can take a minute; you can check in on the progress by visiting the newly created worker in your Cloudflare dashboard (Workers & Pages → counterscale). Custom Domains The deployment URL can always be changed to go behind a custom domain you own. More here . Installing the Tracker When Counterscale is deployed, it makes tracker.js available at the URL you deployed to: https://counterscale.{yoursubdomain}.workers.dev/tracker.js To start tracking website traffic on your web property, copy/paste the following snippet into your website HTML: ```html ``` Be sure to replace your-unique-site-id with a unique string/slug representing your web property. Use a unique site ID for each property you place the tracking script on. Development Config To get started, in the project root, copy .dev.vars.example to .dev.vars . Open .dev.vars and enter the same values for CF_BEARER_TOKEN and CF_ACCOUNT_ID you used earlier. Running the Server Counterscale is built on Remix and Cloudflare Workers. In development, you'll run two servers: The Remix development server The Miniflare server (local environment for Cloudflare Workers) You run both using: sh
npm run dev Afterwards, the dashboard should be visible at http://127.0.0.1:8787 . If you want to check the production build, you can stop the dev server and run the following commands: sh
npm run build
npm start Then refresh the same URL in your browser (no live reload for production builds). Notes Database There is only one "database": the Cloudflare Analytics Engine dataset, which is communicated entirely over HTTP using Cloudflare's API. Right now there is no local "test" database. This means in local development: Writes will no-op (no hits will be recorded) Reads will be read from the production Analytics Engine dataset (local development shows production data) Sampling Cloudflare Analytics Engine uses sampling to make high volume data ingestion/querying affordable at scale (this is similar to most other analytics tools, see Google Analytics on Sampling ). You can find out more about how sampling works with CF AE here . Contributing Counterscale development is 100% volunteer-driven. If you use and like this software and want to see it improve, we encourage you to contribute with Issues or Pull Requests. Development Philosophy The primary goal of Counterscale is to be super easy to self-host and maintain. It should be "set up once and forget". To achieve that: There should be no application state outside of CF Analytics Engine e.g. no additional relational database like MySQL, PostgreSQL, etc. That means no users table, no sites table, etc. This also means retention will be limited by what CF Analytics Engine provides. While it could be possible to stand up a "hit counter" for long-lived data (e.g. years), that would mean another database, which we will not pursue. We prioritize backwards compatibility New metricsDataset columns can be added, but old columns cannot be removed or renamed (they can, however, be "forgotten"). That also means it's okay if a feature only works during a period where the data is active.;Scalable web analytics you run yourself on Cloudflare;analytics,cloudflare-analytics,cloudflare-workers,google-analytics,website-stats | benvinegar/counterscale
adrianhajdin/brainwave;Modern UI/UX website Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. 📋 Table of Contents 🤖 Introduction ⚙️ Tech Stack 🔋 Features 🤸 Quick Start 🕸️ Snippets 🔗 Links 🚀 More 🚨 Tutorial This repository contains the code corresponding to an in-depth tutorial available on our YouTube channel, JavaScript Mastery . If you prefer visual learning, this is the perfect resource for you. Follow our tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner! 🤖 Introduction Brainwave - Modern UI/UX website, developed using React.js and Tailwind CSS, exemplifies modern UI/UX principles. Its sleek design, seamless animations, and overall user experience set a high standard, serving as a reference or inspiration for future modern applications or websites in general. If you're getting started and need assistance or face any bugs, join our active Discord community with over 27k+ members. It's a place where people help each other out. ⚙️ Tech Stack Vite React.js Tailwind CSS 🔋 Features 👉 Beautiful Sections : Includes hero, services, features, how to use, roadmap, pricing, footer, and header. 👉 Parallax Animations : Engaging effects triggered by mouse movement and scrolling 👉 Complex UI Geometry : Utilizes tailwindcss for intricate shapes like circular feature displays, grid lines, and side lines. 👉 Latest UI Trends : Incorporates modern design elements such as bento grids. 👉 Cool Gradients : Enhances visuals with stylish gradients using Tailwind CSS for cards, buttons, etc. 👉 Responsive : Ensures seamless functionality and aesthetics across all devices and many more, including code architecture and reusability 🤸 Quick Start Follow these steps to set up the project locally on your machine. Prerequisites Make sure you have the following installed on your machine: Git Node.js npm (Node Package Manager) Cloning the Repository bash
git clone https://github.com/JavaScript-Mastery-Pro/brainwave.git
cd brainwave Installation Install the project dependencies using npm: bash
npm install Running the Project bash
npm run dev Open http://localhost:5173 in your browser to view the project. 🕸️ Snippets .vscode/settings.json ```json
{
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit",
"source.addMissingImports": "explicit"
},
"prettier.tabWidth": 2,
"prettier.useTabs": false,
"prettier.semi": true,
"prettier.singleQuote": false,
"prettier.jsxSingleQuote": false,
"prettier.trailingComma": "es5",
"prettier.arrowParens": "always",
"[javascriptreact]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[css]": {
"editor.defaultFormatter": "vscode.css-language-features"
},
"[svg]": {
"editor.defaultFormatter": "jock.svg"
}
}
``` tailwind.config.js ```javascript
/** @type {import('tailwindcss').Config} */
import { fontFamily } from "tailwindcss/defaultTheme";
import plugin from "tailwindcss/plugin";
export default {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
"./public/assets/**/*.{js,ts,jsx,tsx}",
],
theme: {
extend: {
colors: {
color: {
1: "#AC6AFF",
2: "#FFC876",
3: "#FF776F",
4: "#7ADB78",
5: "#858DFF",
6: "#FF98E2",
},
stroke: {
1: "#26242C",
},
n: {
1: "#FFFFFF",
2: "#CAC6DD",
3: "#ADA8C3",
4: "#757185",
5: "#3F3A52",
6: "#252134",
7: "#15131D",
8: "#0E0C15",
9: "#474060",
10: "#43435C",
11: "#1B1B2E",
12: "#2E2A41",
13: "#6C7275",
},
},
fontFamily: {
sans: ["var(--font-sora)", ...fontFamily.sans],
code: "var(--font-code)",
grotesk: "var(--font-grotesk)",
},
letterSpacing: {
tagline: ".15em",
},
spacing: {
0.25: "0.0625rem",
7.5: "1.875rem",
15: "3.75rem",
},
opacity: {
15: ".15",
},
transitionDuration: {
DEFAULT: "200ms",
},
transitionTimingFunction: {
DEFAULT: "linear",
},
zIndex: {
1: "1",
2: "2",
3: "3",
4: "4",
5: "5",
},
borderWidth: {
DEFAULT: "0.0625rem",
},
backgroundImage: {
"radial-gradient": "radial-gradient(var(--tw-gradient-stops))",
"conic-gradient":
"conic-gradient(from 225deg, #FFC876, #79FFF7, #9F53FF, #FF98E2, #FFC876)",
"benefit-card-1": "url(assets/benefits/card-1.svg)",
"benefit-card-2": "url(assets/benefits/card-2.svg)",
"benefit-card-3": "url(assets/benefits/card-3.svg)",
"benefit-card-4": "url(assets/benefits/card-4.svg)",
"benefit-card-5": "url(assets/benefits/card-5.svg)",
"benefit-card-6": "url(assets/benefits/card-6.svg)",
},
},
},
plugins: [
plugin(function ({ addBase, addComponents, addUtilities }) {
addBase({});
addComponents({
".container": {
"@apply max-w-[77.5rem] mx-auto px-5 md:px-10 lg:px-15 xl:max-w-[87.5rem]":
{},
},
".h1": {
"@apply font-semibold text-[2.5rem] leading-[3.25rem] md:text-[2.75rem] md:leading-[3.75rem] lg:text-[3.25rem] lg:leading-[4.0625rem] xl:text-[3.75rem] xl:leading-[4.5rem]":
{},
},
".h2": {
"@apply text-[1.75rem] leading-[2.5rem] md:text-[2rem] md:leading-[2.5rem] lg:text-[2.5rem] lg:leading-[3.5rem] xl:text-[3rem] xl:leading-tight":
{},
},
".h3": {
"@apply text-[2rem] leading-normal md:text-[2.5rem]": {},
},
".h4": {
"@apply text-[2rem] leading-normal": {},
},
".h5": {
"@apply text-2xl leading-normal": {},
},
".h6": {
"@apply font-semibold text-lg leading-8": {},
},
".body-1": {
"@apply text-[0.875rem] leading-[1.5rem] md:text-[1rem] md:leading-[1.75rem] lg:text-[1.25rem] lg:leading-8":
{},
},
".body-2": {
"@apply font-light text-[0.875rem] leading-6 md:text-base": {},
},
".caption": {
"@apply text-sm": {},
},
".tagline": {
"@apply font-grotesk font-light text-xs tracking-tagline uppercase":
{},
},
".quote": {
"@apply font-code text-lg leading-normal": {},
},
".button": {
"@apply font-code text-xs font-bold uppercase tracking-wider": {},
},
});
addUtilities({
".tap-highlight-color": {
"-webkit-tap-highlight-color": "rgba(0, 0, 0, 0)",
},
});
}),
],
};
``` index.css ```css
@import url("https://fonts.googleapis.com/css2?family=Sora:wght@300;400;600&display=swap");
@import url("https://fonts.googleapis.com/css2?family=Source+Code+Pro:wght@400;600;700&display=swap");
@import url("https://fonts.googleapis.com/css2?family=Space+Grotesk:wght@300&display=swap");
@tailwind base;
@tailwind components;
@tailwind utilities;
:root {
--font-sora: "Sora", sans-serif;
--font-code: "Source Code Pro", monospace;
--font-grotesk: "Space Grotesk", sans-serif;
}
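/* These custom properties are consumed by tailwind.config.js (fontFamily.sans, .code,
   .grotesk), so the font-sans, font-code and font-grotesk utilities resolve to Sora,
   Source Code Pro and Space Grotesk respectively. */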
* {
scroll-behavior: smooth;
}
@layer base {
body {
@apply font-sans bg-n-8 text-n-1 text-base;
}
}
.rotate-45 {
@apply rotate-[45deg];
}
.rotate-90 {
@apply rotate-[90deg];
}
.rotate-135 {
@apply rotate-[135deg];
}
.rotate-180 {
@apply rotate-[180deg];
}
.rotate-225 {
@apply rotate-[225deg];
}
.rotate-270 {
@apply rotate-[270deg];
}
.rotate-315 {
@apply rotate-[315deg];
}
.rotate-360 {
@apply rotate-[360deg];
}
.-rotate-45 {
@apply rotate-[-45deg];
}
.-rotate-90 {
@apply rotate-[-90deg];
}
.-rotate-135 {
@apply rotate-[-135deg];
}
.-rotate-180 {
@apply rotate-[-180deg];
}
.-rotate-225 {
@apply rotate-[-225deg];
}
.-rotate-270 {
@apply rotate-[-270deg];
}
.-rotate-315 {
@apply rotate-[-315deg];
}
.-rotate-360 {
@apply rotate-[-360deg];
}
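/* The helpers above re-declare every 45-degree rotation step (positive and negative)
   with arbitrary values; Tailwind only ships a few rotate utilities by default, so this
   presumably keeps the full range available for the circular feature layouts. */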
``` constants/index.js ```javascript
import {
benefitIcon1,
benefitIcon2,
benefitIcon3,
benefitIcon4,
benefitImage2,
chromecast,
disc02,
discord,
discordBlack,
facebook,
figma,
file02,
framer,
homeSmile,
instagram,
notification2,
notification3,
notification4,
notion,
photoshop,
plusSquare,
protopie,
raindrop,
recording01,
recording03,
roadmap1,
roadmap2,
roadmap3,
roadmap4,
searchMd,
slack,
sliders04,
telegram,
twitter,
yourlogo,
} from "../../public/assets";
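// navigation drives the header menu; entries flagged with `onlyMobile: true`
// ("New account", "Sign in") are intended to appear only in the mobile navigation.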
export const navigation = [
{
id: "0",
title: "Features",
url: "#features",
},
{
id: "1",
title: "Pricing",
url: "#pricing",
},
{
id: "2",
title: "How to use",
url: "#how-to-use",
},
{
id: "3",
title: "Roadmap",
url: "#roadmap",
},
{
id: "4",
title: "New account",
url: "#signup",
onlyMobile: true,
},
{
id: "5",
title: "Sign in",
url: "#login",
onlyMobile: true,
},
];
export const heroIcons = [homeSmile, file02, searchMd, plusSquare];
export const notificationImages = [notification4, notification3, notification2];
export const companyLogos = [yourlogo, yourlogo, yourlogo, yourlogo, yourlogo];
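// `yourlogo` is a placeholder asset repeated for the company-logo strip; swap in
// real logos here when reusing this section.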
export const brainwaveServices = [
"Photo generating",
"Photo enhance",
"Seamless Integration",
];
export const brainwaveServicesIcons = [
recording03,
recording01,
disc02,
chromecast,
sliders04,
];
export const roadmap = [
{
id: "0",
title: "Voice recognition",
text: "Enable the chatbot to understand and respond to voice commands, making it easier for users to interact with the app hands-free.",
date: "May 2023",
status: "done",
imageUrl: roadmap1,
colorful: true,
},
{
id: "1",
title: "Gamification",
text: "Add game-like elements, such as badges or leaderboards, to incentivize users to engage with the chatbot more frequently.",
date: "May 2023",
status: "progress",
imageUrl: roadmap2,
},
{
id: "2",
title: "Chatbot customization",
text: "Allow users to customize the chatbot's appearance and behavior, making it more engaging and fun to interact with.",
date: "May 2023",
status: "done",
imageUrl: roadmap3,
},
{
id: "3",
title: "Integration with APIs",
text: "Allow the chatbot to access external data sources, such as weather APIs or news APIs, to provide more relevant recommendations.",
date: "May 2023",
status: "progress",
imageUrl: roadmap4,
},
];
export const collabText =
"With smart automation and top-notch security, it's the perfect solution for teams looking to work smarter.";
export const collabContent = [
{
id: "0",
title: "Seamless Integration",
text: collabText,
},
{
id: "1",
title: "Smart Automation",
},
{
id: "2",
title: "Top-notch Security",
},
];
export const collabApps = [
{
id: "0",
title: "Figma",
icon: figma,
width: 26,
height: 36,
},
{
id: "1",
title: "Notion",
icon: notion,
width: 34,
height: 36,
},
{
id: "2",
title: "Discord",
icon: discord,
width: 36,
height: 28,
},
{
id: "3",
title: "Slack",
icon: slack,
width: 34,
height: 35,
},
{
id: "4",
title: "Photoshop",
icon: photoshop,
width: 34,
height: 34,
},
{
id: "5",
title: "Protopie",
icon: protopie,
width: 34,
height: 34,
},
{
id: "6",
title: "Framer",
icon: framer,
width: 26,
height: 34,
},
{
id: "7",
title: "Raindrop",
icon: raindrop,
width: 38,
height: 32,
},
];
export const pricing = [
{
id: "0",
title: "Basic",
description: "AI chatbot, personalized recommendations",
price: "0",
features: [
"An AI chatbot that can understand your queries",
"Personalized recommendations based on your preferences",
"Ability to explore the app and its features without any cost",
],
},
{
id: "1",
title: "Premium",
description: "Advanced AI chatbot, priority support, analytics dashboard",
price: "9.99",
features: [
"An advanced AI chatbot that can understand complex queries",
"An analytics dashboard to track your conversations",
"Priority support to solve issues quickly",
],
},
{
id: "2",
title: "Enterprise",
description: "Custom AI chatbot, advanced analytics, dedicated account",
price: null,
features: [
"An AI chatbot that can understand your queries",
"Personalized recommendations based on your preferences",
"Ability to explore the app and its features without any cost",
],
},
];
export const benefits = [
{
id: "0",
title: "Ask anything",
text: "Lets users quickly find answers to their questions without having to search through multiple sources.",
backgroundUrl: "assets/benefits/card-1.svg",
iconUrl: benefitIcon1,
imageUrl: benefitImage2,
},
{
id: "1",
title: "Improve everyday",
text: "The app uses natural language processing to understand user queries and provide accurate and relevant responses.",
backgroundUrl: "assets/benefits/card-2.svg",
iconUrl: benefitIcon2,
imageUrl: benefitImage2,
light: true,
},
{
id: "2",
title: "Connect everywhere",
text: "Connect with the AI chatbot from anywhere, on any device, making it more accessible and convenient.",
backgroundUrl: "assets/benefits/card-3.svg",
iconUrl: benefitIcon3,
imageUrl: benefitImage2,
},
{
id: "3",
title: "Fast responding",
text: "Lets users quickly find answers to their questions without having to search through multiple sources.",
backgroundUrl: "assets/benefits/card-4.svg",
iconUrl: benefitIcon4,
imageUrl: benefitImage2,
light: true,
},
{
id: "4",
title: "Ask anything",
text: "Lets users quickly find answers to their questions without having to search through multiple sources.",
backgroundUrl: "assets/benefits/card-5.svg",
iconUrl: benefitIcon1,
imageUrl: benefitImage2,
},
{
id: "5",
title: "Improve everyday",
text: "The app uses natural language processing to understand user queries and provide accurate and relevant responses.",
backgroundUrl: "assets/benefits/card-6.svg",
iconUrl: benefitIcon2,
imageUrl: benefitImage2,
},
];
export const socials = [
{
id: "0",
title: "Discord",
iconUrl: discordBlack,
url: "#",
},
{
id: "1",
title: "Twitter",
iconUrl: twitter,
url: "#",
},
{
id: "2",
title: "Instagram",
iconUrl: instagram,
url: "#",
},
{
id: "3",
title: "Telegram",
iconUrl: telegram,
url: "#",
},
{
id: "4",
title: "Facebook",
iconUrl: facebook,
url: "#",
},
];
``` components/Section.jsx ```javascript
import SectionSvg from "../../public/assets/svg/SectionSvg";
const Section = ({
className,
id,
crosses,
crossesOffset,
customPaddings,
children,
}) => ( {children} {crosses && (
<> )} );
export default Section;
``` components/Roadmap.jsx ```javascript
import Button from "./Button";
import Heading from "./Heading";
import Section from "./Section";
import Tagline from "./TagLine";
import { roadmap } from "../constants";
import { check2, grid, loading1 } from "../../public/assets";
import { Gradient } from "./design/Roadmap";
const Roadmap = () => ( {roadmap.map((item) => {
const status = item.status === "done" ? "Done" : "In progress";
return ( {item.date} {status} {item.title} {item.text} );
})} Our roadmap );
export default Roadmap;
``` 🔗 Links Assets Design Absolute Relative Positioning Live Website 🚀 More Advance your skills with Next.js 14 Pro Course Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning adventure. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! Accelerate your professional journey with the Expert Training program And if you're hungry for more than just a course and want to understand how we learn and tackle tech challenges, hop into our personalized masterclass. We cover best practices, different web skills, and offer mentorship to boost your confidence. Let's learn and grow together!;Learn to create modern websites with sleek parallax effects and bento box layouts. This course covers everything from stylish UI design to mobile-first principles while strengthening your React.js and Tailwind CSS skills.;reactjs,tailwindcss | adrianhajdin/brainwave |
InternLM/HuixiangDou;English | 简体中文 HuixiangDou is a group chat assistant based on LLM (Large Language Model). Advantages: Design a three-stage pipeline of preprocess, rejection and response to cope with group chat scenario, answer user questions without message flooding, see 2401.08772 and 2405.02817 Low cost, requiring only 1.5GB memory and no need for training Offers a complete suite of Web, Android, and pipeline source code, which is industrial-grade and commercially viable Check out the scenes in which HuixiangDou are running and join WeChat Group to try AI assistant inside. If this helps you, please give it a star ⭐ 🔆 News The web portal is available on OpenXLab , where you can build your own knowledge assistant without any coding, using WeChat and Feishu groups. Visit web portal usage video on YouTube and BiliBili . [2024/06] Evaluation of Chunk Size, Splitter and Model [2024/05] wkteam WeChat access , support image, URL and reference resolution in group chat [2024/05] Add Coreference Resolution fine-tune 🤗 LoRA-Qwen1.5-14B LoRA-Qwen1.5-32B alpaca data arXiv [2024/04] Add SFT data annotation and examples [2024/04] Update technical report [2024/04] Release web server source code 👍 [2024/03] New wechat integration method with prebuilt android apk ! [2024/02] [experimental] Integrated multimodal model into our wechat group for OCR 📖 Support Model File Format IM Application - [InternLM2](https://github.com/InternLM/InternLM)
- [Qwen/Qwen2](https://github.com/QwenLM/Qwen2)
- [KIMI](https://kimi.moonshot.cn)
- [DeepSeek](https://www.deepseek.com)
- [Step](https://platform.stepfun.com)
- [GLM (ZHIPU)](https://www.zhipuai.cn)
- [SiliconCloud](https://siliconflow.cn/zh-cn/siliconcloud)
- [Xi-Api](https://api.xi-ai.cn)
- [OpenAOE](https://github.com/InternLM/OpenAOE) - pdf
- word
- excel
- ppt
- html
- markdown
- txt - WeChat
- Lark
- .. 📦 Hardware The following are the hardware requirements for running HuixiangDou. It is suggested to follow this document, starting with the basic version and gradually moving on to the advanced features. | Version | GPU Memory Requirements | Features | Tested on Linux |
| :--------------------: | :---------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------: |
| Cost-effective Edition | 1.5GB | Uses the openai API (e.g., kimi and deepseek ) to handle source code-level issues; free within quota | |
| Standard Edition | 19GB | Deploys a local LLM that can answer basic questions | |
| Complete Edition | 40GB | Fully utilizes search + long text to answer source code-level questions | | 🔥 Run First, agree to the BCE license and log in to Hugging Face. shell
huggingface-cli login Then install requirements. ```bash
# parsing word format requirements
apt update
apt install python-dev libxml2-dev libxslt1-dev antiword unrtf poppler-utils pstotext tesseract-ocr flac ffmpeg lame libmad0 libsox-fmt-mp3 sox libjpeg-dev swig libpulse-dev
# python requirements
pip install -r requirements.txt
``` Standard Edition The standard edition runs text2vec , rerank and a 7B model locally. STEP1. First, without rejection pipeline, run test cases: ```shell Standalone mode main creates a subprocess to run the LLM API, then sends requests to the subprocess python3 -m huixiangdou.main --standalone
..
..Topics unrelated to the knowledge base.."How to install mmpose?"
..Topics unrelated to the knowledge base.."How's the weather tomorrow?"
``` You can see that the result of handling the example question in main.py is the same, whether it's about mmpose installation or How's the weather tomorrow? STEP2. Use mmpose and test documents to build a knowledge base and enable the rejection pipeline Copy all the commands below (including the '#' symbol) and execute them. ```shell Download knowledge base documents cd HuixiangDou
mkdir repodir
git clone https://github.com/open-mmlab/mmpose --depth=1 repodir/mmpose
git clone https://github.com/tpoisonooo/huixiangdou-testdata --depth=1 repodir/testdata Save the features of repodir to workdir mkdir workdir
python3 -m huixiangdou.service.feature_store
``` [!NOTE] If restarting the local LLM is too slow, first run python3 -m huixiangdou.service.llm_server_hybrid , then open a new terminal and only execute python3 -m huixiangdou.main , without restarting the LLM. Then rerun main ; HuixiangDou will now answer the mmpose installation question and reject casual chats. bash
python3 -m huixiangdou.main --standalone
..success.. To install mmpose, you should..
..Topics unrelated to the knowledge base.."How's the weather tomorrow?" Please adjust the repodir documents, good_questions , and bad_questions to try your own domain knowledge (medical, financial, power, etc.). STEP3. Test sending messages to Feishu group (optional) This step is just for testing algorithm pipeline, STEP4 also support IM applications. Click Create Feishu Custom Bot to obtain the callback WEBHOOK_URL and fill it in config.ini ```ini config.ini ...
[frontend]
type = "lark"
webhook_url = "${YOUR-LARK-WEBHOOK-URL}"
``` Run. After the end, the technical assistant's response will be sent to Feishu group. shell
python3 -m huixiangdou.main --standalone Integrate Feishu group receiving, sending, and withdrawal Integrate personal WeChat access Integrate wkteam WeChat access STEP4. WEB service and IM applications We provide a complete front-end UI and backend service that supports: Multi-tenant management Zero-programming access to Feishu, WeChat groups See the effect at OpenXlab APP , please read the web deployment document . Cost-effective Edition If your machine only has 2G GPU memory, or if you are pursuing cost-effectiveness, you only need to read this Zhihu document . The cost-effective version only discards the local LLM and uses the remote LLM instead, and other functions are the same as the standard version. Take kimi as an example, fill in the API KEY applied from the official website into config-2G.ini ```bash config-2G.ini [llm]
enable_local = 0
enable_remote = 1
...
remote_type = "kimi"
remote_api_key = "YOUR-API-KEY-HERE"
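# Illustrative only (the value below is an assumption, not from the docs): if your
# remote provider enforces a requests-per-minute cap, lower the rpm setting accordingly.
# rpm = 30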
``` [!NOTE] In the worst case, each Q&A calls the LLM 7 times and is subject to the free-tier RPM limit; you can modify the rpm parameter in config.ini accordingly. Execute the command to get the Q&A result: shell
python3 -m huixiangdou.main --standalone --config-path config-2G.ini # Start all services at once Complete Edition The HuixiangDou deployed in the WeChat group is the complete version. When 40G of GPU memory is available, long text + retrieval capabilities can be used to improve accuracy. Please read following topics Refer to config-advanced.ini to improve precision Use rag.py to annotate SFT training data Coreference resolution fine-tune Using the commercial WeChat integration, add image analysis, public account parsing, and reference resolution 🛠️ FAQ What if the robot is too cold/too chatty? Fill in the questions that should be answered in the real scenario into resource/good_questions.json , and fill the ones that should be rejected into resource/bad_questions.json . Adjust the theme content in repodir to ensure that the markdown documents in the main library do not contain irrelevant content. Re-run feature_store to update thresholds and feature libraries. ⚠️ You can directly modify reject_throttle in config.ini. Generally speaking, 0.5 is a high value; 0.2 is too low. Launch is normal, but out of memory during runtime? LLM long text based on transformers structure requires more memory. At this time, kv cache quantization needs to be done on the model, such as lmdeploy quantization description . Then use docker to independently deploy Hybrid LLM Service. How to access other local LLM / After access, the effect is not ideal? Open hybrid llm service , add a new LLM inference implementation. Refer to test_intention_prompt and test data , adjust prompt and threshold for the new model, and update them into worker.py . What if the response is too slow/request always fails? Refer to hybrid llm service to add exponential backoff and retransmission. Replace local LLM with an inference framework such as lmdeploy , instead of the native huggingface/transformers. What if the GPU memory is too low? At this time, it is impossible to run local LLM, and only remote LLM can be used in conjunction with text2vec to execute the pipeline. Please make sure that config.ini only uses remote LLM and turn off local LLM. No module named 'faiss.swigfaiss_avx2' locate installed faiss package python
import faiss
print(faiss.__file__)
# /root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/faiss/__init__.py add soft link Bash
# cd your_python_path/site-packages/faiss
cd /root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/faiss/
ln -s swigfaiss.py swigfaiss_avx2.py 🍀 Acknowledgements KIMI : long context LLM BCEmbedding : Bilingual and Crosslingual Embedding (BCEmbedding) in English and Chinese Langchain-ChatChat : ChatGLM Application based on Langchain GrabRedEnvelope : Grab Wechat RedEnvelope 📝 Citation ```shell
@misc{kong2024huixiangdou,
title={HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance},
author={Huanjun Kong and Songyang Zhang and Jiaying Li and Min Xiao and Jun Xu and Kai Chen},
year={2024},
eprint={2401.08772},
archivePrefix={arXiv},
primaryClass={cs.CL}
} @misc{kong2024huixiangdoucr,
title={HuixiangDou-CR: Coreference Resolution in Group Chats},
author={Huanjun Kong},
year={2024},
eprint={2405.02817},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```;HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance;assistance,chatbot,llm,rag,dsl,lark,pipeline,robot,wechat,application | InternLM/HuixiangDou |
context-labs/mactop;mactop mactop is a terminal-based monitoring tool "top" designed to display real-time metrics for Apple Silicon chips. It provides a simple and efficient way to monitor CPU and GPU usage, E-Cores and P-Cores, power consumption, and other system metrics directly from your terminal! Compatibility Apple Silicon Only (ARM64) macOS Monterey 12.3+ Features Apple Silicon Monitor Top written in Go Lang (Under 1,000 lines of code) Real-time CPU and GPU power usage display. Detailed metrics for different CPU clusters (E-Cores and P-Cores). Memory usage and swap information. Network usage information Disk Activity Read/Write Easy-to-read terminal UI Two layouts: default and alternative Customizable UI color (green, red, blue, cyan, magenta, yellow, and white) Customizable update interval (default is 1000ms) Support for all Apple Silicon models. Install via Homebrew You can install mactop via Homebrew! https://brew.sh bash
brew install mactop bash
sudo mactop Updating via Homebrew bash
brew update bash
brew upgrade mactop Installation To install mactop , follow these steps: Ensure you have Go installed on your machine. If not, you can install it by following the instructions here: Go Installation Guide . Clone the repository: bash
git clone https://github.com/context-labs/mactop.git
cd mactop Build the application: bash
go build Run the application: bash
sudo ./mactop Usage After installation, you can start mactop by simply running: bash
sudo ./mactop sudo is required to run mactop Example with flags bash
sudo mactop --interval 1000 --color green mactop Flags --interval or -i : Set the powermetrics update interval in milliseconds. Default is 1000. (For low-end M chips, you may want to increase this value) --color or -c : Set the UI color. Default is white.
Options are 'green', 'red', 'blue', 'cyan', 'magenta', 'yellow', and 'white'. (-c green) --version or -v : Print the version of mactop. --help or -h : Show a help message about these flags and how to run mactop. mactop Commands Use the following keys to interact with the application while it's running:
- q : Quit the application.
- r : Refresh the UI data manually.
- l : Toggle the current layout. Example Theme (Green) Screenshot (sudo mactop -c green) Confirmed tested working M series chips M1 M1 Pro M1 Max M1 Ultra M2 M2 Pro M2 Max M2 Ultra M3 M3 Pro M3 Max (If you have a confirmed working M series chip that is not listed, please open an issue, so we may add it here!) Contributing Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated . Fork mactop Create your Feature Branch ( git checkout -b feature/AmazingFeature ) Commit your Changes ( git commit -m 'Add some AmazingFeature' ) Push to the Branch ( git push origin feature/AmazingFeature ) Open a Pull Request What does mactop use to get real-time data? sysctl : For CPU model information system_profiler : For GPU Core Count psutil : For memory and swap metrics powermetrics : For majority of CPU, GPU, Network, and Disk metrics License Distributed under the MIT License. See LICENSE for more information. Contact Carsen Klock - @carsenklock Project Link: https://github.com/context-labs/mactop Disclaimer This tool is not officially supported by Apple. It is provided as is, and may not work as expected. Use at your own risk. Acknowledgements termui for the terminal UI framework. gopsutil for system memory monitoring. asitop for the original inspiration!;mactop - Apple Silicon Monitor Top written in pure Golang! Under 1,000 lines of code.;apple,apple-silicon,arm64,cpu-monitoring,go,golang,gpu-monitoring,macos,monitoring,terminal | context-labs/mactop |
honojs/honox;HonoX HonoX is a simple and fast meta-framework for creating full-stack websites or Web APIs - (formerly Sonik ). It stands on the shoulders of giants; built on Hono , Vite , and UI libraries. Note : HonoX is currently in the "alpha stage". Breaking changes are introduced without following semantic versioning. Features File-based routing - You can create a large application like Next.js. Fast SSR - Rendering is ultra-fast thanks to Hono. BYOR - You can bring your own renderer, not only one using hono/jsx. Islands hydration - If you want interactions, create an island. JavaScript is hydrated only for it. Middleware - It works as Hono, so you can use a lot of Hono's middleware. Installing You can install the honox package from the npm. txt
npm install hono honox Starter template If you are starting a new HonoX project, use the hono-create command. Run the following and choose x-basic . txt
npm create hono@latest Get Started - Basic Let's create a basic HonoX application using hono/jsx as a renderer. This application has no client JavaScript and renders JSX on the server side. Project Structure Below is a typical project structure for a HonoX application. txt
.
├── app
│ ├── global.d.ts // global type definitions
│ ├── routes
│ │ ├── _404.tsx // not found page
│ │ ├── _error.tsx // error page
│ │ ├── _renderer.tsx // renderer definition
│ │ ├── about
│ │ │ └── [name].tsx // matches `/about/:name`
│ │ └── index.tsx // matches `/`
│ └── server.ts // server entry file
├── package.json
├── tsconfig.json
└── vite.config.ts vite.config.ts The minimum Vite setup for development is as follows: ```ts
import { defineConfig } from 'vite'
import honox from 'honox/vite' export default defineConfig({
plugins: [honox()],
})
``` Server Entry File A server entry file is required. The file should be placed at app/server.ts . This file is first called by the Vite during the development or build phase. In the entry file, simply initialize your app using the createApp() function. app will be an instance of Hono, so you can use Hono's middleware and the showRoutes() in hono/dev . ```ts
// app/server.ts
import { createApp } from 'honox/server'
import { showRoutes } from 'hono/dev' const app = createApp() showRoutes(app) export default app
``` Routes There are three ways to define routes. 1. createRoute() Each route should return an array of Handler | MiddlewareHandler . createRoute() is a helper function to return it. You can write a route for a GET request with default export . ``tsx
// app/routes/index.tsx
// createRoute()` helps you create handlers
import { createRoute } from 'honox/factory' export default createRoute((c) => {
return c.render( Hello! )
})
``` You can also handle methods other than GET by export POST , PUT , and DELETE . ```tsx
// app/routes/index.tsx
import { createRoute } from 'honox/factory'
import { getCookie, setCookie } from 'hono/cookie' export const POST = createRoute(async (c) => {
const { name } = await c.req.parseBody<{ name: string }>()
setCookie(c, 'name', name)
return c.redirect('/')
}) export default createRoute((c) => {
const name = getCookie(c, 'name') ?? 'no name'
return c.render( Hello, {name}! )
})
``` 2. Using a Hono instance You can create API endpoints by exporting an instance of the Hono object. ```ts
// app/routes/about/index.ts
import { Hono } from 'hono' const app = new Hono() // matches /about/:name app.get('/:name', (c) => {
const name = c.req.param('name')
return c.json({
'your name is': name,
})
}) export default app
``` 3. Just return JSX Or simply, you can just return JSX. tsx
// app/routes/index.tsx
export default function Home(_c: Context) {
return <h1>Welcome!</h1>
} Renderer Define your renderer - the middleware that does c.setRender() - by writing it in _renderer.tsx . Before writing _renderer.tsx , write the Renderer type definition in global.d.ts . ```ts
// app/global.d.ts
import type {} from 'hono' type Head = {
title?: string
} declare module 'hono' {
interface ContextRenderer {
(content: string | Promise , head?: Head): Response | Promise }
}
``` The JSX Renderer middleware allows you to create a Renderer as follows: ```tsx
// app/routes/_renderer.tsx
import { jsxRenderer } from 'hono/jsx-renderer' export default jsxRenderer(({ children, title }) => {
return ( {title ? {title} : <>} {children} )
})
``` The _renderer.tsx is applied under each directory, and the app/routes/posts/_renderer.tsx is applied in app/routes/posts/* . Not Found page You can write a custom Not Found page in _404.tsx . ```tsx
// app/routes/_404.tsx
import { NotFoundHandler } from 'hono' const handler: NotFoundHandler = (c) => {
return c.render( Sorry, Not Found... )
} export default handler
``` Error Page You can write a custom Error page in _error.tsx . ```tsx
// app/routes/_error.tsx
import { ErrorHandler } from 'hono' const handler: ErrorHandler = (e, c) => {
return c.render( Error! {e.message} )
} export default handler
``` Get Started - with Client Let's create an application that includes a client side. Here, we will use hono/jsx/dom. Project Structure Below is the project structure of a minimal application including a client side: txt
.
├── app
│ ├── client.ts // client entry file
│ ├── global.d.ts
│ ├── islands
│ │ └── counter.tsx // island component
│ ├── routes
│ │ ├── _renderer.tsx
│ │ └── index.tsx
│ └── server.ts
├── package.json
├── tsconfig.json
└── vite.config.ts Renderer This is a _renderer.tsx , which will load the /app/client.ts entry file for the client. It will load the JavaScript file for production according to the variable import.meta.env.PROD . And renders the inside of <HasIslands /> if there are islands on that page. ```tsx
// app/routes/_renderer.tsx
import { jsxRenderer } from 'hono/jsx-renderer'
import { HasIslands } from 'honox/server' export default jsxRenderer(({ children }) => {
return ( {import.meta.env.PROD ? ( ) : ( )} {children} )
})
``` If you have a manifest file in dist/.vite/manifest.json , you can easily write it using <Script /> . ```tsx
// app/routes/_renderer.tsx
import { jsxRenderer } from 'hono/jsx-renderer'
import { Script } from 'honox/server' export default jsxRenderer(({ children }) => {
return ( {children} )
})
``` Note : Since <HasIslands /> can slightly affect build performance when used, it is recommended that you do not use it in the development environment, but only at build time. <Script /> does not cause performance degradation during development, so it's better to use it. nonce Attribute If you want to add a nonce attribute to <Script /> or <script /> element, you can use Security Headers Middleware . Define the middleware: ```ts
// app/routes/_middleware.ts
import { createRoute } from 'honox/factory'
import { secureHeaders, NONCE } from 'hono/secure-headers' secureHeaders({
contentSecurityPolicy: import.meta.env.PROD
? {
scriptSrc: [NONCE],
}
: undefined,
})
``` You can get the nonce value with c.get('secureHeadersNonce') : ```tsx
// app/routes/_renderer.tsx
import { jsxRenderer } from 'hono/jsx-renderer'
import { Script } from 'honox/server' export default jsxRenderer(({ children }, c) => {
return ( {children} )
})
``` Client Entry File A client-side entry file should be in app/client.ts . Simply, write createClient() . ```ts
// app/client.ts
import { createClient } from 'honox/client' createClient()
``` Interactions If you want to add interactions to your page, create Island components. Islands components should be: Placed under app/islands directory or named with $ prefix like $componentName.tsx . It should be exported as a default or a proper component name that uses camel case but does not contain _ and is not all uppercase. For example, you can write an interactive component such as the following counter: ```tsx
// app/islands/counter.tsx
import { useState } from 'hono/jsx' export default function Counter() {
const [count, setCount] = useState(0)
return ( Count: {count} setCount(count + 1)}>Increment )
}
``` When you load the component in a route file, it is rendered as Server-Side rendering and JavaScript is also sent to the client side. ```tsx
// app/routes/index.tsx
import { createRoute } from 'honox/factory'
import Counter from '../islands/counter' export default createRoute((c) => {
return c.render( Hello )
})
``` Note : You cannot access a Context object in Island components. Therefore, you should pass the value from components outside of the Island. ```ts
import { useRequestContext } from 'hono/jsx-renderer'
import Counter from '../islands/counter.tsx' export default function Component() {
const c = useRequestContext()
return }
``` BYOR - Bring Your Own Renderer You can bring your own renderer using a UI library like React, Preact, Solid, or others. Note : We may not provide support for the renderer you bring. React case You can define a renderer using @hono/react-renderer . Install the modules first. txt
npm i @hono/react-renderer react react-dom hono
npm i -D @types/react @types/react-dom Define the Props that the renderer will receive in global.d.ts . ```ts
// global.d.ts
import '@hono/react-renderer' declare module '@hono/react-renderer' {
interface Props {
title?: string
}
}
``` The following is an example of app/routes/_renderer.tsx . ```tsx
// app/routes/_renderer.tsx
import { reactRenderer } from '@hono/react-renderer' export default reactRenderer(({ children, title }) => {
return ( {import.meta.env.PROD ? ( ) : ( )}
{title ? {title} : ''} {children} )
})
``` The app/client.ts will be like this. ```ts
// app/client.ts
import { createClient } from 'honox/client' createClient({
hydrate: async (elem, root) => {
const { hydrateRoot } = await import('react-dom/client')
hydrateRoot(root, elem)
},
createElement: async (type: any, props: any) => {
const { createElement } = await import('react')
return createElement(type, props)
},
})
``` Guides Nested Layouts If you are using the JSX Renderer middleware, you can nest layouts using <Layout /> . ```tsx
// app/routes/posts/_renderer.tsx import { jsxRenderer } from 'hono/jsx-renderer' export default jsxRenderer(({ children, Layout }) => {
return ( Posts Menu {children} )
})
``` Passing Additional Props in Nested Layouts Props passed to nested renderers do not automatically propagate to the parent renderers. To ensure that the parent layouts receive the necessary props, you should explicitly pass them from the nested component. Here's how you can achieve that: Let's start with our route handler: tsx
// app/routes/nested/index.tsx
export default createRoute((c) => {
return c.render(<div>Content</div>, { title: 'Dashboard' })
}) Now, let's take a look at our nested renderer: tsx
// app/routes/nested/_renderer.tsx
export default jsxRenderer(({ children, Layout, title }) => {
return (
<Layout title={title}>
{/* Pass the title prop to the parent renderer */}
<main>{children}</main>
</Layout>
)
}) In this setup, all the props sent to the nested renderer's are consumed by the parent renderer: tsx
// app/routes/_renderer.tsx
export default jsxRenderer(({ children, title }) => {
return (
<html lang='en'>
<head>
<title>{title}</title> {/* Use the title prop here */}
</head>
<body>
{children} {/* Insert the Layout's children here */}
</body>
</html>
)
}) Using Middleware You can use Hono's Middleware in each root file with the same syntax as Hono. For example, to validate a value with the Zod Validator , do the following: ```tsx
import { z } from 'zod'
import { zValidator } from '@hono/zod-validator' const schema = z.object({
name: z.string().max(10),
}) export const POST = createRoute(zValidator('form', schema), async (c) => {
const { name } = c.req.valid('form')
setCookie(c, 'name', name)
return c.redirect('/')
})
``` Alternatively, you can use a _middleware.(ts|tsx) file in a directory to have that middleware applied to the current route, as well as all child routes. Middleware is run in the order that it is listed within the array. ```ts
// /app/routes/_middleware.ts
import { createRoute } from 'honox/factory'
import { logger } from 'hono/logger'
import { secureHeaders } from 'hono/secure-headers' export default createRoute(logger(), secureHeaders(), ... )
``` Trailing Slash By default, trailing slashes are removed if the root file is an index file such as index.tsx or index.mdx .
However, if you set the trailingSlash option to true as the following, the trailing slash is not removed. ```ts
import { createApp } from 'honox/server' const app = createApp({
trailingSlash: true,
})
``` Like the followings: trailingSlash is false (default): app/routes/path/index.mdx => /path trailingSlash is true : app/routes/path/index.mdx => /path/ Using Tailwind CSS Given that HonoX is Vite-centric, if you wish to utilize Tailwind CSS , simply adhere to the official instructions. Prepare tailwind.config.js and postcss.config.js : js
// tailwind.config.js
export default {
content: ['./app/**/*.tsx'],
theme: {
extend: {},
},
plugins: [],
} js
// postcss.config.js
export default {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
} Write app/style.css : css
@tailwind base;
@tailwind components;
@tailwind utilities; Import it in a renderer file. Using the Link component will refer to the correct CSS file path after it is built. ```tsx
// app/routes/_renderer.tsx
import { jsxRenderer } from 'hono/jsx-renderer'
import { Link } from 'honox/server' export default jsxRenderer(({ children }) => {
return ( {children} )
})
``` Finally, add vite.config.ts configuration to output assets for the production. ```ts
import honox from 'honox/vite'
import { defineConfig } from 'vite'
import pages from '@hono/vite-cloudflare-pages' export default defineConfig({
plugins: [
honox({
client: {
input: ['/app/style.css'],
},
}),
pages(),
],
})
``` MDX MDX can also be used. Here is the vite.config.ts . ```ts
import devServer from '@hono/vite-dev-server'
import mdx from '@mdx-js/rollup'
import honox from 'honox/vite'
import remarkFrontmatter from 'remark-frontmatter'
import remarkMdxFrontmatter from 'remark-mdx-frontmatter'
import { defineConfig } from 'vite' export default defineConfig(() => {
return {
plugins: [
honox(),
mdx({
jsxImportSource: 'hono/jsx',
remarkPlugins: [remarkFrontmatter, remarkMdxFrontmatter],
}),
],
}
})
``` Blog site can be created. ```tsx
// app/routes/index.tsx
import type { Meta } from '../types' export default function Top() {
const posts = import.meta.glob<{ frontmatter: Meta }>('./posts/*.mdx', {
eager: true,
})
return ( Posts {Object.entries(posts).map(([id, module]) => {
if (module.frontmatter) {
return ( ${id.replace(/\.mdx$/, '')} }>{module.frontmatter.title} )
}
})} )
}
``` Cloudflare Bindings If you want to use Cloudflare's Bindings in your development environment, create wrangler.toml and configure it properly. ```toml
name = "my-project-name"
compatibility_date = "2024-04-01"
compatibility_flags = [ "nodejs_compat" ]
pages_build_output_dir = "./dist" [vars] MY_VARIABLE = "production_value" [[kv_namespaces]] binding = "MY_KV_NAMESPACE" id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" ``` In vite.config.ts , use the Cloudflare Adapter in @hono/vite-dev-server . ```ts
import honox from 'honox/vite'
import adapter from '@hono/vite-dev-server/cloudflare'
import { defineConfig } from 'vite' export default defineConfig({
plugins: [
honox({
devServer: {
adapter,
},
}),
],
})
``` Deployment Since a HonoX instance is essentially a Hono instance, it can be deployed on any platform that Hono supports. Cloudflare Pages Add the wrangler.toml : ```toml wrangler.toml name = "my-project-name"
compatibility_date = "2024-04-01"
compatibility_flags = [ "nodejs_compat" ]
pages_build_output_dir = "./dist"
``` Setup the vite.config.ts : ```ts
// vite.config.ts
import { defineConfig } from 'vite'
import honox from 'honox/vite'
import pages from '@hono/vite-cloudflare-pages' export default defineConfig({
plugins: [honox(), pages()],
})
``` Build command (including a client): txt
vite build --mode client && vite build Deploy with the following commands after the build. Ensure you have Wrangler installed: txt
wrangler pages deploy SSG - Static Site Generation Using Hono's SSG feature, you can generate static HTML for each route. ```ts
import { defineConfig } from 'vite'
import honox from 'honox/vite'
import ssg from '@hono/vite-ssg' const entry = './app/server.ts' export default defineConfig(() => {
return {
plugins: [honox(), ssg({ entry })],
}
})
``` If you want to include client-side scripts and assets: ```ts
// vite.config.ts
import ssg from '@hono/vite-ssg'
import honox from 'honox/vite'
import client from 'honox/vite/client'
import { defineConfig } from 'vite' const entry = './app/server.ts' export default defineConfig(({ mode }) => {
if (mode === 'client') {
return {
plugins: [client()],
}
} else {
return {
build: {
emptyOutDir: false,
},
plugins: [honox(), ssg({ entry })],
}
}
})
``` Build command (including a client): txt
vite build --mode client && vite build You can also deploy it to Cloudflare Pages. txt
wrangler pages deploy ./dist Examples https://github.com/yusukebe/honox-examples Related projects Hono Vite Authors Yusuke Wada https://github.com/yusukebe License MIT;HonoX - Hono based meta framework;[] | honojs/honox |
warden-protocol/wardenprotocol;Monorepo for Warden Protocol and its services. More general information about the project can be found at:
https://wardenprotocol.org/. If you want to learn more, join the network or contribute to the code, check
out the documentation website at: https://docs.wardenprotocol.org/. License With the exception of the SpaceWard folder, this project is released under the
terms of the Apache 2.0 License - see LICENSE for details. Elements of this project are based on the work made by Qredo Ltd on Fusion
Chain and were released under the Apache
2 license. See NOTICE for more details.;Monorepo for the Warden Protocol.;[] | warden-protocol/wardenprotocol |
netease-youdao/BCEmbedding;BCEmbedding: Bilingual and Crosslingual Embedding for RAG English | 简体中文 Click to Open Contents - 🌐 Bilingual and Crosslingual Superiority - 💡 Key Features - 🚀 Latest Updates - 🍎 Model List - 📖 Manual - Installation - Quick Start (`transformers`, `sentence-transformers`) - Embedding and Reranker Integrations for RAG Frameworks (`langchain`, `llama_index`) - ⚙️ Evaluation - Evaluate Semantic Representation by MTEB - Evaluate RAG by LlamaIndex - 📈 Leaderboard - Semantic Representation Evaluations in MTEB - RAG Evaluations in LlamaIndex - 🛠 Youdao's BCEmbedding API - 🧲 WeChat Group - ✏️ Citation - 🔐 License - 🔗 Related Links B ilingual and C rosslingual Embedding ( BCEmbedding ) in English and Chinese, developed by NetEase Youdao, encompasses EmbeddingModel and RerankerModel . The EmbeddingModel specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the RerankerModel excels at refining search results and ranking tasks. BCEmbedding serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implementation, notably QAnything [ github ], an open-source implementation widely integrated in various Youdao products like Youdao Speed Reading and Youdao Translation . Distinguished for its bilingual and crosslingual proficiency, BCEmbedding excels in bridging Chinese and English linguistic gaps, which achieves A high performance on Semantic Representation Evaluations in MTEB ; A new benchmark in the realm of RAG Evaluations in LlamaIndex . Our Goals Provide a bilingual and crosslingual two-stage retrieval model repository for the RAG community, which can be used directly without finetuning, including EmbeddingModel and RerankerModel : One Model: EmbeddingModel handle bilingual and crosslingual retrieval task in English and Chinese. RerankerModel supports English, Chinese, Japanese and Korean . One Model: Cover common business application scenarios with RAG optimization . e.g. Education, Medical Scenario, Law, Finance, Literature, FAQ, Textbook, Wikipedia, General Conversation. Easy to Integrate: We provide API in BCEmbedding for LlamaIndex and LangChain integrations. Others Points: RerankerModel supports long passages (more than 512 tokens, less than 32k tokens) reranking ; RerankerModel provides meaningful relevance score that helps to remove passages with low quality. EmbeddingModel does not need specific instructions . Third-party Examples RAG applications: QAnything , HuixiangDou , ChatPDF . Efficient inference: ChatLLM.cpp , Xinference , mindnlp (Huawei GPU) . 🌐 Bilingual and Crosslingual Superiority Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. BCEmbedding , leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings. EmbeddingModel supports Chinese (ch) and English (en) (more languages support will come soon), while RerankerModel supports Chinese (ch), English (en), Japanese (ja) and Korean (ko) . 💡 Key Features Bilingual and Crosslingual Proficiency : Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages. RAG-Optimized : Tailored for diverse RAG tasks including translation, summarization, and question answering , ensuring accurate query understanding . 
See RAG Evaluations in LlamaIndex . Efficient and Precise Retrieval : Dual-encoder for efficient retrieval of EmbeddingModel in first stage, and cross-encoder of RerankerModel for enhanced precision and deeper semantic analysis in second stage. Broad Domain Adaptability : Trained on diverse datasets for superior performance across various fields. User-Friendly Design : Instruction-free, versatile use for multiple tasks without specifying query instruction for each task. Meaningful Reranking Scores : RerankerModel provides relevant scores to improve result quality and optimize large language model performance. Proven in Production : Successfully implemented and validated in Youdao's products. 🚀 Latest Updates 2024-02-04 : Technical Blog - See 为RAG而生-BCEmbedding技术报告 . 2024-01-16 : LangChain and LlamaIndex Integrations - See more . 2024-01-03 : Model Releases - bce-embedding-base_v1 and bce-reranker-base_v1 are available. 2024-01-03 : Eval Datasets [ CrosslingualMultiDomainsDataset ] - Evaluate the performance of RAG, using LlamaIndex . 2024-01-03 : Eval Datasets [ Details ] - Evaluate the performance of crosslingual semantic representation, using MTEB . 🍎 Model List | Model Name | Model Type | Languages | Parameters | Weights |
| :-------------------- | :----------------: | :------------: | :--------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------: |
| bce-embedding-base_v1 | EmbeddingModel | ch, en | 279M | Huggingface , 国内通道 |
| bce-reranker-base_v1 | RerankerModel | ch, en, ja, ko | 279M | Huggingface , 国内通道 | 📖 Manual Installation First, create a conda environment and activate it. bash
conda create --name bce python=3.10 -y
conda activate bce Then install BCEmbedding for minimal installation (To avoid cuda version conflicting, you should install torch that is compatible to your system cuda version manually first): bash
pip install BCEmbedding==0.1.5 Or install from source ( recommended ): bash
git clone git@github.com:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e . Quick Start 1. Based on BCEmbedding Use EmbeddingModel , and cls pooler is default. ```python
from BCEmbedding import EmbeddingModel list of sentences sentences = ['sentence_0', 'sentence_1'] init embedding model model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1") extract embeddings embeddings = model.encode(sentences)
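# `embeddings` holds one semantic vector per input sentence; query-passage relevance
# can then be scored with a dot product or cosine similarity (see the transformers
# example below for what the model computes under the hood).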
``` Use RerankerModel to calculate relevant scores and rerank: ```python
from BCEmbedding import RerankerModel your query and corresponding passages query = 'input_query'
passages = ['passage_0', 'passage_1'] construct sentence pairs sentence_pairs = [[query, passage] for passage in passages] init reranker model model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1") method 0: calculate scores of sentence pairs scores = model.compute_score(sentence_pairs) method 1: rerank passages rerank_results = model.rerank(query, passages)
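# compute_score returns one relevance score per (query, passage) pair, mapped to (0, 1)
# via a sigmoid (see the transformers example below), so low-quality passages can be
# filtered with a fixed threshold; rerank additionally sorts passages by relevance and
# applies the long-passage preprocessing described in the note below.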
``` NOTE: In the RerankerModel.rerank method, we provide an advanced preprocessing step, used in production, for constructing sentence_pairs when the "passages" are very long. 2. Based on transformers For EmbeddingModel : ```python
from transformers import AutoModel, AutoTokenizer list of sentences sentences = ['sentence_0', 'sentence_1'] init model and tokenizer tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1') device = 'cuda' # if no GPU, set "cpu"
model.to(device) get inputs inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()} get embeddings outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0] # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True) # normalize
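# Minimal illustration (not part of the original snippet): since the embeddings are
# L2-normalized above, pairwise cosine similarity reduces to a matrix product.
similarity = embeddings @ embeddings.T  # (num_sentences, num_sentences), values in [-1, 1]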
``` For RerankerModel : ```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification init model and tokenizer tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1') device = 'cuda' # if no GPU, set "cpu"
model.to(device) get inputs inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()} calculate scores scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
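# The sigmoid maps raw logits to (0, 1), which is what makes the reranker scores usable
# as absolute relevance values (e.g. for dropping low-quality passages), not just as a
# ranking signal.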
``` 3. Based on sentence_transformers For EmbeddingModel : ```python
from sentence_transformers import SentenceTransformer list of sentences sentences = ['sentence_0', 'sentence_1', ...] init embedding model There is a new update for sentence-transformers, so clean up your " SENTENCE_TRANSFORMERS_HOME /maidalun1020_bce-embedding-base_v1" or "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1" first so that the new version is downloaded. model = SentenceTransformer("maidalun1020/bce-embedding-base_v1") extract embeddings embeddings = model.encode(sentences, normalize_embeddings=True)
``` For RerankerModel : ```python
from sentence_transformers import CrossEncoder init reranker model model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512) calculate scores of sentence pairs scores = model.predict(sentence_pairs)
``` Embedding and Reranker Integrations for RAG Frameworks 1. Used in langchain We provide BCERerank in BCEmbedding.tools.langchain that inherits the advanced preproc tokenization of RerankerModel . Install langchain first bash
pip install langchain==0.1.0
pip install langchain-community==0.0.9
pip install langchain-core==0.1.7
pip install langsmith==0.0.77 Demo
```python We provide the advanced preproc tokenization for reranking. from BCEmbedding.tools.langchain import BCERerank from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain.retrievers import ContextualCompressionRetriever init embedding model embedding_model_name = 'maidalun1020/bce-embedding-base_v1'
embedding_model_kwargs = {'device': 'cuda:0'}
embedding_encode_kwargs = {'batch_size': 32, 'normalize_embeddings': True, 'show_progress_bar': False} embed_model = HuggingFaceEmbeddings(
model_name=embedding_model_name,
model_kwargs=embedding_model_kwargs,
encode_kwargs=embedding_encode_kwargs
) reranker_args = {'model': 'maidalun1020/bce-reranker-base_v1', 'top_n': 5, 'device': 'cuda:1'}
reranker = BCERerank(**reranker_args) init documents documents = PyPDFLoader("BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf").load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=200)
texts = text_splitter.split_documents(documents) example 1. retrieval with embedding and reranker retriever = FAISS.from_documents(texts, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT).as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.3, "k": 10}) compression_retriever = ContextualCompressionRetriever(
base_compressor=reranker, base_retriever=retriever
) response = compression_retriever.get_relevant_documents("What is Llama 2?")
``` 2. Used in llama_index We provide BCERerank in BCEmbedding.tools.llama_index that inherits the advanced preproc tokenization of RerankerModel . Install llama_index first bash
pip install llama-index==0.9.42.post2 Demo
```python We provide the advanced preproc tokenization for reranking. from BCEmbedding.tools.llama_index import BCERerank import os
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
from llama_index.retrievers import VectorIndexRetriever init embedding model and reranker model embed_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 32, 'device': 'cuda:0'}
embed_model = HuggingFaceEmbedding(**embed_args) reranker_args = {'model': 'maidalun1020/bce-reranker-base_v1', 'top_n': 5, 'device': 'cuda:1'}
reranker_model = BCERerank(**reranker_args) example #1. extract embeddings query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages) example #2. rag example llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model) documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=400, chunk_overlap=80)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context) query = "What is Llama 2?" example #2.1. retrieval with EmbeddingModel and RerankerModel vector_retriever = VectorIndexRetriever(index=index, similarity_top_k=10, service_context=service_context)
retrieval_by_embedding = vector_retriever.retrieve(query)
retrieval_by_reranker = reranker_model.postprocess_nodes(retrieval_by_embedding, query_str=query) example #2.2. query with EmbeddingModel and RerankerModel query_engine = index.as_query_engine(node_postprocessors=[reranker_model])
query_response = query_engine.query(query)
``` ⚙️ Evaluation Evaluate Semantic Representation by MTEB We provide evaluation tools for embedding and reranker models, based on MTEB and C_MTEB . First, install MTEB : pip install mteb==1.1.1 1. Embedding Models Just run following cmd to evaluate your_embedding_model (e.g. maidalun1020/bce-embedding-base_v1 ) in bilingual and crosslingual settings (e.g. ["en", "zh", "en-zh", "zh-en"] ). bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls The total evaluation tasks contain 114 datasets of "Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering" . NOTE: All models are evaluated in their recommended pooling method ( pooler ) . mean pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large". cls pooler: Other models. "jina-embeddings-v2-base-en" model should be loaded with trust_remote_code . ```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {mean_pooler_models} --pooler mean python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
``` 2. Reranker Models Run following cmd to evaluate your_reranker_model (e.g. "maidalun1020/bce-reranker-base_v1") in bilingual and crosslingual settings (e.g. ["en", "zh", "en-zh", "zh-en"] ). bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1 The evaluation tasks contain 12 datasets of "Reranking" . 3. Metrics Visualization Tool We provide a one-click script to summarize evaluation results of embedding and reranker models as Embedding Models Evaluation Summary and Reranker Models Evaluation Summary . bash
python BCEmbedding/tools/eval_mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir} Evaluate RAG by LlamaIndex LlamaIndex is a famous data framework for LLM-based applications, particularly in RAG. Recently, a LlamaIndex Blog has evaluated the popular embedding and reranker models in RAG pipeline and attracts great attention. Now, we follow its pipeline to evaluate our BCEmbedding . First, install LlamaIndex, and upgrade transformers to 4.36.0: ```bash
pip install transformers==4.36.0
pip install llama-index==0.9.22
``` Export your "openai" and "cohere" API keys, and the OpenAI base URL (e.g. "https://api.openai.com/v1"), as environment variables: bash
export OPENAI_BASE_URL={openai_base_url} # https://api.openai.com/v1
export OPENAI_API_KEY={your_openai_api_key}
export COHERE_APPKEY={your_cohere_api_key} 1. Metrics Definition Hit Rate: Hit rate calculates the fraction of queries for which the correct answer appears within the top-k retrieved documents. In simpler terms, it measures how often the system gets it right within the top few guesses. The larger, the better. Mean Reciprocal Rank (MRR): For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it is the average of the reciprocals of these ranks across all queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it is second, the reciprocal rank is 1/2, and so on. The larger, the better. (A short sketch of how both metrics are computed follows the command below.) 2. Reproduce LlamaIndex Blog In order to compare our BCEmbedding with other embedding and reranker models fairly, we provide a one-click script to reproduce the results of the LlamaIndex Blog, including our BCEmbedding : ```bash
# There should be at least two GPUs available.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```
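To make the two metrics defined above concrete, here is a minimal, self-contained sketch of how Hit Rate and MRR can be computed from ranked retrieval results. The retrieved_ids / relevant_ids names and the toy data are hypothetical; this is not part of the BCEmbedding toolkit.

```python
from typing import Sequence

def hit_rate(retrieved_ids: Sequence[Sequence[str]], relevant_ids: Sequence[str], k: int = 10) -> float:
    # Fraction of queries whose relevant document appears in the top-k results.
    hits = sum(1 for ranked, gold in zip(retrieved_ids, relevant_ids) if gold in ranked[:k])
    return hits / len(relevant_ids)

def mrr(retrieved_ids: Sequence[Sequence[str]], relevant_ids: Sequence[str]) -> float:
    # Average of 1 / rank of the first relevant document (contributes 0 if it is never retrieved).
    total = 0.0
    for ranked, gold in zip(retrieved_ids, relevant_ids):
        if gold in ranked:
            total += 1.0 / (ranked.index(gold) + 1)
    return total / len(relevant_ids)

# Toy example: two queries with one gold document each.
print(hit_rate([["d1", "d2"], ["d5", "d3"]], ["d2", "d4"], k=2))  # 0.5
print(mrr([["d1", "d2"], ["d5", "d3"]], ["d2", "d4"]))            # (1/2 + 0) / 2 = 0.25
```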
Then, summarize the evaluation results by: bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir BCEmbedding/results/rag_reproduce_results Results reproduced from the LlamaIndex Blog can be checked in Reproduced Summary of RAG Evaluation , with some clear conclusions : In the WithoutReranker setting, our bce-embedding-base_v1 outperforms all the other embedding models. With the embedding model fixed, our bce-reranker-base_v1 achieves the best performance. The combination of bce-embedding-base_v1 and bce-reranker-base_v1 is SOTA. 3. Broad Domain Adaptability The evaluation in the LlamaIndex Blog is monolingual, uses a small amount of data, and covers a single domain (only the "llama2" paper). In order to evaluate broad domain adaptability as well as bilingual and crosslingual capability , we follow the blog to build a multi-domain evaluation dataset (including "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance". Details ), named CrosslingualMultiDomainsDataset : To prevent test data leakage, English eval data is selected from the latest English articles in various fields on ArXiv, up to December 30, 2023. Chinese eval data is selected from high-quality, as recent as possible, Chinese articles in the corresponding fields on Semantic Scholar. OpenAI gpt-4-1106-preview is used to produce high-quality eval data. First, run the following command to evaluate the most popular and powerful embedding and reranker models: ```bash
# There should be at least two GPUs available.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
``` Then, run the following script to summarize the evaluation results: bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir BCEmbedding/results/rag_results The summary of the multi-domain evaluations can be seen in Multiple Domains Scenarios . 📈 Leaderboard Semantic Representation Evaluations in MTEB 1. Embedding Models | Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | AVG (119) |
| :---------------------------------- | :--------: | :------: | :----------: | :-------------: | :-------------: | :--------------------: | :-----------------: | :-------------: | :-------------: | :---------------------: |
| bge-base-en-v1.5 | 768 | cls | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
| bge-base-zh-v1.5 | 768 | cls | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
| bge-large-en-v1.5 | 1024 | cls | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
| bge-large-zh-v1.5 | 1024 | cls | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
| gte-large | 1024 | mean | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
| gte-large-zh | 1024 | cls | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
| jina-embeddings-v2-base-en | 768 | mean | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
| m3e-base | 768 | mean | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
| m3e-large | 1024 | mean | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
| e5-large-v2 | 1024 | mean | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
| multilingual-e5-base | 768 | mean | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
| multilingual-e5-large | 1024 | mean | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
| bce-embedding-base_v1 | 768 | cls | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 | NOTE: Our bce-embedding-base_v1 outperforms other open-source embedding models of comparable size. 114 datasets yielding 119 eval results (some datasets contain multiple languages) of "Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering" in the ["en", "zh", "en-zh", "zh-en"] setting , covering MTEB and CMTEB . The crosslingual evaluation datasets we released belong to the Retrieval task. More evaluation details can be found in Embedding Models Evaluations . 2. Reranker Models | Model | Reranking (12) | AVG (12) |
| :--------------------------------- | :-------------: | :--------------------: |
| bge-reranker-base | 59.04 | 59.04 |
| bge-reranker-large | 60.86 | 60.86 |
| bce-reranker-base_v1 | 61.29 | 61.29 | NOTE: Our bce-reranker-base_v1 outperforms other open-source reranker models. 12 datasets of "Reranking" in the ["en", "zh", "en-zh", "zh-en"] setting . More evaluation details can be found in Reranker Models Evaluations . RAG Evaluations in LlamaIndex 1. Multiple Domains Scenarios NOTE: Data Quality: To prevent test data leakage, English eval data is selected from the latest English articles in various fields on ArXiv, up to December 30, 2023. Chinese eval data is selected from high-quality, as recent as possible, Chinese articles in the corresponding fields on Semantic Scholar. OpenAI gpt-4-1106-preview is used to produce high-quality eval data. Evaluated in the ["en", "zh", "en-zh", "zh-en"] setting . If you are interested in a monolingual setting, please check the Chinese RAG evaluations with the ["zh"] setting , and the English RAG evaluations with the ["en"] setting . Consistent with our Reproduced Results of LlamaIndex Blog . In the WithoutReranker setting, our bce-embedding-base_v1 outperforms all the other embedding models. With the embedding model fixed, our bce-reranker-base_v1 achieves the best performance. The combination of bce-embedding-base_v1 and bce-reranker-base_v1 is SOTA . 🛠 Youdao's BCEmbedding API For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, BCEmbedding is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at Youdao BCEmbedding API . There you'll find all the guidance needed to implement BCEmbedding across a variety of use cases and ensure a smooth, effective integration. 🧲 WeChat Group Scan the QR code below to join the WeChat group. ✏️ Citation If you use BCEmbedding in your research or project, please feel free to cite and star it: @misc{youdao_bcembedding_2023,
title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
author={NetEase Youdao, Inc.},
year={2023},
howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
} 🔐 License BCEmbedding is licensed under the Apache 2.0 License . 🔗 Related Links Netease Youdao - QAnything FlagEmbedding MTEB C_MTEB LlamaIndex | LlamaIndex Blog HuixiangDou;Netease Youdao's open-source embedding and reranker models for RAG products.;[] | netease-youdao/BCEmbedding
roothide/Bootstrap;roothide Bootstrap A full featured bootstrap for iOS 15.0-17.0 A8-A17 & M1+M2 using roothide. WARNING: By using this software, you take full responsibility for what you do with it. Any unofficial modifications to your device may cause irreparable damage. Refer to the FAQ linked in the Usage section for safe usage of this software. roothide Bootstrap is available to download on this repository's Releases . Building If you do not have access to macOS, refer to the FAQ in the Usage section to build with GitHub Actions instead. You'll need macOS to build, as you require Xcode from the App Store. Simply having Xcode Command Line Tools is insufficient . Here's how to build the Bootstrap: Update/Install Theos with roothide support bash -c "$(curl -fsSL https://raw.githubusercontent.com/roothide/theos/master/bin/install-theos)" If you encounter errors from a previous Theos installation, remove Theos in its entirety before continuing. Clone the GitHub repository and enter the directory git clone https://github.com/roothide/Bootstrap/ && cd Bootstrap Build Bootstrap.tipa make package Transfer Bootstrap.tipa from ./packages/ to your device and install it with TrollStore . Usage The roothide Bootstrap application must be installed with TrollStore . Use version 2.0.9 or later to enable developer mode on-device. Tweaks not compiled or converted to roothide will not work out-of-the-box with roothide Bootstrap. Refer to the FAQ below on how to use roothide Patcher. By design, roothide does not inject tweaks into any applications by default. To enable tweak injection into an application, press App List in the Bootstrap app, and enable the toggle of the application you want to inject into. Injection into com.apple.springboard or daemons is not possible with the roothide Bootstrap. Refer to the FAQ below on injection into com.apple.springboard . A roothide Bootstrap FAQ is available here . Develop tweaks Neither rootful nor rootless tweaks are compatible with roothide out-of-the-box, so you'll need to develop them specifically to support it. You can refer to the developer documentation here . Discord server You can join the roothide Discord server for support or general discussion here . The "Our Table" Icon The ProcursusTeam logo was originally made by @TheAlphaStream , and later edited by @sourcelocation . Credits Huge thanks to these people; we couldn't have completed this project without their help!
absidue: https://github.com/absidue akusio: https://twitter.com/akusio_rr Alfie: https://alfiecg.uk Amy While: http://github.com/elihwyma Barron: https://tweaksdev22.github.io BomberFish: https://twitter.com/bomberfish77 bswbw: https://twitter.com/bswbw Capt Inc: http://github.com/captinc CKatri: https://procursus.social/@cameron Clarity: http://github.com/TheRealClarity Cryptic: http://github.com/Cryptiiiic dxcool223x: https://twitter.com/dxcool223x Dhinakg: http://github.com/dhinakg DuyKhanhTran: https://twitter.com/TranKha50277352 dleovl: https://github.com/dleovl Elias Sfeir: https://twitter.com/eliassfeir1 Ellie: https://twitter.com/elliessurviving EquationGroups: https://twitter.com/equationgroups Évelyne: http://github.com/evelyneee GeoSnOw: https://twitter.com/fce365 G3n3sis: https://twitter.com/G3nNuk_e hayden: https://procursus.social/@hayden Huy Nguyen: https://twitter.com/little_34306 iAdam1n: https://twitter.com/iAdam1n iarrays: https://iarrays.com iDownloadBlog: https://twitter.com/idownloadblog iExmo: https://twitter.com/iexmojailbreak iRaMzi: https://twitter.com/iramzi7 Jonathan: https://twitter.com/jontelang Kevin: https://github.com/iodes kirb: http://github.com/kirb laileld: https://twitter.com/h_h_x_t Leptos: https://github.com/leptos-null limneos: https://twitter.com/limneos Lightmann: https://github.com/L1ghtmann Linus Henze: http://github.com/LinusHenze MasterMike: https://ios.cfw.guide Misty: https://twitter.com/miscmisty Muirey03: https://twitter.com/Muirey03 Nathan: https://github.com/verygenericname Nebula: https://itsnebula.net niceios: https://twitter.com/niceios Nightwind: https://twitter.com/NightwindDev Nick Chan: https://nickchan.lol nzhaonan: https://twitter.com/nzhaonan Oliver Tzeng: https://github.com/olivertzeng omrkujman: https://twitter.com/omrkujman opa334: http://github.com/opa334 onejailbreak: https://twitter.com/onejailbreak_ Phuc Do: https://twitter.com/dobabaophuc PoomSmart: https://twitter.com/poomsmart ProcursusTeam: https://procursus.social/@team roothide: http://github.com/roothide Sam Bingner: http://github.com/sbingner Shadow-: http://iosjb.top/ Snail: https://twitter.com/somnusix SquidGesture: https://twitter.com/lclrc sourcelocation: http://github.com/sourcelocation SeanIsTethered: http://github.com/jailbreakmerebooted TheosTeam: https://theos.dev tigisoftware: https://twitter.com/tigisoftware tihmstar: https://twitter.com/tihmstar xina520: https://twitter.com/xina520 xybp888: https://twitter.com/xybp888 xsf1re: https://twitter.com/xsf1re yandevelop: https://twitter.com/yandevelop YourRepo: https://twitter.com/yourepo And you , the community, for giving insightful feedback and support.;A full featured bootstrap for iOS 15.0-17.0 A8-A17 & M1+M2 using roothide;[] | roothide/Bootstrap |
shikijs/shiki-magic-move;shiki-magic-move Smoothly animated code blocks with Shiki. Online Demo . Shiki Magic Move is a low-level library for animating code blocks, and uses Shiki as the syntax highlighter. You usually want to use it with a high-level integration like Slidev . At the core of the shiki-magic-move package is a framework-agnostic core , and renderer — there are also framework wrappers for Vue , React , and Svelte . Each of the framework wrappers provides the following components: ShikiMagicMove - the main component to wrap the code block ShikiMagicMovePrecompiled - animations for compiled tokens, without the dependency on Shiki ShikiMagicMoveRenderer - the low-level renderer component The ShikiMagicMove component requires you to provide a Shiki highlighter instance, and the styles are also required, and provided by shiki-magic-move . Whenever the code changes, the component will animate the changes. Installation You're going to need Shiki Magic Move for animating the code blocks, and Shiki for syntax highlighting. bash
npm i shiki-magic-move shiki Usage Vue Import shiki-magic-move/vue , and pass the highlighter instance to the ShikiMagicMove component. ```vue ``` React Import shiki-magic-move/react , and pass the highlighter instance to the ShikiMagicMove component. ```tsx
import { useEffect, useState } from 'react'
import { ShikiMagicMove } from 'shiki-magic-move/react'
import { type HighlighterCore, getHighlighter } from 'shiki'

import 'shiki-magic-move/dist/style.css'

function App() {
const [code, setCode] = useState(`const hello = 'world'`)
const [highlighter, setHighlighter] = useState<HighlighterCore>()

useEffect(() => {
async function initializeHighlighter() {
const highlighter = await getHighlighter({
themes: ['nord'],
langs: ['javascript', 'typescript'],
})
setHighlighter(highlighter)
}
initializeHighlighter()
}, [])

function animate() {
setCode(`let hi = 'hello'`)
}

return (
  <div>
    {highlighter && (
      <>
        {/* illustrative usage: pass the highlighter instance, plus the language, theme and current code */}
        <ShikiMagicMove lang="ts" theme="nord" highlighter={highlighter} code={code} />
        <button onClick={animate}>Animate</button>
      </>
    )}
  </div>
)
}
``` Svelte Import shiki-magic-move/svelte , and pass the highlighter instance to the ShikiMagicMove component. ```svelte {#await highlighter then highlighter} Animate {/await}
``` ShikiMagicMovePrecompiled ShikiMagicMovePrecompiled is a lighter version of ShikiMagicMove that doesn't require Shiki. It's useful when you want to animate the compiled tokens directly. For example, in Vue: ```vue ``` To get the compiled tokens, you can run this somewhere else and serialize them into the component: ```ts
import { codeToKeyedTokens, createMagicMoveMachine } from 'shiki-magic-move/core'
import { getHighlighter } from 'shiki'

const shiki = await getHighlighter({
theme: 'nord',
langs: ['javascript', 'typescript'],
})

const codeSteps = [
  `const hello = 'world'`,
  `let hi = 'hello'`,
]

const machine = createMagicMoveMachine(
code => codeToKeyedTokens(shiki, code, {
lang: 'ts',
theme: 'nord',
}),
{
// options
}
)

const compiledSteps = codeSteps.map(code => machine.commit(code).current)

// Pass compiledSteps to the precompiled component
// If you do this on server-side or build-time, you can serialize compiledSteps into JSON
``` How it works You can read The Magic In Shiki Magic Move to understand how Shiki Magic Move works. Sponsors License MIT License © 2023-PRESENT Anthony Fu;Smoothly animated code blocks with Shiki;[] | shikijs/shiki-magic-move |
the-mirror-gdp/the-mirror;Get Started The easiest way is via our compiled Mirror Official app: Get Started Docs The docs site ( /mirror-docs ) is our primary source of truth for documentation, not this README. We intend to keep this README slim since documentation is and will continue to be extensive. Features (Real) Real-Time Game Development : Like Inception, the aim is to build worlds in real-time with friends, colleagues, and players. Read more about our approach on our blog here . All-in-one game development : The Mirror is both the editor and the game, providing everything you need out-of-the-box to quickly create and play games, digital experiences, virtual worlds, and more. Editor : Built-in and networked: A lightweight, real-time, multiplayer editor to build in real-time. Physics via Jolt , an AAA physics engine used by Horizon Zero Dawn. Advanced networking : Keep your game in sync and rewind when things get out of sync. Visual scripting : Even if you don't know how to code, you can implement game logic quickly and easily. Traditional coding : GDScript in-world editor so you can live edit your game code. If you're new to GDScript, it's like Python, super newbie-friendly, and is easy to learn. Material editor : No need to exit the editor to make changes to your materials: Everything is in real-time Shader editing : Real-time shader editing with text will be available in the future Asset management : Assets are automatically stored in the cloud or via local storage (self-hosted) so you can simplify your workflows in real-time without needing to restart the editor. Much less hassle and easy collaboration with team members. Open asset system : Built around GLTF, The Mirror supports seats, lights, equipables, and custom physics shapes, all direct from Blender. Mirror UI elements , including a table class which can easily map any data to UI elements without duplicating state in a performant way. Collision shape generation : Convex and concave supported Audio : Easily add audio to your game in real-time without opening a separate editor; no need to recompile Player controllers : Out-of-the-box FPS (first-person shooter), TPS (third-person shooter), and VR (virtual reality) supported. VR-ready : Just put on the tethered headset when playing! We test with Meta Quest 2 and 3. Intentional architecture : (Space)Objects are a simple game object abstraction with the aim of supporting any type of Godot node in the future. Bidirectionality with Godot : Start in The Mirror and end in Godot, or start in Godot and end in The Mirror. Our aim is to make it easy to transition between the two or develop side-by-side: your choice. Godot plugin: Coming soon Join the Community 1. Join our Discord 2. Check out our Roadmap 3. Read our docs: Site , monorepo /mirror-docs 4. Check out our open-source announcement post 5. Follow us on X/Twitter What is The Mirror and why? Freedom to own : The Mirror is a Roblox & UEFN alternative giving you the freedom to own what you create: an all-in-one game development platform built on Godot. If you build on others like Roblox, UEFN/Unreal/Fortnite, and Unity, you don't actually own the full stack of what you build because you don't own the engine. We're here to change that . 3D is arguably the next step of the internet and we can't let it be beholden to an oligopoly of companies that want to own your data and creations. The Mirror gives you the freedom to own what you build.
Akin to "Google Docs for game development", The Mirror is both the editor and the game that let's you build with friends in real-time. This saves you a plethora of time: Enjoy not having to write pesky things like infrastructure, backend HTTP routes, asset management, authentication, netsync, and various systems from scratch. This repo is The Mirror's source code: the Godot app (client/server), the web server, and the docs in one place. We've included everything we can provide to help you build your games as fast as possible. Build the Open-Source Code Git clone the repository (you do not need to clone with submodules; they are optional) Download the precompiled Mirror fork of Godot engine (required to use) Windows: Download Mac: v buggy ; taking contributions for fixes :) Download . On Mac you will see a zip file; extract it and open the editor binary with CMD + Right-Click then select the Open option. Linux: Download Open the Godot editor (The Mirror fork), click import, and choose the project.godot from the /mirror-godot-app folder.
Note that if you see this popup, you can safely ignore it and proceed. Close the Godot editor and open it again to ensure that everything loads correctly now that all files have been imported. Hit play in the Godot editor! Create a new Space, and you will automatically join it. Or, join an existing Space. Godot Fork The Mirror is built on a custom fork of Godot, which is required to use The Mirror's code. The fork is open source and can be found here . Analytics Disclaimer: We use Posthog and Mixpanel, which automatically collect analytics in the open-source repo. You can disable this manually by commenting out the mirror-godot-app/scripts/autoload/analytics/analytics.gd file methods. We are transitioning from Posthog to Mixpanel and Posthog will be removed in a future release. We will make this easier to disable in the future. The Mirror Megaverse Inc., a US Delaware C Corp, is the data controller of the Posthog and Mixpanel instances. You are free to disable the analytics and even plug in your own Posthog or Mixpanel API keys to capture the analytics yourself for your games!;The open-source Roblox & UEFN alternative giving you freedom to own what you create. An all-in-one, real-time, collaborative game development platform built on Godot.;game-development,gaming,gdscript,godot,godot-engine,godotengine,mongodb,mongoose,multiplayer,nestjs | the-mirror-gdp/the-mirror
mini-sora/minisora;MiniSora Community English | [简体中文](README_zh-CN.md) 👋 join us on WeChat The MiniSora open-source community is positioned as a community-driven initiative organized spontaneously by community members. The MiniSora community aims to explore the implementation path and future development direction of Sora. Regular round-table discussions will be held with the Sora team and the community to explore possibilities. We will delve into existing technological pathways for video generation. Leading the replication of papers or research results related to Sora, such as DiT ( MiniSora-DiT ), etc. Conducting a comprehensive review of Sora-related technologies and their implementations, i.e., " From DDPM to Sora: A Review of Video Generation Models Based on Diffusion Models ". Hot News Stable Diffusion 3 : MM-DiT: Scaling Rectified Flow Transformers for High-Resolution Image Synthesis MiniSora-DiT : Reproducing the DiT Paper with XTuner Introduction of MiniSora and Latest Progress in Replicating Sora Reproduction Group of MiniSora Community Sora Reproduction Goals of MiniSora GPU-Friendly : Ideally, it should have low requirements for GPU memory size and the number of GPUs, such as being trainable and inferable with compute power like 8 A100 80G cards, 8 A6000 48G cards, or RTX4090 24G. Training-Efficiency : It should achieve good results without requiring extensive training time. Inference-Efficiency : When generating videos during inference, there is no need for long duration or high resolution; acceptable parameters include 3-10 seconds in length and 480p resolution. MiniSora-DiT : Reproducing the DiT Paper with XTuner https://github.com/mini-sora/minisora-DiT Requirements We are recruiting MiniSora Community contributors to reproduce DiT using XTuner . We hope community members have the following characteristics: Familiarity with the OpenMMLab MMEngine mechanism. Familiarity with DiT . Background The author of DiT is the same as the author of Sora . XTuner has the core technology to efficiently train sequences of length 1000K . Support Computational resources: 2*A100. Strong support from XTuner core developer P佬@pppppM . Recent round-table Discussions Paper Interpretation of the Stable Diffusion 3 paper: MM-DiT Speaker : MMagic Core Contributors Live Streaming Time : 03/12 20:00 Highlights : MMagic core contributors will lead us in interpreting the Stable Diffusion 3 paper, discussing the architecture details and design principles of Stable Diffusion 3. PPT : FeiShu Link Highlights from Previous Discussions Night Talk with Sora: Video Diffusion Overview ZhiHu Notes : A Survey on Generative Diffusion Model: An Overview of Generative Diffusion Models Paper Reading Program Sora : Creating video from text Technical Report : Video generation models as world simulators Latte : Latte: Latent Diffusion Transformer for Video Generation Latte Paper Interpretation (zh-CN) , ZhiHu(zh-CN) DiT : Scalable Diffusion Models with Transformers Stable Cascade (ICLR 24 Paper) : Würstchen: An efficient architecture for large-scale text-to-image diffusion models Stable Diffusion 3 : MM-DiT: Scaling Rectified Flow Transformers for High-Resolution Image Synthesis SD3 Paper Interpretation (zh-CN) , ZhiHu(zh-CN) Updating...
Recruitment of Presenters DiT (ICCV 23 Paper) Stable Cascade (ICLR 24 Paper) Related Work 01 Diffusion Model 02 Diffusion Transformer 03 Baseline Video Generation Models 04 Diffusion UNet 05 Video Generation 06 Dataset 6.1 Public Datasets 6.2 Video Augmentation Methods 6.2.1 Basic Transformations 6.2.2 Feature Space 6.2.3 GAN-based Augmentation 6.2.4 Encoder/Decoder Based 6.2.5 Simulation 07 Patchifying Methods 08 Long-context 09 Audio Related Resource 10 Consistency 11 Prompt Engineering 12 Security 13 World Model 14 Video Compression 15 Mamba 15.1 Theoretical Foundations and Model Architecture 15.2 Image Generation and Visual Applications 15.3 Video Processing and Understanding 15.4 Medical Image Processing 16 Existing high-quality resources 17 Efficient Training 17.1 Parallelism based Approach 17.1.1 Data Parallelism (DP) 17.1.2 Model Parallelism (MP) 17.1.3 Pipeline Parallelism (PP) 17.1.4 Generalized Parallelism (GP) 17.1.5 ZeRO Parallelism (ZP) 17.2 Non-parallelism based Approach 17.2.1 Reducing Activation Memory 17.2.2 CPU-Offloading 17.2.3 Memory Efficient Optimizer 17.3 Novel Structure 18 Efficient Inference 18.1 Reduce Sampling Steps 18.1.1 Continuous Steps 18.1.2 Fast Sampling 18.1.3 Step distillation 18.2 Optimizing Inference 18.2.1 Low-bit Quantization 18.2.2 Parallel/Sparse inference | 01 Diffusion Models | |
| :------------- | :------------- |
| Paper | Link |
| 1) Guided-Diffusion : Diffusion Models Beat GANs on Image Synthesis | NeurIPS 21 Paper , GitHub |
| 2) Latent Diffusion : High-Resolution Image Synthesis with Latent Diffusion Models | CVPR 22 Paper , GitHub |
| 3) EDM : Elucidating the Design Space of Diffusion-Based Generative Models | NeurIPS 22 Paper , GitHub |
| 4) DDPM : Denoising Diffusion Probabilistic Models | NeurIPS 20 Paper , GitHub |
| 5) DDIM : Denoising Diffusion Implicit Models | ICLR 21 Paper , GitHub |
| 6) Score-Based Diffusion : Score-Based Generative Modeling through Stochastic Differential Equations | ICLR 21 Paper , GitHub , Blog |
| 7) Stable Cascade : Würstchen: An efficient architecture for large-scale text-to-image diffusion models | ICLR 24 Paper , GitHub , Blog |
| 8) Diffusion Models in Vision: A Survey| TPAMI 23 Paper , GitHub |
| 9) Improved DDPM : Improved Denoising Diffusion Probabilistic Models | ICML 21 Paper , Github |
| 10) Classifier-free diffusion guidance | NIPS 21 Paper |
| 11) Glide : Towards photorealistic image generation and editing with text-guided diffusion models | Paper , Github |
| 12) VQ-DDM : Global Context with Discrete Diffusion in Vector Quantised Modelling for Image Generation | CVPR 22 Paper , Github |
| 13) Diffusion Models for Medical Anomaly Detection | Paper , Github |
| 14) Generation of Anonymous Chest Radiographs Using Latent Diffusion Models for Training Thoracic Abnormality Classification Systems | Paper |
| 15) DiffusionDet : Diffusion Model for Object Detection | ICCV 23 Paper , Github |
| 16) Label-efficient semantic segmentation with diffusion models | ICLR 22 Paper , Github , Project |
| 02 Diffusion Transformer | |
| Paper | Link |
| 1) UViT : All are Worth Words: A ViT Backbone for Diffusion Models | CVPR 23 Paper , GitHub , ModelScope |
| 2) DiT : Scalable Diffusion Models with Transformers | ICCV 23 Paper , GitHub , Project , ModelScope |
| 3) SiT : Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers | ArXiv 23 , GitHub , ModelScope |
| 4) FiT : Flexible Vision Transformer for Diffusion Model | ArXiv 24 , GitHub |
| 5) k-diffusion : Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers | ArXiv 24 , GitHub |
| 6) Large-DiT : Large Diffusion Transformer | GitHub |
| 7) VisionLLaMA : A Unified LLaMA Interface for Vision Tasks | ArXiv 24 , GitHub |
| 8) Stable Diffusion 3 : MM-DiT: Scaling Rectified Flow Transformers for High-Resolution Image Synthesis | Paper , Blog |
| 9) PIXART-Σ : Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation | ArXiv 24 , Project |
| 10) PIXART-α : Fast Training of Diffusion Transformer for Photorealistic Text-To-Image Synthesiss | ArXiv 23 , GitHub ModelScope |
| 11) PIXART-δ : Fast and Controllable Image Generation With Latent Consistency Model | ArXiv 24 |
| 03 Baseline Video Generation Models | |
| Paper | Link |
| 1) ViViT : A Video Vision Transformer | ICCV 21 Paper , GitHub |
| 2) VideoLDM : Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models | CVPR 23 Paper |
| 3) DiT : Scalable Diffusion Models with Transformers | ICCV 23 Paper , Github , Project , ModelScope |
| 4) Text2Video-Zero : Text-to-Image Diffusion Models are Zero-Shot Video Generators | ArXiv 23 , GitHub |
| 5) Latte : Latent Diffusion Transformer for Video Generation | ArXiv 24 , GitHub , Project |
| 04 Diffusion UNet ModelScope | |
| Paper | Link |
| 1) Taming Transformers for High-Resolution Image Synthesis | CVPR 21 Paper , GitHub , Project |
| 2) ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment | ArXiv 24 Github |
| 05 Video Generation | |
| Paper | Link |
| 1) Animatediff : Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning | ICLR 24 Paper , GitHub , ModelScope |
| 2) I2VGen-XL : High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models | ArXiv 23 , GitHub , ModelScope |
| 3) Imagen Video : High Definition Video Generation with Diffusion Models | ArXiv 22 |
| 4) MoCoGAN : Decomposing Motion and Content for Video Generation | CVPR 18 Paper |
| 5) Adversarial Video Generation on Complex Datasets | Paper |
| 6) W.A.L.T : Photorealistic Video Generation with Diffusion Models | ArXiv 23 , Project |
| 7) VideoGPT : Video Generation using VQ-VAE and Transformers | ArXiv 21 , GitHub |
| 8) Video Diffusion Models | ArXiv 22 , GitHub , Project |
| 9) MCVD : Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation | NeurIPS 22 Paper , GitHub , Project , Blog |
| 10) VideoPoet : A Large Language Model for Zero-Shot Video Generation | ArXiv 23 , Project , Blog |
| 11) MAGVIT : Masked Generative Video Transformer | CVPR 23 Paper , GitHub , Project , Colab |
| 12) EMO : Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions | ArXiv 24 , GitHub , Project |
| 13) SimDA : Simple Diffusion Adapter for Efficient Video Generation | Paper , GitHub , Project |
| 14) StableVideo : Text-driven Consistency-aware Diffusion Video Editing | ICCV 23 Paper , GitHub , Project |
| 15) SVD : Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets | Paper , GitHub |
| 16) ADD : Adversarial Diffusion Distillation| Paper , GitHub |
| 17) GenTron: Diffusion Transformers for Image and Video Generation | CVPR 24 Paper , Project |
| 18) LFDM : Conditional Image-to-Video Generation with Latent Flow Diffusion Models | CVPR 23 Paper , GitHub |
| 19) MotionDirector : Motion Customization of Text-to-Video Diffusion Models | ArXiv 23 , GitHub |
| 20) TGAN-ODE : Latent Neural Differential Equations for Video Generation | Paper , GitHub |
| 21) VideoCrafter1 : Open Diffusion Models for High-Quality Video Generation | ArXiv 23 , GitHub |
| 22) VideoCrafter2 : Overcoming Data Limitations for High-Quality Video Diffusion Models | ArXiv 24 , GitHub |
| 23) LVDM : Latent Video Diffusion Models for High-Fidelity Long Video Generation | ArXiv 22 , GitHub |
| 24) LaVie : High-Quality Video Generation with Cascaded Latent Diffusion Models | ArXiv 23 , GitHub , Project |
| 25) PYoCo : Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models | ICCV 23 Paper , Project |
| 26) VideoFusion : Decomposed Diffusion Models for High-Quality Video Generation | CVPR 23 Paper |
| 06 Dataset | |
| 6.1 Public Datasets | |
| Dataset Name - Paper | Link |
| 1) Panda-70M - Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers 70M Clips, 720P, Downloadable | CVPR 24 Paper , Github , Project , ModelScope |
| 2) InternVid-10M - InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation 10M Clips, 720P, Downloadable | ArXiv 24 , Github |
| 3) CelebV-Text - CelebV-Text: A Large-Scale Facial Text-Video Dataset 70K Clips, 720P, Downloadable | CVPR 23 Paper , Github , Project |
| 4) HD-VG-130M - VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation 130M Clips, 720P, Downloadable | ArXiv 23 , Github , Tool |
| 5) HD-VILA-100M - Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions 100M Clips, 720P, Downloadable | CVPR 22 Paper , Github |
| 6) VideoCC - Learning Audio-Video Modalities from Image Captions 10.3M Clips, 720P, Downloadable | ECCV 22 Paper , Github |
| 7) YT-Temporal-180M - MERLOT: Multimodal Neural Script Knowledge Models 180M Clips, 480P, Downloadable | NeurIPS 21 Paper , Github , Project |
| 8) HowTo100M - HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips 136M Clips, 240P, Downloadable | ICCV 19 Paper , Github , Project |
| 9) UCF101 - UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild 13K Clips, 240P, Downloadable | CVPR 12 Paper , Project |
| 10) MSVD - Collecting Highly Parallel Data for Paraphrase Evaluation 122K Clips, 240P, Downloadable | ACL 11 Paper , Project |
| 11) Fashion-Text2Video - A human video dataset with rich label and text annotations 600 Videos, 480P, Downloadable | ArXiv 23 , Project |
| 12) LAION-5B - A dataset of 5,85 billion CLIP-filtered image-text pairs, 14x bigger than LAION-400M 5B Clips, Downloadable | NeurIPS 22 Paper , Project |
| 13) ActivityNet Captions - ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time 20k videos, Downloadable | Arxiv 17 Paper , Project |
| 14) MSR-VTT - A large-scale video benchmark for video understanding 10k Clips, Downloadable | CVPR 16 Paper , Project |
| 15) The Cityscapes Dataset - Benchmark suite and evaluation server for pixel-level, instance-level, and panoptic semantic labeling Downloadable | Arxiv 16 Paper , Project |
| 16) Youku-mPLUG - First open-source large-scale Chinese video text dataset Downloadable | ArXiv 23 , Project , ModelScope |
| 17) VidProM - VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models 6.69M, Downloadable | ArXiv 24 , Github |
| 18) Pixabay100 - A video dataset collected from Pixabay Downloadable | Github |
| 19) WebVid - Large-scale text-video dataset, containing 10 million video-text pairs scraped from the stock footage sites Long Durations and Structured Captions | ArXiv 21 , Project , ModelScope |
| 20) MiraData(Mini-Sora Data) : A Large-Scale Video Dataset with Long Durations and Structured Captions 10M video-text pairs | Github , Project |
| 6.2 Video Augmentation Methods | |
| 6.2.1 Basic Transformations | |
| Three-stream CNNs for action recognition | PRL 17 Paper |
| Dynamic Hand Gesture Recognition Using Multi-direction 3D Convolutional Neural Networks | EL 19 Paper |
| Intra-clip Aggregation for Video Person Re-identification | ICIP 20 Paper |
| VideoMix: Rethinking Data Augmentation for Video Classification | CVPR 20 Paper |
| mixup: Beyond Empirical Risk Minimization | ICLR 17 Paper |
| CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features | ICCV 19 Paper |
| Video Salient Object Detection via Fully Convolutional Networks | ICIP 18 Paper |
| Illumination-Based Data Augmentation for Robust Background Subtraction | SKIMA 19 Paper |
| Image editing-based data augmentation for illumination-insensitive background subtraction | EIM 20 Paper |
| 6.2.2 Feature Space | |
| Feature Re-Learning with Data Augmentation for Content-based Video Recommendation | ACM 18 Paper |
| GAC-GAN: A General Method for Appearance-Controllable Human Video Motion Transfer | Trans 21 Paper |
| 6.2.3 GAN-based Augmentation | |
| Deep Video-Based Performance Cloning | CVPR 18 Paper |
| Adversarial Action Data Augmentation for Similar Gesture Action Recognition | IJCNN 19 Paper |
| Self-Paced Video Data Augmentation by Generative Adversarial Networks with Insufficient Samples | MM 20 Paper |
| GAC-GAN: A General Method for Appearance-Controllable Human Video Motion Transfer | Trans 20 Paper |
| Dynamic Facial Expression Generation on Hilbert Hypersphere With Conditional Wasserstein Generative Adversarial Nets | TPAMI 20 Paper |
| CrowdGAN: Identity-Free Interactive Crowd Video Generation and Beyond | TPAMI 22 Paper |
| 6.2.4 Encoder/Decoder Based | |
| Rotationally-Temporally Consistent Novel View Synthesis of Human Performance Video | ECCV 20 Paper |
| Autoencoder-based Data Augmentation for Deepfake Detection | ACM 23 Paper |
| 6.2.5 Simulation | |
| A data augmentation methodology for training machine/deep learning gait recognition algorithms | CVPR 16 Paper |
| ElderSim: A Synthetic Data Generation Platform for Human Action Recognition in Eldercare Applications | IEEE 21 Paper |
| Mid-Air: A Multi-Modal Dataset for Extremely Low Altitude Drone Flights | CVPR 19 Paper |
| Generating Human Action Videos by Coupling 3D Game Engines and Probabilistic Graphical Models | IJCV 19 Paper |
| Using synthetic data for person tracking under adverse weather conditions | IVC 21 Paper |
| Unlimited Road-scene Synthetic Annotation (URSA) Dataset | ITSC 18 Paper |
| SAIL-VOS 3D: A Synthetic Dataset and Baselines for Object Detection and 3D Mesh Reconstruction From Video Data | CVPR 21 Paper |
| Universal Semantic Segmentation for Fisheye Urban Driving Images | SMC 20 Paper |
| 07 Patchifying Methods | |
| Paper | Link |
| 1) ViT : An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale | CVPR 21 Paper , Github |
| 2) MAE : Masked Autoencoders Are Scalable Vision Learners| CVPR 22 Paper , Github |
| 3) ViViT : A Video Vision Transformer (-)| ICCV 21 Paper , GitHub |
| 4) DiT : Scalable Diffusion Models with Transformers (-) | ICCV 23 Paper , GitHub , Project , ModelScope |
| 5) U-ViT : All are Worth Words: A ViT Backbone for Diffusion Models (-) | CVPR 23 Paper , GitHub , ModelScope |
| 6) FlexiViT : One Model for All Patch Sizes | Paper , Github |
| 7) Patch n’ Pack : NaViT, a Vision Transformer for any Aspect Ratio and Resolution | ArXiv 23 , Github |
| 8) VQ-VAE : Neural Discrete Representation Learning | Paper , Github |
| 9) VQ-GAN : Neural Discrete Representation Learning | CVPR 21 Paper , Github |
| 10) LVT : Latent Video Transformer | Paper , Github |
| 11) VideoGPT : Video Generation using VQ-VAE and Transformers (-) | ArXiv 21 , GitHub |
| 12) Predicting Video with VQVAE | ArXiv 21 |
| 13) CogVideo : Large-scale Pretraining for Text-to-Video Generation via Transformers | ICLR 23 Paper , Github |
| 14) TATS : Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer | ECCV 22 Paper , Github |
| 15) MAGVIT : Masked Generative Video Transformer (-) | CVPR 23 Paper , GitHub , Project , Colab |
| 16) MagViT2 : Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation | ICLR 24 Paper , Github |
| 17) VideoPoet : A Large Language Model for Zero-Shot Video Generation (-) | ArXiv 23 , Project , Blog |
| 18) CLIP : Learning Transferable Visual Models From Natural Language Supervision | CVPR 21 Paper , Github |
| 19) BLIP : Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | ArXiv 22 , Github |
| 20) BLIP-2 : Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models | ArXiv 23 , Github |
| 08 Long-context | |
| Paper | Link |
| 1) World Model on Million-Length Video And Language With RingAttention | ArXiv 24 , GitHub |
| 2) Ring Attention with Blockwise Transformers for Near-Infinite Context | ArXiv 23 , GitHub |
| 3) Extending LLMs' Context Window with 100 Samples | ArXiv 24 , GitHub |
| 4) Efficient Streaming Language Models with Attention Sinks | ICLR 24 Paper , GitHub |
| 5) The What, Why, and How of Context Length Extension Techniques in Large Language Models – A Detailed Survey | Paper |
| 6) MovieChat : From Dense Token to Sparse Memory for Long Video Understanding | CVPR 24 Paper , GitHub , Project |
| 7) MemoryBank : Enhancing Large Language Models with Long-Term Memory | Paper , GitHub |
| 09 Audio Related Resource | |
| Paper | Link |
| 1) Stable Audio : Fast Timing-Conditioned Latent Audio Diffusion | ArXiv 24 , Github , Blog |
| 2) MM-Diffusion : Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation | CVPR 23 Paper , GitHub |
| 3) Pengi : An Audio Language Model for Audio Tasks | NeurIPS 23 Paper , GitHub |
| 4) Vast: A vision-audio-subtitle-text omni-modality foundation model and dataset | NeurlPS 23 Paper , GitHub |
| 5) Macaw-LLM : Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration | ArXiv 23 , GitHub |
| 6) NaturalSpeech : End-to-End Text to Speech Synthesis with Human-Level Quality | TPAMI 24 Paper , GitHub |
| 7) NaturalSpeech 2 : Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers | ICLR 24 Paper , GitHub |
| 8) UniAudio : An Audio Foundation Model Toward Universal Audio Generation | ArXiv 23 , GitHub |
| 9) Diffsound : Discrete Diffusion Model for Text-to-sound Generation | TASLP 22 Paper |
| 10) AudioGen : Textually Guided Audio Generation| ICLR 23 Paper , Project |
| 11) AudioLDM : Text-to-audio generation with latent diffusion models | ICML 23 Paper , GitHub , Project , Huggingface |
| 12) AudioLDM2 : Learning Holistic Audio Generation with Self-supervised Pretraining | ArXiv 23 , GitHub , Project , Huggingface |
| 13) Make-An-Audio : Text-To-Audio Generation with Prompt-Enhanced Diffusion Models | ICML 23 Paper , GitHub |
| 14) Make-An-Audio 2 : Temporal-Enhanced Text-to-Audio Generation | ArXiv 23 |
| 15) TANGO : Text-to-audio generation using instruction-tuned LLM and latent diffusion model | ArXiv 23 , GitHub , Project , Huggingface |
| 16) AudioLM : a Language Modeling Approach to Audio Generation | ArXiv 22 |
| 17) AudioGPT : Understanding and Generating Speech, Music, Sound, and Talking Head | ArXiv 23 , GitHub |
| 18) MusicGen : Simple and Controllable Music Generation | NeurIPS 23 Paper , GitHub |
| 19) LauraGPT : Listen, Attend, Understand, and Regenerate Audio with GPT | ArXiv 23 |
| 20) Seeing and Hearing : Open-domain Visual-Audio Generation with Diffusion Latent Aligners | CVPR 24 Paper |
| 21) Video-LLaMA : An Instruction-tuned Audio-Visual Language Model for Video Understanding | EMNLP 23 Paper |
| 22) Audio-Visual LLM for Video Understanding | ArXiv 23 |
| 23) VideoPoet : A Large Language Model for Zero-Shot Video Generation (-) | ArXiv 23 , Project , Blog |
| 10 Consistency | |
| Paper | Link |
| 1) Consistency Models | Paper , GitHub |
| 2) Improved Techniques for Training Consistency Models | ArXiv 23 |
| 3) Score-Based Diffusion : Score-Based Generative Modeling through Stochastic Differential Equations (-) | ICLR 21 Paper , GitHub , Blog |
| 4) Improved Techniques for Training Score-Based Generative Models | NIPS 20 Paper , GitHub |
| 4) Generative Modeling by Estimating Gradients of the Data Distribution | NIPS 19 Paper , GitHub |
| 5) Maximum Likelihood Training of Score-Based Diffusion Models | NIPS 21 Paper , GitHub |
| 6) Layered Neural Atlases for Consistent Video Editing | TOG 21 Paper , GitHub , Project |
| 7) StableVideo : Text-driven Consistency-aware Diffusion Video Editing | ICCV 23 Paper , GitHub , Project |
| 8) CoDeF : Content Deformation Fields for Temporally Consistent Video Processing | Paper , GitHub , Project |
| 9) Sora Generates Videos with Stunning Geometrical Consistency | Paper , GitHub , Project |
| 10) Efficient One-stage Video Object Detection by Exploiting Temporal Consistency | ECCV 22 Paper , GitHub |
| 11) Bootstrap Motion Forecasting With Self-Consistent Constraints | ICCV 23 Paper |
| 12) Enforcing Realism and Temporal Consistency for Large-Scale Video Inpainting | Paper |
| 13) Enhancing Multi-Camera People Tracking with Anchor-Guided Clustering and Spatio-Temporal Consistency ID Re-Assignment | CVPRW 23 Paper , GitHub |
| 14) Exploiting Spatial-Temporal Semantic Consistency for Video Scene Parsing | ArXiv 21 |
| 15) Semi-Supervised Crowd Counting With Spatial Temporal Consistency and Pseudo-Label Filter | TCSVT 23 Paper |
| 16) Spatio-temporal Consistency and Hierarchical Matching for Multi-Target Multi-Camera Vehicle Tracking | CVPRW 19 Paper |
| 17) VideoDirectorGPT : Consistent Multi-scene Video Generation via LLM-Guided Planning (-) | ArXiv 23 |
| 18) VideoDrafter : Content-Consistent Multi-Scene Video Generation with LLM (-) | ArXiv 24 |
| 19) MaskDiffusion : Boosting Text-to-Image Consistency with Conditional Mask| ArXiv 23 |
| 11 Prompt Engineering | |
| Paper | Link |
| 1) RealCompo : Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models | ArXiv 24 , GitHub , Project |
| 2) Mastering Text-to-Image Diffusion : Recaptioning, Planning, and Generating with Multimodal LLMs | ArXiv 24 , GitHub |
| 3) LLM-grounded Diffusion : Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models | TMLR 23 Paper , GitHub |
| 4) LLM BLUEPRINT : ENABLING TEXT-TO-IMAGE GEN-ERATION WITH COMPLEX AND DETAILED PROMPTS | ICLR 24 Paper , GitHub |
| 5) Progressive Text-to-Image Diffusion with Soft Latent Direction | ArXiv 23 |
| 6) Self-correcting LLM-controlled Diffusion Models | CVPR 24 Paper , GitHub |
| 7) LayoutLLM-T2I : Eliciting Layout Guidance from LLM for Text-to-Image Generation | MM 23 Paper |
| 8) LayoutGPT : Compositional Visual Planning and Generation with Large Language Models | NeurIPS 23 Paper , GitHub |
| 9) Gen4Gen : Generative Data Pipeline for Generative Multi-Concept Composition | ArXiv 24 , GitHub |
| 10) InstructEdit : Improving Automatic Masks for Diffusion-based Image Editing With User Instructions | ArXiv 23 , GitHub |
| 11) Controllable Text-to-Image Generation with GPT-4 | ArXiv 23 |
| 12) LLM-grounded Video Diffusion Models | ICLR 24 Paper |
| 13) VideoDirectorGPT : Consistent Multi-scene Video Generation via LLM-Guided Planning | ArXiv 23 |
| 14) FlowZero : Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax | ArXiv 23 , Github , Project |
| 15) VideoDrafter : Content-Consistent Multi-Scene Video Generation with LLM | ArXiv 24 |
| 16) Free-Bloom : Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator | NeurIPS 23 Paper |
| 17) Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models | ArXiv 23 |
| 18) MotionZero : Exploiting Motion Priors for Zero-shot Text-to-Video Generation | ArXiv 23 |
| 19) GPT4Motion : Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning | ArXiv 23 |
| 20) Multimodal Procedural Planning via Dual Text-Image Prompting | ArXiv 23 , Github |
| 21) InstructCV : Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists | ICLR 24 Paper , Github |
| 22) DreamSync : Aligning Text-to-Image Generation with Image Understanding Feedback | ArXiv 23 |
| 23) TaleCrafter : Interactive Story Visualization with Multiple Characters | SIGGRAPH Asia 23 Paper |
| 24) Reason out Your Layout : Evoking the Layout Master from Large Language Models for Text-to-Image Synthesis | ArXiv 23 , Github |
| 25) COLE : A Hierarchical Generation Framework for Graphic Design | ArXiv 23 |
| 26) Knowledge-Aware Artifact Image Synthesis with LLM-Enhanced Prompting and Multi-Source Supervision | ArXiv 23 |
| 27) Vlogger : Make Your Dream A Vlog | CVPR 24 Paper , Github |
| 28) GALA3D : Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting | Paper |
| 29) MuLan : Multimodal-LLM Agent for Progressive Multi-Object Diffusion | ArXiv 24 |
| Recaption | |
| Paper | Link |
| 1) LAVIE : High-Quality Video Generation with Cascaded Latent Diffusion Models | ArXiv 23 , GitHub |
| 2) Reuse and Diffuse : Iterative Denoising for Text-to-Video Generation | ArXiv 23 , GitHub |
| 3) CoCa : Contrastive Captioners are Image-Text Foundation Models | ArXiv 22 , Github |
| 4) CogView3 : Finer and Faster Text-to-Image Generation via Relay Diffusion | ArXiv 24 |
| 5) VideoChat : Chat-Centric Video Understanding | CVPR 24 Paper , Github |
| 6) De-Diffusion Makes Text a Strong Cross-Modal Interface | ArXiv 23 |
| 7) HowToCaption : Prompting LLMs to Transform Video Annotations at Scale | ArXiv 23 |
| 8) SELMA : Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data | ArXiv 24 |
| 9) LLMGA : Multimodal Large Language Model based Generation Assistant | ArXiv 23 , Github |
| 10) ELLA : Equip Diffusion Models with LLM for Enhanced Semantic Alignment | ArXiv 24 , Github |
| 11) MyVLM : Personalizing VLMs for User-Specific Queries | ArXiv 24 |
| 12) A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation | ArXiv 23 , Github |
| 13) Mastering Text-to-Image Diffusion : Recaptioning, Planning, and Generating with Multimodal LLMs(-) | ArXiv 24 , Github |
| 14) FlexCap : Generating Rich, Localized, and Flexible Captions in Images | ArXiv 24 |
| 15) Video ReCap : Recursive Captioning of Hour-Long Videos | ArXiv 24 , Github |
| 16) BLIP : Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation | ICML 22 , Github |
| 17) PromptCap : Prompt-Guided Task-Aware Image Captioning | ICCV 23 , Github |
| 18) CIC : A framework for Culturally-aware Image Captioning | ArXiv 24 |
| 19) Improving Image Captioning Descriptiveness by Ranking and LLM-based Fusion | ArXiv 24 |
| 20) FuseCap : Leveraging Large Language Models for Enriched Fused Image Captions | WACV 24 , Github |
| 12 Security | |
| Paper | Link |
| 1) BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset | NeurIPS 23 Paper , Github |
| 2) LIMA: Less Is More for Alignment | NeurIPS 23 Paper |
| 3) Jailbroken: How Does LLM Safety Training Fail? | NeurIPS 23 Paper |
| 4) Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models | CVPR 23 Paper |
| 5) Stable Bias: Evaluating Societal Representations in Diffusion Models | NeurIPS 23 Paper |
| 6) Ablating concepts in text-to-image diffusion models | ICCV 23 Paper |
| 7) Diffusion art or digital forgery? investigating data replication in diffusion models | ICCV 23 Paper , Project |
| 8) Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks | ICCV 20 Paper |
| 9) Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks | ICML 20 Paper |
| 10) A pilot study of query-free adversarial attack against stable diffusion | ICCV 23 Paper |
| 11) Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models | ICCV 23 Paper |
| 12) Erasing Concepts from Diffusion Models | ICCV 23 Paper , Project |
| 13) Ablating Concepts in Text-to-Image Diffusion Models | ICCV 23 Paper , Project |
| 14) BEAVERTAILS: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset | NeurIPS 23 Paper , Project |
| 15) LIMA: Less Is More for Alignment | NeurIPS 23 Paper |
| 16) Stable Bias: Evaluating Societal Representations in Diffusion Models | NeurIPS 23 Paper |
| 17) Threat Model-Agnostic Adversarial Defense using Diffusion Models | Paper |
| 18) How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? | Paper , Github |
| 19) Differentially Private Diffusion Models Generate Useful Synthetic Images | Paper |
| 20) Unsafe Diffusion: On the Generation of Unsafe Images and Hateful Memes From Text-To-Image Models | SIGSAC 23 Paper , Github |
| 21) Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models | Paper , Github |
| 22) Unified Concept Editing in Diffusion Models | WACV 24 Paper , Project |
| 23) Diffusion Model Alignment Using Direct Preference Optimization | ArXiv 23 |
| 24) RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment | TMLR 23 Paper , Github |
| 25) Self-Alignment of Large Language Models via Monopolylogue-based Social Scene Simulation | Paper , Github , Project |
| 13 World Model | |
| Paper | Link |
| 1) NExT-GPT : Any-to-Any Multimodal LLM | ArXiv 23 , GitHub |
| 14 Video Compression ||
| Paper | Link |
| 1) H.261 : Video codec for audiovisual services at p x 64 kbit/s | Paper |
| 2) H.262 : Information technology - Generic coding of moving pictures and associated audio information: Video | Paper |
| 3) H.263 : Video coding for low bit rate communication | Paper |
| 4) H.264 : Overview of the H.264/AVC video coding standard | Paper |
| 5) H.265 : Overview of the High Efficiency Video Coding (HEVC) Standard | Paper |
| 6) H.266 : Overview of the Versatile Video Coding (VVC) Standard and its Applications | Paper |
| 7) DVC : An End-to-end Deep Video Compression Framework | CVPR 19 Paper , GitHub |
| 8) OpenDVC : An Open Source Implementation of the DVC Video Compression Method | Paper , GitHub |
| 9) HLVC : Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement | CVPR 20 Paper , Github |
| 10) RLVC : Learning for Video Compression with Recurrent Auto-Encoder and Recurrent Probability Model | J-STSP 21 Paper , Github |
| 11) PLVC : Perceptual Learned Video Compression with Recurrent Conditional GAN | IJCAI 22 Paper , Github |
| 12) ALVC : Advancing Learned Video Compression with In-loop Frame Prediction | T-CSVT 22 Paper , Github |
| 13) DCVC : Deep Contextual Video Compression | NeurIPS 21 Paper , Github |
| 14) DCVC-TCM : Temporal Context Mining for Learned Video Compression | TM 22 Paper , Github |
| 15) DCVC-HEM : Hybrid Spatial-Temporal Entropy Modelling for Neural Video Compression | MM 22 Paper , Github |
| 16) DCVC-DC : Neural Video Compression with Diverse Contexts | CVPR 23 Paper , Github |
| 17) DCVC-FM : Neural Video Compression with Feature Modulation | CVPR 24 Paper , Github |
| 18) SSF : Scale-Space Flow for End-to-End Optimized Video Compression | CVPR 20 Paper , Github |
| 15 Mamba ||
| 15.1 Theoretical Foundations and Model Architecture | |
| Paper | Link |
| 1) Mamba : Linear-Time Sequence Modeling with Selective State Spaces | ArXiv 23 , Github |
| 2) Efficiently Modeling Long Sequences with Structured State Spaces | ICLR 22 Paper , Github |
| 3) Modeling Sequences with Structured State Spaces | Paper |
| 4) Long Range Language Modeling via Gated State Spaces | ArXiv 22 , GitHub |
| 15.2 Image Generation and Visual Applications | |
| Paper | Link |
| 1) Diffusion Models Without Attention | ArXiv 23 |
| 2) Pan-Mamba : Effective Pan-Sharpening with State Space Model | ArXiv 24 , Github |
| 3) Pretraining Without Attention | ArXiv 22 , Github |
| 4) Block-State Transformers | NIPS 23 Paper |
| 5) Vision Mamba : Efficient Visual Representation Learning with Bidirectional State Space Model | ArXiv 24 , Github |
| 6) VMamba: Visual State Space Model | ArXiv 24 , Github |
| 7) ZigMa: Zigzag Mamba Diffusion Model | ArXiv 24 , Github |
| 15.3 Video Processing and Understanding | |
| Paper | Link |
| 1) Long Movie Clip Classification with State-Space Video Models | ECCV 22 Paper , Github |
| 2) Selective Structured State-Spaces for Long-Form Video Understanding | CVPR 23 Paper |
| 3) Efficient Movie Scene Detection Using State-Space Transformers | CVPR 23 Paper , Github |
| 4) VideoMamba: State Space Model for Efficient Video Understanding | Paper , Github |
| 15.4 Medical Image Processing | |
| Paper | Link |
| 1) Swin-UMamba : Mamba-based UNet with ImageNet-based pretraining | ArXiv 24 , Github |
| 2) MambaIR : A Simple Baseline for Image Restoration with State-Space Model | ArXiv 24 , Github |
| 3) VM-UNet: Vision Mamba UNet for Medical Image Segmentation | ArXiv 24 , Github |
| | |
| 16 Existing high-quality resources | |
| Resources | Link |
| 1) Datawhale - AI视频生成学习 | Feishu doc |
| 2) A Survey on Generative Diffusion Model | TKDE 24 Paper , GitHub |
| 3) Awesome-Video-Diffusion-Models: A Survey on Video Diffusion Models | ArXiv 23 , GitHub |
| 4) Awesome-Text-To-Video: A Survey on Text-to-Video Generation/Synthesis | GitHub |
| 5) video-generation-survey: A reading list of video generation | GitHub |
| 6) Awesome-Video-Diffusion | GitHub |
| 7) Video Generation Task in Papers With Code | Task |
| 8) Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models | ArXiv 24 , GitHub |
| 9) Open-Sora-Plan (PKU-YuanGroup) | GitHub |
| 10) State of the Art on Diffusion Models for Visual Computing | Paper |
| 11) Diffusion Models: A Comprehensive Survey of Methods and Applications | CSUR 24 Paper , GitHub |
| 12) Generate Impressive Videos with Text Instructions: A Review of OpenAI Sora, Stable Diffusion, Lumiere and Comparable | Paper |
| 13) On the Design Fundamentals of Diffusion Models: A Survey | Paper |
| 14) Efficient Diffusion Models for Vision: A Survey | Paper |
| 15) Text-to-Image Diffusion Models in Generative AI: A Survey | Paper |
| 16) Awesome-Diffusion-Transformers | GitHub , Project |
| 17) Open-Sora (HPC-AI Tech) | GitHub , Blog |
| 18) LAVIS - A Library for Language-Vision Intelligence | ACL 23 Paper , GitHub , Project |
| 19) OpenDiT : An Easy, Fast and Memory-Efficient System for DiT Training and Inference | GitHub |
| 20) Awesome-Long-Context | GitHub1 , GitHub2 |
| 21) Lite-Sora | GitHub |
| 22) Mira : A Mini-step Towards Sora-like Long Video Generation | GitHub , Project |
| 17 Efficient Training | |
| 17.1 Parallelism based Approach | |
| 17.1.1 Data Parallelism (DP) | |
| 1) A bridging model for parallel computation | Paper |
| 2) PyTorch Distributed: Experiences on Accelerating Data Parallel Training | VLDB 20 Paper |
| 17.1.2 Model Parallelism (MP) | |
| 1) Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | ArXiv 19 Paper |
| 2) TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models | PMLR 21 Paper |
| 17.1.3 Pipeline Parallelism (PP) | |
| 1) GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism | NeurIPS 19 Paper |
| 2) PipeDream: generalized pipeline parallelism for DNN training | SOSP 19 Paper |
| 17.1.4 Generalized Parallelism (GP) | |
| 1) Mesh-TensorFlow: Deep Learning for Supercomputers | ArXiv 18 Paper |
| 2) Beyond Data and Model Parallelism for Deep Neural Networks | MLSys 19 Paper |
| 17.1.5 ZeRO Parallelism (ZP) | |
| 1) ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | ArXiv 20 |
| 2) DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters | ACM 20 Paper |
| 3) ZeRO-Offload: Democratizing Billion-Scale Model Training | ArXiv 21 |
| 4) PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel | ArXiv 23 |
| 17.2 Non-parallelism based Approach | |
| 17.2.1 Reducing Activation Memory | |
| 1) Gist: Efficient Data Encoding for Deep Neural Network Training | IEEE 18 Paper |
| 2) Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization | MLSys 20 Paper |
| 3) Training Deep Nets with Sublinear Memory Cost | ArXiv 16 Paper |
| 4) Superneurons: dynamic GPU memory management for training deep neural networks | ACM 18 Paper |
| 17.2.2 CPU-Offloading | |
| 1) Training Large Neural Networks with Constant Memory using a New Execution Algorithm | ArXiv 20 Paper |
| 2) vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design | IEEE 16 Paper |
| 17.2.3 Memory Efficient Optimizer | |
| 1) Adafactor: Adaptive Learning Rates with Sublinear Memory Cost | PMLR 18 Paper |
| 2) Memory-Efficient Adaptive Optimization for Large-Scale Learning | Paper |
| 17.3 Novel Structure | |
| 1) ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment | ArXiv 24 Github |
| 18 Efficient Inference | |
| 18.1 Reduce Sampling Steps | |
| 18.1.1 Continuous Steps | |
| 1) Generative Modeling by Estimating Gradients of the Data Distribution | NeurIPS 19 Paper |
| 2) WaveGrad: Estimating Gradients for Waveform Generation | ArXiv 20 |
| 3) Noise Level Limited Sub-Modeling for Diffusion Probabilistic Vocoders | ICASSP 21 Paper |
| 4) Noise Estimation for Generative Diffusion Models | ArXiv 21 |
| 18.1.2 Fast Sampling | |
| 1) Denoising Diffusion Implicit Models | ICLR 21 Paper |
| 2) DiffWave: A Versatile Diffusion Model for Audio Synthesis | ICLR 21 Paper |
| 3) On Fast Sampling of Diffusion Probabilistic Models | ArXiv 21 |
| 4) DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps | NeurIPS 22 Paper |
| 5) DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models | ArXiv 22 |
| 6) Fast Sampling of Diffusion Models with Exponential Integrator | ICLR 22 Paper |
| 18.1.3 Step distillation | |
| 1) On Distillation of Guided Diffusion Models | CVPR 23 Paper |
| 2) Progressive Distillation for Fast Sampling of Diffusion Models | ICLR 22 Paper |
| 3) SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds | NeurIPS 23 Paper |
| 4) Tackling the Generative Learning Trilemma with Denoising Diffusion GANs | ICLR 22 Paper |
| 18.2 Optimizing Inference | |
| 18.2.1 Low-bit Quantization | |
| 1) Q-Diffusion: Quantizing Diffusion Models | CVPR 23 Paper |
| 2) Q-DM: An Efficient Low-bit Quantized Diffusion Model | NeurIPS 23 Paper |
| 3) Temporal Dynamic Quantization for Diffusion Models | NeurIPS 23 Paper |
| 18.2.2 Parallel/Sparse inference | |
| 1) DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models | CVPR 24 Paper |
| 2) Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models | NeurIPS 22 Paper |

Citation

If this project is helpful to your work, please cite it using the following format:

```bibtex
@misc{minisora,
  title={MiniSora},
  author={MiniSora Community},
  url={https://github.com/mini-sora/minisora},
  year={2024}
}
```

```bibtex
@misc{minisora,
  title={Diffusion Model-based Video Generation Models From DDPM to Sora: A Survey},
  author={Survey Paper Group of MiniSora Community},
  url={https://github.com/mini-sora/minisora},
  year={2024}
}
```

Minisora Community

WeChat Group

Star History

How to Contribute to the Mini Sora Community

We greatly appreciate your contributions to the Mini Sora open-source community and helping us make it even better than it is now! For more details, please refer to the Contribution Guidelines

Community contributors;MiniSora: A community aims to explore the implementation path and future development direction of Sora.;diffusion,sora,video-generation | mini-sora/minisora
KaisenAmin/c_std;C++ Standard Library and Qt, Python etc Implementation in C This project aims to reimplement the C++ standard library functionality using the C programming language. It provides C developers with tools and libraries commonly available in C++, enabling better data structure management, algorithm implementation, and feature usage while staying within the C language's ecosystem. A Personal Note from Me I undertake this project out of a deep affection for the C programming language. It is my belief that C remains an essential tool for any computer engineer or programmer, providing the foundation necessary to build efficient and robust software. My love for C drives this endeavor, aiming to enrich the language with the familiar conveniences found in C++. Project Structure The project is organized into several subdirectories, each representing a different module of the standard library: Array : Implements a dynamic array similar to std::array in C++. ForwardList : Implements a singly-linked list analogous to std::forward_list in C++. List : Implements a doubly-linked list similar to std::list in C++. Queue : Implements a queue based on std::queue in C++. Stack : Implements a stack akin to std::stack in C++. String : Implements a basic string class that mimics std::string in C++. Vector : Implements a dynamic array similar to std::vector in C++. PriorityQueue : Implements a priority queue based on std::priority_queue in C++. Deque : Implements a deque based on std::deque in C++. CsvFile , CsvRow : For read, write, and other operations on CSV files. ConfigFile : Provides a flexible solution for handling configuration files in C. Map : A generic implementation of a map, or associative array. Span : Implements a span container for working with contiguous memory. Algorithm : Provides a wide range of generic functionalities similar to std::algorithm in C++. Encoding : A versatile tool for encoding and decoding operations. Numbers : Provides constants for common mathematical values, similar to C++20's <numbers> . Numeric : Implements various numerical operations and algorithms, including those for basic arithmetic, interpolation, and mathematical calculations. This module aims to bring the functionality of the C++ <numeric> header to C, offering features like GCD, LCM, and midpoint calculations, among others. Crypto : Provides tools for cryptographic operations. This module might include functions for encryption and decryption, hashing, and secure random number generation. It's intended to offer C developers basic cryptographic utilities, similar to what might be found in a more comprehensive C++ cryptographic library. Time : The Time library in C is a comprehensive tool designed for handling time-related operations. Inspired by the functionality of more advanced languages, this library provides a robust set of functions for manipulating and comparing time values in C programs. Date : The Date library in C offers a robust solution for handling and manipulating dates in both Gregorian and Persian calendars. This library is particularly useful for applications requiring operations on dates, such as calculations, conversions, and comparisons. Its comprehensive set of functions simplifies complex date-related tasks in C programming. Dir : Dir is used to manipulate path names, access information regarding paths and files, and manipulate the underlying file system. Tuple : Implement tuple similar to std::tuple in C++. 
FileWriter : The FileWriter library is a versatile tool designed for file writing operations in C. It provides functionalities similar to higher-level languages, offering various modes for writing text and binary files, including support for Unicode (UTF-8 and UTF-16). FileReader : The FileReader library is a versatile tool designed for file reading operations in C. fmt : The fmt library is a comprehensive formatting and I/O library for C, inspired by the fmt package in Go. It offers a wide range of formatting options and is designed to work seamlessly with multilingual and Unicode text. Json : The Json library is a comprehensive and efficient tool designed for parsing, generating, and manipulating JSON data in C. It aims to provide a robust and user-friendly interface for handling JSON objects and arrays, enabling seamless integration of JSON functionality into C projects. Log : The Log library offers a flexible and powerful logging system for C projects, enabling detailed message logging across various levels (DEBUG, INFO, WARN, ERROR, FATAL). It supports multiple output destinations (console, file, or both), timestamp inclusion, log level configuration, and keyword filtering to refine log output. Integrated with file I/O operations and customizable settings, it enhances diagnostic and monitoring capabilities in development and production environments. Cli : The CLI library provides a comprehensive solution for parsing command-line arguments in C projects, supporting options, commands, and subcommands with both short and long names. It features advanced functionalities like custom error handling, option grouping, interactive mode, and pipelining support, making it versatile for both simple and complex CLI applications. This library simplifies command-line parsing, validation, and execution, offering a robust foundation for building user-friendly command-line interfaces. Network : The Network library provides support for TCP and UDP protocols, enabling the development of networked applications. It includes functions for creating sockets, binding, listening, connecting, sending, and receiving data over both TCP and UDP connections. Database : The Database library offers tools for interacting with PostgreSQL databases. It includes functions for connecting to databases, executing queries, retrieving results, and handling transactions. This library aims to simplify database operations in C by providing a high-level API. Matrix : The Matrix library provides tools for matrix operations, including creation, manipulation, and mathematical computations on matrices. It is designed to support a wide range of matrix operations needed in scientific computing and data analysis. Random : This Random library provides functionality similar to Python's random module for working with probability, random numbers, and random selection of elements. Statistics : This Statistics library in C provides a set of functions for calculating mathematical statistics of numeric data, much like Python's statistics module. SysInfo : This SysInfo library in C provides a set of functions for gathering detailed information about the system and operating system. It works on Windows and Linux (macOS is not yet supported). Each module in the project comes with a .c source file, a .h header file, and a README.md file. These README files offer detailed explanations of each module's functionality, usage examples, and any other relevant information, ensuring that developers can easily understand and utilize the components of the library.
Compilation and Execution This project utilizes a Python script ( compile.py ) for easy compilation of modules, making the build process straightforward and efficient. Requirements Python 3.10 or higher GCC compiler (ensure it's added to your system's PATH) Linux Users : Make sure to install the necessary development libraries: bash
sudo apt-get install libssl-dev libpq-dev Using the compile.py Script To compile the entire project, simply run the compile.py script with the b argument: bash
python compile.py b This command compiles all source files and produces an executable in the ./build directory. Running the Compiled Program To compile and immediately run the compiled program, use the r argument: bash
python compile.py r On Linux, make sure to set the LD_LIBRARY_PATH before running the program: bash
export LD_LIBRARY_PATH=./build:$LD_LIBRARY_PATH
./build/main Compiling to Shared Libraries Only To compile only the shared libraries (DLLs or .so files) for each module, use the l argument: bash
python compile.py l This command compiles all source files into shared libraries in the ./build directory without producing an executable. Adding New Modules If you add new modules or directories containing .c files, simply include their paths in the source_directories list within the compile.py script. The script automatically finds and compiles all .c files in the specified directories. Streamlined Build Process The use of compile.py eliminates the need for traditional makefiles or manual compilation commands, providing a simple and unified build process. The script handles dependencies, includes, and linking, ensuring a hassle-free compilation experience. Manual Compilation Using GCC For developers who prefer manual compilation or need to integrate the project into other build systems, the source files can be compiled using the GCC command line. While the compile.py script is recommended for its convenience and automated handling of file dependencies, manual compilation offers flexibility for advanced use cases. Requirements for Manual Compilation GCC compiler (ensure it's added to your system's PATH) C17 standard support in GCC Compiling with GCC To manually compile a specific module or your entire project, you can use the GCC command with the -std=c17 flag to ensure compliance with the C17 standard. Here's an example command to compile a program with the vector module: bash
gcc -std=c17 -O3 -march=native -flto -funroll-loops -Wall -Wextra -pedantic -s -o your_program your_program.c vector.c In this command:
- -std=c17 specifies the use of the C17 standard.
- -O3 , -march=native , -flto , and -funroll-loops are optimization flags.
- -Wall , -Wextra , and -pedantic enable additional warnings for better code quality.
- -s strips the binary for a smaller executable size.
- your_program.c is your main C source file.
- vector.c is the source file for the vector module (include other .c files as needed).
- your_program is the output executable file. Customizing the Compilation You can modify the GCC command to suit your specific requirements, such as including additional modules, linking libraries, or adjusting optimization levels. This approach offers full control over the compilation process, allowing you to tailor it to your project's needs. Individual READMEs for Libraries Each library module comes with its own README.md file, providing detailed instructions, sample code, function descriptions, and other relevant usage information. Contribution Contributions are welcome. Whether it's extending existing libraries, improving performance, or fixing bugs, your help is appreciated. Fork the repository, make your changes, and submit a pull request. License This project is open-source and available under ISC License .;Implementation of C++ standard libraries in C;[] | KaisenAmin/c_std |
rotemweiss57/gpt-newspaper;GPT Newspaper Welcome to the GPT Newspaper project, an innovative autonomous agent designed to create personalized newspapers tailored to user preferences. GPT Newspaper revolutionizes the way we consume news by leveraging the power of AI to curate, write, design, and edit content based on individual tastes and interests. 🔍 Overview GPT Newspaper consists of six specialized sub-agents in LangChain's new LangGraph Library : Search Agent : Scours the web for the latest and most relevant news. Curator Agent : Filters and selects news based on user-defined preferences and interests. Writer Agent : Crafts engaging and reader-friendly articles. Critique Agent Provide feedback to the writer until article is approved. Designer Agent : Layouts and designs the articles for an aesthetically pleasing reading experience. Editor Agent : Constructs the newspaper based on produced articles. Publisher Agent Publishes the newspaper to the frontend or desired service Each agent plays a critical role in delivering a unique and personalized newspaper experience. Demo https://github.com/assafelovic/gpt-newspaper/assets/91344214/7f265369-1293-4d95-9be5-02070f12c67e 🌟 Features Personalized Content : Get news that aligns with your interests and preferences. Diverse Sources : Aggregates content from a wide range of reputable news sources. Engaging Design : Enjoy a visually appealing layout and design. Quality Assurance : Rigorous editing ensures reliable and accurate news reporting. User-Friendly Interface : Easy-to-use platform for setting preferences and receiving your newspaper. 🛠️ How It Works Setting Preferences : Users input their interests, preferred topics, and news sources. Automated Curation : The Search and Curator Agents find and select news stories. Content Creation : The Writer Agent drafts articles, which are then designed by the Designer Agent. Newspaper Design : The Editor Agent reviews and finalizes the content. Delivery : Users receive their personalized newspaper to their mailbox. 🚀 Getting Started Prerequisites Tavily API Key - Sign Up OpenAI API Key - Sign Up Installation Clone the repo sh
```sh
git clone https://github.com/rotemweiss57/gpt-newspaper.git
```

Export your API Keys

```sh
export TAVILY_API_KEY=<YOUR_TAVILY_API_KEY>
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
```

Install Requirements

```sh
pip install -r requirements.txt
```

Run the app

```sh
python app.py
```

Open the app in your browser: http://localhost:5000/

Enjoy!

🤝 Contributing

Interested in contributing to GPT Newspaper? We welcome contributions of all kinds! Check out our Contributor's Guide to get started.

🛡️ Disclaimer

GPT Newspaper is an experimental project and provided "as-is" without any warranty. It's intended for personal use and not as a replacement for professional news outlets.

📩 Contact Us

For support or inquiries, please reach out to us: Email

Join us in redefining the future of news consumption with GPT Newspaper!;GPT based autonomous agent designed to create personalized newspapers tailored to user preferences. ;[] | rotemweiss57/gpt-newspaper
time-series-foundation-models/lag-llama;Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting Lag-Llama is the first open-source foundation model for time series forecasting ! [ Tweet Thread ] [ Model Weights ] [ Colab Demo 1: Zero-Shot Forecasting ] [ Colab Demo 2: (Preliminary Finetuning) ] [ Paper ] [ Video ] Updates : 16-Apr-2024 : Released pretraining and finetuning scripts to replicate the experiments in the paper. See Reproducing Experiments in the Paper for details. 9-Apr-2024 : We have released a 15-minute video 🎥 on Lag-Llama on YouTube . 5-Apr-2024 : Added a section in Colab Demo 1 on the importance of tuning the context length for zero-shot forecasting. Added a best practices section in the README; added recommendations for finetuning. These recommendations will be demonstrated with an example in Colab Demo 2 soon. 4-Apr-2024 : We have updated our requirements file with new versions of certain packages. Please update/recreate your environments if you have previously used the code locally. 7-Mar-2024 : We have released a preliminary Colab Demo 2 for finetuning. Please note this is a preliminary tutorial. We recommend taking a look at the best practices if you are finetuning the model or using it for benchmarking. 17-Feb-2024 : We have released a new updated Colab Demo 1 for zero-shot forecasting that shows how one can load time series of different formats. 7-Feb-2024 : We released Lag-Llama, with open-source model checkpoints and a Colab Demo for zero-shot forecasting. Current Features : 💫 Zero-shot forecasting on a dataset of any frequency for any prediction length , using Colab Demo 1. 💫 Finetuning on a dataset using Colab Demo 2 . 💫 Reproducing experiments in the paper using the released scripts. See Reproducing Experiments in the Paper for details. Note : Please see the best practices section when using the model for zero-shot prediction and finetuning. Reproducing Experiments in the Paper To replicate the pretraining setup used in the paper, please see the pretraining script . Once a model is pretrained, instructions to finetune it with the setup in the paper can be found in the finetuning script . Best Practices Here are some general tips in using Lag-Llama. General Information Lag-Llama is a probabilistic forecasting model trained to output a probability distribution for each timestep to be predicted. For your own specific use-case, we would recommend benchmarking the zero-shot performance of the model on your data first, and then finetuning if necessary. As we show in our paper, Lag-Llama has strong zero-shot capabilities, but performs best when finetuned. The more data you finetune on, the better. For specific tips on applying on model zero-shot or on finetuning, please refer to the sections below. Zero-Shot Forecasting Importantly, we recommend trying different context lengths (starting from $32$ which it was trained on) and identifying what works best for your data. As we show in this section of the zero-shot forecasting demo , the model's zero-shot performance improves as the context length is increased, until a certain context length which may be specific to your data. Further, we recommend enabling RoPE scaling for the model to work well with context lengths larger than what it was trained on. 
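A minimal sketch of that context-length sweep, assuming you already have a `forecast_and_score` helper of your own (for example, a thin wrapper around the estimator used in Colab Demo 1 that returns CRPS or another validation metric) — the helper is an assumption here and is not part of the lag-llama API:

```python
# Hypothetical sweep over context lengths for zero-shot forecasting.
# `forecast_and_score(dataset, context_length)` is a placeholder you would
# implement around your own evaluation loop; it is NOT a lag-llama function.
def forecast_and_score(dataset, context_length: int) -> float:
    """Return a validation metric (lower is better) for the given context length."""
    raise NotImplementedError  # wrap the Colab Demo 1 estimator here

def pick_context_length(dataset, candidates=(32, 64, 128, 256, 512, 1024)):
    # Score every candidate and keep the one with the best validation metric.
    scores = {ctx: forecast_and_score(dataset, ctx) for ctx in candidates}
    best_ctx = min(scores, key=scores.get)
    return best_ctx, scores
```

When evaluating candidates larger than the pretraining context of 32, remember to enable RoPE scaling as noted above.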
Fine-Tuning If you are trying to benchmark the performance of the model under finetuning, or trying to obtain maximum performance from the model: We recommend tuning two important hyperparameters for each dataset that you finetune on: the context length (suggested values: $32$, $64$, $128$, $256$, $512$, $1024$) and the learning rate (suggested values: $10^{-2}$, $5 * 10^{-3}$, $10^{-3}$, $5 * 10^{-3}$, $1 * 10^{-4}$, $5 * 10^{-4}$). We also highly recommend using a validation split of your dataset to early stop your model, with an early stopping patience of 50 epochs. Contact We are dedicated to ensuring the reproducibility of our results, and would be happy to help clarify questions about benchmarking our model or about the experiments in the paper.
The quickest way to reach us would be by email. Please email both :
1. Arjun Ashok - arjun [dot] ashok [at] servicenow [dot] com
2. Kashif Rasul - kashif [dot] rasul [at] gmail [dot] com If you have questions about the model usage (or) code (or) have specific errors (eg. using it with your own dataset), it would be best to create an issue in the GitHub repository. Citing this work Please use the following Bibtex entry to cite Lag-Llama. @misc{rasul2024lagllama,
title={Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting},
author={Kashif Rasul and Arjun Ashok and Andrew Robert Williams and Hena Ghonia and Rishika Bhagwatkar and Arian Khorasani and Mohammad Javad Darvishi Bayazi and George Adamopoulos and Roland Riachi and Nadhir Hassen and Marin Biloš and Sahil Garg and Anderson Schneider and Nicolas Chapados and Alexandre Drouin and Valentina Zantedeschi and Yuriy Nevmyvaka and Irina Rish},
year={2024},
eprint={2310.08278},
archivePrefix={arXiv},
primaryClass={cs.LG}
};Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting;forecasting,foundation-models,time-series,time-series-forecasting,timeseries,timeseries-forecasting,llama,time-series-prediction,time-series-transformer,transformers | time-series-foundation-models/lag-llama |
polymorphicshade/Tubular;Tubular A fork of NewPipe ( Github ) that implements SponsorBlock ( Github ) and ReturnYouTubeDislike ( Github ). Download the APK here . > [!warning]
> Because the database schema for Tubular is different from NewPipe's, you currently cannot export the Tubular database and import it into NewPipe.
>
> This will be changed in the future.

## To Do
Things I'll be working on next (not in any particular order):
- [ ] persist custom SponsorBlock segments in the database
- [ ] add SponsorBlock's "Exclusive Access" / "Sponsored Video feature"
- [ ] add SponsorBlock's chapters feature
- [ ] add a clickbait-remover
- [ ] add keyword/regex filtering
- [ ] add subscription importing with a YouTube login cookie
- [ ] add algorithmic results with a YouTube login cookie
- [ ] add offline YouTube playback
## License
[![GNU GPLv3](https://www.gnu.org/graphics/gplv3-127x51.png)](https://www.gnu.org/licenses/gpl-3.0.en.html);A fork of NewPipe that implements SponsorBlock and ReturnYouTubeDislike.;[] | polymorphicshade/Tubular |
AI4Finance-Foundation/FinRobot;FinRobot: An Open-Source AI Agent Platform for Financial Applications using Large Language Models FinRobot is an AI Agent Platform that transcends the scope of FinGPT, representing a comprehensive solution meticulously designed for financial applications. It integrates a diverse array of AI technologies , extending beyond mere language models. This expansive vision highlights the platform's versatility and adaptability, addressing the multifaceted needs of the financial industry. Concept of AI Agent : an AI Agent is an intelligent entity that uses large language models as its brain to perceive its environment, make decisions, and execute actions. Unlike traditional artificial intelligence, AI Agents possess the ability to independently think and utilize tools to progressively achieve given objectives. Whitepaper of FinRobot FinRobot Ecosystem The overall framework of FinRobot is organized into four distinct layers, each designed to address specific aspects of financial AI processing and application: Financial AI Agents Layer : The Financial AI Agents Layer now includes Financial Chain-of-Thought (CoT) prompting, enhancing complex analysis and decision-making capacity. Market Forecasting Agents, Document Analysis Agents, and Trading Strategies Agents utilize CoT to dissect financial challenges into logical steps, aligning their advanced algorithms and domain expertise with the evolving dynamics of financial markets for precise, actionable insights. Financial LLMs Algorithms Layer : The Financial LLMs Algorithms Layer configures and utilizes specially tuned models tailored to specific domains and global market analysis. LLMOps and DataOps Layers : The LLMOps layer implements a multi-source integration strategy that selects the most suitable LLMs for specific financial tasks, utilizing a range of state-of-the-art models. Multi-source LLM Foundation Models Layer : This foundational layer supports the plug-and-play functionality of various general and specialized LLMs. FinRobot: Agent Workflow Perception : This module captures and interprets multimodal financial data from market feeds, news, and economic indicators, using sophisticated techniques to structure the data for thorough analysis. Brain : Acting as the core processing unit, this module perceives data from the Perception module with LLMs and utilizes Financial Chain-of-Thought (CoT) processes to generate structured instructions. Action : This module executes instructions from the Brain module, applying tools to translate analytical insights into actionable outcomes. Actions include trading, portfolio adjustments, generating reports, or sending alerts, thereby actively influencing the financial environment. FinRobot: Smart Scheduler The Smart Scheduler is central to ensuring model diversity and optimizing the integration and selection of the most appropriate LLM for each task.
* Director Agent : This component orchestrates the task assignment process, ensuring that tasks are allocated to agents based on their performance metrics and suitability for specific tasks.
* Agent Registration : Manages the registration and tracks the availability of agents within the system, facilitating an efficient task allocation process.
* Agent Adaptor : Tailor agent functionalities to specific tasks, enhancing their performance and integration within the overall system.
* Task Manager : Manages and stores different general and fine-tuned LLMs-based agents tailored for various financial tasks, updated periodically to ensure relevance and efficacy. File Structure The main folder finrobot has three subfolders agents, data_source, functional . FinRobot
├── finrobot (main folder)
│ ├── agents
│ ├── agent_library.py
│ └── workflow.py
│ ├── data_source
│ ├── finnhub_utils.py
│ ├── finnlp_utils.py
│ ├── fmp_utils.py
│ ├── sec_utils.py
│ └── yfinance_utils.py
│ ├── functional
│ ├── analyzer.py
│ ├── charting.py
│ ├── coding.py
│ ├── quantitative.py
│ ├── reportlab.py
│ └── text.py
│ ├── toolkits.py
│ └── utils.py
│
├── configs
├── experiments
├── tutorials_beginner (hands-on tutorial)
│ ├── agent_fingpt_forecaster.ipynb
│ └── agent_annual_report.ipynb
├── tutorials_advanced (advanced tutorials for potential finrobot developers)
│ ├── agent_trade_strategist.ipynb
│ ├── agent_fingpt_forecaster.ipynb
│ ├── agent_annual_report.ipynb
│ ├── lmm_agent_mplfinance.ipynb
│ └── lmm_agent_opt_smacross.ipynb
├── setup.py
├── OAI_CONFIG_LIST_sample
├── config_api_keys_sample
├── requirements.txt
└── README.md Installation: 1. (Recommended) Create a new virtual environment shell
conda create --name finrobot python=3.10
conda activate finrobot 2. download the FinRobot repo use terminal or download it manually shell
git clone https://github.com/AI4Finance-Foundation/FinRobot.git
cd FinRobot 3. install finrobot & dependencies from source or pypi get our latest release from pypi bash
pip install -U finrobot or install from this repo directly pip install -e . 4. modify OAI_CONFIG_LIST_sample file shell
1) rename OAI_CONFIG_LIST_sample to OAI_CONFIG_LIST
2) remove the four lines of comment within the OAI_CONFIG_LIST file
3) add your own openai api-key <your OpenAI API key here> 5. modify config_api_keys_sample file shell
1) rename config_api_keys_sample to config_api_keys
2) remove the comment within the config_api_keys file
3) add your own finnhub-api "YOUR_FINNHUB_API_KEY"
4) add your own financialmodelingprep and sec-api keys "YOUR_FMP_API_KEY" and "YOUR_SEC_API_KEY" (for financial report generation) 6. start navigating the tutorials or the demos below: ``` find these notebooks in tutorials 1) agent_annual_report.ipynb
2) agent_fingpt_forecaster.ipynb
3) agent_trade_strategist.ipynb
4) lmm_agent_mplfinance.ipynb
5) lmm_agent_opt_smacross.ipynb
``` Demos 1. Market Forecaster Agent (Predict Stock Movements Direction) Takes a company's ticker symbol, recent basic financials, and market news as input and predicts its stock movements. Import python
import autogen
from finrobot.utils import get_current_date, register_keys_from_json
from finrobot.agents.workflow import SingleAssistant Config
```python Read OpenAI API keys from a JSON file llm_config = {
"config_list": autogen.config_list_from_json(
"../OAI_CONFIG_LIST",
filter_dict={"model": ["gpt-4-0125-preview"]},
),
"timeout": 120,
"temperature": 0,
} Register FINNHUB API keys register_keys_from_json("../config_api_keys") 3. Run python
company = "NVDA" assitant = SingleAssistant(
"Market_Analyst",
llm_config,
# set to "ALWAYS" if you want to chat instead of simply receiving the prediciton
human_input_mode="NEVER",
)
assitant.chat(
f"Use all the tools provided to retrieve information available for {company} upon {get_current_date()}. Analyze the positive developments and potential concerns of {company} "
"with 2-4 most important factors respectively and keep them concise. Most factors should be inferred from company related news. "
f"Then make a rough prediction (e.g. up/down by 2-3%) of the {company} stock price movement for next week. Provide a summary analysis to support your prediction."
)
```
4. Result 2. Financial Analyst Agent for Report Writing (Equity Research Report) Take a company's 10-k form, financial data, and market data as input and output an equity research report Import python
import os
import autogen
from textwrap import dedent
from finrobot.utils import register_keys_from_json
from finrobot.agents.workflow import SingleAssistantShadow Config
```python
llm_config = {
"config_list": autogen.config_list_from_json(
"../OAI_CONFIG_LIST",
filter_dict={
"model": ["gpt-4-0125-preview"],
},
),
"timeout": 120,
"temperature": 0.5,
}
register_keys_from_json("../config_api_keys") Intermediate strategy modules will be saved in this directory work_dir = "../report"
os.makedirs(work_dir, exist_ok=True) assistant = SingleAssistantShadow(
"Expert_Investor",
llm_config,
max_consecutive_auto_reply=None,
human_input_mode="TERMINATE",
) 3. Run python
company = "Microsoft"
fyear = "2023" message = dedent(
f"""
With the tools you've been provided, write an annual report based on {company}'s {fyear} 10-k report, format it into a pdf.
Pay attention to the followings:
- Explicitly explain your working plan before you kick off.
- Use tools one by one for clarity, especially when asking for instructions.
- All your file operations should be done in "{work_dir}".
- Display any image in the chat once generated.
- All the paragraphs should combine between 400 and 450 words, don't generate the pdf until this is explicitly fulfilled.
"""
) assistant.chat(message, use_cache=True, max_turns=50,
summary_method="last_msg")
```
4. Result Financial CoT :
1. Gather Preliminary Data : 10-K report, market data, financial ratios
2. Analyze Financial Statements : balance sheet, income statement, cash flow
3. Company Overview and Performance : company description, business highlights, segment analysis
4. Risk Assessment : assess risks
5. Financial Performance Visualization : plot PE ratio and EPS
6. Synthesize Findings into Paragraphs : combine all parts into a coherent summary
7. Generate PDF Report : use tools to generate PDF automatically
8. Quality Assurance : check word counts 3. Trade Strategist Agent with multimodal capabilities AI Agent Papers [Stanford University + Microsoft Research] Agent AI: Surveying the Horizons of Multimodal Interaction [Stanford University] Generative Agents: Interactive Simulacra of Human Behavior [Fudan NLP Group] The Rise and Potential of Large Language Model Based Agents: A Survey [Fudan NLP Group] LLM-Agent-Paper-List [Tsinghua University] Large Language Models Empowered Agent-based Modeling and Simulation: A Survey and Perspectives [Renmin University] A Survey on Large Language Model-based Autonomous Agents [Nanyang Technological University] FinAgent: A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist AI Agent Blogs and Videos [Medium] An Introduction to AI Agents [Medium] Unmasking the Best Character AI Chatbots | 2024 [big-picture] ChatGPT, Next Level: Meet 10 Autonomous AI Agents [TowardsDataScience] Navigating the World of LLM Agents: A Beginner’s Guide [YouTube] Introducing Devin - The "First" AI Agent Software Engineer AI Agent Open-Source Framework & Tool AutoGPT (163k stars) is a tool for everyone to use, aiming to democratize AI, making it accessible for everyone to use and build upon. LangChain (87.4k stars) is a framework for developing context-aware applications powered by language models, enabling them to connect to sources of context and rely on the model's reasoning capabilities for responses and actions. MetaGPT (41k stars) is a multi-agent open-source framework that assigns different roles to GPTs, forming a collaborative software entity to execute complex tasks. dify (34.1.7k stars) is an LLM application development platform. It integrates the concepts of Backend as a Service and LLMOps, covering the core tech stack required for building generative AI-native applications, including a built-in RAG engine AutoGen (27.4k stars) is a framework for developing LLM applications with conversational agents that collaborate to solve tasks. These agents are customizable, support human interaction, and operate in modes combining LLMs, human inputs, and tools. ChatDev (24.1k stars) is a framework that focuses on developing conversational AI Agents capable of dialogue and question-answering. It provides a range of pre-trained models and interactive interfaces, facilitating the development of customized chat Agents for users. BabyAGI (19.5k stars) is an AI-powered task management system, dedicated to building AI Agents with preliminary general intelligence. CrewAI (16k stars) is a framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. SuperAGI (14.8k stars) is a dev-first open-source autonomous AI agent framework enabling developers to build, manage & run useful autonomous agents. FastGPT (14.6k stars) is a knowledge-based platform built on the LLM, offers out-of-the-box data processing and model invocation capabilities, allows for workflow orchestration through Flow visualization. XAgent (7.8k stars) is an open-source experimental Large Language Model (LLM) driven autonomous agent that can automatically solve various tasks. Bisheng (7.8k stars) is a leading open-source platform for developing LLM applications. Voyager (5.3k stars) An Open-Ended Embodied Agent with Large Language Models. 
CAMEL (4.7k stars) is a framework that offers a comprehensive set of tools and algorithms for building multimodal AI Agents, enabling them to handle various data forms such as text, images, and speech. Langfuse (4.3k stars) is a language fusion framework that can integrate the language abilities of multiple AI Agents, enabling them to simultaneously possess multilingual understanding and generation capabilities. Disclaimer : The codes and documents provided herein are released under the Apache-2.0 license. They should not be construed as financial counsel or recommendations for live trading. It is imperative to exercise caution and consult with qualified financial professionals prior to any trading or investment actions.;FinRobot: An Open-Source AI Agent Platform for Financial Applications using LLMs 🚀 🚀 🚀 ;aiagent,fingpt,chatgpt,finance,large-language-models,multimodal-deep-learning,prompt-engineering,robo-advisor | AI4Finance-Foundation/FinRobot |
stack-auth/pgmock;pgmock Demo — Discord pgmock is an in-memory PostgreSQL mock server for unit and E2E tests. It requires no external dependencies and runs entirely within WebAssembly on both Node.js and the browser. Installation bash
npm install pgmock If you'd like to run pgmock in a browser, see the Browser support section for detailed instructions. Getting started You can run an in-memory server like so: ```typescript
import { PostgresMock } from "pgmock"; const mock = await PostgresMock.create();
const connectionString = await mock.listen(5432);
``` Recommended: If you use node-postgres ( pg on npm), pgmock provides you with a configuration object that doesn't require you to serve on a port (and also works in the browser): ```typescript
import * as pg from "pg"; const mock = await PostgresMock.create();
const client = new pg.Client(mock.getNodePostgresConfig()); await client.connect();
console.log(await client.query('SELECT $1::text as message', ['Hello world!']));
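// In a real test you would also close the client when done
// (client.end() is standard node-postgres cleanup; the pgmock-side
// cleanup with mock.destroy() is described just below):
await client.end();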
``` It is considered good practice to destroy the mock server after you are done with it to free up resources: typescript
mock.destroy(); Documentation Check the PostgresMock source file for a list of all available methods and their documentation. Browser support pgmock fully supports browser environments. While webapps can't listen to TCP ports, you can still use PostgresMock.createSocket and the node-postgres configuration. However, if your bundler statically analyzes imports, the default configuration may show a warning because of missing (optional) Node.js modules. Check examples/web-demo/next.config.mjs for an example on how to configure Webpack for bundling. If you're only looking to run a database in the browser, you might want to consider pglite instead. It is more performant and lightweight, but only has a limited feature set. pgmock is designed for feature parity with production PostgreSQL environments, as you would want in a testing environment. How does it work? There are two approaches to run Postgres in WebAssembly; by forking it to support WASM natively or by emulating the Postgres server in an x86 emulator . The former is more performant and uses considerably less memory, but only supports single-user mode (no connections), and no extensions. To prevent discrepancies between testing and production, and because performance is not usually a concern in tests, pgmock currently uses the latter approach. In the mid-term future, once native Postgres WASM forks mature, we plan to make both options available, and eventually, switch to native WASM as default. We don't expect there to be many breaking changes besides the APIs inside PostgresMock.subtle . pgmock differs from previous Postgres-in-the-browser projects by providing full feature-compatibility entirely inside the JavaScript runtime, without depending on a network proxy for communication. We did this by simulating a network stack in JavaScript that behaves like a real network, that can simulate TCP connections even on platforms that do not allow raw socket access. Wanna contribute? Great! We have a Discord server where you can talk to us. Can this run other Docker images or databases? In theory, yes. I just haven't tested them. Ping me on our Discord server if you're interested. Acknowledgements v86 , the x86 emulator which makes this possible Supabase & Snaplet for building their own approach of running Postgres inside WebAssembly, which this is based on Stackframe for keeping me on a payroll while I was building pgmock;In-memory Postgres for unit/E2E tests;[] | stack-auth/pgmock |
jgravelle/AutoGroq;(Stuff's happenin'...) NOTE: NEW AUTOGEN IS NOT (YET) COMPATIBLE WITH AutoGroq™ You'll need to install the PREVIOUS Autogen with: pip install autogenstudio==0.0.56 (h/t - Scruff) P.S. -
Having an issue with AutogenStudio? If you installed the newest version (0.1.1), it won't work. Don't even try. Example of the problem: Console error: "Error while getting items: Workflow 'groupchat' is not among the defined enum values. Enum name: workflowtype. Possible values: autonomous, sequential" Steps to fix (a consolidated command sketch follows the list):
- Go to your conda environment and uninstall the new version of autogen - pip uninstall autogenstudio
- Navigate to your .autogenstudio folder (should usually be in your home dir) and delete the database.sqlite file (it's already been ruined with adding new tables over the existing ones, so back it up if you had any good data in there and upgraded... and maybe you can salvage it manually, I dunno)
- Install the older version - pip install autogenstudio==0.0.56
- Run autogenstudio again, problem should be fixed thanks to Luis2k
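For convenience, the same steps as shell commands — a sketch only: the `~/.autogenstudio` location is the usual default but may differ on your machine, and the backup filename is just a suggestion:

```bash
# 1) Remove the incompatible AutogenStudio release
pip uninstall autogenstudio
# 2) Move the broken database out of the way (keep a backup rather than deleting if unsure)
mv ~/.autogenstudio/database.sqlite ~/.autogenstudio/database.sqlite.bak
# 3) Install the known-good version
pip install autogenstudio==0.0.56
# 4) Launch AutogenStudio again (use whatever command you normally start it with)
autogenstudio ui
```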
= = = = = = = = = Our next-generation sandbox is online at: https://autogrok.streamlit.app/ It works something like this: UNDER CONSTRUCTION! If things are wonky, that's why. Do this:
1) Set the DEBUG flag to 'True'
2) Run it locally; and
3) Watch the explanation:
https://www.youtube.com/watch?v=5cHhvIlUS9Q NEW: Py2Md! Users wanted the entire code available as markup (versus PDF) Here's the new utility we use to do it: https://github.com/jgravelle/Py2md It's the easiest way to get your code into ChatGPT, Claude, etc. AutoGroq™ AutoGroq is a groundbreaking tool that revolutionizes the way users interact with AI assistants. By dynamically generating tailored teams of AI agents based on your project requirements, AutoGroq eliminates the need for manual configuration and allows you to tackle any question, problem, or project with ease and efficiency. NEW THIS WEEK: SKILL GENERATION! Why AutoGroq? AutoGroq was born out of the realization that the traditional approach to building AI agents was backwards. Instead of creating agents in anticipation of problems, AutoGroq uses the syntax of the users' needs as the basis for constructing the perfect AI team. It's how we wished Autogen worked from the very beginning. With AutoGroq, a fully configured workflow, team of agents, and skillset are just a few clicks and a couple of minutes away, without any programming necessary. Our rapidly growing user base of nearly 8000 developers is a testament to the power and effectiveness of AutoGroq. Key Features Dynamic Expert Agent Generation : AutoGroq automatically creates expert agents specialized in various domains or topics, ensuring you receive the most relevant support for your inquiries. Dynamic Workflow Generation : With AutoGroq, you're just minutes away from having a custom team of experts working on your project. Watch our video tutorial to see it in action! Natural Conversation Flow : Engage in intuitive and contextually aware conversations with AutoGroq's expert agents, facilitating a seamless exchange of information. Code Snippet Extraction : AutoGroq intelligently extracts and presents code snippets within a dedicated "Whiteboard" section, making it convenient to reference, copy, or modify code during your interaction. Flexible Agent Management : Customize your panel of expert agents according to your evolving project needs. Add new agents, modify their expertise, or remove them as required. Advanced Prompt Rephrasing : AutoGroq employs sophisticated natural language processing techniques to rephrase user inputs, enhancing clarity and ensuring accurate responses from expert agents. Bulk File Upload to Autogen : With AutoGroq, you can import multiple agents, skills, and workflows into Autogen with a single click, saving you time and effort. Support for Multiple LLMs : AutoGroq supports Groq, ChatGPT, Ollama, and more, making it compatible with a wide range of language models. You can even create your own provider model to integrate with your preferred LLM. Skill Integration : Extend your agents' capabilities by adding custom skills. Simply drop a valid skill file into the skills folder, and it will be automatically available for your agents to use. Getting Started To get started with AutoGroq, follow these steps: Install Autogen following Matt Berman's instructions: https://www.youtube.com/watch?v=mUEFwUU0IfE Install Mini-conda: https://docs.anaconda.com/free/miniconda/miniconda-install/ Open a command prompt and run the following commands:
md c:\AutoGroq
cd c:\AutoGroq
conda create -n AutoGroq python=3.11
conda activate AutoGroq
git clone https://github.com/jgravelle/AutoGroq.git
cd AutoGroq
pip install -r requirements.txt streamlit run c:\AutoGroq\AutoGroq\main.py Configuration To customize the configurations for your local environment, follow these steps: Create a new file called config_local.py in the same directory as config.py . Copy the contents of config_local.py.example into config_local.py . Modify the values in config_local.py according to your specific setup, such as API keys and URLs. Save the config_local.py file. Note: The config_local.py file is not tracked by Git, so your customizations will not be overwritten when pulling updates from the repository. How It Works Initiation : Begin by entering your query or request in the designated input area. Engagement : Click the "Begin" button to initiate the interaction. AutoGroq will rephrase your request and generate the appropriate expert agents. Interaction : Select an expert agent to receive specialized assistance tailored to your needs. Dialogue : Continue the conversation by providing additional input or context as required, guiding the flow of information. Review : The "Discussion" section will display your dialogue history, while the "Whiteboard" section will showcase any extracted code snippets. Reset : Use the "Reset" button to clear the current conversation and start a new one whenever needed. Live Demo and Video Tutorial Experience AutoGroq's capabilities firsthand by accessing our online beta version: AutoGroq Live Demo For a step-by-step guide on using AutoGroq, watch our updated video tutorials: AutoGroq Video Tutorials Contributing We value your feedback and contributions in shaping the future of AutoGroq. If you encounter any issues or have ideas for new features, please share them with us on our GitHub repository . License AutoGroq is proudly open-source and released under the MIT License . Thank you for choosing AutoGroq as your AI-powered conversational assistant. We are committed to redefining the boundaries of what AI can achieve and empowering you to tackle any question, problem, or project with ease and efficiency. Copyright (c)2024 J. Gravelle Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 1. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 2. Any modifications made to the Software must clearly indicate that they are derived from the original work, and the name of the original author (J. Gravelle) must remain intact. 3. Redistributions of the Software in source code form must also include a prominent notice that the code has been modified from the original. THE SOFTWARE IS PROVIDED "AS IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.;AutoGroq is a groundbreaking tool that revolutionizes the way users interact with Autogen™ and other AI assistants. 
By dynamically generating tailored teams of AI agents based on your project requirements, AutoGroq eliminates the need for manual configuration and allows you to tackle any question, problem, or project with ease and efficiency.;agents,ai,artificial-intelligence,autogen,crewai,groq,llm | jgravelle/AutoGroq |
adrianhajdin/banking;A Fintech Bank Application Build this project step by step with our detailed tutorial on JavaScript Mastery YouTube. Join the JSM family! 📋 Table of Contents 🤖 Introduction ⚙️ Tech Stack 🔋 Features 🤸 Quick Start 🕸️ Code Snippets to Copy 🔗 Assets 🚀 More 🚨 Tutorial This repository contains the code corresponding to an in-depth tutorial available on our YouTube channel, JavaScript Mastery . If you prefer visual learning, this is the perfect resource for you. Follow our tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner! 🤖 Introduction Built with Next.js, Horizon is a financial SaaS platform that connects to multiple bank accounts, displays transactions in real-time, allows users to transfer money to other platform users, and manages their finances altogether. If you're getting started and need assistance or face any bugs, join our active Discord community with over 34k+ members. It's a place where people help each other out. ⚙️ Tech Stack Next.js TypeScript Appwrite Plaid Dwolla React Hook Form Zod TailwindCSS Chart.js ShadCN 🔋 Features 👉 Authentication : An ultra-secure SSR authentication with proper validations and authorization 👉 Connect Banks : Integrates with Plaid for multiple bank account linking 👉 Home Page : Shows general overview of user account with total balance from all connected banks, recent transactions, money spent on different categories, etc 👉 My Banks : Check the complete list of all connected banks with respective balances, account details 👉 Transaction History : Includes pagination and filtering options for viewing transaction history of different banks 👉 Real-time Updates : Reflects changes across all relevant pages upon connecting new bank accounts. 👉 Funds Transfer : Allows users to transfer funds using Dwolla to other accounts with required fields and recipient bank ID. 👉 Responsiveness : Ensures the application adapts seamlessly to various screen sizes and devices, providing a consistent user experience across desktop, tablet, and mobile platforms. and many more, including code architecture and reusability. 🤸 Quick Start Follow these steps to set up the project locally on your machine. Prerequisites Make sure you have the following installed on your machine: Git Node.js npm (Node Package Manager) Cloning the Repository bash
git clone https://github.com/adrianhajdin/banking.git
cd banking Installation Install the project dependencies using npm: bash
npm install Set Up Environment Variables Create a new file named .env in the root of your project and add the following content: ```env NEXT NEXT_PUBLIC_SITE_URL= APPWRITE NEXT_PUBLIC_APPWRITE_ENDPOINT=https://cloud.appwrite.io/v1
NEXT_PUBLIC_APPWRITE_PROJECT=
APPWRITE_DATABASE_ID=
APPWRITE_USER_COLLECTION_ID=
APPWRITE_BANK_COLLECTION_ID=
APPWRITE_TRANSACTION_COLLECTION_ID=
APPWRITE_SECRET= PLAID PLAID_CLIENT_ID=
PLAID_SECRET=
PLAID_ENV=
PLAID_PRODUCTS=
PLAID_COUNTRY_CODES= DWOLLA DWOLLA_KEY=
DWOLLA_SECRET=
DWOLLA_BASE_URL=https://api-sandbox.dwolla.com
DWOLLA_ENV=sandbox ``` Replace the placeholder values with your actual respective account credentials. You can obtain these credentials by signing up on the Appwrite , Plaid and Dwolla Running the Project bash
npm run dev Open http://localhost:3000 in your browser to view the project. 🕸️ Snippets .env.example ```env
#NEXT
NEXT_PUBLIC_SITE_URL=
#APPWRITE
NEXT_PUBLIC_APPWRITE_ENDPOINT=https://cloud.appwrite.io/v1
NEXT_PUBLIC_APPWRITE_PROJECT=
APPWRITE_DATABASE_ID=
APPWRITE_USER_COLLECTION_ID=
APPWRITE_BANK_COLLECTION_ID=
APPWRITE_TRANSACTION_COLLECTION_ID=
APPWRITE_SECRET=
#PLAID
PLAID_CLIENT_ID=
PLAID_SECRET=
PLAID_ENV=sandbox
PLAID_PRODUCTS=auth,transactions,identity
PLAID_COUNTRY_CODES=US,CA
#DWOLLA
DWOLLA_KEY=
DWOLLA_SECRET=
DWOLLA_BASE_URL=https://api-sandbox.dwolla.com
DWOLLA_ENV=sandbox
``` exchangePublicToken ```typescript
// This function exchanges a public token for an access token and item ID
export const exchangePublicToken = async ({
publicToken,
user,
}: exchangePublicTokenProps) => {
try {
// Exchange public token for access token and item ID
const response = await plaidClient.itemPublicTokenExchange({
public_token: publicToken,
});
const accessToken = response.data.access_token;
const itemId = response.data.item_id;
// Get account information from Plaid using the access token
const accountsResponse = await plaidClient.accountsGet({
access_token: accessToken,
});
const accountData = accountsResponse.data.accounts[0];
// Create a processor token for Dwolla using the access token and account ID
const request: ProcessorTokenCreateRequest = {
access_token: accessToken,
account_id: accountData.account_id,
processor: "dwolla" as ProcessorTokenCreateRequestProcessorEnum,
};
const processorTokenResponse =
await plaidClient.processorTokenCreate(request);
const processorToken = processorTokenResponse.data.processor_token;
// Create a funding source URL for the account using the Dwolla customer ID, processor token, and bank name
const fundingSourceUrl = await addFundingSource({
dwollaCustomerId: user.dwollaCustomerId,
processorToken,
bankName: accountData.name,
});
// If the funding source URL is not created, throw an error
if (!fundingSourceUrl) throw new Error("Failed to create a funding source URL");
// Create a bank account using the user ID, item ID, account ID, access token, funding source URL, and sharable ID
await createBankAccount({
userId: user.$id,
bankId: itemId,
accountId: accountData.account_id,
accessToken,
fundingSourceUrl,
sharableId: encryptId(accountData.account_id),
});
// Revalidate the path to reflect the changes
revalidatePath("/");
// Return a success message
return parseStringify({
publicTokenExchange: "complete",
});
} catch (error) {
// Log any errors that occur during the process
console.error("An error occurred while creating exchanging token:", error);
}
};
``` user.actions.ts ```typescript
"use server";
import { revalidatePath } from "next/cache";
import { cookies } from "next/headers";
import { ID, Query } from "node-appwrite";
import {
CountryCode,
ProcessorTokenCreateRequest,
ProcessorTokenCreateRequestProcessorEnum,
Products,
} from "plaid";
import { plaidClient } from "@/lib/plaid.config";
import {
parseStringify,
extractCustomerIdFromUrl,
encryptId,
} from "@/lib/utils";
import { createAdminClient, createSessionClient } from "../appwrite.config";
import { addFundingSource, createDwollaCustomer } from "./dwolla.actions";
const {
APPWRITE_DATABASE_ID: DATABASE_ID,
APPWRITE_USER_COLLECTION_ID: USER_COLLECTION_ID,
APPWRITE_BANK_COLLECTION_ID: BANK_COLLECTION_ID,
} = process.env;
export const signUp = async ({ password, ...userData }: SignUpParams) => {
let newUserAccount;
try {
// create appwrite user
const { database, account } = await createAdminClient();
newUserAccount = await account.create(
ID.unique(),
userData.email,
password,
`${userData.firstName} ${userData.lastName}`
);
if (!newUserAccount) throw new Error("Error creating user");
// create dwolla customer
const dwollaCustomerUrl = await createDwollaCustomer({
...userData,
type: "personal",
});
if (!dwollaCustomerUrl) throw new Error("Error creating dwolla customer");
const dwollaCustomerId = extractCustomerIdFromUrl(dwollaCustomerUrl);
const newUser = await database.createDocument(
DATABASE_ID!,
USER_COLLECTION_ID!,
ID.unique(),
{
...userData,
userId: newUserAccount.$id,
dwollaCustomerUrl,
dwollaCustomerId,
}
);
const session = await account.createEmailPasswordSession(
userData.email,
password
);
cookies().set("appwrite-session", session.secret, {
path: "/",
httpOnly: true,
sameSite: "strict",
secure: true,
});
return parseStringify(newUser);
} catch (error) {
console.error("Error", error);
// check if account has been created, if so, delete it
if (newUserAccount?.$id) {
const { user } = await createAdminClient();
await user.delete(newUserAccount?.$id);
}
return null;
}
};
export const signIn = async ({ email, password }: signInProps) => {
try {
const { account } = await createAdminClient();
const session = await account.createEmailPasswordSession(email, password);
cookies().set("appwrite-session", session.secret, {
path: "/",
httpOnly: true,
sameSite: "strict",
secure: true,
});
const user = await getUserInfo({ userId: session.userId });
return parseStringify(user);
} catch (error) {
console.error("Error", error);
return null;
}
};
export const getLoggedInUser = async () => {
try {
const { account } = await createSessionClient();
const result = await account.get();
const user = await getUserInfo({ userId: result.$id });
return parseStringify(user);
} catch (error) {
console.error("Error", error);
return null;
}
};
// CREATE PLAID LINK TOKEN
export const createLinkToken = async (user: User) => {
try {
const tokenParams = {
user: {
client_user_id: user.$id,
},
client_name: `${user.firstName} ${user.lastName}`,
products: ["auth"] as Products[],
language: "en",
country_codes: ["US"] as CountryCode[],
};
const response = await plaidClient.linkTokenCreate(tokenParams);
return parseStringify({ linkToken: response.data.link_token });
} catch (error) {
console.error(
"An error occurred while creating a new Horizon user:",
error
);
}
};
// EXCHANGE PLAID PUBLIC TOKEN
// This function exchanges a public token for an access token and item ID
export const exchangePublicToken = async ({
publicToken,
user,
}: exchangePublicTokenProps) => {
try {
// Exchange public token for access token and item ID
const response = await plaidClient.itemPublicTokenExchange({
public_token: publicToken,
});
const accessToken = response.data.access_token;
const itemId = response.data.item_id;
// Get account information from Plaid using the access token
const accountsResponse = await plaidClient.accountsGet({
access_token: accessToken,
});
const accountData = accountsResponse.data.accounts[0];
// Create a processor token for Dwolla using the access token and account ID
const request: ProcessorTokenCreateRequest = {
access_token: accessToken,
account_id: accountData.account_id,
processor: "dwolla" as ProcessorTokenCreateRequestProcessorEnum,
};
const processorTokenResponse =
await plaidClient.processorTokenCreate(request);
const processorToken = processorTokenResponse.data.processor_token;
// Create a funding source URL for the account using the Dwolla customer ID, processor token, and bank name
const fundingSourceUrl = await addFundingSource({
dwollaCustomerId: user.dwollaCustomerId,
processorToken,
bankName: accountData.name,
});
// If the funding source URL is not created, throw an error
if (!fundingSourceUrl) throw new Error("Failed to create a funding source URL");
// Create a bank account using the user ID, item ID, account ID, access token, funding source URL, and sharable ID
await createBankAccount({
userId: user.$id,
bankId: itemId,
accountId: accountData.account_id,
accessToken,
fundingSourceUrl,
sharableId: encryptId(accountData.account_id),
});
// Revalidate the path to reflect the changes
revalidatePath("/");
// Return a success message
return parseStringify({
publicTokenExchange: "complete",
});
} catch (error) {
// Log any errors that occur during the process
console.error("An error occurred while creating exchanging token:", error);
}
};
export const getUserInfo = async ({ userId }: getUserInfoProps) => {
try {
const { database } = await createAdminClient();
const user = await database.listDocuments(
DATABASE_ID!,
USER_COLLECTION_ID!,
[Query.equal("userId", [userId])]
);
if (user.total !== 1) return null;
return parseStringify(user.documents[0]);
} catch (error) {
console.error("Error", error);
return null;
}
};
export const createBankAccount = async ({
accessToken,
userId,
accountId,
bankId,
fundingSourceUrl,
sharableId,
}: createBankAccountProps) => {
try {
const { database } = await createAdminClient();
const bankAccount = await database.createDocument(
DATABASE_ID!,
BANK_COLLECTION_ID!,
ID.unique(),
{
accessToken,
userId,
accountId,
bankId,
fundingSourceUrl,
sharableId,
}
);
return parseStringify(bankAccount);
} catch (error) {
console.error("Error", error);
return null;
}
};
// get user bank accounts
export const getBanks = async ({ userId }: getBanksProps) => {
try {
const { database } = await createAdminClient();
const banks = await database.listDocuments(
DATABASE_ID!,
BANK_COLLECTION_ID!,
[Query.equal("userId", [userId])]
);
return parseStringify(banks.documents);
} catch (error) {
console.error("Error", error);
return null;
}
};
// get specific bank from bank collection by document id
export const getBank = async ({ documentId }: getBankProps) => {
try {
const { database } = await createAdminClient();
const bank = await database.listDocuments(
DATABASE_ID!,
BANK_COLLECTION_ID!,
[Query.equal("$id", [documentId])]
);
if (bank.total !== 1) return null;
return parseStringify(bank.documents[0]);
} catch (error) {
console.error("Error", error);
return null;
}
};
// get specific bank from bank collection by account id
export const getBankByAccountId = async ({
accountId,
}: getBankByAccountIdProps) => {
try {
const { database } = await createAdminClient();
const bank = await database.listDocuments(
DATABASE_ID!,
BANK_COLLECTION_ID!,
[Query.equal("accountId", [accountId])]
);
if (bank.total !== 1) return null;
return parseStringify(bank.documents[0]);
} catch (error) {
console.error("Error", error);
return null;
}
};
``` dwolla.actions.ts ```typescript
"use server";
import { Client } from "dwolla-v2";
const getEnvironment = (): "production" | "sandbox" => {
const environment = process.env.DWOLLA_ENV as string;
switch (environment) {
case "sandbox":
return "sandbox";
case "production":
return "production";
default:
throw new Error(
"Dwolla environment should either be set to `sandbox` or `production`"
);
}
};
const dwollaClient = new Client({
environment: getEnvironment(),
key: process.env.DWOLLA_KEY as string,
secret: process.env.DWOLLA_SECRET as string,
});
// Create a Dwolla Funding Source using a Plaid Processor Token
export const createFundingSource = async (
options: CreateFundingSourceOptions
) => {
try {
return await dwollaClient
.post(`customers/${options.customerId}/funding-sources`, {
name: options.fundingSourceName,
plaidToken: options.plaidToken,
})
.then((res) => res.headers.get("location"));
} catch (err) {
console.error("Creating a Funding Source Failed: ", err);
}
};
export const createOnDemandAuthorization = async () => {
try {
const onDemandAuthorization = await dwollaClient.post(
"on-demand-authorizations"
);
const authLink = onDemandAuthorization.body._links;
return authLink;
} catch (err) {
console.error("Creating an On Demand Authorization Failed: ", err);
}
};
export const createDwollaCustomer = async (
newCustomer: NewDwollaCustomerParams
) => {
try {
return await dwollaClient
.post("customers", newCustomer)
.then((res) => res.headers.get("location"));
} catch (err) {
console.error("Creating a Dwolla Customer Failed: ", err);
}
};
export const createTransfer = async ({
sourceFundingSourceUrl,
destinationFundingSourceUrl,
amount,
}: TransferParams) => {
try {
const requestBody = {
_links: {
source: {
href: sourceFundingSourceUrl,
},
destination: {
href: destinationFundingSourceUrl,
},
},
amount: {
currency: "USD",
value: amount,
},
};
return await dwollaClient
.post("transfers", requestBody)
.then((res) => res.headers.get("location"));
} catch (err) {
console.error("Transfer fund failed: ", err);
}
};
export const addFundingSource = async ({
dwollaCustomerId,
processorToken,
bankName,
}: AddFundingSourceParams) => {
try {
// create dwolla auth link
const dwollaAuthLinks = await createOnDemandAuthorization();
// add funding source to the dwolla customer & get the funding source url
const fundingSourceOptions = {
customerId: dwollaCustomerId,
fundingSourceName: bankName,
plaidToken: processorToken,
_links: dwollaAuthLinks,
};
return await createFundingSource(fundingSourceOptions);
} catch (err) {
console.error("Transfer fund failed: ", err);
}
};
``` bank.actions.ts ```typescript
"use server";
import {
ACHClass,
CountryCode,
TransferAuthorizationCreateRequest,
TransferCreateRequest,
TransferNetwork,
TransferType,
} from "plaid";
import { plaidClient } from "../plaid.config";
import { parseStringify } from "../utils";
import { getTransactionsByBankId } from "./transaction.actions";
import { getBanks, getBank } from "./user.actions";
// Get multiple bank accounts
export const getAccounts = async ({ userId }: getAccountsProps) => {
try {
// get banks from db
const banks = await getBanks({ userId });
const accounts = await Promise.all(
banks?.map(async (bank: Bank) => {
// get each account info from plaid
const accountsResponse = await plaidClient.accountsGet({
access_token: bank.accessToken,
});
const accountData = accountsResponse.data.accounts[0];
// get institution info from plaid
const institution = await getInstitution({
institutionId: accountsResponse.data.item.institution_id!,
});
const account = {
id: accountData.account_id,
availableBalance: accountData.balances.available!,
currentBalance: accountData.balances.current!,
institutionId: institution.institution_id,
name: accountData.name,
officialName: accountData.official_name,
mask: accountData.mask!,
type: accountData.type as string,
subtype: accountData.subtype! as string,
appwriteItemId: bank.$id,
sharableId: bank.sharableId,
};
return account;
})
);
const totalBanks = accounts.length;
const totalCurrentBalance = accounts.reduce((total, account) => {
return total + account.currentBalance;
}, 0);
return parseStringify({ data: accounts, totalBanks, totalCurrentBalance });
} catch (error) {
console.error("An error occurred while getting the accounts:", error);
}
};
// Get one bank account
export const getAccount = async ({ appwriteItemId }: getAccountProps) => {
try {
// get bank from db
const bank = await getBank({ documentId: appwriteItemId });
// get account info from plaid
const accountsResponse = await plaidClient.accountsGet({
access_token: bank.accessToken,
});
const accountData = accountsResponse.data.accounts[0];
// get transfer transactions from appwrite
const transferTransactionsData = await getTransactionsByBankId({
bankId: bank.$id,
});
const transferTransactions = transferTransactionsData.documents.map(
(transferData: Transaction) => ({
id: transferData.$id,
name: transferData.name!,
amount: transferData.amount!,
date: transferData.$createdAt,
paymentChannel: transferData.channel,
category: transferData.category,
type: transferData.senderBankId === bank.$id ? "debit" : "credit",
})
);
// get institution info from plaid
const institution = await getInstitution({
institutionId: accountsResponse.data.item.institution_id!,
});
const transactions = await getTransactions({
accessToken: bank?.accessToken,
});
const account = {
id: accountData.account_id,
availableBalance: accountData.balances.available!,
currentBalance: accountData.balances.current!,
institutionId: institution.institution_id,
name: accountData.name,
officialName: accountData.official_name,
mask: accountData.mask!,
type: accountData.type as string,
subtype: accountData.subtype! as string,
appwriteItemId: bank.$id,
};
// sort transactions by date such that the most recent transaction is first
const allTransactions = [...transactions, ...transferTransactions].sort(
(a, b) => new Date(b.date).getTime() - new Date(a.date).getTime()
);
return parseStringify({
data: account,
transactions: allTransactions,
});
} catch (error) {
console.error("An error occurred while getting the account:", error);
}
};
// Get bank info
export const getInstitution = async ({
institutionId,
}: getInstitutionProps) => {
try {
const institutionResponse = await plaidClient.institutionsGetById({
institution_id: institutionId,
country_codes: ["US"] as CountryCode[],
});
const institution = institutionResponse.data.institution;
return parseStringify(institution);
} catch (error) {
console.error("An error occurred while getting the accounts:", error);
}
};
// Get transactions
export const getTransactions = async ({
accessToken,
}: getTransactionsProps) => {
let hasMore = true;
let transactions: any = [];
try {
// Iterate through each page of new transaction updates for item
while (hasMore) {
const response = await plaidClient.transactionsSync({
access_token: accessToken,
});
const data = response.data;
transactions = response.data.added.map((transaction) => ({
id: transaction.transaction_id,
name: transaction.name,
paymentChannel: transaction.payment_channel,
type: transaction.payment_channel,
accountId: transaction.account_id,
amount: transaction.amount,
pending: transaction.pending,
category: transaction.category ? transaction.category[0] : "",
date: transaction.date,
image: transaction.logo_url,
}));
hasMore = data.has_more;
}
return parseStringify(transactions);
} catch (error) {
console.error("An error occurred while getting the accounts:", error);
}
};
// Create Transfer
export const createTransfer = async () => {
const transferAuthRequest: TransferAuthorizationCreateRequest = {
access_token: "access-sandbox-cddd20c1-5ba8-4193-89f9-3a0b91034c25",
account_id: "Zl8GWV1jqdTgjoKnxQn1HBxxVBanm5FxZpnQk",
funding_account_id: "442d857f-fe69-4de2-a550-0c19dc4af467",
type: "credit" as TransferType,
network: "ach" as TransferNetwork,
amount: "10.00",
ach_class: "ppd" as ACHClass,
user: {
legal_name: "Anne Charleston",
},
};
try {
const transferAuthResponse =
await plaidClient.transferAuthorizationCreate(transferAuthRequest);
const authorizationId = transferAuthResponse.data.authorization.id;
const transferCreateRequest: TransferCreateRequest = {
access_token: "access-sandbox-cddd20c1-5ba8-4193-89f9-3a0b91034c25",
account_id: "Zl8GWV1jqdTgjoKnxQn1HBxxVBanm5FxZpnQk",
description: "payment",
authorization_id: authorizationId,
};
const responseCreateResponse = await plaidClient.transferCreate(
transferCreateRequest
);
const transfer = responseCreateResponse.data.transfer;
return parseStringify(transfer);
} catch (error) {
console.error(
"An error occurred while creating transfer authorization:",
error
);
}
};
``` BankTabItem.tsx ```typescript
"use client";
import { useSearchParams, useRouter } from "next/navigation";
import { cn, formUrlQuery } from "@/lib/utils";
export const BankTabItem = ({ account, appwriteItemId }: BankTabItemProps) => {
const searchParams = useSearchParams();
const router = useRouter();
const isActive = appwriteItemId === account?.appwriteItemId;
const handleBankChange = () => {
const newUrl = formUrlQuery({
params: searchParams.toString(),
key: "id",
value: account?.appwriteItemId,
});
router.push(newUrl, { scroll: false });
};
return ( {account.name} );
};
``` BankInfo.tsx ```typescript
"use client";
import Image from "next/image";
import { useSearchParams, useRouter } from "next/navigation";
import {
cn,
formUrlQuery,
formatAmount,
getAccountTypeColors,
} from "@/lib/utils";
const BankInfo = ({ account, appwriteItemId, type }: BankInfoProps) => {
const router = useRouter();
const searchParams = useSearchParams();
const isActive = appwriteItemId === account?.appwriteItemId;
const handleBankChange = () => {
const newUrl = formUrlQuery({
params: searchParams.toString(),
key: "id",
value: account?.appwriteItemId,
});
router.push(newUrl, { scroll: false });
};
const colors = getAccountTypeColors(account?.type as AccountTypes);
return ( {account.name} {type === "full" && ( {account.subtype} )} {formatAmount(account.currentBalance)} );
};
export default BankInfo;
``` Copy.tsx ```typescript
"use client";
import { useState } from "react";
import { Button } from "./ui/button";
const Copy = ({ title }: { title: string }) => {
const [hasCopied, setHasCopied] = useState(false);
const copyToClipboard = () => {
navigator.clipboard.writeText(title);
setHasCopied(true);
setTimeout(() => {
setHasCopied(false);
}, 2000);
};
return ( {title} {!hasCopied ? ( ) : ( )} );
};
export default Copy;
``` PaymentTransferForm.tsx ```typescript
"use client";
import { zodResolver } from "@hookform/resolvers/zod";
import { Loader2 } from "lucide-react";
import { useRouter } from "next/navigation";
import { useState } from "react";
import { useForm } from "react-hook-form";
import * as z from "zod";
import { createTransfer } from "@/lib/actions/dwolla.actions";
import { createTransaction } from "@/lib/actions/transaction.actions";
import { getBank, getBankByAccountId } from "@/lib/actions/user.actions";
import { decryptId } from "@/lib/utils";
import { BankDropdown } from "./bank/BankDropdown";
import { Button } from "./ui/button";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "./ui/form";
import { Input } from "./ui/input";
import { Textarea } from "./ui/textarea";
const formSchema = z.object({
email: z.string().email("Invalid email address"),
name: z.string().min(4, "Transfer note is too short"),
amount: z.string().min(4, "Amount is too short"),
senderBank: z.string().min(4, "Please select a valid bank account"),
sharableId: z.string().min(8, "Please select a valid sharable Id"),
});
const PaymentTransferForm = ({ accounts }: PaymentTransferFormProps) => {
const router = useRouter();
const [isLoading, setIsLoading] = useState(false);
const form = useForm<z.infer<typeof formSchema>>({
resolver: zodResolver(formSchema),
defaultValues: {
name: "",
email: "",
amount: "",
senderBank: "",
sharableId: "",
},
});
const submit = async (data: z.infer<typeof formSchema>) => {
setIsLoading(true);
try {
const receiverAccountId = decryptId(data.sharableId);
const receiverBank = await getBankByAccountId({
accountId: receiverAccountId,
});
const senderBank = await getBank({ documentId: data.senderBank });
const transferParams = {
sourceFundingSourceUrl: senderBank.fundingSourceUrl,
destinationFundingSourceUrl: receiverBank.fundingSourceUrl,
amount: data.amount,
};
// create transfer
const transfer = await createTransfer(transferParams);
// create transfer transaction
if (transfer) {
const transaction = {
name: data.name,
amount: data.amount,
senderId: senderBank.userId.$id,
senderBankId: senderBank.$id,
receiverId: receiverBank.userId.$id,
receiverBankId: receiverBank.$id,
email: data.email,
};
const newTransaction = await createTransaction(transaction);
if (newTransaction) {
form.reset();
router.push("/");
}
}
} catch (error) {
console.error("Submitting create transfer request failed: ", error);
}
setIsLoading(false);
};
return ( ( Select Source Bank Select the bank account you want to transfer funds from )}
/> ( Transfer Note (Optional) Please provide any additional information or instructions
related to the transfer )}
/> Bank account details Enter the bank account details of the recipient ( Recipient's Email Address )}
/> ( Receiver's Plaid Sharable Id )}
/> ( Amount )}
/> {isLoading ? (
<> Sending...
) : (
"Transfer Funds"
)} );
};
export default PaymentTransferForm;
``` Missing from the video (top right on the transaction list page) BankDropdown.tsx ```typescript
"use client";
import Image from "next/image";
import { useSearchParams, useRouter } from "next/navigation";
import { useState } from "react";
import {
Select,
SelectContent,
SelectGroup,
SelectItem,
SelectLabel,
SelectTrigger,
} from "@/components/ui/select";
import { formUrlQuery, formatAmount } from "@/lib/utils";
export const BankDropdown = ({
accounts = [],
setValue,
otherStyles,
}: BankDropdownProps) => {
const searchParams = useSearchParams();
const router = useRouter();
const [selected, setSelected] = useState(accounts[0]);
const handleBankChange = (id: string) => {
const account = accounts.find((account) => account.appwriteItemId === id)!;
setSelected(account);
const newUrl = formUrlQuery({
params: searchParams.toString(),
key: "id",
value: id,
});
router.push(newUrl, { scroll: false });
if (setValue) {
setValue("senderBank", id);
}
};
return ( handleBankChange(value)}
> {selected.name} Select a bank to display {accounts.map((account: Account) => ( {account.name} {formatAmount(account.currentBalance)} ))} );
};
``` Pagination.tsx ```typescript
"use client";
import Image from "next/image";
import { useRouter, useSearchParams } from "next/navigation";
import { Button } from "@/components/ui/button";
import { formUrlQuery } from "@/lib/utils";
export const Pagination = ({ page, totalPages }: PaginationProps) => {
const router = useRouter();
const searchParams = useSearchParams()!;
const handleNavigation = (type: "prev" | "next") => {
const pageNumber = type === "prev" ? page - 1 : page + 1;
const newUrl = formUrlQuery({
params: searchParams.toString(),
key: "page",
value: pageNumber.toString(),
});
router.push(newUrl, { scroll: false });
};
return ( handleNavigation("prev")}
disabled={Number(page) <= 1}
> Prev {page} / {totalPages} handleNavigation("next")}
disabled={Number(page) >= totalPages}
>
Next );
};
``` Category.tsx ```typescript
import Image from "next/image";
import { topCategoryStyles } from "@/constants";
import { cn } from "@/lib/utils";
import { Progress } from "./ui/progress";
export const Category = ({ category }: CategoryProps) => {
const {
bg,
circleBg,
text: { main, count },
progress: { bg: progressBg, indicator },
icon,
} = topCategoryStyles[category.name as keyof typeof topCategoryStyles] ||
topCategoryStyles.default;
return ( {category.name} {category.count} );
};
``` 🔗 Links Assets used in the project can be found here 🚀 More Advance your skills with Next.js 14 Pro Course Enjoyed creating this project? Dive deeper into our PRO courses for a richer learning adventure. They're packed with detailed explanations, cool features, and exercises to boost your skills. Give it a go! Accelerate your professional journey with the Expert Training program And if you're hungry for more than just a course and want to understand how we learn and tackle tech challenges, hop into our personalized masterclass. We cover best practices, different web skills, and offer mentorship to boost your confidence. Let's learn and grow together!;Horizon is a modern banking platform for everyone.;nextjs14 | adrianhajdin/banking |
Lightning-AI/lightning-thunder;**Make PyTorch models Lightning fast.**
______________________________________________________________________ Lightning.ai • Performance • Get started • Install • Examples • Inside Thunder • Get involved! • Documentation [![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/lightning-thunder/blob/main/LICENSE)
[![CI testing](https://github.com/Lightning-AI/lightning-thunder/actions/workflows/ci-testing.yml/badge.svg?event=push)](https://github.com/Lightning-AI/lightning-thunder/actions/workflows/ci-testing.yml)
[![General checks](https://github.com/Lightning-AI/lightning-thunder/actions/workflows/ci-checks.yml/badge.svg?event=push)](https://github.com/Lightning-AI/lightning-thunder/actions/workflows/ci-checks.yml)
[![Documentation Status](https://readthedocs.org/projects/lightning-thunder/badge/?version=latest)](https://lightning-thunder.readthedocs.io/en/latest/?badge=latest)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/Lightning-AI/lightning-thunder/main.svg)](https://results.pre-commit.ci/latest/github/Lightning-AI/lightning-thunder/main) Welcome to ⚡ Lightning Thunder Thunder makes PyTorch models Lightning fast. Thunder is a source-to-source compiler for PyTorch. It makes PyTorch programs faster by combining and using different hardware executors at once (for instance, nvFuser , torch.compile , cuDNN , and TransformerEngine FP8 ). It supports both single and multi-GPU configurations.
Thunder aims to be usable, understandable, and extensible. [!Note]
Lightning Thunder is in alpha. Feel free to get involved, but expect a few bumps along the way. Single-GPU performance Thunder can achieve significant speedups over standard non-compiled PyTorch code ("PyTorch eager"), through the compounding effects of optimizations and the use of best-in-class executors. The figure below shows the pretraining throughput for Llama 2 7B as implemented in LitGPT . As shown in the plot above, Thunder achieves a 40% speedup in training throughput compared to eager code on H100 using a combination of executors including nvFuser, torch.compile, cuDNN, and TransformerEngine FP8. Multi-GPU performance Thunder also supports distributed strategies such as DDP and FSDP for training models on multiple GPUs. The following plot displays the normalized throughput measured for Llama 2 7B without FP8 mixed precision; support for FSDP is in progress. Get started The easiest way to get started with Thunder, requiring no extra installations or setups, is by using our Zero to Thunder Tutorial Studio . Install Thunder To use Thunder on your local machine: install nvFuser nightly and PyTorch nightly together as follows: ```bash install nvFuser which installs the matching nightly PyTorch pip install --pre 'nvfuser-cu121[torch]' --extra-index-url https://pypi.nvidia.com
``` install cudnn as follows: ```bash install cudnn pip install nvidia-cudnn-frontend
``` Finally, install Thunder as follows: ``` install thunder pip install lightning-thunder
``` Advanced install options ### Install from main
Alternatively, you can install the latest version of Thunder directly from this GitHub repository as follows:
```
# 1) Install nvFuser and PyTorch nightly dependencies:
pip install --pre 'nvfuser-cu121[torch]' --extra-index-url https://pypi.nvidia.com
```
```bash
# 2) Install Thunder itself
pip install git+https://github.com/Lightning-AI/lightning-thunder.git
```
### Install to tinker and contribute
If you are interested in tinkering with and contributing to Thunder, we recommend cloning the Thunder repository and installing it in pip's editable mode:
```bash
git clone https://github.com/Lightning-AI/lightning-thunder.git
cd lightning-thunder
pip install -e .
```
### Develop and run tests
After cloning the lightning-thunder repository and installing it as an editable package as explained above, you can set up your environment for developing Thunder by installing the development requirements:
```bash
pip install -r requirements/devel.txt
```
Now you can run the tests:
```bash
pytest thunder/tests
```
Thunder is very thoroughly tested, so expect this to take a while. Hello World Below is a simple example of how Thunder allows you to compile and run PyTorch code: ```python
import torch
import thunder def foo(a, b):
return a + b jfoo = thunder.jit(foo) a = torch.full((2, 2), 1)
b = torch.full((2, 2), 3) result = jfoo(a, b) print(result) prints tensor( [[4, 4] [4, 4]]) ``` The compiled function jfoo takes and returns PyTorch tensors, just like the original function, so modules and functions compiled by Thunder can be used as part of larger PyTorch programs. Train models Thunder is in its early stages and should not be used for production runs yet. However, it can already deliver outstanding performance for pretraining and finetuning LLMs supported by LitGPT , such as Mistral, Llama 2, Gemma, Falcon, and others. Check out the LitGPT integration to learn about running LitGPT and Thunder together. Inside Thunder: A brief look at the core features Given a Python callable or PyTorch module, Thunder can generate an optimized program that: Computes its forward and backward passes Coalesces operations into efficient fusion regions Dispatches computations to optimized kernels Distributes computations optimally across machines To do so, Thunder ships with: A JIT for acquiring Python programs targeting PyTorch and custom operations A multi-level intermediate representation (IR) to represent operations as a trace of a reduced operation set An extensible set of transformations on the trace of a computational graph, such as grad , fusions, distributed (like ddp , fsdp ), functional (like vmap , vjp , jvp ) A way to dispatch operations to an extensible collection of executors Thunder is written entirely in Python. Even its trace is represented as valid Python at all stages of transformation. This allows unprecedented levels of introspection and extensibility. Thunder doesn't generate code for accelerators, such as GPUs, directly. It acquires and transforms user programs so that it's possible to optimally select or generate device code using fast executors like: torch.compile nvFuser cuDNN Apex TransformerEngine PyTorch eager Custom CUDA kernels through PyCUDA , Numba , CuPy Custom kernels written in OpenAI Triton Modules and functions compiled with Thunder fully interoperate with vanilla PyTorch and support PyTorch's autograd. Also, Thunder works alongside torch.compile to leverage its state-of-the-art optimizations. Documentation Online documentation is available. To build documentation locally you can use bash
make docs and point your browser to the generated docs at docs/build/index.html . Get involved! We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the GitHub Issue tracker. We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment. License Lightning Thunder is released under the Apache 2.0 license.
See the LICENSE file for details.;Make PyTorch models up to 40% faster! Thunder is a source to source compiler for PyTorch. It enables using different hardware executors at once; across one or thousands of GPUs.;[] | Lightning-AI/lightning-thunder |
toss/es-hangul;es-hangul 한국어 | English es-hangul is a JavaScript library that helps you easily handle Hangul (Korean characters). It provides a modern library API that is convenient to use. Because it uses ECMAScript Modules, users can download only the minimum amount of code in browser environments. Usage examples Hangul tasks such as chosung (initial consonant) search and attaching josa (particles) can be done simply. ```tsx
import { chosungIncludes } from 'es-hangul'; const searchWord = '라면';
const userInput = 'ㄹㅁ'; const result = chosungIncludes(searchWord, userInput); // true
``` ```tsx
import { josa } from 'es-hangul'; const word1 = '사과';
const sentence1 = josa(word1, '을/를') + ' 먹었습니다.';
console.log(sentence1); // '사과를 먹었습니다.' const word2 = '바나나';
const sentence2 = josa(word2, '이/가') + ' 맛있습니다.';
console.log(sentence2); // '바나나가 맛있습니다.'
``` Contributing If you would like to contribute to the es-hangul library, please refer to the document below. CONTRIBUTING License MIT © Viva Republica, Inc. See the LICENSE file.;A modern JavaScript library for handling Hangul characters.;[] | toss/es-hangul
SakanaAI/evolutionary-model-merge;🐟 Evolutionary Optimization of Model Merging Recipes 🤗 Models | 👀 Demo | 📚 Paper | 📝 Blog | 🐦 Twitter This repository serves as a central hub for SakanaAI's Evolutionary Model Merge series, showcasing its releases and resources. It includes models and code for reproducing the evaluation presented in our paper. Look forward to more updates and additions coming soon. Models Our Models | Model | Size | License | Source |
| :-- | --: | :-- | :-- |
| EvoLLM-JP-v1-7B | 7B | Microsoft Research License | shisa-gamma-7b-v1 , WizardMath-7B-V1.1 , GAIR/Abel-7B-002 | EvoLLM-JP-v1-10B | 10B | Microsoft Research License | EvoLLM-JP-v1-7B, shisa-gamma-7b-v1 |
| EvoLLM-JP-A-v1-7B | 7B | Apache 2.0 | shisa-gamma-7b-v1 , Arithmo2-Mistral-7B , GAIR/Abel-7B-002 |
| EvoVLM-JP-v1-7B | 7B | Apache 2.0 | LLaVA-1.6-Mistral-7B , shisa-gamma-7b-v1 Comparing EvoLLM-JP w/ Source LLMs For details on the evaluation, please refer to Section 4.1 of the paper. | Model | MGSM-JA (acc ↑) | lm-eval-harness (avg ↑) |
| :-- | --: | --: |
| Shisa Gamma 7B v1 | 9.6 | 66.1 |
| WizardMath 7B V1.1 | 18.4 | 60.1 |
| Abel 7B 002 | 30.0 | 56.5 |
| Arithmo2 Mistral 7B | 24.0 | 56.4 |
| EvoLLM-JP-A-v1-7B | 52.4 | 69.0 |
| EvoLLM-JP-v1-7B | 52.0 | 70.5 |
| EvoLLM-JP-v1-10B | 55.6 | 66.2 | Comparing EvoVLM-JP w/ Existing VLMs For details on the evaluation, please see Section 4.2 of the paper. | Model | JA-VG-VQA-500 (ROUGE-L ↑) | JA-VLM-Bench-In-the-Wild (ROUGE-L ↑) |
| :-- | --: | --: |
| LLaVA-1.6-Mistral-7B | 14.32 | 41.10 |
| Japanese Stable VLM | - 1 | 40.50 |
| Heron BLIP Japanese StableLM Base 7B llava-620k | 14.51 | 33.26 |
| EvoVLM-JP-v1-7B | 19.70 | 51.25 * | *1: Japanese Stable VLM cannot be evaluated using the JA-VG-VQA-500 dataset because this model has used this dataset for training. Reproducing the Evaluation 1. Clone the Repo bash
git clone https://github.com/SakanaAI/evolutionary-model-merge.git
cd evolutionary-model-merge 2. Download the fastText Model We use fastText to detect language for evaluation. Please download lid.176.ftz from this link and place it in your current directory. If you place the file in a directory other than the current directory, specify the path to the file using the LID176FTZ_PATH environment variable. 3. Install Libraries bash
pip install -e . We conducted our tests in the following environment: Python Version 3.10.12 and CUDA Version 12.3.
We cannot guarantee that it will work in other environments. 4. Run To launch evaluation, run the following script with a certain config. All configs used for the paper are in configs . bash
python evaluate.py --config_path {path-to-config} Acknowledgement We would like to thank the developers of the source models for their contributions and for making their work available. Our math evaluation code builds on the WizardMath repository, and we are grateful for their work.;Official repository of Evolutionary Optimization of Model Merging Recipes;[] | SakanaAI/evolutionary-model-merge |
tidwall/neco;Neco is a C library that provides concurrency using coroutines.
It's small & fast, and intended to make concurrent I/O & network programming
easy. Features Coroutines : starting, sleeping, suspending, resuming, yielding, and joining. Synchronization : channels, generators, mutexes, condition variables, and waitgroups. Support for deadlines and cancelation . Posix friendly interface using file descriptors. Additional APIs for networking , signals , random data , streams , and buffered I/O . Lightweight runtime with a fair and deterministic scheduler . Fast user-space context switching. Uses assembly in most cases. Stackful coroutines that are nestable, with their life times fully managed by the scheduler. Cross-platform. Linux, Mac, FreeBSD. (Also WebAssembly and Windows with some limitations ) . Single file amalgamation. No dependencies. Test suite with 100% coverage using sanitizers and Valgrind . For a deeper dive, check out the API reference . It may also be worthwhile to see the Bluebox project for a
more complete example of using Neco, including benchmarks. Goals Give C programs fast single-threaded concurrency. To use a concurrency model that resembles the simplicity of pthreads or Go. Provide an API for concurrent networking and I/O. Make it easy to interop with existing Posix functions. It's a non-goal for Neco to provide a scalable multithreaded runtime, where the
coroutine scheduler is shared among multiple cpu cores. Or to use other
concurrency models like async/await. Using Just drop the "neco.c" and "neco.h" files into your project. Uses standard C11 so most modern C compilers should work. sh
cc -c neco.c Example 1 (Start a coroutine) A coroutine is started with the neco_start() function. When neco_start() is called for the first time it will initialize a Neco runtime and scheduler for the current thread, and then blocks until the coroutine and all child coroutines have terminated. ```c include include "neco.h" void coroutine(int argc, void *argv[]) {
printf("main coroutine started\n");
} int main(int argc, char *argv[]) {
neco_start(coroutine, 0);
return 0;
}
``` Example 2 (Use neco_main instead of main) Optionally, neco_main() can be used in place of the standard main() .
This is for when the entirety of your program is intended to be run from only coroutines.
It adjusts the behavior of the program slightly to make development and error checking easier. ```c include include "neco.h" int neco_main(int argc, char *argv[]) {
printf("main coroutine started\n");
return 0;
}
``` Example 3 (Multiple coroutines) Here we'll start two coroutines that continuously prints "tick" every one second and "tock" every two. ```c include include "neco.h" void ticker(int argc, void *argv[]) {
while (1) {
neco_sleep(NECO_SECOND);
printf("tick\n");
}
} void tocker(int argc, void *argv[]) {
while (1) {
neco_sleep(NECO_SECOND * 2);
printf("tock\n");
}
} int neco_main(int argc, char *argv[]) {
neco_start(ticker, 0);
neco_start(tocker, 0); // Keep the program alive for an hour.
neco_sleep(NECO_HOUR);
return 0; }
``` Example 4 (Coroutine arguments) A coroutine is like its own little program that accepts any number of arguments. c
void coroutine(int argc, void *argv[]) The arguments are a series of pointers passed to the coroutine.
All arguments are guaranteed to be in scope when the coroutine starts and until the first neco_ function is called. This allows you an opportunity to validate and/or copy them. ```c include include include include "neco.h" void coroutine(int argc, void *argv[]) { // All arguments are currently in scope and should be copied before first
// neco_*() function is called in this coroutine.
int arg0 = *(int*)argv[0];
int arg1 = *(int*)argv[1];
int arg2 = *(int*)argv[2];
char *arg3 = argv[3];
char *arg4 = argv[4];
printf("arg0=%d, arg1=%d, arg2=%d, arg3=%s, arg4=%s\n",
arg0, arg1, arg2, arg3, arg4);
neco_sleep(NECO_SECOND/2);
// The arguments are no longer in scope and it's unsafe to use the argv
// variable any further.
printf("second done\n"); } int neco_main(int argc, char *argv[]) { int arg0 = 0;
int *arg1 = malloc(sizeof(int));
*arg1 = 1;
neco_start(coroutine, 5, &arg0, arg1, &(int){2}, NULL, "hello world");
free(arg1);
neco_sleep(NECO_SECOND);
printf("first done\n");
return 0; }
``` Example 5 (Channels) A channel is a mechanism for communicating between two or more coroutines. Here we'll create a second coroutine that sends the message 'ping' to the first coroutine. ```c include include include "neco.h" void coroutine(int argc, void argv[]) {
neco_chan *messages = argv[0]; // Send a message of the 'messages' channel.
char *msg = "ping";
neco_chan_send(messages, &msg);
// This coroutine no longer needs the channel.
neco_chan_release(messages); } int neco_main(int argc, char *argv[]) { // Create a new channel that is used to send 'char*' string messages.
neco_chan *messages;
neco_chan_make(&messages, sizeof(char*), 0);
// Start a coroutine that sends messages over the channel.
// It's a good idea to use neco_chan_retain on a channel before using it
// in a new coroutine. This will avoid potential use-after-free bugs.
neco_chan_retain(messages);
neco_start(coroutine, 1, messages);
// Receive the next incoming message. Here we’ll receive the "ping"
// message we sent above and print it out.
char *msg = NULL;
neco_chan_recv(messages, &msg);
printf("%s\n", msg);
// This coroutine no longer needs the channel.
neco_chan_release(messages);
return 0; }
``` Example 6 (Generators) A generator is like a channel but is strictly bound to a coroutine and is intended to treat the coroutine like an iterator. ```c include include include "neco.h" void coroutine(int argc, void *argv[]) {
// Yield each int to the caller, one at a time.
for (int i = 0; i < 10; i++) {
neco_gen_yield(&i);
}
} int neco_main(int argc, char *argv[]) { // Create a new generator coroutine that is used to send ints.
neco_gen *gen;
neco_gen_start(&gen, sizeof(int), coroutine, 0);
// Iterate over each int until the generator is closed.
int i;
while (neco_gen_next(gen, &i) != NECO_CLOSED) {
printf("%d\n", i);
}
// This coroutine no longer needs the generator.
neco_gen_release(gen);
return 0; }
``` Example 7 (Connect to server) Neco provides neco_dial() for easily connecting
to a server. Here we'll perform a (very simple) HTTP request which prints the homepage of
the http://example.com website. ```c include include include "neco.h" int neco_main(int argc, char *argv[]) {
int fd = neco_dial("tcp", "example.com:80");
if (fd < 0) {
printf("neco_dial: %s\n", neco_strerror(fd));
return 0;
}
char req[] = "GET / HTTP/1.1\r\n"
"Host: example.com\r\n"
"Connection: close\r\n"
"\r\n";
neco_write(fd, req, sizeof(req));
while (1) {
char buf[256];
int n = neco_read(fd, buf, sizeof(buf));
if (n <= 0) {
break;
}
printf("%. s", n, buf);
}
close(fd);
return 0;
}
``` Example 8 (Create a server) Use neco_serve() to quickly bind and listen on an address. Here we'll run a tiny webserver at http://127.0.0.1:8080 ```c include include include "../neco.h" void request(int argc, void *argv[]) {
int fd = *(int*)argv[0];
char req[256];
int n = neco_read(fd, req, sizeof(req));
if (n > 0) {
char res[] = "HTTP/1.0 200 OK\r\n"
"Content-Type: text/html\r\n"
"Content-Length: 21\r\n"
"\r\n"
" Hello Neco! \n";
neco_write(fd, res, sizeof(res));
}
close(fd);
} int neco_main(int argc, char *argv[]) {
int servfd = neco_serve("tcp", "127.0.0.1:8080");
if (servfd < 0) {
printf("neco_serve: %s\n", neco_strerror(servfd));
return 0;
}
printf("Serving at http://127.0.0.1:8080\n");
while (1) {
int fd = neco_accept(servfd, 0, 0);
if (fd < 0) {
printf("neco_accept: %s\n", neco_strerror(fd));
continue;
}
neco_start(request, 1, &fd);
}
return 0;
}
``` Example 9 (Echo server and client) Run server with: sh
cc neco.c echo-server.c && ./a.out Run client with: sh
cc neco.c echo-client.c && ./a.out echo-server.c ```c include include include "neco.h" void client(int argc, void *argv[]) {
int conn = *(int *)argv[0];
printf("client connected\n");
char buf[64];
while (1) {
ssize_t n = neco_read(conn, buf, sizeof(buf));
if (n <= 0) {
break;
}
printf("%. s", (int)n, buf);
}
printf("client disconnected\n");
close(conn);
} int neco_main(int argc, char *argv[]) {
int ln = neco_serve("tcp", "localhost:19203");
if (ln == -1) {
perror("neco_serve");
exit(1);
}
printf("listening at localhost:19203\n");
while (1) {
int conn = neco_accept(ln, 0, 0);
if (conn > 0) {
neco_start(client, 1, &conn);
}
}
close(ln);
return 0;
}
``` echo-client.c ```c include include include "neco.h" int neco_main(int argc, char *argv[]) {
int fd = neco_dial("tcp", "localhost:19203");
if (fd == -1) {
perror("neco_listen");
exit(1);
}
printf("connected\n");
char buf[64];
while (1) {
printf("> ");
fflush(stdout);
ssize_t nbytes = neco_read(STDIN_FILENO, buf, sizeof(buf));
if (nbytes < 0) {
break;
}
ssize_t ret = neco_write(fd, buf, nbytes);
if (ret < 0) {
break;
}
}
printf("disconnected\n");
close(fd);
return 0;
}
``` Example 10 (Suspending and resuming a coroutine) Any coroutines can suspended itself indefinetly and then be resumed by other
coroutines by using neco_suspend() and neco_resume() . ```c include include include "neco.h" void coroutine(int argc, void *argv[]) {
printf("Suspending coroutine\n");
neco_suspend();
printf("Coroutine resumed\n");
} int neco_main(int argc, char *argv[]) {
neco_start(coroutine, 0); for (int i = 0; i < 3; i++) {
printf("%d\n", i+1);
neco_sleep(NECO_SECOND);
}
// Resume the suspended. The neco_lastid() returns the identifier for the
// last coroutine started by the current coroutine.
neco_resume(neco_lastid());
return 0; }
// Output:
// Suspending coroutine
// 1
// 2
// 3
// Coroutine resumed
``` More examples You can find more examples here . Platform notes Linux, Mac, and FreeBSD supports all features. Windows and WebAssembly support the core coroutine features, but have some key
limitations, mostly around working with file descriptors and networking.
This is primarily because the Neco event queue works with epoll and kqueue,
which are only available on Linux and Mac/BSD respectively. This means that the neco_wait() (which allows for a coroutine to wait for a file descriptor to be
readable or writeable) is not currently available on those platforms. Other limitations include: Windows only supports amd64. Windows and WebAssembly use smaller default stacks of 1MB. Windows and WebAssembly do not support guards or gaps. Windows and WebAssembly do not support NECO_CSPRNG (Cryptographically secure
pseudorandom number generator) Windows does not support stack unwinding. Other than that, Neco works great on those platforms. Any contributions towards making Windows and WebAssembly feature complete are
welcome. The scheduler Neco uses sco , which is a fair and
deterministic scheduler. This means that no coroutine takes priority over
another and that all concurrent operations will reproduce in an expected order. Fast context switching The coroutine context switching is powered by llco and uses assembly code in most
cases. On my lab machine (AMD Ryzen 9 5950X) a context switch takes about 11
nanoseconds. Thread local runtime There can be no more than one scheduler per thread. When the first coroutine is started using neco_start() , a new Neco
runtime is initialized in the current thread, and each runtime has its own
scheduler. Communicating between coroutines that are running in different threads will
require I/O mechanisms that do not block the current schedulers, such as pipe() , eventfd() or atomics. Pthread utilities such as pthread_mutex_t and pthread_cond_t do not work very well in coroutines. For example, here we'll create two threads, running their own Neco schedulers.
Each uses pipes to communicate with the other. ```c include include include include "neco.h" void coro1(int argc, void *argv[]) {
// This coroutine is running in a different scheduler than coro2.
int rd = *(int *)argv[0];
int wr = *(int*)argv[1];
int val;
neco_read(rd, &val, sizeof(int));
printf("coro1: %d\n", val);
neco_write(wr, &(int){ 2 }, sizeof(int));
} void coro2(int argc, void *argv[]) {
// This coroutine is running in a different scheduler than coro1.
int rd = *(int *)argv[0];
int wr = *(int*)argv[1];
int val;
neco_write(wr, &(int){ 1 }, sizeof(int));
neco_read(rd, &val, sizeof(int));
printf("coro2: %d\n", val);
} void *runtime1(void *arg) {
int *pipefds = arg;
neco_start(coro1, 2, &pipefds[0], &pipefds[3]);
return 0;
} void *runtime2(void *arg) {
int *pipefds = arg;
neco_start(coro2, 2, &pipefds[2], &pipefds[1]);
return 0;
} int main() {
int pipefds[4];
pipe(&pipefds[0]);
pipe(&pipefds[2]);
pthread_t th1, th2;
pthread_create(&th1, 0, runtime1, pipefds);
pthread_create(&th2, 0, runtime2, pipefds);
pthread_join(th1, 0);
pthread_join(th2, 0);
return 0;
}
``` License Source code is available under the MIT License .;Concurrency library for C (coroutines);[] | tidwall/neco |
muskie82/MonoGS;Gaussian Splatting SLAM *Hidenobu Matsuki · *Riku Murai · Paul H.J. Kelly · Andrew J. Davison (* Equal Contribution) CVPR 2024 (Highlight) [comment]: <> ( PAPER ) Paper | Video | Project Page This software implements dense SLAM system presented in our paper Gaussian Splatting SLAM in CVPR'24.
The method demonstrates the first monocular SLAM solely based on 3D Gaussian Splatting (left), which also supports Stereo/RGB-D inputs (middle/right). # Note
- In an academic paper, please refer to our work as **Gaussian Splatting SLAM** or **MonoGS** for short (this repo's name) to avoid confusion with other works.
- Differential Gaussian Rasteriser with camera pose gradient computation is available [here](https://github.com/rmurai0610/diff-gaussian-rasterization-w-pose.git).
- **[New]** A speed-up version of our code is available in the `dev.speedup` branch. It achieves up to 10fps on the monocular fr3/office sequence while keeping consistent performance (tested on RTX4090/i9-12900K). The code will be merged into the main branch after further refactoring and testing.
# Getting Started
## Installation
```
git clone https://github.com/muskie82/MonoGS.git --recursive
cd MonoGS
```
Setup the environment.
```
conda env create -f environment.yml
conda activate MonoGS
```
Depending on your setup, please change the dependency version of pytorch/cudatoolkit in `environment.yml` by following [this document](https://pytorch.org/get-started/previous-versions/).
Our test setups were:
- Ubuntu 20.04: `pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6`
- Ubuntu 18.04: `pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3`
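If you are not sure which PyTorch/CUDA combination ended up in your environment, a quick check such as the one below can help before moving on. This is an optional sketch that is not part of the original setup instructions; it only uses standard PyTorch APIs.
```python
# Optional sanity check (not part of the MonoGS instructions):
# print the installed PyTorch/CUDA versions and confirm a GPU is visible.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA version PyTorch was built with:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```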
## Quick Demo
```
bash scripts/download_tum.sh
python slam.py --config configs/mono/tum/fr3_office.yaml
```
You will see a GUI window pop up.
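Each run is driven by a YAML file under `configs/`. If you want to inspect (or tweak) the parameters a run will use, one option is to load the config with PyYAML. This is only an illustrative sketch and not part of the MonoGS codebase; the available keys are whatever the repository's own config files define.
```python
# Illustrative only: print the contents of a run configuration.
# The path below is the config used in the Quick Demo above.
import yaml

with open("configs/mono/tum/fr3_office.yaml") as f:
    config = yaml.safe_load(f)

for key, value in config.items():
    print(key, ":", value)
```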
## Downloading Datasets
Running the following scripts will automatically download datasets to the `./datasets` folder.
### TUM-RGBD dataset
```bash
bash scripts/download_tum.sh
```
### Replica dataset
```bash
bash scripts/download_replica.sh
```
### EuRoC MAV dataset
```bash
bash scripts/download_euroc.sh
```
## Run
### Monocular
```bash
python slam.py --config configs/mono/tum/fr3_office.yaml
```
### RGB-D
```bash
python slam.py --config configs/rgbd/tum/fr3_office.yaml
```
```bash
python slam.py --config configs/rgbd/replica/office0.yaml
```
Or run the single-process version:
```bash
python slam.py --config configs/rgbd/replica/office0_sp.yaml
```
### Stereo (experimental)
```bash
python slam.py --config configs/stereo/euroc/mh02.yaml
```
## Live demo with Realsense
First, you'll need to install `pyrealsense2`.
Inside the conda environment, run:
```bash
pip install pyrealsense2
```
Connect the realsense camera to the PC on a **USB-3** port and then run:
```bash
python slam.py --config configs/live/realsense.yaml
```
We tested the method with [Intel Realsense d455](https://www.mouser.co.uk/new/intel/intel-realsense-depth-camera-d455/). We recommend using a similar global shutter camera for robust camera tracking. Please avoid aggressive camera motion, especially before the initial BA is performed. Check out [the first 15 seconds of our YouTube video](https://youtu.be/x604ghp9R_Q?si=S21HgeVTVfNe0BVL) to see how you should move the camera for initialisation. We recommend to use the code in `dev.speed-up` branch for live demo. # Evaluation To evaluate our method, please add `--eval` to the command line argument:
```bash
python slam.py --config configs/mono/tum/fr3_office.yaml --eval
```
This flag will automatically run our system in a headless mode, and log the results including the rendering metrics.
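As an illustration of the kind of rendering metric that gets logged, PSNR between a rendered frame and the ground-truth image can be computed as below. This is a generic sketch using plain PyTorch, not MonoGS's own evaluation code, and it assumes image values scaled to [0, 1].
```python
# Generic PSNR illustration (not MonoGS's evaluation code).
# Both tensors are expected to hold image values in [0, 1].
import torch

def psnr(rendered: torch.Tensor, ground_truth: torch.Tensor) -> float:
    mse = torch.mean((rendered - ground_truth) ** 2)
    # Note: returns infinity if the images are identical (mse == 0).
    return float(10.0 * torch.log10(1.0 / mse))
```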
# Reproducibility
There might be minor differences between the released version and the results in the paper. Please bear in mind that multi-process performance has some randomness due to GPU utilisation.
We run all our experiments on an RTX 4090, and the performance may differ when running with a different GPU.
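If you want to reduce run-to-run variation on your own machine, fixing the random seeds is a common first step. The snippet below is a generic example of typical seed-setting in PyTorch; it is not part of MonoGS, and it does not remove the multi-process/GPU-utilisation randomness mentioned above.
```python
# Generic seeding example (not part of MonoGS); reduces but does not
# eliminate nondeterminism in multi-process GPU runs.
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)
```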
# Acknowledgement
This work incorporates many open-source codebases. We extend our gratitude to the authors of the software.
- [3D Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting)
- [Differential Gaussian Rasterization
](https://github.com/graphdeco-inria/diff-gaussian-rasterization)
- [SIBR_viewers](https://gitlab.inria.fr/sibr/sibr_core)
- [Tiny Gaussian Splatting Viewer](https://github.com/limacv/GaussianSplattingViewer)
- [Open3D](https://github.com/isl-org/Open3D)
- [Point-SLAM](https://github.com/eriksandstroem/Point-SLAM)
# License
MonoGS is released under a **LICENSE.md**. For a list of code dependencies which are not property of the authors of MonoGS, please check **Dependencies.md**.
# Citation
If you found this code/work to be useful in your own research, please consider citing the following:
```bibtex
@inproceedings{Matsuki:Murai:etal:CVPR2024,
title={{G}aussian {S}platting {SLAM}},
author={Hidenobu Matsuki and Riku Murai and Paul H. J. Kelly and Andrew J. Davison},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2024}
}
```;[CVPR'24 Highlight] Gaussian Splatting SLAM;slam,gaussian-splatting,computer-vision,robotics,cvpr2024 | muskie82/MonoGS |
NUS-HPC-AI-Lab/OpenDiT;An Easy, Fast and Memory-Efficient System for DiT Training and Inference [Homepage] | [Discord] | [WeChat] | [Twitter] | [Zhihu] | [Media] Latest News 🔥 [2024/03/20] Propose Dynamic Sequence Parallel (DSP)[ paper ][ doc ], achieves 3x speed for training and 2x speed for inference in OpenSora compared with sota sequence parallelism. [2024/03/18] Support OpenSora : Democratizing Efficient Video Production for All. [2024/02/27] Officially release OpenDiT: An Easy, Fast and Memory-Efficent System for DiT Training and Inference. About OpenDiT is an open-source project that provides a high-performance implementation of Diffusion Transformer (DiT) powered by Colossal-AI, specifically designed to enhance the efficiency of training and inference for DiT applications, including text-to-video generation and text-to-image generation. OpenDiT has been adopted by: OpenSora , MiniSora , SpeeDiT . OpenDiT boasts the performance by the following techniques: Up to 80% speedup and 50% memory reduction on GPU Kernel optimization including FlashAttention, Fused AdaLN, and Fused layernorm kernel. Hybrid parallelism methods including ZeRO, Gemini, and DDP. Also, sharding the ema model further reduces the memory cost. FastSeq: A novel sequence parallelism method Specially designed for DiT-like workloads where the activation size is large but the parameter size is small. Up to 48% communication save for intra-node sequence parallel. Break the memory limitation of a single GPU and reduce the overall training and inference time. Ease of use Huge performance improvement gains with a few line changes Users do not need to know the implementation of distributed training. Complete pipeline of text-to-image and text-to-video generation Researchers and engineers can easily use and adapt our pipeline to real-world applications without modifying the parallel part. Verify the accuracy of OpenDiT with text-to-image training on ImageNet and release checkpoint. Authors: Xuanlei Zhao , Zhongkai Zhao , Ziming Liu , Haotian Zhou , Qianli Ma , Yang You OpenDiT will continue to integrate more open-source DiT models. Stay tuned for upcoming enhancements and additional features! Installation Prerequisites: Python >= 3.10 PyTorch >= 1.13 (We recommend to use a >2.0 version) CUDA >= 11.6 We strongly recommend using Anaconda to create a new environment (Python >= 3.10) to run our examples: shell
conda create -n opendit python=3.10 -y
conda activate opendit Install ColossalAI: shell
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
git checkout adae123df3badfb15d044bd416f0cf29f250bc86
pip install -e . Install OpenDiT: shell
git clone https://github.com/oahzxl/OpenDiT
cd OpenDiT
pip install -e . (Optional but recommended) Install libraries for training & inference speed up (you can run our code without these libraries): ```shell Install Triton for fused adaln kernel pip install triton Install FlashAttention pip install flash-attn Install apex for fused layernorm kernel git clone https://github.com/NVIDIA/apex.git
cd apex
git checkout 741bdf50825a97664db08574981962d66436d16a
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./ --global-option="--cuda_ext" --global-option="--cpp_ext"
``` Usage OpenDiT fully supports the following models, including training and inference, which align with the original methods. Through our novel techniques, we enable these models to run faster and consume less memory. Here's how you can use them: | Model | Source | Function | Usage | Optimize |
| ------ | ------ | ------ | ------ | :------: |
| DiT | https://github.com/facebookresearch/DiT | label-to-image | Usage | ✅ |
| OpenSora | https://github.com/hpcaitech/Open-Sora | text-to-video | Usage | ✅ | Technique Overview DSP [ paper ][ doc ] DSP (Dynamic Sequence Parallelism) is a novel, elegant and super efficient sequence parallelism for OpenSora , Latte and other multi-dimensional transformer architecture. It achieves 3x speed for training and 2x speed for inference in OpenSora compared with sota sequence parallelism ( DeepSpeed Ulysses ). For a 10s (80 frames) of 512x512 video, the inference latency of OpenSora is: | Method | 1xH800 | 8xH800 (DS Ulysses) | 8xH800 (DSP) |
| ------ | ------ | ------ | ------ |
| Latency(s) | 106 | 45 | 22 | See its details and usage here . FastSeq [ doc ] FastSeq is a novel sequence parallelism for large sequences and small-scale parallelism. It focuses on minimizing sequence communication by employing only two communication operators for every transformer layer, and we use an async ring to overlap AllGather communication with qkv computation. See its details and usage here . DiT Reproduction Result We have trained DiT using the original method with OpenDiT to verify our accuracy. We have trained the model from scratch on ImageNet for 80k steps on 8xA100. Here are some results generated by our trained DiT: Our loss also aligns with the results listed in the paper: To reproduce our results, you can follow our instruction . Acknowledgement We extend our gratitude to Zangwei Zheng for providing valuable insights into algorithms and aiding in the development of the video pipeline. Additionally, we acknowledge Shenggan Cheng for his guidance on code optimization and parallelism. Our appreciation also goes to Fuzhao Xue , Shizun Wang , Yuchao Gu , Shenggui Li , and Haofan Wang for their invaluable advice and contributions. This codebase borrows from:
* OpenSora : Democratizing Efficient Video Production for All.
* DiT : Scalable Diffusion Models with Transformers.
* PixArt : An open-source DiT-based text-to-image model.
* Latte : An attempt to efficiently train DiT for video. Contributing If you encounter problems using OpenDiT or have a feature request, feel free to create an issue! We also welcome pull requests from the community. Citation @misc{zhao2024opendit,
author = {Xuanlei Zhao and Zhongkai Zhao and Ziming Liu and Haotian Zhou and Qianli Ma and Yang You},
title = {OpenDiT: An Easy, Fast and Memory-Efficient System for DiT Training and Inference},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/NUS-HPC-AI-Lab/OpenDiT}},
}
@misc{zhao2024dsp,
title={DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers},
author={Xuanlei Zhao and Shenggan Cheng and Zangwei Zheng and Zheming Yang and Ziming Liu and Yang You},
year={2024},
eprint={2403.10266},
archivePrefix={arXiv},
primaryClass={cs.DC}
} Star History;OpenDiT: An Easy, Fast and Memory-Efficient System for DiT Training and Inference;[] | NUS-HPC-AI-Lab/OpenDiT |
linsomniac/spotify_to_ytmusic;Tools for moving from Spotify to YTMusic Overview This is a set of scripts for copying "liked" songs and playlists from Spotify to YTMusic.
It provides both CLI tools and a GUI (implemented by Yoween, formerly called spotify_to_ytmusic_gui ). Thanks Thanks to @RadicalArti and Meet Vora for their generous financial contributions to this
project. Getting Started Install Python (you may already have it) You will need a somewhat recent version of Python: 3.10 and above are known to work,
3.8-3.10 might work. For Windows Download Python for Windows from: https://www.python.org/downloads/windows/ You can also use choco to install it: choco install python For MacOS Run: brew install python
brew install python-tk Install certificates by doing: Macintosh HD > Applications > Python Folder > double click on "Install Certificates.command" file. For Linux You probably already have it installed. See your package manager of choice to
install it. Install spotify2ytmusic (via pip) This package is available on pip, so you can install it using: pip install spotify2ytmusic or: python3 -m pip install spotify2ytmusic (Or) Running From Source (Not recommended) Another option, instead of pip, is to just clone this repo and run directly from the
source. However, you will need the "ytmusicapi" package installed, so you'll probably
want to use pip to install that at the very least. To run directly from source: shell
git clone git@github.com:linsomniac/spotify_to_ytmusic.git
cd spotify_to_ytmusic
pip install ytmusicapi
pip install tk # If using the GUI Then you can prefix the command you want to run with python3 -m spotify2ytmusic , for
example: shell
python3 -m spotify2ytmusic gui
python3 -m spotify2ytmusic list_playlists
python3 -m spotify2ytmusic load_liked
[etc...] Graphical UI If you have installed via PIP, you should be able to run: s2yt_gui Otherwise, if running from source: On Windows: python -m spotify2ytmusic gui Or on Linux: python3 -m spotify2ytmusic gui Login to YTMusic - Tab 0 Click the login button on the first tab OR Run ytmusicapi oauth in a console. OR Run s2yt_ytoauth OR Run python -m spotify2ytmusic ytoauth This will give you a URL, visit that URL and authorize the application. When you are
done with the import you can remove the authorization for this app. This will write a file "oauth.json". Keep this file secret while the app is authorized.
This file includes a logged in session token. ytmusicapi is a dependency of this software and should be installed as part of the "pip
install". Backup Your Spotify Playlists - Tab 1 Click the Backup button, and wait until it finished and switched to the next tab. OR do all the steps below Download spotify-backup . Run spotify-backup.py and it will help you authorize access to your spotify account. Run: python3 spotify-backup.py playlists.json --dump=liked,playlists --format=json This will save your playlists and liked songs into the file "playlists.json". Reverse your playlists - Tab 2 As mentioned below, the original program adds the songs in the 'wrong' order. That's a
feature I don't like, so I created a script to reverse them. It seems to be reliable,
but if you find anything weird, please open an issue. It creates a backup of the
original file just in case anyway. Example: python3 .\reverse_playlist.py ./playlists.json -r Import Your Liked Songs - Tab 3 Click the import button, and wait until it finished and switched to the next tab. It will go through your Spotify liked songs, and like them on YTMusic. It will display
the song from spotify and then the song that it found on YTMusic that it is liking. I've
spot-checked my songs and it seems to be doing a good job of matching YTMusic songs with
Spotify. So far I haven't seen a single failure across a couple hundred songs, but more
esoteric titles it may have issues with. List Your Playlists - Tab 4 Click the list button, and wait until it finished and switched to the next tab. This will list the playlists you have on both Spotify and YTMusic, so you can individually copy them. Copy Your Playlists - Tab 5 You can either copy all playlists, or do a more surgical copy of individual playlists.
Copying all playlists will use the name of the Spotify playlist as the destination playlist name on YTMusic. To copy all the playlists click the copy button, and wait until it finished and switched to the next tab. NOTE : This does not copy the Liked playlist (see above to do that). Copy specific Playlist - Tab 6 In the list output, find the "playlist id" (the first column) of the Spotify playlist and of the YTMusic playlist. Then fill both input fields and click the copy button. The copy playlist will take the name of the YTMusic playlist and will create the
playlist if it does not exist, if you start the YTMusic playlist with a "+": Re-running "copy_playlist" or "load_liked" in the event that it fails should be safe, it
will not duplicate entries on the playlist. Command Line Usage Login to YTMusic ytmusicapi oauth or s2yt_ytoauth or python -m spotify2ytmusic ytoauth This will give you a URL, visit that URL and authorize the application. When you are
done with the import you can remove the authorization for this app. This will write a file "oauth.json". Keep this file secret while the app is authorized.
This file includes a logged in session token. ytmusicapi is a dependency of this software and should be installed as part of the "pip
install". Backup Your Spotify Playlists Download spotify-backup . Run spotify-backup.py and it will help you authorize access to your spotify account. Run: python3 spotify-backup.py playlists.json --dump=liked,playlists --format=json This will save your playlists and liked songs into the file "playlists.json". Import Your Liked Songs Run: s2yt_load_liked It will go through your Spotify liked songs, and like them on YTMusic. It will display
the song from spotify and then the song that it found on YTMusic that it is liking. I've
spot-checked my songs and it seems to be doing a good job of matching YTMusic songs with
Spotify. So far I haven't seen a single failure across a couple thousand songs, but more
esoteric titles it may have issues with. Import Your Liked Albums Run: s2yt_load_liked_albums Spotify stores liked albums outside of the "Liked Songs" playlist. This is the command to
load your liked albums into YTMusic liked songs. List Your Playlists Run s2yt_list_playlists This will list the playlists you have on both Spotify and YTMusic. You will need to
individually copy them. Copy Your Playlists You can either copy all playlists, or do a more surgical copy of individual playlists.
Copying all playlists will use the name of the Spotify playlist as the destination
playlist name on YTMusic. To copy all playlists, run: s2yt_copy_all_playlists NOTE : This does not copy the Liked playlist (see above to do that). In the list output above, find the "playlist id" (the first column) of the Spotify playlist,
and of the YTMusic playlist, and then run: s2yt_copy_playlist <SPOTIFY_PLAYLIST_ID> <YTMUSIC_PLAYLIST_ID> If you need to create a playlist, you can run: s2yt_create_playlist "<PLAYLIST_NAME>" Or the copy playlist can take the name of the YTMusic playlist and will create the
playlist if it does not exist, if you start the YTMusic playlist with a "+": s2yt_copy_playlist <SPOTIFY_PLAYLIST_ID> +<YTMUSIC_PLAYLIST_NAME> For example: s2yt_copy_playlist SPOTIFY_PLAYLIST_ID "+Feeling Like a PUNK" Re-running "copy_playlist" or "load_liked" in the event that it fails should be safe, it
will not duplicate entries on the playlist. Searching for YTMusic Tracks This is mostly for debugging, but there is a command to search for tracks in YTMusic: s2yt_search --artist <ARTIST> --album <ALBUM> <TRACK_NAME> Details About Search Algorithms The function first searches for albums by the given artist name on YTMusic. It then iterates over the first three album results and tries to find a track with
the exact same name as the given track name. If it finds a match, it returns the
track information. If the function can't find the track in the albums, it then searches for songs by the
given track name and artist name. Depending on the yt_search_algo parameter, it performs one of the following actions: If yt_search_algo is 0, it simply returns the first song result. If yt_search_algo is 1, it iterates over the song results and returns the first song
that matches the track name, artist name, and album name exactly. If it can't find a
match, it raises a ValueError. If yt_search_algo is 2, it performs a fuzzy match. It removes everything in brackets
in the song title and checks for a match with the track name, artist name, and album
name. If it can't find a match, it then searches for videos with the track name and
artist name. If it still can't find a match, it raises a ValueError. If the function can't find the track using any of the above methods, it raises a ValueError.
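To make those three modes easier to follow, here is a rough, self-contained sketch of that lookup flow written directly against the ytmusicapi package. It is illustrative only: the `find_track` helper, its parameters, and the exact result-dictionary keys are assumptions made for this example (and the album-name comparison is simplified away), not the project's actual code.

```python
from ytmusicapi import YTMusic

def find_track(yt: YTMusic, track: str, artist: str, algo: int = 0):
    """Hypothetical sketch of the lookup described above, not spotify2ytmusic's real function."""
    # 1. Search the artist's albums and look for an exact title match in the first three hits.
    for album_hit in yt.search(artist, filter="albums")[:3]:
        album = yt.get_album(album_hit["browseId"])
        for candidate in album.get("tracks", []):
            if candidate["title"] == track:
                return candidate

    # 2. Fall back to a song search; behaviour depends on the chosen algorithm.
    songs = yt.search(f"{track} {artist}", filter="songs")
    if algo == 0:
        return songs[0]                       # simply take the first result
    if algo == 1:
        for song in songs:                    # require an exact track and artist match
            if song["title"] == track and song["artists"][0]["name"] == artist:
                return song
        raise ValueError("No exact match found")
    # algo == 2 would strip any "(...)" from titles, compare fuzzily,
    # and finally fall back to a video search before giving up.
    raise ValueError(f"Track not found: {artist} - {track}")
```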
FAQ Does this run on mobile? No, this runs on Linux/Windows/MacOS. I get "No matching distribution found for spotify2ytmusic". This has been reported in Issue #39 and it seems like a mismatch between python versions. Users there, on MacOS, needed
to install a specific version of Python, and then use the matching version of PIP: brew install python@3.10
brew install python-tk@3.10
pip3.10 install spotify2ytmusic How does the lookup algorithm work? Given the Spotify track information, it does a lookup for the album by the same artist
on YTMusic, then looks at the first 3 hits looking for a track with exactly the same
name. In the event that it can't find that exact track, it then does a search of songs
for the track name by the same artist and simply returns the first hit. The idea is that finding the album and artist and then looking for the exact track match
will be more likely to be accurate than searching for the song and artist and relying on
the YTMusic algorithm to figure things out, especially for short tracks that might
have many contradictory hits like "Survival by Yes". My copy is failing with repeated "ERROR: (Retrying) Server returned HTTP 400: Bad
Request". Try running with "--track-sleep=3" argument to do a 3 second sleep between tracks. This
will take much longer, but may succeed where faster rates have failed. License Creative Commons Zero v1.0 Universal spotify-backup.py licensed under MIT License.
See https://github.com/caseychu/spotify-backup for more information.;Copy playlists and liked music from Spotify to YTMusic;[] | linsomniac/spotify_to_ytmusic |
twostraws/Vortex;Vortex is a powerful, high-performance particle system library for SwiftUI, allowing you to create beautiful effects such as fire, rain, smoke, and snow in only a few lines of code. Vortex comes with a range of built-in effects, such as fireworks, magic, confetti, and more, but you can also create completely custom effects that suit your needs. This framework is compatible with iOS, macOS, tvOS, watchOS, and visionOS. Installing Vortex uses Swift Package Manager, so you should use Xcode to add a package dependency for https://github.com/twostraws/Vortex . Once that completes, import Vortex into your Swift code wherever needed: swift
import Vortex In the Assets directory of this repository you'll find three example particle images you can use, but you're able to use a variety of SwiftUI views and shapes rather than just images. See it in action This repository contains a cross-platform sample project demonstrating all the presets being used. The sample project is built using SwiftUI and requires iOS 17, macOS 14, or visionOS 1. Basic use Rendering a Vortex particle system takes two steps: Creating an instance of VortexSystem , configured for how you want your particles to behave. This must be given a list of tag names of the particles you want to render. Adding a VortexView to your SwiftUI view hierarchy, passing in the particle system to render, and also all the views that are used for particles, tagged using the same names from step 1. There are lots of built-in particle system designs, such as rain: swift
VortexView(.rain) {
Circle()
.fill(.white)
.frame(width: 32)
.tag("circle")
} Fireworks: swift
VortexView(.fireworks) {
Circle()
.fill(.white)
.blendMode(.plusLighter)
.frame(width: 32)
.tag("circle")
} And fire: swift
VortexView(.fire) {
Circle()
.fill(.white)
.blendMode(.plusLighter)
.blur(radius: 3)
.frame(width: 32)
.tag("circle")
} [!Note]
Each preset is designed to look for one or more tags; please check their documentation below for the correct tags to provide. You can also create completely custom effects, like this: ```swift
struct ContentView: View {
var body: some View {
VortexView(createSnow()) {
Circle()
.fill(.white)
.blur(radius: 5)
.frame(width: 32)
.tag("circle")
}
} func createSnow() -> VortexSystem {
let system = VortexSystem(tags: ["circle"])
system.position = [0.5, 0]
system.speed = 0.5
system.speedVariation = 0.25
system.lifespan = 3
system.shape = .box(width: 1, height: 0)
system.angle = .degrees(180)
system.angleRange = .degrees(20)
system.size = 0.25
system.sizeVariation = 0.5
return system
} }
``` [!Note] VortexView does not copy the particle system you provide unless you specifically ask for it using yourSystem.makeUniqueCopy() . This allows you to create a particle system once and re-use it in multiple places without losing its state. Programmatic particle control Although many particle systems emit particles constantly, it's not required – you can instead create particles that burst on demand, e.g. a confetti cannon that fires when the user presses a button. This follows a similar approach used in SwiftUI, such as with ScrollView and ScrollViewReader : wrap your VortexView in a VortexViewReader , which passes you a VortexProxy object that is able to manipulate the first particle system it finds. For example, this uses the built-in .confetti effect, then uses the Vortex proxy object to trigger a particle burst on demand: ```swift
VortexViewReader { proxy in
VortexView(.confetti) {
Rectangle()
.fill(.white)
.frame(width: 16, height: 16)
.tag("square") Circle()
.fill(.white)
.frame(width: 16)
.tag("circle")
}
Button("Burst", action: proxy.burst) }
``` You can also use the proxy's attractTo() method to make particles move towards or away from a specific point, specified in screen coordinates. The exact behavior depends on the value you assign to the attractionStrength property of your particle system: positive values move towards your attraction point, whereas negative values move away. [!Tip]
Call attractTo() with nil as its parameter to clear the attraction point. Secondary systems One of the more advanced Vortex features is the ability to create secondary particle systems – for each particle in one system to create a new particle system. This enables creation of multi-stage effects, such as fireworks: one particle launches upwards, setting off sparks as it flies, then exploding into color when it dies. [!Important]
When creating particle systems with secondary systems inside, both the primary and secondary system can have their own set of tags. However, you must provide all tags from all systems when creating your ParticleView . Creating custom particle systems The initializer for VortexSystem takes a wide range of configuration options to control how your particle systems behave. All but one of these has a sensible default value, allowing you to get started quickly and adjust things on the fly. Details (Click to expand) The `VortexSystem` initializer parameters are:
- `tags` (`[String]`, *required*) should be the names of one or more views you're passing into a `VortexView` to render this particle system. This string array might only be *some* of the views you're passing in – you might have a secondary system that uses different tags, for example.
- `secondarySystems` (`[VortexSystem]`, defaults to an empty array) should contain all the secondary particle systems that should be attached to this primary emitter.
- `spawnOccasion` (`SpawnOccasion`, defaults to `.onBirth`) determines when this secondary system should be created. Ignored if this is your primary particle system.
- `position` (`SIMD2 `, defaults to `[0.5, 0.5]`) determines the center position of this particle system.
- `shape` (`Shape`, defaults to `.point`) determines the bounds of where particles are emitted.
- `birthRate` (`Double`, defaults to 100) determines how many particles are created every second.
- `emissionLimit` (`Int?`, defaults to `nil`) determines how many total particles this system should create before it is spent.
- `emissionDuration` (`Double`, defaults to 1) determines how long this particle system should emit for before pausing. Does nothing if `idleDuration` is set to 0, because there is no pause between emissions.
- `idleDuration` (`Double`, defaults to 0) determines how much time should elapsed between emission bursts.
- `burstCount` (`Int`, defaults to 100) determines how many particles should be emitted when you call `burst()` on the `VortexProxy` for this particle system.
- `burstCountVariation` (`Int`, defaults to 0) determines how much variation to allow in bursts, +/- the base `burstCount` value.
- `lifespan` (`TimeInterval`, defaults to 1) determines how many seconds particles should live for before being destroyed.
- `lifeSpanVariation` (`TimeInterval`, defaults to 0) determines how much variation to allow in particle lifespan, +/- the base `lifespan` value.
- `speed` (`Double`, defaults to 1) determines how fast particles should be launched. A speed of 1 should allow a particle to move from one of the screen to another in 1 second.
- `speedVariation` (`Double`, defaults to 0) determines how much variation to allow in particle speed, +/- the base `speed` value.
- `angle` (`Angle`, defaults to `.zero`) determines the direction particles should be launched, where 0 is directly up.
- `angleRange` (`Angle`, defaults to `.zero`) determines how much variation to allow in particle launch direction, +/- the base `angle` value.
- `acceleration` (`SIMD2 `, defaults to `[0, 0]`) determines how much to adjust particle speed over time. Positive X values make particles move to the right as if wind were blowing, and positive Y values make particles fall downwards as if affected by gravity.
- `attractionCenter` (`SIMD2 ?`, defaults to `nil`) makes particles move towards or away from a particular location. This should be specified in screen coordinates.
- `attractionStrength` (`Double`, defaults to 0) determines how quickly to move towards or away from the point specified in `attractionCenter`.
- `dampingFactor` (`Double`, defaults to 0) determines how quickly particles should lose momentum over time.
- `angularSpeed` (`SIMD3 `, defaults to `[0, 0, 0]`) determines how quickly particles should spin in X, Y, and Z axes. Note: watchOS supports only Z rotation.
- `angularSpeedVariation` (`SIMD3 `, defaults to `[0, 0, 0]`) determines how much variation to allow in particle rotation speed, +/- the base `angularSpeed` value.
- `colors` (`ColorMode`, defaults to `.single(.white)`) determines how particles should be colored over time.
- `size` (`Double`, defaults to 1) determines how big particles should be compared to their source view, where 1 is 100% the original size.
- `sizeVariation` (`Double`, defaults to 0) determines how much variation to allow in initial particle size, +/- the base `size` value.
- `sizeMultiplierAtDeath` (`Double`, defaults to 1) determines how much bigger or smaller particles should be by the time they are destroyed. A value of 1 means the size won't change, whereas a value of 0.5 means particles will be half whatever their initial size was.
- `stretchFactor` (`Double`, defaults to 1) determines whether particles should be stretched based on their movement speed. A value of 1 means no stretch is applied.
Most of those are built-in types, but two deserve extra explanation.
First, `Shape` allows you to emit particles from a range of shapes: a single point, a straight line, a circle, and more. For example, this emits particles in a horizontal line across the available space:
.box(width: 1, height: 0)
And this creates particles in an ellipse half the size of the available space:
.ellipse(radius: 0.5)
Second, `ColorMode` gives you fine-grained control over how colors work with Vortex. The default value for new particle system is `.single(.white)`, which means all particles are created white. However, you can create particles in a range of static colors like this:
.random(.red, .white, .blue)
You can also create color ramps, where particles change their colors as they age. For example, this makes particles start white, then turn red, then fade out:
.ramp(.white, .red, .clear)
For maximum control, you can use *random ramps*, where each particle system picks a different ramp for particles to use as they age. For example, this makes some particles start red then fade out, and others start blue then fade out:
.randomRamp([.red, .clear], [.blue, .clear])
Because Vortex uses these color modes to dynamically recolor your particles, it's a good idea to specify `.fill(.white)` when using SwiftUI's native shapes such as `Rectangle` and `Circle` to ensure the particles can be recolored correctly. Built-in presets Vortex provides a selection of built-in presets to create common effects, but also to act as starting points for your own creations. Details (Click to expand) ### Confetti
The `.confetti` preset creates a confetti effect where views shoot out when a burst happens. This means using a `VortexViewReader` to gain access to the Vortex proxy, like this:
```swift
VortexViewReader { proxy in
VortexView(.confetti) {
Rectangle()
.fill(.white)
.frame(width: 16, height: 16)
.tag("square")
Circle()
.fill(.white)
.frame(width: 16)
.tag("circle")
}
Button("Burst", action: proxy.burst)
}
```
### Fire
The `.fire` preset creates a flame effect. This works better when your particles have a soft edge, and use a `.plusLighter` blend mode, like this:
```swift
VortexView(.fire) {
Circle()
.fill(.white)
.frame(width: 32)
.blur(radius: 3)
.blendMode(.plusLighter)
.tag("circle")
}
```
### Fireflies
The `.fireflies` preset creates glowing yellow dots that zoom up and fade out. This works better when your particles have a soft edge, like this:
```swift
VortexView(.fireflies) {
Circle()
.fill(.white)
.frame(width: 32)
.blur(radius: 3)
.blendMode(.plusLighter)
.tag("circle")
}
```
### Fireworks
The `.fireworks` preset creates a three-stage particle effect to simulate exploding fireworks. Each firework is a particle, and also launches new "spark" particles as it flies upwards. When the firework particle is destroyed, it creates an explosion effect in a range of colors.
```swift
VortexView(.fireworks) {
Circle()
.fill(.white)
.frame(width: 32)
.blur(radius: 5)
.blendMode(.plusLighter)
.tag("circle")
}
```
### Magic
The `.magic` preset creates a simple ring of particles that fly outwards as they fade out. This works best using the "sparkle" image contained in the Assets folder of this repository, but you can use any other image or shape you prefer.
```swift
VortexView(.magic) {
Image(.sparkle)
.blendMode(.plusLighter)
.tag("sparkle")
}
```
### Rain
The `.rain` preset creates a rainfall system by stretching your view based on the rain speed:
```swift
VortexView(.rain) {
Circle()
.fill(.white)
.frame(width: 32)
.tag("circle")
}
```
### Smoke
The `.smoke` preset creates a dark gray to black smoke effect. This works best when your views are a little larger, and have soft edges:
```swift
VortexView(.smoke) {
Circle()
.fill(.white)
.frame(width: 64)
.blur(radius: 10)
.tag("circle")
}
```
### Snow
The `.snow` preset creates a falling snow effect. This works best when your views have soft edges, like this:
```swift
VortexView(.snow) {
Circle()
.fill(.white)
.frame(width: 24)
.blur(radius: 5)
.tag("circle")
}
```
### Spark
The `.spark` preset creates an intermittent spark effect, where sparks fly out for a short time, then pause, then fly out again, etc.
```swift
VortexView(.spark) {
Circle()
.fill(.white)
.frame(width: 16)
.tag("circle")
}
```
### Splash
The `.splash` preset contains raindrop splashes, as if rain were hitting the ground. This works best in combination with the `.rain` preset, like this:
```swift
ZStack {
VortexView(.rain) {
Circle()
.fill(.white)
.frame(width: 32)
.tag("circle")
}
VortexView(.splash) {
Circle()
.fill(.white)
.frame(width: 16, height: 16)
.tag("circle")
}
}
``` Contributing I welcome all contributions, whether that's adding new particle system presets, fixing up existing code, adding comments, or improving this README – everyone is welcome! You must comment your code thoroughly, using documentation comments or regular comments as applicable. All code must be licensed under the MIT license so it can benefit the most people. Please add your code to the Vortex Sandbox app, so folks can try it out easily. License MIT License. Copyright (c) 2024 Paul Hudson. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Vortex was made by Paul Hudson , who writes free Swift tutorials over at Hacking with Swift . It’s available under the MIT license, which permits commercial use, modification, distribution, and private use. A Hacking with Swift Project;High-performance particle effects for SwiftUI.;[] | twostraws/Vortex |
latentcat/uvcanvas;UVCanvas An open source React.js component library for beautifully shaded canvas, brought to you by Latent Cat . Documentation Visit uvcanvas.com/docs to view the documentation. License Licensed under the MIT license .;An React.js component library for beautifully shaded canvas https://uvcanvas.com;component-library,graphics,reactjs,shaders,typescript,webgl,nextjs,react | latentcat/uvcanvas |
Marker-Inc-Korea/AutoRAG;AutoRAG RAG AutoML tool that automatically finds an optimal RAG pipeline for your data. Explore our 📖 Document !! Plus, join our 📞 Discord Community. 💪 Colab Tutorial Step 1: Basic of AutoRAG | Optimizing your RAG pipeline Step 2: Create evaluation dataset 🚨 YouTube Tutorial https://github.com/Marker-Inc-Korea/AutoRAG/assets/96727832/c0d23896-40c0-479f-a17b-aa2ec3183a26 Muted by default, enable sound for voice-over You can see on YouTube ☎️ FaQ 🛣️ Support plans & Roadmap 💻 Hardware Specs ⭐ Running AutoRAG 🍯 Tips/Tricks ☎️ TroubleShooting 😁 Introduction There are many RAG pipelines and modules out there,
but you don’t know what pipeline is great for “your own data” and "your own use-case."
Making and evaluating all RAG modules is very time-consuming and hard to do.
But without it, you will never know which RAG pipeline is the best for your own use-case. AutoRAG is a tool for finding optimal RAG pipeline for “your data.”
You can evaluate various RAG modules automatically with your own evaluation data,
and find the best RAG pipeline for your own use-case. AutoRAG supports a simple way to evaluate many RAG module combinations.
Try now and find the best RAG pipeline for your own use-case. 📌AutoRAG Structure ❗Supporting Nodes & modules You can check all our supporting Nodes & modules
at here ❗Supporting Evaluation Metrics You can check all our supporting Evaluation Metrics
at here Retrieval Metrics Retrieval Token Metrics Generation Metrics ⚡ Quick Install We recommend using Python version 3.9 or higher for AutoRAG. bash
pip install AutoRAG ⚡ QuickStart 1. Prepare your evaluation data For evaluation, you need to prepare just three files. QA dataset file (qa.parquet) Corpus dataset file (corpus.parquet) Config yaml file (config.yaml) There is a template for your evaluation data for using AutoRAG. Check out how to make evaluation data
at here . Check out the evaluation data rule
at here . Plus, you can get example datasets for testing AutoRAG at here .
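Before you start a trial, it can help to sanity-check those two parquet files. This is an optional, minimal snippet using plain pandas (it is not part of AutoRAG itself, and the paths are placeholders):

```python
import pandas as pd

qa = pd.read_parquet("your/path/to/qa.parquet")
corpus = pd.read_parquet("your/path/to/corpus.parquet")

# Print the columns and row counts so you can confirm the files follow the documented format.
print("qa columns:", qa.columns.tolist(), "| rows:", len(qa))
print("corpus columns:", corpus.columns.tolist(), "| rows:", len(corpus))
```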
2. Evaluate your data to various RAG modules You can get various config yaml files at here . We highly recommend using pre-made config yaml files for starters. If you want to make your own config yaml files, check out the Config yaml file section. You can evaluate your RAG pipeline with just a few lines of code. ```python
from autorag.evaluator import Evaluator evaluator = Evaluator(qa_data_path='your/path/to/qa.parquet', corpus_data_path='your/path/to/corpus.parquet')
evaluator.start_trial('your/path/to/config.yaml')
``` or you can use command line interface bash
autorag evaluate --config your/path/to/default_config.yaml --qa_data_path your/path/to/qa.parquet --corpus_data_path your/path/to/corpus.parquet Once it is done, you can see several files and folders created at your current directory.
In the trial folder, which is named with a number (like 0),
you can check the summary.csv file that summarizes the evaluation results and the best RAG pipeline for your data. For more details, you can check out what the folder structure looks like
at here . 3. Use a found optimal RAG pipeline You can use a found optimal RAG pipeline right away.
It needs just a few lines of code, and you are ready to go! First, you need to build a pipeline yaml file from your evaluated trial folder.
You can find the trial folder in your current directory.
Just look for a folder named '0' or another number. ```python
from autorag.deploy import Runner runner = Runner.from_trial_folder('your/path/to/trial_folder')
runner.run('your question')
``` Or, you can run this pipeline as an API server.
You can use Python code or a CLI command.
Check out API endpoint at here . ```python
from autorag.deploy import Runner runner = Runner.from_trial_folder('your/path/to/trial_folder')
runner.run_api_server()
``` You can run the API server with a CLI command. bash
autorag run_api --config_path your/path/to/pipeline.yaml --host 0.0.0.0 --port 8000 4. Run Dashboard You can run dashboard to easily see the result. bash
autorag dashboard --trial_dir /your/path/to/trial_dir sample dashboard: 5. Share your RAG pipeline You can use your RAG pipeline from extracted pipeline yaml file.
This extracted pipeline is great for sharing your RAG pipeline with others. You must run this in the project folder, which contains your data in the data folder and the ingested corpus for retrieval in the resources
folder. ```python
from autorag.deploy import extract_best_config pipeline_dict = extract_best_config(trial_path='your/path/to/trial_folder', output_path='your/path/to/pipeline.yaml')
``` 6. Deploy your optimal RAG pipeline (for testing) 6-1. Run as a CLI You can use a found optimal RAG pipeline right away with extracted yaml file. ```python
from autorag.deploy import Runner runner = Runner.from_yaml('your/path/to/pipeline.yaml')
runner.run('your question')
``` 6-2. Run as an API server You can run this pipeline as an API server. Check out API endpoint at here . ```python
from autorag.deploy import Runner runner = Runner.from_yaml('your/path/to/pipeline.yaml')
runner.run_api_server()
``` bash
autorag run_api --config_path your/path/to/pipeline.yaml --host 0.0.0.0 --port 8000 6-3. Run as a Web Interface you can run this pipeline as a web interface. Check out web interface at here . bash
autorag run_web --trial_path your/path/to/trial_path sample web interface: Star History Contributors ⭐ Thanks go to these wonderful people: Contribution We are developing AutoRAG as open-source. So this project welcomes contributions and suggestions. Feel free to contribute to this project. Plus, check out our detailed documentation at here .;RAG AutoML Tool - Find optimal RAG pipeline for your own data.;[] | Marker-Inc-Korea/AutoRAG |
sveltecult/franken-ui;Franken UI HTML-first, framework-agnostic, beautifully designed components that you can truly copy and paste into your site. Accessible. Customizable. Open Source. Documentation Visit https://www.franken-ui.dev to view the documentation. Looking for releases? Head over to the dedicated releases repository. Is it down? Probably not, but you can always clone the documentation and run it locally. Credits Credits and attributions are now on their dedicated page . Disclaimer I'm a completely independent developer and have no affiliation with Svelte, @shadcn, UIkit and YOOtheme, or any other entities. Follow me on Mastodon or X, formerly known as Twitter ™ for updates. Support If Franken UI has been beneficial to you in any way, I have setup Ko-Fi and Liberapay . What your donation will support: Continuous Documentation Improvement: Enhancing and expanding documentation to ensure clarity and ease of use. More components: Developing more components to broaden the range of options available to developers. Building "Blocks": Creating common page templates like Authentication pages, admin dashboards, etc., for easy integration into projects. Project Maintenance: Ensuring the project remains up-to-date and compatible with evolving technologies. Educational Content: Producing tutorials and screencasts for greater adoption and understanding of Franken UI. License Licensed under the MIT license .;Franken UI is an HTML-first, open-source library of UI components that works as a standalone or as a Tailwind CSS plugin. It is compatible with UIkit 3. The design is influenced by shadcn/ui.;component,css,design-system,framework,shadcn,shadcn-ui,tailwind,tailwindcss,ui,uikit | sveltecult/franken-ui |
tk04/Marker;Marker An open-source, user-friendly UI for viewing and editing markdown files Download Navigate to the release page and select the installer that matches your platform. Using Homebrew bash
$ brew install --cask tk04/tap/marker AUR for Arch Linux Using paru bash
$ paru -S marker-md Using yay bash
$ yay -S marker-md Building Locally To build Marker locally, clone this repo and run the following commands (make sure to have Rust already installed on your system): sh
$ pnpm install && npx tauri build Contributing If you feel that Marker is missing something, feel free to open a PR. Contributions are welcome and highly appreciated.;A Desktop App for Easily Viewing and Editing Markdown Files;tauri,typescript,markdown | tk04/Marker |