| text | repo |
|---|---|
FrigadeHQ/remote-storage;remoteStorage remoteStorage is a simple library that combines the localStorage API with a remote server to persist data across browsers and devices. Website · Live Demo · Source · Docs Why Storing data in localStorage is useful, but it's not a good solution when you store data that needs to be shared across multiple devices or browsers. For instance, let's say you want to show a welcome modal to all new users that sign up for your product. If you use localStorage to track if a user has already seen this modal, your users will continue to get the experience over and over again every time they switch devices or browsers. That's where remoteStorage comes in. Using the same API as localStorage, remoteStorage allows you to easily read and write data on the fly while maintaining state across browsers and devices in order to provide a better user experience. Features ✨ Simple API (same as localStorage) 🔐 Secure (built-in JWT support) 👌 Works with all Javascript frameworks 📦 Lightweight (~1 kB minified) 🔓 Open source server and client (MIT license) 🆓 Free hosted community server Quick start Install the library using your favorite package manager: bash
npm install remote-storage Or simply include it in your HTML: ```html ``` Import the library and use it like you would localStorage: ```javascript
import { RemoteStorage } from 'remote-storage' const remoteStorage = new RemoteStorage({ userId: "my-user-id" }) const hasSeenNewFeature = await remoteStorage.getItem('hasSeenNewFeature') if (!hasSeenNewFeature) {
await remoteStorage.setItem('hasSeenNewFeature', true)
// Highlight your new and exciting feature!
}
``` That's it! Documentation User IDs remoteStorage uses user IDs to identify users. A user ID is a string that uniquely identifies a user. It can be anything you want, but we recommend using a non-iterable UUID to prevent users from guessing other user IDs and accessing their data. The User ID is set when you create a new instance of remoteStorage: javascript
const remoteStorage = new RemoteStorage({
userId: '123e4567-e89b-12d3-a456-426614174000'
}) If you don't provide a user ID, remoteStorage will generate a random UUID which will change every time the user visits your site. This is useful for testing, but defeats the purpose of remoteStorage since the data will not persist across devices or browsers. Instance IDs remoteStorage uses instance IDs to identify the application instance that is making the request. An instance ID is a string that uniquely identifies an application instance. Typically you would use the same instance ID for all requests from the same application instance. The instance ID is set when you create a new instance of remoteStorage: javascript
const remoteStorage = new RemoteStorage({
userId: '123e4567-e89b-12d3-a456-426614174000',
instanceId: 'my-cool-app'
}) Server We offer a free hosted community server at https://api.remote.storage (the default behavior if no serverAddress is provided). This hosted server should not be used for production apps, but it's great for testing and prototyping. To use a different server, simply pass the serverAddress option when creating a new instance of remoteStorage: javascript
const remoteStorage = new RemoteStorage({
serverAddress: 'https://api.remote.storage',
userId: '123e4567-e89b-12d3-a456-426614174000',
instanceId: 'my-cool-app'
}) The server can be spun up using Docker in a few minutes. See the server documentation for more information. FAQ What data should I store in remoteStorage? remoteStorage should only be used for non-sensitive data. We recommend using it for things like user preferences, settings, and other non-sensitive data. Due to the nature of the public API, it's not a good fit for storing sensitive data like passwords or PII. How is remoteStorage different from localStorage? localStorage is a browser API that allows you to store data in the browser. The data is stored locally on the user's device and is not shared across devices or browsers. remoteStorage is a library that combines the localStorage API with a remote server to persist data across browsers and devices. How do I authenticate requests to remoteStorage? remoteStorage can be used without any authentication, but we highly recommend using JSON Web Tokens (JWT) to authenticate requests to the server. This can be done by setting the JWT_SECRET environment variable in .env to your JWT secret for the server.
See the server documentation for more information. Contributing Pull requests are always welcome. Note that if you are going to propose drastic changes, make sure to open an issue for discussion first. This will ensure that your PR will be accepted before you start working on it. For any existing issues that do not yet have an assigned contributor, feel free to comment on the issue if you would like to work on it. We will assign the issue to you if we think you are a good fit. Making changes: implement your bug fix or feature, write tests to cover it and make sure all tests are passing. Ensure your commit leverages Semantic Commit Messages and that your commit message follows the Conventional Commits format.
Then open a pull request to the main branch.;remoteStorage is a simple library that combines the localStorage API with a remote server to persist data across sessions, devices, and browsers. It works as a simple key value database store and backend with support for React, Next.js, Vue, Node, or any Javascript stack;caching,database,javascript,keyvalue,keyvalue-db,localstorage,web,backend,local-storage,nextjs | FrigadeHQ/remote-storage |
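As a minimal illustrative sketch for the remoteStorage library above (assuming only the `getItem`/`setItem` calls and constructor options documented in that README; the local-cache helper, its key prefix, and its error handling are illustrative assumptions, not part of the library):

```javascript
import { RemoteStorage } from 'remote-storage'

const remoteStorage = new RemoteStorage({ userId: '123e4567-e89b-12d3-a456-426614174000' })

// Hypothetical helper: mirror remote reads/writes into localStorage so a flaky
// network still returns the last value seen on this device.
async function getPreference(key) {
  try {
    const value = await remoteStorage.getItem(key)               // documented API
    localStorage.setItem(`cache:${key}`, JSON.stringify(value))  // assumed cache layer
    return value
  } catch (err) {
    const cached = localStorage.getItem(`cache:${key}`)          // assumed fallback path
    return cached !== null ? JSON.parse(cached) : null
  }
}

async function setPreference(key, value) {
  localStorage.setItem(`cache:${key}`, JSON.stringify(value))
  await remoteStorage.setItem(key, value)                        // documented API
}
```

Whether `getItem` rejects on network failure is an assumption here; check the library's documentation for its actual error behavior.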
microsoft/Phi-3CookBook;Welcome to Microsoft Phi-3 Cookbook This is a manual on how to use the Microsoft Phi-3 family. Phi-3 is a family of open AI models developed by Microsoft. Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks. Phi-3-mini, a 3.8B language model, is available on Microsoft Azure AI Studio , Hugging Face , and Ollama . Phi-3 models significantly outperform language models of the same and larger sizes on key benchmarks (see benchmark numbers below, higher is better). Phi-3-mini does better than models twice its size, and Phi-3-small and Phi-3-medium outperform much larger models, including GPT-3.5T. All reported numbers are produced with the same pipeline to ensure that the numbers are comparable. As a result, these numbers may differ from other published numbers due to slight differences in the evaluation methodology. More details on benchmarks are provided in our technical paper. Phi-3-small with only 7B parameters beats GPT-3.5T across a variety of language, reasoning, coding and math benchmarks. Phi-3-medium with 14B parameters continues the trend and outperforms Gemini 1.0 Pro. Phi-3-vision with just 4.2B parameters continues that trend and outperforms larger models such as Claude-3 Haiku and Gemini 1.0 Pro V across general visual reasoning tasks, OCR, table and chart understanding tasks. Note: Phi-3 models do not perform as well on factual knowledge benchmarks (such as TriviaQA) as the smaller model size results in less capacity to retain facts. We are introducing Phi Silica, which is built from the Phi series of models and is designed specifically for the NPUs in Copilot+ PCs. Windows is the first platform to have a state-of-the-art small language model (SLM) custom built for the NPU and shipping inbox. The Phi Silica API, along with the OCR, Studio Effects, Live Captions, and Recall User Activity APIs, will be available in Windows Copilot Library in June. More APIs, like Vector Embedding, RAG API, and Text Summarization, will be coming later. Azure AI Studio You can learn how to use Microsoft Phi-3 and how to build E2E solutions on your different hardware devices. To experience Phi-3 for yourself, start by playing with the model and customizing Phi-3 for your scenarios using the Azure AI Studio, Azure AI Model Catalog Playground Each model has a dedicated playground to test the model Azure AI Playground . 
Hugging Face You can also find the model on the Hugging Face Playground Hugging Chat playground Contents This cookbook includes: Microsoft Phi-3 Cookbook Introduction Setting up your environment (✅) Welcome to the Phi-3 Family (✅) Understanding Key Technologies (✅) AI Safety for Phi-3 Models (✅) Phi-3 Hardware Support (✅) Phi-3 Models & Availability across platforms (✅) Quick Start Using Phi-3 in Hugging Face (✅) Using Phi-3 in Azure AI Studio (✅) Using Phi-3 in Ollama (✅) Using Phi-3 in LM Studio (✅) Using Phi-3 in AI Toolkit VSCode (✅) Inference Phi-3 Inference Phi-3 in iOS (✅) Inference Phi-3 in Jetson (✅) Inference Phi-3 in AI PC (✅) Inference Phi-3 with Apple MLX Framework (✅) Inference Phi-3 in Local Server (✅) Inference Phi-3 in Remote Server using AI Toolkit (✅) Inference Phi-3-Vision in Local (✅) Fine-tuning Phi-3 Downloading & Creating Sample Data Set (✅) Fine-tuning Scenarios (✅) Fine-tuning vs RAG (✅) Fine-tuning Let Phi-3 become an industry expert (✅) Fine-tuning Phi-3 with AI Toolkit for VS Code (✅) Fine-tuning Phi-3 with Azure Machine Learning Service (✅) Fine-tuning Phi-3 with LoRA (✅) Fine-tuning Phi-3 with QLoRA (✅) Fine-tuning Phi-3 with Azure AI Studio (✅) Fine-tuning Phi-3 with Azure ML CLI/SDK (✅) Fine-tuning with Microsoft Olive (✅) Fine-tuning Phi-3-vision with Weights and Biases (✅) Fine-tuning Phi-3 with Apple MLX Framework (✅) Evaluation Phi-3 Introduction to Responsible AI (✅) Introduction to Promptflow (✅) Introduction to Azure AI Studio for evaluation (✅) E2E Samples for Phi-3-mini Introduction to End to End Samples (✅) Prepare your industry data (✅) Use Microsoft Olive to architect your projects (✅) Inference Your Fine-tuning ONNX Runtime Model (✅) Multi Model - Interactive Phi-3-mini and OpenAI Whisper (✅) MLFlow - Building a wrapper and using Phi-3 with MLFlow (✅) E2E Samples for Phi-3-vision Phi3-vision-Image text to text (✅) Phi-3-Vision-ONNX (✅) Phi-3-vision CLIP Embedding (✅) Labs and workshops samples Phi-3 C# .NET Labs (✅) Build your own Visual Studio Code GitHub Copilot Chat with Microsoft Phi-3 Family (✅) Phi-3 ONNX Tutorial (✅) Phi-3-vision ONNX Tutorial (✅) Run the Phi-3 models with the ONNX Runtime generate() API (✅) Phi-3 ONNX Multi Model LLM Chat UI, This is a chat demo (✅) C# Hello Phi-3 ONNX example Phi-3 (✅) C# API Phi-3 ONNX example to support Phi3-Vision (✅) Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct .
For more information see the Code of Conduct FAQ or
contact opencode@microsoft.com with any additional questions or comments. Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines .
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos are subject to those third-party's policies.;This is a Phi-3 book for getting started with Phi-3. Phi-3, a family of open AI models developed by Microsoft. Phi-3 models are the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and next size up across a variety of language, reasoning, coding, and math benchmarks. ;phi3,phi3-testing,phi3-vision | microsoft/Phi-3CookBook |
Dillettant/Athena;STEM ESSAY Readme 🚀 🔥 For more in-depth information and resources, please visit our official website . What is STEM ESSAY? 🤖 STEM ESSAY is a cutting-edge tool designed to simplify the process of creating structured outlines for STEM essays. It uses advanced algorithms to break down complex topics into coherent, logically organized outlines, making essay writing in Science, Technology, Engineering, and Mathematics fields more accessible and less time-consuming. Our goal is to help users, from students to researchers, transform their ideas into high-quality essays with ease. STEM ESSAY supports a wide range of STEM disciplines, ensuring your essays are structured, insightful, and ready to engage your audience. It's not just an aid; it's your companion in mastering STEM writing! 🌟 Problems STEM ESSAY Tries to Tackle 🛠️ Complexity Simplification: Breaks down intricate STEM topics into manageable outlines. Time Efficiency: Reduces the hours spent on structuring essays. Clarity and Coherence: Enhances the readability of STEM essays for a wider audience. Idea Organization: Helps organize thoughts and research findings systematically. Writing Barriers: Lowers the entry barrier for effective STEM communication. Install Requirements Linux or macOS Python 3.10+ openai pyautogen a. Clone the project. ```shell
git clone https://github.com/Dillettant/Athena cd Athena
``` b. Create a conda virtual environment and activate it. ```shell
conda create -n athena python=3.10 -y conda activate athena
``` c. Install dependencies. shell
pip install -r requirements.txt Usage You should obtain an API key from OpenAI. Once you have the key, set it as an environment variable named OPENAI_API_KEY. Set OpenAI API Key : Replace $YOUR_OPENAI_API_KEY with your
actual OpenAI API key. On macOS or Linux systems, bash
export OPENAI_API_KEY=$YOUR_OPENAI_API_KEY On Windows systems, powershell
setx OPENAI_API_KEY $YOUR_OPENAI_API_KEY For example: sh
export OPENAI_API_KEY='sk...DAHY' You can then run the code using the following command:
```sh
cd src/ python test.py
``` The first step in the automated essay generation process is to generate a topic. Then you will get the result sh
[DEBUG] Topics:
1. "Analyzing the Impact and Efficiency of Different Voting Systems through Mathematical Modelling"
2. "A Comprehensive Study about the Probability and Statistical Implications in Casino Games"
3. "The Application and Effectiveness of Cryptography in Digital Security: A Mathematical Perspective"
Select one of the topics. With the topic selected, the next step is to generate an outline.
```sh
Admin (to chat_manager): Write an IB essay "Evaluating the Efficiency and Impact of Cryptographic Algorithms in Cybersecurity: A Mathematical Analysis" with 4000 words. subject_expert (to chat_manager): [plan]
Title: Evaluating the Efficiency and Impact of Cryptographic Algorithms in Cybersecurity: A Mathematical Analysis Introduction/Rationale (Word Count: 300) Purpose: To explore the significance of cryptographic algorithms in the digital age where cybersecurity threats are omnipresent, and to understand and evaluate their mathematical complexities and efficacies. Personal Motivation: Recount a scenario wherein the breach of personal data led to a growing concern over cybersecurity and a fascination with the cryptographic measures employed for protection. This intrigue fueled a deeper look into the underlying mathematics of these cryptographic systems. Research Objective: To quantitatively analyze and assess the efficiency and impact of various cryptographic algorithms, with a focus on their computational complexity, security level, and practical performance in cybersecurity applications. Background Information (Word Count: 500)
...
```
The final step is the actual writing of the essay based on the generated outline. The following is a partial display of the generated paragraphs: ```sh
Admin (to chat_manager): Write the following paragraph:
1. Introduction/Rationale
Purpose: To explore the significance of cryptographic algorithms in the digital age where cybersecurity threats are omnipresent, and to understand and evaluate their mathematical complexities and efficacies.
Personal Motivation: Recount a scenario wherein the breach of personal data led to a growing concern over cybersecurity and a fascination with the cryptographic measures employed for protection. This intrigue fueled a deeper look into the underlying mathematics of these cryptographic systems.
Research Objective: To quantitatively analyze and assess the efficiency and impact of various cryptographic algorithms, with a focus on their computational complexity, security level, and practical performance in cybersecurity applications.
total words:300
...
In the vibrant realm of casino games, understanding the dance of chance is paramount. At its core lies probability theory, a branch of mathematics that navigates through the potential outcomes in games of chance. It all begins with a well-defined set of possibilities, known as the sample space, and the events or outcomes that may occur within it. The probability of an event is simply the count of favorable outcomes divided by the total number of outcomes - a formula elegantly captured by ( P(E) = \frac{n(E)}{n(S)} ). Random variables come into play when outcomes are numerical, such as the dots facing up after a dice toss. These variables allow us to calculate predicted results or 'expected values'. The expected value—what one might anticipate in the long run—is found by weighting each possible outcome by its corresponding probability and summing them up: ( E(X) = \sum (x_i \cdot P(x_i)) ). Another vital tool is variance, which captures how much the outcomes spread out from the expected value. It's described mathematically by ( Var(X) = E((X - E(X))^2) ), offering a gauge of a game's risk level. The square root of variance, the standard deviation, is especially handy as it measures risk in the original units of the data. Statistical independence is the notion that one event doesn't influence another, essential when dealing with sequential actions, such as separate draws from a deck of cards. Independence is central to correctly calculating combined event probabilities, a frequent aspect of gaming strategies. The binomial distribution allows us to predict outcomes for a specific number of successes in a series of independent trials, such as betting on red in roulette several times. It's a model that exemplifies the predictability embedded within supposedly random events. Probability distributions chart all the potential outcomes for a variable and their likelihoods, summing up to 1. These can be discrete or continuous, painting a picture of what to expect from a game on any given play. Breaking down these foundational concepts, such as random variables, expected value, variance, statistical independence, and binomial distribution, and applying probability to sample spaces in games of chance, we can interpret the erratic nature of games into more measured elements. This treatment not only deepens our strategic understanding but creates a bridge from abstract math to the tangible decisions made at the tables and slot machines.
... ``` The following shows the images generated by the essay: The following represents a selection of essay topics that can be generated. If you're interested in using our project, you can follow the example provided in | Topic | Notebook Link |
|-------|---------------|
| Understanding the Role of Probability Theory and Statistics in Predictive Modeling for Climate Change Scenarios| |
| The Mathematical Exploration of Population Growth: An investigation into different types of mathematical models predicting population growth over time | |
| Predicting Stock Market Trends Using Stochastic Processes and Probability Theory| | Stem Essay Use Case: Modeling of Zombie Apocalypse Demo Contributing This project is open to contributions and ideas. To contribute, you'll need to accept a Contributor License Agreement (CLA), which confirms your authority to offer your contribution and grants us the permission to utilize it. Upon initiating a pull request, an automated CLA system will assess if your contribution requires a CLA and update the pull request with the necessary information (such as a status check or a comment). Just follow the steps outlined by the automated system. This process is a one-time requirement for all contributions across repositories that employ our CLA. Contributors This project exists thanks to all the people who contribute. Contact Us License MIT;Structure your STEM essay in several minutes with Generative AI.;[] | Dillettant/Athena |
mattmassicotte/ConcurrencyRecipes;ConcurrencyRecipes Practical solutions to problems with Swift Concurrency Swift Concurrency can be really hard to use. I thought it could be handy to document and share solutions and hazards you might face along the way. I am absolutely not saying this is comprehensive, or that the solutions presented are great. I'm learning too. Contributions are very welcome, especially for problems! Table of Contents Creating an Async Context Using Protocols Isolation Structured Concurrency SwiftUI Using Libraries not Designed for Concurrency Interoperability Hazards Quick definitions for the hazards referenced throughout the recipes: Timing: More than one option is available, but can affect when events actually occur. Ordering: Unstructured tasks mean ordering is up to the caller. Think carefully about dependencies, multiple invocations, and cancellation. Lack of Caller Control: definitions always control actor context. This is different from other threading models, and you cannot alter definitions you do not control. Sendability: types that cross isolation domains must be sendable. This isn't always easy, and for types you do not control, not possible. Blocking: Swift concurrency uses a fixed-size thread pool. Tying up background threads can lead to lag and even deadlock. Availability: Concurrency is evolving rapidly, and some APIs require the latest SDK. Async virality: Making a function async affects all its callsites. This can result in a large number of changes, each of which could, itself, affect subsequent callsites. Actor Reentrancy: More than one thread can enter an Actor's async methods. An actor's state can change across awaits. Contributing and Collaboration I'd love to hear from you! Get in touch via Mastodon , an issue, or a pull request. I prefer collaboration, and would love to find ways to work together if you have a similar project. By participating in this project you agree to abide by the Contributor Code of Conduct .;Practical solutions to problems with Swift Concurrency;[] | mattmassicotte/ConcurrencyRecipes |
polyfillpolyfill/polyfill-library;Polyfill-library · NodeJS module to create polyfill bundles tailored to individual user-agents Install bash
npm install polyfill-library --save Usage ```javascript
const polyfillLibrary = require('polyfill-library'); const polyfillBundle = polyfillLibrary.getPolyfillString({
uaString: 'Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)',
minify: true,
features: {
'es6': { flags: ['gated'] }
}
}).then(function(bundleString) {
console.log(bundleString);
});
``` API polyfillLibrary.listAllPolyfills() Get a list of all the polyfills which exist within the collection of polyfill sources. Returns a Promise which resolves with an array of all the polyfills within the collection. polyfillLibrary.describePolyfill(featureName) Get the metadata for a specific polyfill within the collection of polyfill sources. @param {String} featureName - The name of a polyfill whose metadata should be returned. Returns a Promise which resolves with the metadata or with undefined if no metadata exists for the polyfill. polyfillLibrary.getOptions(opts = {}) Create an options object for use with getPolyfills or getPolyfillString . @param {object} opts - Valid keys are uaString, minify, unknown, excludes, rum and features. @param {Boolean} [opts.minify=true] - Whether to return the minified or raw implementation of the polyfills. @param {'ignore'|'polyfill'} [opts.unknown='polyfill'] - Whether to return all polyfills or no polyfills if the user-agent is unknown or unsupported. @param {Object} [opts.features={}] - Which features should be returned if the user-agent does not support them natively. @param {Array<String>} [opts.excludes=[]] - Which features should be excluded from the returned object. @param {String} [opts.uaString=''] - The user-agent string to check each feature against. @param {Boolean} [opts.rum=false] - Whether to include a script that reports anonymous usage data in the polyfill bundle. Returns an object which has merged opts with the defaults option values. polyfillLibrary.getPolyfills(opts) Given a set of features that should be polyfilled in 'opts.features' (with flags i.e. {<featurename>: {flags:Set[<flaglist>]}, ...} ), determine which have a configuration valid for the given opts.uaString, and return a promise of set of canonical (unaliased) features (with flags) and polyfills. @param {object} opts - Valid keys are uaString, minify, unknown, excludes, rum and features. @param {Boolean} [opts.minify=true] - Whether to return the minified or raw implementation of the polyfills. @param {'ignore'|'polyfill'} [opts.unknown='polyfill'] - Whether to return all polyfills or no polyfills if the user-agent is unknown or unsupported. @param {Object} [opts.features={}] - Which features should be returned if the user-agent does not support them natively. @param {Array<String>} [opts.excludes=[]] - Which features should be excluded from the returned object. @param {String} [opts.uaString=''] - The user-agent string to check each feature against. @param {Boolean} [opts.rum=false] - Whether to include a script that reports anonymous usage data in the polyfill bundle. Returns a Promise which resolves to an Object which contains the canonicalised feature definitions filtered for UA. polyfillLibrary.getPolyfillString(opts) Create a polyfill bundle. @param {object} opts - Valid keys are uaString, minify, unknown, excludes, rum and features. @param {Boolean} [opts.minify=true] - Whether to return the minified or raw implementation of the polyfills. @param {'ignore'|'polyfill'} [opts.unknown='polyfill'] - Whether to return all polyfills or no polyfills if the user-agent is unknown or unsupported. @param {Object} [opts.features={}] - Which features should be returned if the user-agent does not support them natively. @param {Array<String>} [opts.excludes=[]] - Which features should be excluded from the returned object. @param {String} [opts.uaString=''] - The user-agent string to check each feature against. 
@param {Boolean} [opts.rum=false] - Whether to include a script that reports anonymous usage data in the polyfill bundle. @param {Boolean} [opts.stream=false] - Whether to return a stream or a string of the polyfill bundle. Returns a polyfill bundle as either a utf-8 ReadStream or as a Promise of a utf-8 String. AWS Lambda To use this package in an AWS Lambda function, you need to include the distribution Polyfills located in ./node_modules/polyfill-library/polyfills/__dist in the root directory of your Lambda. In AWS, Lambdas are executed in the /var/task/... directory. Therefore, during execution, the directory where the polyfills will be located will be /var/task/polyfill-library/__dist . Example of a script to copy files The following snippet will allow us to copy the polyfills to our already compiled Lambda. To do this, we will first install the necessary dependencies. bash
yarn add -D make-dir fs-extra Once the dependencies are installed, we will create the file with the script at /scripts/polyfills-serverless.mjs and replace YOUR_BUNDLED_LAMBDA_DIRECTORY with the directory that contains our packaged Lambda. In the example, we will use the directory ./.serverless_nextjs/api-lambda , which is the one used when using Serverless Next.js. ```js
import { copySync } from 'fs-extra/esm';
import makeDir from 'make-dir'; const DIR_POLYFILLS = './node_modules/polyfill-library/polyfills/__dist';
// const DIR_SERVERLESS = 'YOUR_BUNDLED_LAMBDA_DIRECTORY/polyfills/__dist';
const DIR_SERVERLESS = './.serverless_nextjs/api-lambda/polyfills/__dist'; const paths = await makeDir(DIR_SERVERLESS);
console.log(`The directory ${paths} is created successfully.`); try {
console.log('Copying polyfills to serverless directory...');
copySync(DIR_POLYFILLS, DIR_SERVERLESS, { overwrite: false });
console.log('Polyfills copied successfully!');
} catch (err) {
console.error(err);
}
``` To execute the script, you will need to run the following command: bash
node ./scripts/polyfills-serverless.mjs Contributing Development of polyfill-library happens on GitHub. Read below to learn how you can take part in contributing to Polyfill.io. Contributing Guide Read our contributing guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes. To test on BrowserStack you will need to have a BrowserStack account. We test pull-requests using BrowserStack. ```
npm run test-all-polyfills # Run the tests for all polyfills using BrowserStack
npm run test-polyfills -- --features=Array.from # Run the tests for Array.from
npm run test-polyfills -- --features=Array.from --browserstack # Run the tests for Array.from using BrowserStack
``` License Polyfill-library is MIT licensed .;NodeJS module to create polyfill bundles tailored to individual user-agents.;[] | polyfillpolyfill/polyfill-library |
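As a hedged sketch of the polyfill-library API documented above (`listAllPolyfills`, `describePolyfill`, `getOptions`, and `getPolyfills`); the console output and the exact shape of the returned feature object are illustrative, not taken from the README:

```javascript
const polyfillLibrary = require('polyfill-library');

// Sketch: inspect the collection, then resolve the feature set for one user-agent.
async function inspect(uaString) {
  const names = await polyfillLibrary.listAllPolyfills();            // array of polyfill names
  console.log(names.length + ' polyfills in the collection');

  const meta = await polyfillLibrary.describePolyfill('Array.from'); // metadata or undefined
  console.log(meta ? 'Array.from is in the collection' : 'Array.from not found');

  // getOptions merges partial options with the documented defaults.
  const options = polyfillLibrary.getOptions({
    uaString: uaString,
    features: { 'es6': { flags: ['gated'] } }
  });

  // getPolyfills resolves to the canonical features this user-agent actually needs.
  const targeted = await polyfillLibrary.getPolyfills(options);
  console.log(Object.keys(targeted));
}

inspect('Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 6.0; en-US)').catch(console.error);
```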
Picsart-AI-Research/StreamingT2V;StreamingT2V This repository is the official implementation of StreamingT2V . StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text Roberto Henschel *, Levon Khachatryan *, Daniil Hayrapetyan *, Hayk Poghosyan , Vahram Tadevosyan , Zhangyang Wang , Shant Navasardyan , Humphrey Shi StreamingT2V is an advanced autoregressive technique that enables the creation of long videos featuring rich motion dynamics without any stagnation. It ensures temporal consistency throughout the video, aligns closely with the descriptive text, and maintains high frame-level image quality. Our demonstrations include successful examples of videos up to 1200 frames, spanning 2 minutes, and can be extended for even longer durations. Importantly, the effectiveness of StreamingT2V is not limited by the specific Text2Video model used, indicating that improvements in base models could yield even higher-quality videos. News [03/21/2024] Paper StreamingT2V released! [04/05/2024] Code and model released! [04/06/2024] The first version of our huggingface demo released! Setup Clone this repository and enter: shell
git clone https://github.com/Picsart-AI-Research/StreamingT2V.git
cd StreamingT2V/ 2. Install requirements using Python 3.10 and CUDA >= 11.6 shell
conda create -n st2v python=3.10
conda activate st2v
pip install -r requirements.txt 3. (Optional) Install FFmpeg if it's missing on your system shell
conda install conda-forge::ffmpeg 4. Download the weights from HF and put them into the t2v_enhanced/checkpoints directory. Inference For Text-to-Video shell
cd t2v_enhanced
python inference.py --prompt="A cat running on the street" To use other base models add the --base_model=AnimateDiff argument. Use python inference.py --help for more options. For Image-to-Video shell
cd t2v_enhanced
python inference.py --image=../__assets__/demo/fish.jpg --base_model=SVD Inference Time ModelscopeT2V as a Base Model | Number of Frames | Inference Time for Faster Preview (256x256) | Inference Time for Final Result (720x720) |
| ---------------- | :-------------------------------------------:| :-------------------------------------------:|
| 24 frames | 40 seconds | 165 seconds |
| 56 frames | 75 seconds | 360 seconds |
| 80 frames | 110 seconds | 525 seconds |
| 240 frames | 340 seconds | 1610 seconds (~27 min) |
| 600 frames | 860 seconds | 5128 seconds (~85 min) |
| 1200 frames | 1710 seconds (~28 min) | 10225 seconds (~170 min) | AnimateDiff as a Base Model | Number of Frames | Inference Time for Faster Preview (256x256) | Inference Time for Final Result (720x720) |
| ---------------- | :-------------------------------------------:| :-------------------------------------------:|
| 24 frames | 50 seconds | 180 seconds |
| 56 frames | 85 seconds | 370 seconds |
| 80 frames | 120 seconds | 535 seconds |
| 240 frames | 350 seconds | 1620 seconds (~27 min) |
| 600 frames | 870 seconds | 5138 seconds (~85 min) |
| 1200 frames | 1720 seconds (~28 min) | 10235 seconds (~170 min) | SVD as a Base Model | Number of Frames | Inference Time for Faster Preview (256x256) | Inference Time for Final Result (720x720) |
| ---------------- | :-------------------------------------------:| :-------------------------------------------:|
| 24 frames | 80 seconds | 210 seconds |
| 56 frames | 115 seconds | 400 seconds |
| 80 frames | 150 seconds | 565 seconds |
| 240 frames | 380 seconds | 1650 seconds (~27 min) |
| 600 frames | 900 seconds | 5168 seconds (~86 min) |
| 1200 frames | 1750 seconds (~29 min) | 10265 seconds (~171 min) | All measurements were conducted using the NVIDIA A100 (80 GB) GPU. Randomized blending is employed when the frame count surpasses 80. For Randomized blending, the values for chunk_size and overlap_size are set to 112 and 32, respectively. Gradio The same functionality is also available as a gradio demo shell
cd t2v_enhanced
python gradio_demo.py Results Detailed results can be found in the Project page . License Our code is published under the CreativeML Open RAIL-M license. We include ModelscopeT2V , AnimateDiff , SVD in the demo for research purposes and to demonstrate the flexibility of the StreamingT2V framework to include different T2V/I2V models. For commercial usage of such components, please refer to their original license. BibTeX If you use our work in your research, please cite our publication: @article{henschel2024streamingt2v,
title={StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text},
author={Henschel, Roberto and Khachatryan, Levon and Hayrapetyan, Daniil and Poghosyan, Hayk and Tadevosyan, Vahram and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
journal={arXiv preprint arXiv:2403.14773},
year={2024}
};StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text;long-video-generation | Picsart-AI-Research/StreamingT2V |
polyfillpolyfill/fetch;window.fetch polyfill The fetch() function is a Promise-based mechanism for programmatically making
web requests in the browser. This project is a polyfill that implements a subset
of the standard Fetch specification , enough to make fetch a viable
replacement for most uses of XMLHttpRequest in traditional web applications. Table of Contents Read this first Installation Usage Importing HTML JSON Response metadata Post form Post JSON File upload Caveats Handling HTTP error statuses Sending cookies Receiving cookies Redirect modes Obtaining the Response URL Aborting requests Browser Support Read this first If you believe you found a bug with how fetch behaves in your browser,
please don't open an issue in this repository unless you are testing in
an old version of a browser that doesn't support window.fetch natively.
Make sure you read this entire readme, especially the Caveats section, as there's probably a known work-around for an issue you've found.
This project is a polyfill , and since all modern browsers now implement the fetch function natively, no code from this project actually takes any
effect there. See Browser support for detailed
information. If you have trouble making a request to another domain (a different
subdomain or port number also constitutes another domain), please familiarize
yourself with all the intricacies and limitations of CORS requests.
Because CORS requires participation of the server by implementing specific
HTTP response headers, it is often nontrivial to set up or debug. CORS is
exclusively handled by the browser's internal mechanisms which this polyfill
cannot influence. This project doesn't work under Node.js environments . It's meant for web
browsers only. You should ensure that your application doesn't try to package
and run this on the server. If you have an idea for a new feature of fetch , submit your feature
requests to the specification's repository .
We only add features and APIs that are part of the Fetch specification . Installation npm install whatwg-fetch --save You will also need a Promise polyfill for older browsers .
We recommend taylorhakes/promise-polyfill for its small size and Promises/A+ compatibility. Usage Importing Importing will automatically polyfill window.fetch and related APIs: ```javascript
import 'whatwg-fetch' window.fetch(...)
``` If for some reason you need to access the polyfill implementation, it is
available via exports: ```javascript
import {fetch as fetchPolyfill} from 'whatwg-fetch' window.fetch(...) // use native browser version
fetchPolyfill(...) // use polyfill implementation
``` This approach can be used to, for example, use abort
functionality in browsers that implement a native but
outdated version of fetch that doesn't support aborting. For use with webpack, add this package in the entry configuration option
before your application entry point: javascript
entry: ['whatwg-fetch', ...] HTML javascript
fetch('/users.html')
.then(function(response) {
return response.text()
}).then(function(body) {
document.body.innerHTML = body
}) JSON javascript
fetch('/users.json')
.then(function(response) {
return response.json()
}).then(function(json) {
console.log('parsed json', json)
}).catch(function(ex) {
console.log('parsing failed', ex)
}) Response metadata javascript
fetch('/users.json').then(function(response) {
console.log(response.headers.get('Content-Type'))
console.log(response.headers.get('Date'))
console.log(response.status)
console.log(response.statusText)
}) Post form ```javascript
var form = document.querySelector('form') fetch('/users', {
method: 'POST',
body: new FormData(form)
})
``` Post JSON javascript
fetch('/users', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
name: 'Hubot',
login: 'hubot',
})
}) File upload ```javascript
var input = document.querySelector('input[type="file"]') var data = new FormData()
data.append('file', input.files[0])
data.append('user', 'hubot') fetch('/avatars', {
method: 'POST',
body: data
})
``` Caveats The Promise returned from fetch() won't reject on HTTP error status even if the response is an HTTP 404 or 500. Instead, it will resolve normally,
and it will only reject on network failure or if anything prevented the
request from completing. For maximum browser compatibility when it comes to sending & receiving
cookies, always supply the credentials: 'same-origin' option instead of
relying on the default. See Sending cookies . Not all Fetch standard options are supported in this polyfill. For instance, redirect and cache directives are ignored. keepalive is not supported because it would involve making a synchronous XHR, which is something this project is not willing to do. See issue #700 for more information. Handling HTTP error statuses To have fetch Promise reject on HTTP error statuses, i.e. on any non-2xx
status, define a custom response handler: ```javascript
function checkStatus(response) {
if (response.status >= 200 && response.status < 300) {
return response
} else {
var error = new Error(response.statusText)
error.response = response
throw error
}
} function parseJSON(response) {
return response.json()
} fetch('/users')
.then(checkStatus)
.then(parseJSON)
.then(function(data) {
console.log('request succeeded with JSON response', data)
}).catch(function(error) {
console.log('request failed', error)
})
``` Sending cookies For CORS requests, use credentials: 'include' to allow sending credentials
to other domains: javascript
fetch('https://example.com:1234/users', {
credentials: 'include'
}) The default value for credentials is "same-origin". The default for credentials wasn't always the same, though. The following
versions of browsers implemented an older version of the fetch specification
where the default was "omit": Firefox 39-60 Chrome 42-67 Safari 10.1-11.1.2 If you target these browsers, it's advisable to always specify credentials:
'same-origin' explicitly with all fetch requests instead of relying on the
default: javascript
fetch('/users', {
credentials: 'same-origin'
}) Note: due to limitations of
XMLHttpRequest ,
using credentials: 'omit' is not respected for same domains in browsers where
this polyfill is active. Cookies will always be sent to same domains in older
browsers. Receiving cookies As with XMLHttpRequest, the Set-Cookie response header returned from the
server is a forbidden header name and therefore can't be programmatically
read with response.headers.get() . Instead, it's the browser's responsibility
to handle new cookies being set (if applicable to the current URL). Unless they
are HTTP-only, new cookies will be available through document.cookie . Redirect modes The Fetch specification defines these values for the redirect option : "follow"
(the default), "error", and "manual". Due to limitations of XMLHttpRequest, only the "follow" mode is available in
browsers where this polyfill is active. Obtaining the Response URL Due to limitations of XMLHttpRequest, the response.url value might not be
reliable after HTTP redirects on older browsers. The solution is to configure the server to set the response HTTP header X-Request-URL to the current URL after any redirect that might have happened.
It should be safe to set it unconditionally. ``` ruby Ruby on Rails controller example response.headers['X-Request-URL'] = request.url
``` This server workaround is necessary if you need reliable response.url in
Firefox < 32, Chrome < 37, Safari, or IE. Aborting requests This polyfill supports the abortable fetch API .
However, aborting a fetch requires use of two additional DOM APIs: AbortController and AbortSignal .
Typically, browsers that do not support fetch will also not support
AbortController or AbortSignal. Consequently, you will need to include an additional polyfill for these APIs to abort fetches: ```js
import 'yet-another-abortcontroller-polyfill'
import {fetch} from 'whatwg-fetch' // use native browser implementation if it supports aborting
const abortableFetch = ('signal' in new Request('')) ? window.fetch : fetch const controller = new AbortController() abortableFetch('/avatars', {
signal: controller.signal
}).catch(function(ex) {
if (ex.name === 'AbortError') {
console.log('request aborted')
}
}) // some time later...
controller.abort()
``` Browser Support Chrome Firefox Safari 6.1+ Internet Explorer 10+ Note: modern browsers such as Chrome, Firefox, Microsoft Edge, and Safari contain native
implementations of window.fetch , therefore the code from this polyfill doesn't
have any effect on those browsers. If you believe you've encountered an error
with how window.fetch is implemented in any of these browsers, you should file
an issue with that browser vendor instead of this project.;A window.fetch JavaScript polyfill.;[] | polyfillpolyfill/fetch |
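Building on the `X-Request-URL` server workaround described in the fetch readme above, a hedged client-side sketch (the `responseUrl` helper is illustrative and not part of the polyfill; it only uses the documented `response.url` and `response.headers.get`):

```javascript
import 'whatwg-fetch'

// Prefer response.url, fall back to the X-Request-URL header that the server
// workaround sets after redirects on older browsers.
function responseUrl(response) {
  return response.url || response.headers.get('X-Request-URL') || null
}

fetch('/users', { credentials: 'same-origin' })
  .then(function (response) {
    console.log('final URL:', responseUrl(response))
    return response.json()
  })
  .then(function (users) {
    console.log('loaded users', users)
  })
  .catch(function (error) {
    console.log('request failed', error)
  })
```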
gabrielchua/RAGxplorer;RAGxplorer 🦙🦺 RAGxplorer is a tool to build Retrieval Augmented Generation (RAG) visualisations. Quick Start ⚡ Installation bash
pip install ragxplorer Usage python
from ragxplorer import RAGxplorer
client = RAGxplorer(embedding_model="thenlper/gte-large")
client.load_pdf("presentation.pdf", verbose=True)
client.visualize_query("What are the top revenue drivers for Microsoft?") A quickstart Jupyter notebook tutorial on how to use ragxplorer can be found at https://github.com/gabrielchua/RAGxplorer/blob/main/tutorials/quickstart.ipynb Or as a Colab notebook: Streamlit Demo 🔎 The demo can be found here: https://ragxplorer.streamlit.app/ View the project here Contributing 👋 Contributions to RAGxplorer are welcome. Please read our contributing guidelines (WIP) for details. License 👀 This project is licensed under the MIT license - see the LICENSE for details. Acknowledgments 💙 DeepLearning.AI and Chroma for the inspiration and code labs in their Advanced Retrieval course. The Streamlit community for the support and resources.;Open-source tool to visualise your RAG 🔮;llm,python,rag,streamlit,visualization,interactive | gabrielchua/RAGxplorer |
kwuking/TimeMixer;(ICLR'24) TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting ![](https://img.shields.io/github/last-commit/KimMeen/Time-LLM?color=green)
![](https://img.shields.io/github/stars/kwuking/TimeMixer?color=yellow)
![](https://img.shields.io/github/forks/kwuking/TimeMixer?color=lightblue)
![](https://img.shields.io/badge/PRs-Welcome-green) **[ Paper Page ]**
**[ 中文解读1 ]**
**[ 中文解读2 ]**
**[ 中文解读3 ]** 🙋 Please let us know if you find a mistake or have any suggestions! 🌟 If you find this resource helpful, please consider starring this repository and citing our research: @inproceedings{wang2023timemixer,
title={TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting},
author={Wang, Shiyu and Wu, Haixu and Shi, Xiaoming and Hu, Tengge and Luo, Huakun and Ma, Lintao and Zhang, James Y and ZHOU, JUN},
booktitle={International Conference on Learning Representations (ICLR)},
year={2024}
} Updates 🚩 News (2024.05) TimeMixer has now released a 28-page full paper version on arXiv . Furthermore, we have provided a brief video to facilitate your understanding of our work. 🚩 News (2024.05) TimeMixer currently supports using future temporal features for prediction . This feature has been well-received by the community members. You can now decide whether to enable this feature by using the parameter use_future_temporal_feature. 🚩 News (2024.03) TimeMixer has been included in [Time-Series-Library] and achieve the consistent 🏆 state-of-the-art in long-term time and short-term series forecasting. 🚩 News (2024.03) TimeMixer has added a time-series decomposition method based on DFT, as well as downsampling operation based on 1D convolution. 🚩 News (2024.02) TimeMixer has been accepted as ICLR 2024 Poster . Introduction 🏆 TimeMixer , as a fully MLP-based architecture, taking full advantage of disentangled multiscale time series, is proposed to achieve consistent SOTA performances in both long and short-term forecasting tasks with favorable run-time efficiency . 🌟 Observation 1: History Extraction Given that seasonal and trend components exhibit significantly different characteristics in time series, and different scales of the time series reflect different properties, with seasonal characteristics being more pronounced at a fine-grained micro scale and trend characteristics being more pronounced at a coarse macro scale, it is therefore necessary to decouple seasonal and trend components at different scales. 🌟 Observation 2: Future Prediction Integrating forecasts from different scales to obtain the final prediction results, different scales exhibit complementary predictive capabilities. Overall Architecture TimeMixer as a fully MLP-based architecture with Past-Decomposable-Mixing (PDM) and Future-Multipredictor-Mixing (FMM) blocks to take full advantage of disentangled multiscale series in both past extraction and future prediction phases. Past Decomposable Mixing we propose the Past-Decomposable-Mixing (PDM) block to mix the decomposed seasonal and trend components in multiple scales separately. Empowered by seasonal and trend mixing, PDM progressively aggregates the detailed seasonal information from fine to coarse and dive into the macroscopic trend information with prior knowledge from coarser scales, eventually achieving the multiscale mixing in past information extraction. Future Multipredictor Mixing Note that Future Multipredictor Mixing (FMM) is an ensemble of multiple predictors, where different predictors are based on past information from different scales, enabling FMM to integrate complementary forecasting capabilities of mixed multiscale series. Get Started Install requirements. pip install -r requirements.txt Download data. You can download the all datasets from Google Driver , Baidu Driver or Kaggle Datasets . All the datasets are well pre-processed and can be used easily. Train the model. We provide the experiment scripts of all benchmarks under the folder ./scripts . You can reproduce the experiment results by: bash
bash ./scripts/long_term_forecast/ETT_script/TimeMixer_ETTm1.sh
bash ./scripts/long_term_forecast/ECL_script/TimeMixer.sh
bash ./scripts/long_term_forecast/Traffic_script/TimeMixer.sh
bash ./scripts/long_term_forecast/Solar_script/TimeMixer.sh
bash ./scripts/long_term_forecast/Weather_script/TimeMixer.sh
bash ./scripts/short_term_forecast/M4/TimeMixer.sh
bash ./scripts/short_term_forecast/PEMS/TimeMixer.sh Main Results We conduct extensive experiments to evaluate the performance and efficiency of TimeMixer, covering long-term and short-term forecasting, including 18 real-world benchmarks and 15 baselines. 🏆 TimeMixer achieves consistent state-of-the-art performance in all benchmarks , covering a large variety of series with different frequencies, variate numbers and real-world scenarios. Long-term Forecasting To ensure model comparison fairness, experiments were performed with standardized parameters, aligning input lengths, batch sizes, and training epochs. Additionally, given that results in various studies often stem from hyperparameter optimization, we include outcomes from comprehensive parameter searches. Short-term Forecasting: Multivariate data Short-term Forecasting: Univariate data Model Ablations To verify the effectiveness of each component of TimeMixer, we provide a detailed ablation study on every possible design in both Past-Decomposable-Mixing and Future-Multipredictor-Mixing blocks on all 18 experiment benchmarks (see our paper for full results 😊). Model Efficiency We compare the running memory and time against the latest state-of-the-art models under the training phase, where TimeMixer consistently demonstrates favorable efficiency, in terms of both GPU memory and running time, for various series lengths (ranging from 192 to 3072), in addition to the consistent state-of-the-art performances for both long-term and short-term forecasting tasks. It is noteworthy that TimeMixer, as a deep model, demonstrates results close to those of full-linear models in terms of efficiency. This makes TimeMixer promising in a wide range of scenarios that require high model efficiency. Further Reading 1, Time-LLM: Time Series Forecasting by Reprogramming Large Language Models , in ICLR 2024. [GitHub Repo] Authors : Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen bibtex
@inproceedings{jin2023time,
title={{Time-LLM}: Time series forecasting by reprogramming large language models},
author={Jin, Ming and Wang, Shiyu and Ma, Lintao and Chu, Zhixuan and Zhang, James Y and Shi, Xiaoming and Chen, Pin-Yu and Liang, Yuxuan and Li, Yuan-Fang and Pan, Shirui and Wen, Qingsong},
booktitle={International Conference on Learning Representations (ICLR)},
year={2024}
} 2, iTransformer: Inverted Transformers Are Effective for Time Series Forecasting , in ICLR 2024 Spotlight. [GitHub Repo] Authors : Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, Mingsheng Long bibtex
@article{liu2023itransformer,
title={iTransformer: Inverted Transformers Are Effective for Time Series Forecasting},
author={Liu, Yong and Hu, Tengge and Zhang, Haoran and Wu, Haixu and Wang, Shiyu and Ma, Lintao and Long, Mingsheng},
journal={arXiv preprint arXiv:2310.06625},
year={2023}
} Acknowledgement We appreciate the following GitHub repos a lot for their valuable code and efforts.
- Time-Series-Library (https://github.com/thuml/Time-Series-Library)
- Autoformer (https://github.com/thuml/Autoformer) Contact If you have any questions or want to use the code, feel free to contact:
* Shiyu Wang (kwuking@163.com or weiming.wsy@antgroup.com)
* Haixu Wu (wuhx23@mails.tsinghua.edu.cn);[ICLR 2024] Official implementation of "TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting";deep-learning,machine-learning,time-series,time-series-forecasting | kwuking/TimeMixer |
ZiqiaoPeng/SyncTalk;SyncTalk: The Devil😈 is in the Synchronization for Talking Head Synthesis [CVPR 2024] The official repository of the paper SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis Paper | Project Page | Code Colab notebook demonstration: A short demo video can be found here . The proposed SyncTalk synthesizes synchronized talking head videos, employing tri-plane hash representations to maintain subject identity. It can generate synchronized lip movements, facial expressions, and stable head poses, and restores hair details to create high-resolution videos. 🔥🔥🔥 News [2023-11-30] Update arXiv paper. [2024-03-04] The code and pre-trained model are released. [2024-03-22] The Google Colab notebook is released. [2024-04-14] Add Windows support. [2024-04-28] The preprocessing code is released. [2024-04-29] Fix bugs: audio encoder, blendshape capture, and face tracker. [2024-05-03] Try replacing NeRF with Gaussian Splatting. Code: GS-SyncTalk [2024-05-24] Introduce torso training to repair double chin. For Windows Thanks to okgpt , we have launched a Windows integration package, you can download SyncTalk-Windows.zip and unzip it, double-click inference.bat to run the demo. Download link: Hugging Face || Baidu Netdisk For Linux Installation Tested on Ubuntu 18.04, Pytorch 1.12.1 and CUDA 11.3. bash
git clone https://github.com/ZiqiaoPeng/SyncTalk.git
cd SyncTalk Install dependency bash
conda create -n synctalk python==3.8.8
conda activate synctalk
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1121/download.html
pip install tensorflow-gpu==2.8.1
pip install ./freqencoder
pip install ./shencoder
pip install ./gridencoder
pip install ./raymarching If you encounter problems installing PyTorch3D, you can use the following command to install it: bash
python ./scripts/install_pytorch3d.py Data Preparation Pre-trained model Please place the May.zip in the data folder, the trial_may.zip in the model folder, and then unzip them. [New] Process your video Prepare face-parsing model. bash
wget https://github.com/YudongGuo/AD-NeRF/blob/master/data_util/face_parsing/79999_iter.pth?raw=true -O data_utils/face_parsing/79999_iter.pth Prepare the 3DMM model for head pose estimation. bash
wget https://github.com/YudongGuo/AD-NeRF/blob/master/data_util/face_tracking/3DMM/exp_info.npy?raw=true -O data_utils/face_tracking/3DMM/exp_info.npy
wget https://github.com/YudongGuo/AD-NeRF/blob/master/data_util/face_tracking/3DMM/keys_info.npy?raw=true -O data_utils/face_tracking/3DMM/keys_info.npy
wget https://github.com/YudongGuo/AD-NeRF/blob/master/data_util/face_tracking/3DMM/sub_mesh.obj?raw=true -O data_utils/face_tracking/3DMM/sub_mesh.obj
wget https://github.com/YudongGuo/AD-NeRF/blob/master/data_util/face_tracking/3DMM/topology_info.npy?raw=true -O data_utils/face_tracking/3DMM/topology_info.npy Download 3DMM model from Basel Face Model 2009 : # 1. copy 01_MorphableModel.mat to data_util/face_tracking/3DMM/
# 2.
cd data_utils/face_tracking
python convert_BFM.py - Put your video under data/<ID>/<ID>.mp4 , and then run the following command to process the video. [Note] The video must be 25FPS, with all frames containing the talking person. The resolution should be about 512x512, and duration about 4-5 min. bash
python data_utils/process.py data/<ID>/<ID>.mp4 --asr ave You can choose to use AVE, DeepSpeech or Hubert. The processed video will be saved in the data folder. [Optional] Obtain AU45 for eyes blinking Run FeatureExtraction in OpenFace , rename and move the output CSV file to data/<ID>/au.csv . [Note] Since EmoTalk's blendshape capture is not open source, the preprocessing code here is replaced with mediapipe's blendshape capture. But according to some feedback, it doesn't work well, you can choose to replace it with AU45. If you want to compare with SyncTalk, some results from using EmoTalk capture can be obtained here and videos from GeneFace . Quick Start Run the evaluation code ```bash
python main.py data/May --workspace model/trial_may -O --test --asr_model ave
python main.py data/May --workspace model/trial_may -O --test --asr_model ave --portrait
```
“ave” refers to our Audio Visual Encoder, “portrait” signifies pasting the generated face back onto the original image, representing higher quality. If it runs correctly, you will get the following results. | Setting | PSNR | LPIPS | LMD |
|--------------------------|--------|--------|-------|
| SyncTalk (w/o Portrait) | 32.201 | 0.0394 | 2.822 |
| SyncTalk (Portrait) | 37.644 | 0.0117 | 2.825 | This is for a single subject; the paper reports the average results for multiple subjects. Inference with target audio bash
python main.py data/May --workspace model/trial_may -O --test --test_train --asr_model ave --portrait --aud ./demo/test.wav Please use files with the “.wav” extension for inference, and the inference results will be saved in “model/trial_may/results/”. If you do not use the Audio Visual Encoder, replace the wav file path with the npy file path.
* DeepSpeech bash
python data_utils/deepspeech_features/extract_ds_features.py --input data/<name>.wav # save to data/<name>.npy * HuBERT bash
# Borrowed from GeneFace. English pre-trained.
python data_utils/hubert.py --wav data/<name>.wav # save to data/<name>_hu.npy Train ```bash by default, we load data from disk on the fly. we can also preload all data to CPU/GPU for faster training, but this is very memory-hungry for large datasets. --preload 0 : load from disk (default, slower). --preload 1 : load to CPU (slightly slower) --preload 2 : load to GPU (fast) python main.py data/May --workspace model/trial_may -O --iters 60000 --asr_model ave
python main.py data/May --workspace model/trial_may -O --iters 100000 --finetune_lips --patch_size 64 --asr_model ave or you can use the script to train sh ./scripts/train_may.sh
``` [Tips] The audio visual encoder (AVE) is suitable for characters with accurate lip sync and large lip movements, such as May and Shaheen. Using AVE in the inference stage can achieve more accurate lip sync. If your training results show lip jitter, please try using the DeepSpeech or HuBERT model as the audio feature encoder. ```bash Use deepspeech model python main.py data/May --workspace model/trial_may -O --iters 60000 --asr_model deepspeech
python main.py data/May --workspace model/trial_may -O --iters 100000 --finetune_lips --patch_size 64 --asr_model deepspeech Use hubert model python main.py data/May --workspace model/trial_may -O --iters 60000 --asr_model hubert
python main.py data/May --workspace model/trial_may -O --iters 100000 --finetune_lips --patch_size 64 --asr_model hubert
``` If you want to use the OpenFace au45 as the eye parameter, please add "--au45" to the command line. ```bash Use OpenFace AU45 python main.py data/May --workspace model/trial_may -O --iters 60000 --asr_model ave --au45
python main.py data/May --workspace model/trial_may -O --iters 100000 --finetune_lips --patch_size 64 --asr_model ave --au45
``` Test ```bash
python main.py data/May --workspace model/trial_may -O --test --asr_model ave --portrait ``` Train & Test Torso [Repair Double Chin] If your character was trained with only the head and a double-chin problem appears, you can introduce torso training. By training the torso, this problem can be solved, but you will not be able to use the "--portrait" mode. If you add "--portrait", the torso model will fail! ```bash Train .pth should be the latest checkpoint in trial_may python main.py data/May/ --workspace model/trial_may_torso/ -O --torso --head_ckpt .pth --iters 150000 --asr_model ave For example python main.py data/May/ --workspace model/trial_may_torso/ -O --torso --head_ckpt model/trial_may/ngp_ep0019.pth --iters 150000 --asr_model ave Test python main.py data/May --workspace model/trial_may_torso -O --torso --test --asr_model ave # --portrait is not supported Inference with target audio python main.py data/May --workspace model/trial_may_torso -O --torso --test --test_train --asr_model ave --aud ./demo/test.wav # --portrait is not supported ``` Citation @InProceedings{peng2023synctalk,
title = {SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis},
author = {Ziqiao Peng and Wentao Hu and Yue Shi and Xiangyu Zhu and Xiaomei Zhang and Jun He and Hongyan Liu and Zhaoxin Fan},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
} Acknowledgement This code is developed heavily relying on ER-NeRF , and also RAD-NeRF , GeneFace , DFRF , DFA-NeRF , AD-NeRF , and Deep3DFaceRecon_pytorch . Thanks for these great projects. Thanks to Tiandishihua for helping us fix the bug that loss equals NaN. Disclaimer By using the "SyncTalk", users agree to comply with all applicable laws and regulations, and acknowledge that misuse of the software, including the creation or distribution of harmful content, is strictly prohibited. The developers of the software disclaim all liability for any direct, indirect, or consequential damages arising from the use or misuse of the software.;[CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis";talking-face-generation,talking-head,audio-driven-talking-face,talking-face,cvpr,cvpr2024 | ZiqiaoPeng/SyncTalk |
zk-Call/zkp-hmac-communication-js;zk-Call & Labs "Zero-Knowledge" Proof Implementation with HMAC Communication in JavaScript Built by zk-Call :) Table of Contents Credits Purpose How it Works API Example Usage Credits This repository hosts a refined implementation of Schnorr's Protocol , innovatively incorporating a state seed for enhanced security measures. While the underlying proofs may appear intricate, I aim to elucidate their functionality to the best of my ability. However, for a deeper understanding, I encourage referencing the seminal research papers underpinning this implementation, as they offer comprehensive insights. For further exploration: Elliptic Curve Based "Zero-Knowledge" Proofs and Their Applicability on Resource Constrained Devices by Ioannis Chatzigiannakis, Apostolos Pyrgelis, Paul G. Spirakis, and Yannis C. Stamatiou Additionally, this repository delves into the concepts of "Zero-Knowledge" Proofs (ZKPs) and Hash-based Message Authentication Codes (HMACs) . ZKPs are cryptographic protocols that allow one party (the prover) to prove to another party (the verifier) that a given statement is true, without revealing any additional information beyond the validity of the statement itself. This property is particularly valuable for preserving privacy while establishing trust. On the other hand, HMACs are a type of cryptographic hash function used for message authentication. They involve a cryptographic hash function (such as SHA-256) and a secret cryptographic key. HMACs provide a way to verify both the data integrity and the authenticity of a message, ensuring that it has not been altered or tampered with during transmission and that it indeed originates from the purported sender. Purpose In today's rapidly evolving IT and application development landscape, "Zero-Knowledge" Proofs (ZKPs) emerge as a pivotal paradigm for authentication security. Their capacity to affirm the validity of a claim, such as proving possession of a secret password — without revealing any sensitive information about the claim itself, such as passwords or hashes, revolutionizes the assurance of secure AAA operations ( authentication , authorization , and accounting ). zk-Call & Labs represents an implementation of a Non-Interactive "Zero-Knowledge" Proof (NIZKP) protocol tailored specifically for validating text-based secrets. This framework proves invaluable for safeguarding passwords and other authentication mechanisms, ensuring robust security measures without compromising privacy. Additionally, the integration of HMAC (Hash-Based Message Authentication Code) further fortifies the authentication process, enhancing data integrity and thwarting potential security breaches. How It Works The authentication protocol employed in this system operates based on two fundamental concepts: "Zero-Knowledge" Proofs (ZKPs) and Hash-Based Message Authentication Code (HMAC) . Let's delve into each of these components and understand how they synergize to ensure secure authentication in messaging applications. "Zero-Knowledge" Proofs (ZKPs) "Zero-Knowledge" Proofs (ZKPs): ZKPs form the bedrock of privacy-preserving authentication mechanisms. These proofs allow one party (the prover) to demonstrate the validity of a claim to another party (the verifier) without revealing any additional information beyond the claim's validity. In essence, ZKPs enable authentication without the need for the prover to disclose sensitive data, such as passwords or cryptographic keys. 
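To make the prover/verifier roles concrete, here is a minimal, toy Schnorr-style identification sketch in Python over a plain prime field. It is illustrative only: the actual library works over named elliptic curves such as secp256k1 and adds a state seed, and the constants and helper below (P, G, hash_to_int) are assumptions made for the example, not part of the JavaScript API.

```python
import hashlib
import secrets

# Toy parameters for illustration only -- NOT the library's secp256k1 setup.
P = 2**127 - 1   # a Mersenne prime used as the group modulus (assumed for the demo)
G = 3            # a small base; fine for demonstrating the algebra

def hash_to_int(*parts) -> int:
    # Deterministically map arbitrary values to an integer challenge (Fiat-Shamir style).
    digest = hashlib.sha256("|".join(str(p) for p in parts).encode()).hexdigest()
    return int(digest, 16)

# Prover: derives a secret exponent x from a password and publishes y = G^x mod P.
x = hash_to_int("my secret password") % (P - 1)
y = pow(G, x, P)  # this plays the role of the stored "signature"

# Prover builds a non-interactive proof: commitment, hash-derived challenge, response.
r = secrets.randbelow(P - 1)
t = pow(G, r, P)                         # commitment
c = hash_to_int(t, y, "session token")   # challenge bound to a session value
s = (r + c * x) % (P - 1)                # response

# Verifier: checks G^s == t * y^c (mod P) without ever learning x.
assert pow(G, s, P) == (t * pow(y, c, P)) % P
```

The verifier only learns that the prover knows the exponent behind y, never the secret itself — the same property the library exposes through its create_signature, sign, and verify methods.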
Application in Authentication: In the context of messaging applications, ZKPs play a pivotal role in verifying a user's identity without the need to transmit explicit credentials over the network. Instead, users can generate cryptographic proofs attesting to their identity or possession of certain credentials without exposing those credentials themselves. This ensures that sensitive information remains confidential during the authentication process, bolstering security and privacy. Hash-Based Message Authentication Code (HMAC) Hash-Based Message Authentication Code (HMAC): HMAC provides a robust mechanism for verifying the integrity and authenticity of messages exchanged between parties. It involves the use of a cryptographic hash function in conjunction with a secret key to generate a unique code (the HMAC) for each message. This code serves as a digital signature, allowing the recipient to verify that the message has not been tampered with or altered during transmission. Application in Authentication: In messaging applications, HMAC can be employed to authenticate message senders and ensure the integrity of communication channels. By appending an HMAC to each message using a shared secret key, both the sender and recipient can validate the message's authenticity upon receipt. Any unauthorized modifications to the message would result in a mismatch between the computed HMAC and the received HMAC , thereby alerting the recipient to potential tampering. Synergistic Operation When combined, "Zero-Knowledge" Proofs and HMAC create a formidable framework for secure authentication in messaging applications. ZKPs facilitate identity verification without divulging sensitive information, while HMAC ensures the integrity and authenticity of messages exchanged between parties. Together, these mechanisms uphold the confidentiality, integrity, and authenticity of communication channels, safeguarding users' privacy and security in the digital realm. API The "Zero-Knowledge" JavaScript API is meant to be simple and intuitive: Core Components The Core Components are key for establishing a secure and efficient framework for cryptographic protocols; streamlining the creation and validation of "Zero-Knowledge" Proofs (ZKPs) . They enhance anonymous, data-safe proof validations. ZeroKnowledge.models.ZeroKnowledgeParams The parameters used to initialize the "Zero-Knowledge" crypto system. class ZeroKnowledgeParams(NamedTuple):
"""
Parameters used to construct a Zero-Knowledge Proof state, utilizing an elliptic curve and a random salt
"""
algorithm: str # Hashing algorithm name
curve: str # Standard Elliptic Curve name to use
s: int # Random salt for the state ZeroKnowledge.models.ZeroKnowledgeSignature A cryptographic "Zero-Knowledge" signature that can be used to verify future messages. class ZeroKnowledgeSignature(NamedTuple):
"""
Cryptographic public signature designed to verify future messages
"""
params: ZeroKnowledgeParams # Reference ZeroKnowledge Parameters
signature: int # The public key derived from your original secret ZeroKnowledge.models.ZeroKnowledgeProof A cryptographic proof that can be verified against a signature. class ZeroKnowledgeProof(NamedTuple):
"""
Non-deterministic cryptographic Zero-Knowledge Proof designed to confirm that the
private key creating the proof matches the key used to generate the signature
"""
params: ZeroKnowledgeParams # Reference ZeroKnowledge Parameters
c: int # The hash of the signed data and random point, R
m: int # The offset from the secret `r` (`R=r*g`) from c * Hash(secret) ZeroKnowledge.models.ZeroKnowledgeData Wrapper that contains a proof and the necessary data to validate the proof against a signature. class ZeroKnowledgeData(NamedTuple):
"""
Wrapper designed to hold data along with its corresponding signed proof
"""
data: Union[str, bytes, int]
proof: ZeroKnowledgeProof ZeroKnowledge The ZeroKnowledge class is the central component of ZeroKnowledge and its state (defined by ZeroKnowledgeParams ) should be inherently known to both the Client (Prover) and Server (Verifier) . Instance Methods Method Params Role Purpose create_signature secret: Union[str, bytes] Prover Create a cryptographic signature derived from the value secret to be generated during initial registration and stored for subsequent challenge proofs. sign secret: Union[str, bytes] data: Union[str, bytes, int] Prover Create a ZeroKnowledgeData object using the secret and any additional data. verify challenge: Union[ZeroKnowledgeData, ZeroKnowledgeProof] signature: ZeroKnowledgeSignature data: Optional[Union[str, bytes, int]] Verifier Verify the user-provided challenge against the stored signature and randomly generated token to verify the validity of the challenge . Example Usage TODO: Include Example Usage Example 1 import {HMACClient} from './src/HMAC/core/base.mjs';
import {SeedGenerator} from './src/SeedGeneration/core/base.mjs';
// DEBUG constant used for enabling/disabling debugging messages
const DEBUG = true;
// Function to print messages with specific formatting if DEBUG is enabled
function printMsg(who, message) {
if (DEBUG) {
console.log(`[${who}] ${message}\n`);
}
}
// The main function of the script
function main() {
// Generating a client seed using a SeedGenerator instance
const client_seed = new SeedGenerator("job").generate();
// Creating an HMAC client instance for the client using sha256 algorithm and the generated seed
const client_hmac = new HMACClient("sha256", client_seed, 1);
// Creating an HMAC server instance for the server using sha256 algorithm and the same generated seed
const serverhmac = new HMACClient("sha256", client_seed, 1);
// Checking if the encrypted message from client and server matches
if (client_hmac.encrypt_message('') === serverhmac.encrypt_message('')) {
// Defining a message to be sent from client to server
const client_message = 'hello';
// Encrypting the client message in chunks using the client HMAC instance
const client_encrypted_message_for_server = client_hmac.encrypt_message_by_chunks(client_message)
// Printing a message indicating that client has sent an encrypted message
printMsg('client', 'client has sent an encrypted message')
// Decrypting the message received from client by the server using server HMAC instance
const server_decrypted_message = serverhmac.decrypt_message_by_chunks(client_encrypted_message_for_server)
// Printing a message indicating that server has decrypted the message
printMsg('server', 'server has decrypted the message')
// Encrypting the decrypted message by the server
const server_response = serverhmac.encrypt_message(server_decrypted_message)
// Printing a message indicating that server has encrypted the message
printMsg('server', 'server has encrypted message')
// Checking if the encrypted message from client matches the server's response
if (client_hmac.encrypt_message(client_message) === server_response) {
// Printing a message indicating that server has successfully read the message from client
printMsg('client', 'server has read the message')
}
}
}
// Calling the main function to start the script execution
main() Example 2 // Importing necessary modules
import { ZeroKnowledge } from "./src/ZeroKnowledge/core/base.mjs"; // Importing ZeroKnowledge class
import { ZeroKnowledgeData } from "./src/ZeroKnowledge/models/base.mjs"; // Importing ZeroKnowledgeData class
// DEBUG constant used for enabling/disabling debugging messages
const DEBUG = true;
// Function to print messages with specific formatting if DEBUG is enabled
function printMsg(who, message) {
if (DEBUG) {
console.log(`[${who}] ${message}\n`); // Print formatted message
}
}
// The main function of the script
function main() {
// Generating a client seed using a SeedGenerator instance
const server_password = "SecretServerPassword"; // Define server password
// Creating ZeroKnowledge instances for server and client
const server_object = ZeroKnowledge.new("secp256k1", "sha3_256"); // Initialize server ZeroKnowledge instance
const client_object = ZeroKnowledge.new("secp256k1", "sha3_256"); // Initialize client ZeroKnowledge instance
// Creating signatures for server and client
const server_signature = server_object.create_signature(server_password); // Generate server signature
printMsg("Server", `Server signature: ${server_signature}`); // Print server signature
const identity = 'John'; // Define client identity
const client_sig = client_object.create_signature(identity); // Generate client signature
printMsg("Client", `Client signature: ${client_sig}`); // Print client signature
// Signing and generating token for server and client
const server_token = server_object.sign(server_password, client_object.token()); // Sign and generate token for server
printMsg("Server", `Server token: ${server_token}`); // Print server token
const client_proof = client_object.sign(identity, server_token.data); // Sign token data for client
printMsg("Client", `Client proof: ${client_proof}`); // Print client proof
// Creating ZeroKnowledgeData instance for token verification
const token_verif = new ZeroKnowledgeData(client_proof.data, client_proof.proof);
// Verifying the token against server signature
const server_verif = server_object.verify(token_verif, server_signature); // Verify token against server signature
printMsg("Server", `Server verification: ${server_verif}`); // Print server verification
}
// Calling the main function to start the script execution
main(); Example 3 // Importing necessary modules
import {ZeroKnowledge} from "./src/ZeroKnowledge/core/base.mjs"; // Importing ZeroKnowledge class
import {ZeroKnowledgeData} from "./src/ZeroKnowledge/models/base.mjs";
import {SeedGenerator} from "./src/SeedGeneration/core/base.mjs";
import {HMACClient} from "./src/HMAC/core/base.mjs"; // Importing ZeroKnowledgeData class
// DEBUG constant used for enabling/disabling debugging messages
const DEBUG = true;
// Function to print messages with specific formatting if DEBUG is enabled
function printMsg(who, message) {
if (DEBUG) {
console.log(`[${who}] ${message}\n`); // Print formatted message
}
}
// The main function of the script
function main() {
// Generating a client seed using a SeedGenerator instance
const server_password = "SecretServerPassword"; // Define server password
// Creating ZeroKnowledge instances for server and client
const server_object = ZeroKnowledge.new("secp256k1", "sha3_256"); // Initialize server ZeroKnowledge instance
const client_object = ZeroKnowledge.new("secp256k1", "sha3_256"); // Initialize client ZeroKnowledge instance
// Creating signatures for server and client
const server_signature = server_object.create_signature(server_password); // Generate server signature
printMsg("Server", `Server signature: ${server_signature}`); // Print server signature
const identity = 'John'; // Define client identity
const client_sig = client_object.create_signature(identity); // Generate client signature
printMsg("Client", `Client signature: ${client_sig}`); // Print client signature
// Signing and generating token for server and client
const server_token = server_object.sign(server_password, client_object.token()); // Sign and generate token for server
printMsg("Server", `Server token: ${server_token}`); // Print server token
const client_proof = client_object.sign(identity, server_token.data); // Sign token data for client
printMsg("Client", `Client proof: ${client_proof}`); // Print client proof
// Creating ZeroKnowledgeData instance for token verification
const token_verif = new ZeroKnowledgeData(client_proof.data, client_proof.proof);
// Verifying the token against server signature
const server_verif = server_object.verify(token_verif, server_signature); // Verify token against server signature
printMsg("Server", `Server verification: ${server_verif}`); // Print server verification
if (server_verif) {
// Generating a client seed using a SeedGenerator instance
const client_seed = new SeedGenerator("job").generate();
// Creating an HMAC client instance for the client using sha256 algorithm and the generated seed
const client_hmac = new HMACClient("sha256", client_seed, 1);
// Creating an HMAC server instance for the server using sha256 algorithm and the same generated seed
const serverhmac = new HMACClient("sha256", client_seed, 1);
// Checking if the encrypted message from client and server matches
if (client_hmac.encrypt_message('') === serverhmac.encrypt_message('')) {
// Defining a message to be sent from client to server
const client_message = 'hello';
// Encrypting the client message in chunks using the client HMAC instance
const client_encrypted_message_for_server = client_hmac.encrypt_message_by_chunks(client_message)
// Printing a message indicating that client has sent an encrypted message
printMsg('client', 'client has sent an encrypted message')
// Decrypting the message received from client by the server using server HMAC instance
const server_decrypted_message = serverhmac.decrypt_message_by_chunks(client_encrypted_message_for_server)
// Printing a message indicating that server has decrypted the message
printMsg('server', 'server has decrypted the message')
// Encrypting the decrypted message by the server
const server_response = serverhmac.encrypt_message(server_decrypted_message)
// Printing a message indicating that server has encrypted the message
printMsg('server', 'server has encrypted message')
// Checking if the encrypted message from client matches the server's response
if (client_hmac.encrypt_message(client_message) === server_response) {
// Printing a message indicating that server has successfully read the message from client
printMsg('client', 'server has read the message')
}
}
}
}
// Calling the main function to start the script execution
main();;"Zero-Knowledge" Proof Implementation with HMAC Communication in JavaScript;hmac,javascript,zero-knowledge,zk-call,zkproof | zk-Call/zkp-hmac-communication-js |
OpenGenerativeAI/llm-colosseum;Evaluate LLMs in real time with Street Fighter III Make LLM fight each other in real time in Street Fighter III. Which LLM will be the best fighter ? Our criterias 🔥 They need to be: Fast : It is a real time game, fast decisions are key Smart : A good fighter thinks 50 moves ahead Out of the box thinking : Outsmart your opponent with unexpected moves Adaptable : Learn from your mistakes and adapt your strategy Resilient : Keep your RPS high for an entire game Let the fight begin 🥷 1 VS 1: Mistral 7B vs Mistral 7B https://github.com/OpenGenerativeAI/llm-colosseum/assets/19614572/79b58e26-7902-4687-af5d-0e1e845ecaf8 1 VS 1 X 6 : Mistral 7B vs Mistral 7B https://github.com/OpenGenerativeAI/llm-colosseum/assets/19614572/5d3d386b-150a-48a5-8f68-7e2954ec18db A new kind of benchmark ? Street Fighter III assesses the ability of LLMs to understand their environment and take actions based on a specific context.
As opposed to RL models, which blindly take actions based on the reward function, LLMs are fully aware of the context and act accordingly. Results Our experiments (342 fights so far) led to the following leaderboard.
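The ratings in the table below are standard Elo scores updated after every fight. As a rough illustration of how such an update works (the exact K-factor and draw handling used by the project are not stated in this README, so K=32 is an assumption), the rule looks like this in Python:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    # Probability that A beats B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    # score_a is 1.0 if A won the fight, 0.5 for a draw, 0.0 if A lost.
    expected_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Example: a 1500-rated model upsets a 1600-rated one and gains rating points at its expense.
print(update_elo(1500, 1600, 1.0))
```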
Each LLM has an ELO score based on its results Ranking ELO ranking | Model | Rating |
| ------------------------------ | ------: |
| 🥇openai:gpt-3.5-turbo-0125 | 1776.11 |
| 🥈mistral:mistral-small-latest | 1586.16 |
| 🥉openai:gpt-4-1106-preview | 1584.78 |
| openai:gpt-4 | 1517.2 |
| openai:gpt-4-turbo-preview | 1509.28 |
| openai:gpt-4-0125-preview | 1438.92 |
| mistral:mistral-medium-latest | 1356.19 |
| mistral:mistral-large-latest | 1231.36 | Win rate matrix Explanation Each player is controlled by an LLM.
We send the LLM a text description of the screen. The LLM decides on the next moves its character will make. The next moves depend on its previous moves, the moves of its opponents, and its power and health bars. Agent based Multithreading Real time Installation Follow instructions in https://docs.diambra.ai/#installation Download the ROM and put it in ~/.diambra/roms (Optional) Create and activate a new python venv Install dependencies with make install or pip install -r requirements.txt Create a .env file and fill it with content like that in the .env.example file Run with make run Test mode To disable the LLM calls, set DISABLE_LLM to True in the .env file.
It will choose the actions randomly. Logging Change the logging level in the script.py file. Local model You can run the arena with local models using Ollama . Make sure you have ollama installed, running, and with a model downloaded (run ollama serve mistral in the terminal for example) Run make local to start the fight. By default, it runs mistral against mistral. To use other models, you need to change the parameter model in ollama.py . ```python
from eval.game import Game, Player1, Player2 def main():
game = Game(
render=True,
save_game=True,
player_1=Player1(
nickname="Baby",
model="ollama:mistral", # change this
),
player_2=Player2(
nickname="Daddy",
model="ollama:mistral", # change this
),
)
game.run()
return 0
``` The convention we use is model_provider:model_name . If you want to use another local model than Mistral, you can do ollama:some_other_model How to make my own LLM model play? Can I improve the prompts? The LLM is called in Robot.call_llm() method of the agent/robot.py file. ```python
def call_llm(
self,
temperature: float = 0.7,
max_tokens: int = 50,
top_p: float = 1.0,
) -> str:
"""
Make an API call to the language model. Edit this method to change the behavior of the robot!
"""
# self.model is a slug like mistral:mistral-small-latest or ollama:mistral
provider_name, model_name = get_provider_and_model(self.model)
client = get_sync_client(provider_name) # OpenAI client
# Generate the prompts
move_list = "- " + "\n - ".join([move for move in META_INSTRUCTIONS])
system_prompt = f"""You are the best and most aggressive Street Fighter III 3rd strike player in the world. Your character is {self.character}. Your goal is to beat the other opponent. You respond with a bullet point list of moves.
{self.context_prompt()}
The moves you can use are:
{move_list} Reply with a bullet point list of moves. The format should be: - <name of the move> separated by a new line.
Example if the opponent is close:
- Move closer
- Medium Punch Example if the opponent is far:
- Fireball
- Move closer""" # Call the LLM
completion = client.chat.completions.create(
model=model_name,
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Your next moves are:"},
],
temperature=temperature,
max_tokens=max_tokens,
top_p=top_p,
)
# Return the string to be parsed with regex
llm_response = completion.choices[0].message.content.strip()
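# Note: downstream, the bullet-point list is turned into individual moves before being
# executed; that parsing presumably happens elsewhere in the repo. A hedged, illustrative
# sketch of the step (not necessarily the project's exact regex):
#   moves = [line.lstrip("- ").strip() for line in llm_response.split("\n") if line.strip().startswith("-")]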
return llm_response ``` To use another model or other prompts, make a call to another client in this function, change the system prompt, or add any other logic you like. Submit your model Create a new class inherited from Robot that has the changes you want to make and open a PR. We'll do our best to add it to the ranking! Credits Made with ❤️ by the OpenGenerativeAI team from phospho (@oulianov @Pierre-LouisBJT @Platinn) and Quivr (@StanGirard) during Mistral Hackathon 2024 in San Francisco;Benchmark LLMs by fighting in Street Fighter 3! The new way to evaluate the quality of an LLM;genai,llm,benchmark,streetfighterai | OpenGenerativeAI/llm-colosseum
EvolvingLMMs-Lab/lmms-eval;The Evaluation Suite of Large Multimodal Models Accelerating the development of large multimodal models (LMMs) with lmms-eval 🏠 LMMs-Lab Homepage | 🎉 Blog | 📚 Documentation | 🤗 Huggingface Datasets | discord/lmms-eval Announcement [2024-06] 🎬🎬 The lmms-eval/v0.2 has been upgraded to support video evaluations for video models like LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the blog for more details [2024-03] 📝📝 We have released the first version of lmms-eval , please refer to the blog for more details Why lmms-eval ? In today's world, we're on an exciting journey toward creating Artificial General Intelligence (AGI), much like the enthusiasm of the 1960s moon landing. This journey is powered by advanced large language models (LLMs) and large multimodal models (LMMs), which are complex systems capable of understanding, learning, and performing a wide variety of human tasks. To gauge how advanced these models are, we use a variety of evaluation benchmarks. These benchmarks are tools that help us understand the capabilities of these models, showing us how close we are to achieving AGI. However, finding and using these benchmarks is a big challenge. The necessary benchmarks and datasets are spread out and hidden in various places like Google Drive, Dropbox, and different school and research lab websites. It feels like we're on a treasure hunt, but the maps are scattered everywhere. In the field of language models, there has been a valuable precedent set by the work of lm-evaluation-harness . They offer integrated data and model interfaces, enabling rapid evaluation of language models and serving as the backend support framework for the open-llm-leaderboard , and it has gradually become the underlying ecosystem of the era of foundation models. We humbly absorbed the exquisite and efficient design of lm-evaluation-harness and introduce lmms-eval , an evaluation framework meticulously crafted for consistent and efficient evaluation of LMMs. Installation For formal usage, you can install the package from PyPI by running the following command: bash
pip install lmms-eval For development, you can install the package by cloning the repository and running the following command: bash
git clone https://github.com/EvolvingLMMs-Lab/lmms-eval
cd lmms-eval
pip install -e . If you want to test LLaVA, you will have to clone their repo from LLaVA and
```bash for llava 1.5 git clone https://github.com/haotian-liu/LLaVA cd LLaVA pip install -e . for llava-next (1.6) git clone https://github.com/LLaVA-VL/LLaVA-NeXT
cd LLaVA-NeXT
pip install -e .
``` Reproduction of LLaVA-1.5's paper results You can check the [environment install script](miscs/repr_scripts.sh) and [torch environment info](miscs/repr_torch_envs.txt) to **reproduce LLaVA-1.5's paper results**. We found that torch/cuda version differences can cause small variations in the results, so we provide the [results check](miscs/llava_result_check.md) with different environments. If you want to test on caption datasets such as coco , refcoco , and nocaps , you will need to have java==1.8.0 to let the pycocoeval API work. If you don't have it, you can install it by using conda conda install openjdk=8 you can then check your java version by java -version Comprehensive Evaluation Results of LLaVA Family Models As demonstrated by the extensive table below, we aim to provide detailed information for readers to understand the datasets included in lmms-eval and some specific details about these datasets (we remain grateful for any corrections readers may have during our evaluation process).
We provide a Google Sheet for the detailed results of the LLaVA series models on different datasets. You can access the sheet [here](https://docs.google.com/spreadsheets/d/1a5ImfdKATDI8T7Cwh6eH-bEsnQFzanFraFUgcS9KHWc/edit?usp=sharing). It's a live sheet, and we are updating it with new results. We also provide the raw data exported from Weights & Biases for the detailed results of the LLaVA series models on different datasets. You can access the raw data [here](https://docs.google.com/spreadsheets/d/1AvaEmuG4csSmXaHjgu4ei1KBMmNNW8wflOD_kkTDdv8/edit?usp=sharing). Our Development will be continuing on the main branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub. Multiple Usages Evaluation of LLaVA on MME bash
python3 -m accelerate.commands.launch \
--num_processes=8 \
-m lmms_eval \
--model llava \
--model_args pretrained="liuhaotian/llava-v1.5-7b" \
--tasks mme \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_v1.5_mme \
--output_path ./logs/ Evaluation of LLaVA on multiple datasets bash
python3 -m accelerate.commands.launch \
--num_processes=8 \
-m lmms_eval \
--model llava \
--model_args pretrained="liuhaotian/llava-v1.5-7b" \
--tasks mme,mmbench_en \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_v1.5_mme_mmbenchen \
--output_path ./logs/ For other LLaVA variants, please change the conv_template in the model_args . conv_template is an arg of the init function of llava in lmms_eval/models/llava.py ; you can find the corresponding value in LLaVA's code, probably in the dict variable conv_templates in llava/conversations.py bash
python3 -m accelerate.commands.launch \
--num_processes=8 \
-m lmms_eval \
--model llava \
--model_args pretrained="liuhaotian/llava-v1.6-mistral-7b,conv_template=mistral_instruct" \
--tasks mme,mmbench_en \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_v1.5_mme_mmbenchen \
--output_path ./logs/ Evaluation of larger lmms (llava-v1.6-34b) bash
python3 -m accelerate.commands.launch \
--num_processes=8 \
-m lmms_eval \
--model llava \
--model_args pretrained="liuhaotian/llava-v1.6-34b,conv_template=mistral_direct" \
--tasks mme,mmbench_en \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_v1.5_mme_mmbenchen \
--output_path ./logs/ Evaluation with a set of configurations, supporting evaluation of multiple models and datasets bash
python3 -m accelerate.commands.launch --num_processes=8 -m lmms_eval --config ./miscs/example_eval.yaml Evaluation with naive model sharding for bigger model (llava-next-72b) bash
python3 -m lmms_eval \
--model=llava \
--model_args=pretrained=lmms-lab/llava-next-72b,conv_template=qwen_1_5,device_map=auto,model_name=llava_qwen \
--tasks=pope,vizwiz_vqa_val,scienceqa_img \
--batch_size=1 \
--log_samples \
--log_samples_suffix=llava_qwen \
--output_path="./logs/" \
--wandb_args=project=lmms-eval,job_type=eval,entity=llava-vl Evaluation with SGLang for bigger model (llava-next-72b) bash
python3 -m lmms_eval \
--model=llava_sglang \
--model_args=pretrained=lmms-lab/llava-next-72b,tokenizer=lmms-lab/llavanext-qwen-tokenizer,conv_template=chatml-llava,tp_size=8,parallel=8 \
--tasks=mme \
--batch_size=1 \
--log_samples \
--log_samples_suffix=llava_qwen \
--output_path=./logs/ \
--verbosity=INFO Supported models Please check supported models for more details. Supported tasks Please check supported tasks for more details. Add Customized Model and Dataset Please refer to our documentation . Acknowledgement lmms_eval is a fork of lm-eval-harness . We recommend you to read through the docs of lm-eval-harness for relevant information. Below are the changes we made to the original API:
- Build context now only passes in idx and processes the image and doc during the model responding phase. This is because the dataset now contains lots of images and we can't store them in the doc like the original lm-eval-harness does, otherwise the CPU memory would explode.
- Instance.args (lmms_eval/api/instance.py) now contains a list of images to be inputted to lmms.
- lm-eval-harness supports all HF language models as a single model class. Currently this is not possible for lmms because the input/output formats of lmms in HF are not yet unified. Therefore, we have to create a new class for each lmms model. This is not ideal and we will try to unify them in the future. During the initial stage of our project, we thank:
- Xiang Yue , Jingkang Yang , Dong Guo and Sheng Shen for early discussion and testing. During the v0.1 to v0.2 , we thank the community support from pull requests (PRs): Details are in lmms-eval/v0.2.0 release notes Datasets: VCR: Visual Caption Restoration (officially from the authors, MILA) ConBench (officially from the authors, PKU/Bytedance) MathVerse (officially from the authors, CUHK) MM-UPD (officially from the authors, University of Tokyo) WebSRC (from Hunter Heiden) ScreeSpot (from Hunter Heiden) RealworldQA (from Fanyi Pu, NTU) Multi-lingual LLaVA-W (from Gagan Bhatia, UBC) Models: LLaVA-HF (officially from Huggingface) Idefics-2 (from the lmms-lab team) microsoft/Phi-3-Vision (officially from the authors, Microsoft) LLaVA-SGlang (from the lmms-lab team) Citations shell
@misc{lmms_eval2024,
title={LMMs-Eval: Accelerating the Development of Large Multimodal Models},
url={https://github.com/EvolvingLMMs-Lab/lmms-eval},
author={Bo Li*, Peiyuan Zhang*, Kaichen Zhang*, Fanyi Pu*, Xinrun Du, Yuhao Dong, Haotian Liu, Yuanhan Zhang, Ge Zhang, Chunyuan Li and Ziwei Liu},
publisher = {Zenodo},
version = {v0.1.0},
month={March},
year={2024}
};Accelerating the development of large multimodal models (LMMs) with lmms-eval;[] | EvolvingLMMs-Lab/lmms-eval |
lmstudio-ai/lms;lms - Command Line Tool for LM Studio Built with lmstudio.js Installation lms ships with LM Studio 0.2.22 and newer. To set it up, run the built-in bootstrap command like so: Windows : shell
cmd /c %USERPROFILE%/.cache/lm-studio/bin/lms.exe bootstrap Linux/macOS : shell
~/.cache/lm-studio/bin/lms bootstrap To check if the bootstrapping was successful, run the following in a 👉 new terminal window 👈 : shell
lms Usage You can use lms --help to see a list of all available subcommands. For details about each subcommand, run lms <subcommand> --help . Here are some frequently used commands: lms status - To check the status of LM Studio. lms server start - To start the local API server. lms server stop - To stop the local API server. lms ls - To list all downloaded models. lms ls --detailed - To list all downloaded models with detailed information. lms ls --json - To list all downloaded models in machine-readable JSON format. lms ps - To list all loaded models available for inferencing. lms ps --json - To list all loaded models available for inferencing in machine-readable JSON format. lms load --gpu max - To load a model with maximum GPU acceleration lms load <model path> --gpu max -y - To load a model with maximum GPU acceleration without confirmation lms unload <model identifier> - To unload a model lms unload --all - To unload all models lms create - To create a new project with LM Studio SDK lms log stream - To stream logs from LM Studio;LM Studio CLI. Written in TypeScript/Node;llm,lmstudio,nodejs,typescript | lmstudio-ai/lms |
OXeu/Rin;Rin English | 简体中文 Introduction Rin is a blog based on Cloudflare Pages + Workers + D1 + R2. It does not require a server to deploy. It can be deployed just with a domain name that resolves to Cloudflare. Demo address xeu.life Features Support GitHub OAuth login. By default, the first logged-in user has management privileges, and other users are ordinary users Support article writing and editing Support local real-time saving of modifications/edits to any article without interfering between multiple articles Support setting it as visible only to yourself, which can serve as a draft box for cloud synchronization or record more private content Support dragging/pasting uploaded images to a bucket that supports the S3 protocol and generating links Support setting article aliases, and access articles through links such as https://xeu.life/about Support articles not being listed in the homepage list Support adding links of friends' blog, and the backend regularly checks and updates the accessible status of links every 20 minutes Support replying to comment articles/deleting comments Support sending comment notifications through Webhook Support automatic identification of the first picture in the article and display it as the header image in the article list Support inputting tag texts such as "#Blog #Cloudflare" and automatically parsing them into tags For more features, please refer to https://xeu.life Documentation Deployment Documentation Environment Variables List SEO Optimization Configuration Contribution Guide Code of Conduct Star History License ```
MIT License Copyright (c) 2024 Xeu Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```;⚡️Rin 是一个基于 Cloudflare Pages + Workers + D1 + R2 全家桶的博客,无需服务器无需备案,只需要一个解析到 Cloudflare 的域名即可部署。;blog,bun,bunjs,framework,web,cloudflare,cloudflare-workers,elysiajs,react | OXeu/Rin |
b4rtaz/distributed-llama;Distributed Llama Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage. This project proves that it's possible to split the workload of LLMs across multiple devices and achieve a significant speedup. Distributed Llama allows you to run huge LLMs in-house. The project uses TCP sockets to synchronize the state. You can easily configure your AI cluster by using a home router. Distributed Llama running Llama 2 70B on 8 Raspberry Pi 4B devices 🔥 Setup Root Node by Single Command Python 3 and a C++ compiler are required. The command will download the model and the tokenizer. | Model | Purpose | Size | Command |
| ----------------------- | --------- | -------- | ----------------------------------------- |
| TinyLlama 1.1B 3T Q40 | Benchmark | 844 MB | python launch.py tinyllama_1_1b_3t_q40 |
| Llama 3 8B Q40 | Benchmark | 6.32 GB | python launch.py llama3_8b_q40 |
| Llama 3 8B Instruct Q40 | Chat, API | 6.32 GB | python launch.py llama3_8b_instruct_q40 | 🛠️ Convert Model Manually Supported architectures: Llama, Mixtral, Grok How to Convert Llama 2, Llama 3 How to Convert Hugging Face Model 🚧 Known Limitations You can run Distributed Llama only on 1, 2, 4... 2^n nodes. The maximum number of nodes is equal to the number of KV heads in the model #70 . Optimized for (weights format × buffer format): ARM CPUs ✅ F32 × F32 ❌ F16 × F32 ❌ Q40 × F32 ✅ Q40 × Q80 x86_64 AVX2 CPUs ❌ F32 × F32 ❌ F16 × F32 ❌ Q40 × F32 ✅ Q40 × Q80 👷 Architecture The project is split up into two parts:
* Root node - it's responsible for loading the model and weights and forwarding them to workers. Also, it synchronizes the state of the neural network. The root node is also a worker; it processes its own slice of the neural network.
* Worker node - it processes its own slice of the neural network. It doesn't require any configuration related to the model. You always need the root node, and you can add 2^n - 1 worker nodes to speed up the inference. The RAM usage of the neural network is split up across all nodes. The root node requires a bit more RAM than worker nodes. 🎹 Commands dllama inference - run the inference with a simple benchmark, dllama chat - run the CLI chat, dllama worker - run the worker node, dllama-api - run the API server. Inference, Chat, API | Argument | Description | Example |
| ---------------------------- | ---------------------------------------------------------------- | -------------------------------------- |
| --model <path> | Path to model. | dllama_model_meta-llama-3-8b_q40.m |
| --tokenizer <path> | Tokenizer to model. | dllama_tokenizer_llama3.t |
| --buffer-float-type <type> | Float precision of synchronization. | q80 |
| --workers <workers> | Addresses of workers (ip:port), separated by space. | 0.0.0.1:9991 10.0.0.2:9991 | Inference, Chat, Worker, API | Argument | Description | Example |
| ---------------------------- | --------------------------------------------------------------------- | ----------------------------------- |
| --nthreads <n> | Amount of threads. Don't set a higher value than number of CPU cores. | 4 | Worker, API | Argument | Description | Example |
| ---------------------------- | --------------------------------- | ----------------- |
| --port <port> | Binding port. | 9999 | Inference | Argument | Description | Example |
| ---------------------------- | ------------------------------ | ------------------ |
| --prompt <prompt> | Initial prompt. | "Hello World" |
| --steps <steps> | Number of tokens to generate. | 256 | 📊 Measurements Average Token Generation Time I - inference time of the root node, T - network transfer time of the root node. Raspberry Pi 5 8GB Weights = Q40, Buffer = Q80, nSamples = 16, switch = TP-Link LS1008G, tested on 0.3.1 version | Model | 1 x RasPi 5 8 GB | 2 x RasPi 5 8 GB | 4 x RasPi 5 8 GB |
|-------------|---------------------------------------------------------------------|---------------------------------------------------------------------|---------------------------------------------------------------------|
| Llama 2 7B | 441.09 ms , 2.26 t/s I: 434.84 ms, T: 5.25 ms | 341.46 ms , 2.92 t/s I: 257.78 ms, T: 83.27 ms | 219.08 ms , 4.56 t/s 🔥 I: 163.42 ms, T: 55.25 ms |
| Llama 3 8B | 564.31 ms , 1.77 t/s I: 556.67 ms, T: 6.17 ms | 444.27 ms , 2.25 t/s I: 362.73 ms, T: 80.11 ms | 331.47 ms , 3.01 t/s 🔥 I: 267.62 ms, T: 62.34 ms | Raspberry Pi 4B 8 GB Weights = Q40, Buffer = Q80, nSamples = 16, switch = TP-Link LS1008G, tested on 0.1.0 version 8 x Raspberry Pi 4B 8GB | Model | 1 x RasPi 4B 8 GB | 2 x RasPi 4B 8 GB | 4 x RasPi 4B 8 GB | 8 x RasPi 4B 8 GB |
|-------------|---------------------------------------------------------------------|-----------------------------------------------------------------------|--------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| Llama 2 7B | 1312.50 ms I: 1307.94 ms, T: 1.81 ms | 793.69 ms I: 739.00 ms, T: 52.50 ms | 494.00 ms 🔥 I: 458.81 ms, T: 34.06 ms | 588.19 ms I: 296.69 ms, T: 289.75 ms |
| Llama 2 13B | Not enough RAM | 1497.19 ms I: 1465.06 ms, T: 30.88 ms | 848.19 ms 🔥 I: 746.88 ms, T: 99.50 ms | 1114.88 ms I: 460.8 ms, T: 652.88 ms |
| Llama 2 70B | Not enough RAM | Not enough RAM | Not enough RAM | 4842.81 ms 🔥 I: 2121.94 ms, T: 2719.62 ms | x86_64 CPU Cloud Server Weights = Q40, Buffer = Q80, nSamples = 16, VMs = c3d-highcpu-30 , tested on 0.1.0 version | Model | 1 x VM | 2 x VM | 4 x VM |
|-------------|---------------------------------------------------------------------|-----------------------------------------------------------------------|--------------------------------------------------------------------------------------|
| Llama 2 7B | 101.81 ms I: 101.06 ms, T: 0.19 ms | 69.69 ms I: 61.50 ms, T: 7.62 ms | 53.69 ms 🔥 I: 40.25 ms, T: 12.81 ms |
| Llama 2 13B | 184.19 ms I: 182.88 ms, T: 0.69 ms | 115.38 ms I: 107.12 ms, T: 7.81 ms | 86.81 ms 🔥 I: 66.25 ms, T: 19.94 ms |
| Llama 2 70B | 909.69 ms I: 907.25 ms, T: 1.75 ms | 501.38 ms I: 475.50 ms, T: 25.00 ms | 293.06 ms 🔥 I: 264.00 ms, T: 28.50 ms | Network Transfer for Generating Token F32 Buffer | Model | 2 devices | 4 devices | 8 devices |
|-------------|----------------|---------------|---------------|
| Llama 3 8B | 2048 kB | 6144 kB | 14336 kB | Q80 Buffer | Model | 2 devices | 4 devices | 8 devices |
|-------------|--------------|---------------|----------------|
| Llama 3 8B | 544 kB | 1632 kB | 3808 kB | 📟 Setup Raspberry Pi Devices Install Raspberry Pi OS Lite (64 bit) on your Raspberry Pi devices. This OS doesn't have a desktop environment. Connect all devices to your switch or router. Connect to all devices via SSH. ssh user@raspberrypi1.local
ssh user@raspberrypi2.local Install Git: sh
sudo apt install git Clone this repository and compile Distributed Llama on all devices: sh
git clone https://github.com/b4rtaz/distributed-llama.git
make dllama Transfer weights and the tokenizer file to the root device. Optional: assign static IP addresses. sh
sudo ip addr add 10.0.0.1/24 dev eth0 # 1st device
sudo ip addr add 10.0.0.2/24 dev eth0 # 2nd device Run worker nodes on worker devices: sh
sudo nice -n -20 ./dllama worker --port 9998 --nthreads 4 Run root node on the root device: sh
sudo nice -n -20 ./dllama inference --model dllama_model_meta-llama-3-8b_q40.m --tokenizer dllama_tokenizer_llama3.t --buffer-float-type q80 --prompt "Hello world" --steps 16 --nthreads 4 --workers 10.0.0.2:9998 To add more worker nodes, just add more addresses to the --workers argument. ./dllama inference ... --workers 10.0.0.2:9998 10.0.0.3:9998 10.0.0.4:9998 💻 Setup computers with MacOS, Linux, or Windows You need x86_64 AVX2 CPUs or ARM CPUs. Different devices may have different CPUs. MacOS or Linux The below instructions are for Debian-based distributions but you can easily adapt them to your distribution, macOS. Install Git and GCC: sh
sudo apt install git build-essential Clone this repository and compile Distributed Llama on all computers: sh
git clone https://github.com/b4rtaz/distributed-llama.git
make dllama Continue to point 3. Windows Install Git and Mingw (via Chocolatey ): powershell
choco install mingw Clone this repository and compile Distributed Llama on all computers: sh
git clone https://github.com/b4rtaz/distributed-llama.git
make dllama Continue to point 3. Run Cluster Transfer weights and the tokenizer file to the root computer. Run worker nodes on worker computers: sh
./dllama worker --port 9998 --nthreads 4 Run root node on the root computer: sh
./dllama inference --model dllama_model_meta-llama-3-8b_q40.m --tokenizer dllama_tokenizer_llama3.t --buffer-float-type q80 --prompt "Hello world" --steps 16 --nthreads 4 --workers 192.168.0.1:9998 To add more worker nodes, just add more addresses to the --workers argument. ./dllama inference ... --workers 192.168.0.1:9998 192.168.0.2:9998 192.168.0.3:9998 💡 License This project is released under the MIT license. 📖 Citation @misc{dllama,
author = {Bartłomiej Tadych},
title = {Distributed Llama},
year = {2024},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/b4rtaz/distributed-llama}},
commit = {7eb77ca93ec0d502e28d36b6fb20039b449cbea4}
};Tensor parallelism is all you need. Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.;distributed-computing,llama2,llm,llm-inference,neural-network,llms,open-llm,distributed-llm,llama3 | b4rtaz/distributed-llama |