tefkah/zotero-night;[!NOTE] Development of this plugin has ceased, as Zotero has introduced its own native dark mode in Zotero 7. Thank you for using the plugin! 🌌 Night for Zotero 6 ⬇️ Download latest version 7️⃣ Click here for the version compatible with the Zotero 7 Beta Install by downloading the latest version Night theme for Zotero UI & PDF Also adds some animations and other UI changes. Based on the Nord colorscheme and prior work by Rosmaninho . ✨ Features ◼️ Dark UI Easier on the eyes for those late-night deadlines. 🌚 Dark mode for PDF It's 2022, we can change the colors of PDFs. Choice between two themes: a very dark one, and one that matches the background color. 🔁 Quick Toggle Quickly toggle between different filters for the PDF https://user-images.githubusercontent.com/21983833/164006109-5615d800-fbab-4174-b04e-1ad721238a61.mov 🌊 Miscellaneous UI improvements Clean up the tab bar, add some animations here and there, get rid of all the borders. ⬇️ Install Download the xpi from Releases . As always, if you're on Firefox, right-click -> Save link as.... Video instructions https://user-images.githubusercontent.com/21983833/168032714-6106b138-2725-4091-830b-770dbdff43a4.mov Once installed in Zotero, activate it: Tools > Night Preferences, and select "Enable Dark Theme". 😢 Limitations Popup menus do not have proper styling on some platforms. We currently use CSS filter functions to make the PDFs dark; however, this is rather slow. ✅ To-do [ ] Make prettier, more curves. Basically redesign Zotero a bit [ ] Add more themes [x] Add user preferences [ ] Write contributing guide 💪 Contributing Your help is very welcome! However, getting set up for Zotero plugin development is a bit of a pain in the ass. What you need to do [ ] Download Zotero 60 ESR [ ] Git clone [ ] yarn [ ] do the zotero plugin stuff (expound on this) [ ] Launch Zotero with --debugger and -somethingcaches [ ] Launch Firefox 60 [ ] In Firefox, go to devtools, go to settings, click "enable remote debugging" and the one next to it that's also about debugging. [ ] In Zotero, go to Settings, Advanced, Config Editor, look up "debugging" and click on "allow remote debugging" [ ] In Firefox, click the hamburger menu in the top right -> web developer -> Connect... [ ] Enter localhost:6100 [ ] Connect [ ] Click "Inspect Main Process" Wow, now you can finally do things. Sponsors If you really like Zotero Night, you can consider sponsoring me monthly! If you donate $5/month or more, you'll be listed here and get priority for feature requests/bugfixes! (Mention that you're a sponsor in the issue, because I'll forget.);Night theme for Zotero UI and PDF;[]
vanus-labs/vanus;Vanus is an open-source message queue with built-in event processing capabilities. [![stars](https://img.shields.io/github/stars/vanus-labs/vanus.svg?style=flat&logo=github&colorB=blueviolet&label=stars)](https://github.com/vanus-labs/vanus) [![License](https://img.shields.io/badge/License-Apache_2.0-green.svg)](https://github.com/vanus-labs/vanus/blob/main/LICENSE) [![codecov](https://codecov.io/gh/vanus-labs/vanus/branch/main/graph/badge.svg?token=RSXSIMEY4V)](https://codecov.io/gh/vanus-labs/vanus) [![Language](https://img.shields.io/github/go-mod/go-version/vanus-labs/vanus?logo=go)](https://golang.org/) [![Vanus Cloud](https://img.shields.io/badge/VanusCloud-Try%20it%20%20free-red)](https://cloud.vanus.ai) [![docs](https://img.shields.io/badge/Docs-online-green)](https://docs.vanus.ai/) Introduction Vanus helps users build event pipelines between SaaS, cloud services, and cloud functions in minutes. 1. Build the event-driven system Get events from cloud services and SaaS, and deliver them to cloud functions or microservices. Synchronize changed data or transfer data to the data lake. Obtain events generated by SaaS and send them to other SaaS. 2. Out-of-the-box event computing capabilities Real-time processing during event transmission, such as filtering and transformation. Natively supports the CloudEvents specification, and can directly send events to workloads that support CloudEvents. 3. 100% open source, super easy to use One-click deployment: installation completes within 1 minute, and developers without MQ experience can also use it. Message queues and connectors are 100% open source, a one-stop open-source solution. Getting Started You can install Vanus with a single command within 1 minute. Check out our website for detailed information. shell kubectl apply -f https://dl.vanus.ai/all-in-one/v0.9.0.yml Community We have a few channels for contact: Slack @Vanus_dev on Twitter GitHub Issues Send emails to: Vanus How to contribute See here for how to contribute to Vanus. License Vanus is under the Apache 2.0 license. See the LICENSE file for details.;Vanus is a serverless event streaming system with processing capabilities. It easily connects SaaS, Cloud Services, and Databases to help users build next-gen Event-driven Applications.;eventbus,message-queue,event-driven,serverless,cloudevents,event-bridge,cloudnative,kafka,rabbitmq,rocketmq
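Because Vanus natively supports CloudEvents, any CloudEvents SDK can publish to it over HTTP. Below is an illustrative sketch only: it assumes the official Python `cloudevents` package, and the gateway URL and eventbus name are placeholders to replace with your deployment's actual ingress.

```python
# pip install cloudevents requests
import requests
from cloudevents.http import CloudEvent, to_structured

# "source" and "type" are required CloudEvents attributes; these values are examples.
event = CloudEvent(
    {"source": "example/quickstart", "type": "com.example.user.signup"},
    {"user": "alice", "plan": "free"},
)

# Serialize with the structured HTTP binding and POST to the Vanus gateway.
# The URL below is a placeholder -- point it at your cluster's gateway/eventbus.
headers, body = to_structured(event)
resp = requests.post("http://localhost:8080/gateway/quick-start",
                     data=body, headers=headers, timeout=10)
resp.raise_for_status()
print("event accepted:", resp.status_code)
```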
antfu/vscode-file-nesting-config;Anthony's File Nesting Config for VS Code Requires VS Code v1.67 This is a config snippet making your file tree cleaner with the file nesting feature of VS Code. Inspired by this tweet by Dzhavat Ushev and this tweet by Jacob Hands . With some scripts to avoid duplication of work. And it's very opinionated. Use it VS Code Extension We now have a new VS Code extension to handle the updates automatically for you. Check the readme for instructions . Update Manually Open your VS Code, bring up your settings.json , copy-n-paste the snippet below, and you are good to go :) jsonc // updated 2024-06-11 13:25 // https://github.com/antfu/vscode-file-nesting-config "explorer.fileNesting.enabled": true, "explorer.fileNesting.expand": false, "explorer.fileNesting.patterns": { ".clang-tidy": ".clang-format, .clangd, compile_commands.json", ".env": "*.env, .env.*, .envrc, env.d.ts", ".gitignore": ".gitattributes, .gitmodules, .gitmessage, .mailmap, .git-blame*", ".project": ".classpath", "+layout.svelte": "+layout.ts,+layout.js,+layout.server.ts,+layout.server.js,+layout.gql", "+page.svelte": "+page.server.ts,+page.server.js,+page.ts,+page.js,+page.gql", "app.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "artisan": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, server.php, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, webpack.mix.js, windi.config.*", "astro.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "BUILD.bazel": "*.bzl, *.bazel, *.bazelrc, bazel.rc, .bazelignore, .bazelproject, WORKSPACE", "Cargo.toml": ".clippy.toml, .rustfmt.toml, cargo.lock, clippy.toml, cross.toml, rust-toolchain.toml, rustfmt.toml", "CMakeLists.txt": 
"*.cmake, *.cmake.in, .cmake-format.yaml, CMakePresets.json, CMakeCache.txt", "composer.json": ".php*.cache, composer.lock, phpunit.xml*, psalm*.xml", "default.nix": "shell.nix", "deno.json*": "*.env, .env.*, .envrc, api-extractor.json, deno.lock, env.d.ts, import-map.json, import_map.json, jsconfig.*, tsconfig.*, tsdoc.*", "Dockerfile": "*.dockerfile, .devcontainer.*, .dockerignore, captain-definition, compose.*, docker-compose.*, dockerfile*", "flake.nix": "flake.lock", "gatsby-config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, gatsby-browser.*, gatsby-node.*, gatsby-ssr.*, gatsby-transformer.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "gemfile": ".ruby-version, gemfile.lock", "go.mod": ".air*, go.sum", "go.work": "go.work.sum", "hatch.toml": ".editorconfig, .flake8, .isort.cfg, .python-version, hatch.toml, requirements*.in, requirements*.pip, requirements*.txt, tox.ini", "I*.cs": "$(capture).cs", "Makefile": "*.mk", "mix.exs": ".credo.exs, .dialyzer_ignore.exs, .formatter.exs, .iex.exs, .tool-versions, mix.lock", "next.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, next-env.d.ts, next-i18next.config.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "nuxt.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .nuxtignore, .nuxtrc, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "package.json": ".browserslist*, .circleci*, .commitlint*, .cz-config.js, .czrc, .dlint.json, .dprint.json*, .editorconfig, .eslint*, .firebase*, .flowconfig, .github*, .gitlab*, .gitmojirc.json, .gitpod*, .huskyrc*, .jslint*, .knip.*, .lintstagedrc*, .markdownlint*, .node-version, .nodemon*, .npm*, .nvmrc, .pm2*, .pnp.*, .pnpm*, .prettier*, .pylintrc, .release-please*.json, .releaserc*, .ruff.toml, .sentry*, .simple-git-hooks*, .stackblitz*, .styleci*, 
.stylelint*, .tazerc*, .textlint*, .tool-versions, .travis*, .versionrc*, .vscode*, .watchman*, .xo-config*, .yamllint*, .yarnrc*, Procfile, apollo.config.*, appveyor*, azure-pipelines*, biome.json*, bower.json, build.config.*, bun.lockb, commitlint*, crowdin*, dangerfile*, dlint.json, dprint.json*, electron-builder.*, eslint*, firebase.json, grunt*, gulp*, jenkins*, knip.*, lerna*, lint-staged*, nest-cli.*, netlify*, nodemon*, npm-shrinkwrap.json, nx.*, package-lock.json, package.nls*.json, phpcs.xml, pm2.*, pnpm*, prettier*, pullapprove*, pyrightconfig.json, release-please*.json, release-tasks.sh, release.config.*, renovate*, rollup.config.*, rspack*, ruff.toml, simple-git-hooks*, sonar-project.properties, stylelint*, tslint*, tsup.config.*, turbo*, typedoc*, unlighthouse*, vercel*, vetur.config.*, webpack*, workspace.json, wrangler.toml, xo.config.*, yarn*", "Pipfile": ".editorconfig, .flake8, .isort.cfg, .python-version, Pipfile, Pipfile.lock, requirements*.in, requirements*.pip, requirements*.txt, tox.ini", "pubspec.yaml": ".metadata, .packages, all_lint_rules.yaml, analysis_options.yaml, build.yaml, pubspec.lock, pubspec_overrides.yaml", "pyproject.toml": ".commitlint*, .dlint.json, .dprint.json*, .editorconfig, .eslint*, .flake8, .flowconfig, .isort.cfg, .jslint*, .lintstagedrc*, .markdownlint*, .pdm-python, .pdm.toml, .prettier*, .pylintrc, .python-version, .ruff.toml, .stylelint*, .textlint*, .xo-config*, .yamllint*, MANIFEST.in, Pipfile, Pipfile.lock, biome.json*, commitlint*, dangerfile*, dlint.json, dprint.json*, eslint*, hatch.toml, lint-staged*, pdm.lock, phpcs.xml, poetry.lock, poetry.toml, prettier*, pyproject.toml, pyrightconfig.json, requirements*.in, requirements*.pip, requirements*.txt, ruff.toml, setup.cfg, setup.py, stylelint*, tox.ini, tslint*, xo.config.*", "quasar.conf.js": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, quasar.extensions.json, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "readme*": "AUTHORS, Authors, BACKERS*, Backers*, CHANGELOG*, CITATION*, CODEOWNERS, CODE_OF_CONDUCT*, CONTRIBUTING*, CONTRIBUTORS, COPYING*, CREDITS, Changelog*, Citation*, Code_Of_Conduct*, Codeowners, Contributing*, Contributors, Copying*, Credits, GOVERNANCE.MD, Governance.md, HISTORY.MD, History.md, LICENSE*, License*, MAINTAINERS, Maintainers, RELEASE_NOTES*, Release_Notes*, SECURITY.MD, SPONSORS*, Security.md, Sponsors*, authors, backers*, changelog*, citation*, code_of_conduct*, codeowners, contributing*, contributors, copying*, credits, governance.md, history.md, license*, maintainers, release_notes*, security.md, sponsors*", "Readme*": "AUTHORS, Authors, BACKERS*, Backers*, CHANGELOG*, CITATION*, CODEOWNERS, CODE_OF_CONDUCT*, CONTRIBUTING*, CONTRIBUTORS, COPYING*, CREDITS, Changelog*, Citation*, Code_Of_Conduct*, Codeowners, Contributing*, Contributors, Copying*, Credits, GOVERNANCE.MD, Governance.md, HISTORY.MD, History.md, LICENSE*, License*, MAINTAINERS, Maintainers, RELEASE_NOTES*, 
Release_Notes*, SECURITY.MD, SPONSORS*, Security.md, Sponsors*, authors, backers*, changelog*, citation*, code_of_conduct*, codeowners, contributing*, contributors, copying*, credits, governance.md, history.md, license*, maintainers, release_notes*, security.md, sponsors*", "README*": "AUTHORS, Authors, BACKERS*, Backers*, CHANGELOG*, CITATION*, CODEOWNERS, CODE_OF_CONDUCT*, CONTRIBUTING*, CONTRIBUTORS, COPYING*, CREDITS, Changelog*, Citation*, Code_Of_Conduct*, Codeowners, Contributing*, Contributors, Copying*, Credits, GOVERNANCE.MD, Governance.md, HISTORY.MD, History.md, LICENSE*, License*, MAINTAINERS, Maintainers, RELEASE_NOTES*, Release_Notes*, SECURITY.MD, SPONSORS*, Security.md, Sponsors*, authors, backers*, changelog*, citation*, code_of_conduct*, codeowners, contributing*, contributors, copying*, credits, governance.md, history.md, license*, maintainers, release_notes*, security.md, sponsors*", "remix.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, remix.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "requirements.txt": ".editorconfig, .flake8, .isort.cfg, .python-version, requirements*.in, requirements*.pip, requirements*.txt, tox.ini", "rush.json": ".browserslist*, .circleci*, .commitlint*, .cz-config.js, .czrc, .dlint.json, .dprint.json*, .editorconfig, .eslint*, .firebase*, .flowconfig, .github*, .gitlab*, .gitmojirc.json, .gitpod*, .huskyrc*, .jslint*, .knip.*, .lintstagedrc*, .markdownlint*, .node-version, .nodemon*, .npm*, .nvmrc, .pm2*, .pnp.*, .pnpm*, .prettier*, .pylintrc, .release-please*.json, .releaserc*, .ruff.toml, .sentry*, .simple-git-hooks*, .stackblitz*, .styleci*, .stylelint*, .tazerc*, .textlint*, .tool-versions, .travis*, .versionrc*, .vscode*, .watchman*, .xo-config*, .yamllint*, .yarnrc*, Procfile, apollo.config.*, appveyor*, azure-pipelines*, biome.json*, bower.json, build.config.*, bun.lockb, commitlint*, crowdin*, dangerfile*, dlint.json, dprint.json*, electron-builder.*, eslint*, firebase.json, grunt*, gulp*, jenkins*, knip.*, lerna*, lint-staged*, nest-cli.*, netlify*, nodemon*, npm-shrinkwrap.json, nx.*, package-lock.json, package.nls*.json, phpcs.xml, pm2.*, pnpm*, prettier*, pullapprove*, pyrightconfig.json, release-please*.json, release-tasks.sh, release.config.*, renovate*, rollup.config.*, rspack*, ruff.toml, simple-git-hooks*, sonar-project.properties, stylelint*, tslint*, tsup.config.*, turbo*, typedoc*, unlighthouse*, vercel*, vetur.config.*, webpack*, workspace.json, wrangler.toml, xo.config.*, yarn*", "setup.cfg": ".editorconfig, .flake8, .isort.cfg, .python-version, MANIFEST.in, requirements*.in, requirements*.pip, requirements*.txt, setup.cfg, tox.ini", "setup.py": ".editorconfig, .flake8, .isort.cfg, .python-version, MANIFEST.in, requirements*.in, requirements*.pip, requirements*.txt, setup.cfg, setup.py, tox.ini", "shims.d.ts": "*.d.ts", "svelte.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, 
.mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, houdini.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, mdsvex.config.js, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vite.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "vite.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "vue.config.*": "*.env, .babelrc*, .codecov, .cssnanorc*, .env.*, .envrc, .htmlnanorc*, .lighthouserc.*, .mocha*, .postcssrc*, .terserrc*, api-extractor.json, ava.config.*, babel.config.*, capacitor.config.*, contentlayer.config.*, cssnano.config.*, cypress.*, env.d.ts, formkit.config.*, formulate.config.*, histoire.config.*, htmlnanorc.*, i18n.config.*, ionic.config.*, jasmine.*, jest.config.*, jsconfig.*, karma*, lighthouserc.*, panda.config.*, playwright.config.*, postcss.config.*, puppeteer.config.*, rspack.config.*, sst.config.*, svgo.config.*, tailwind.config.*, tsconfig.*, tsdoc.*, uno.config.*, unocss.config.*, vitest.config.*, vuetify.config.*, webpack.config.*, windi.config.*", "*.asax": "$(capture).*.cs, $(capture).*.vb", "*.ascx": "$(capture).*.cs, $(capture).*.vb", "*.ashx": "$(capture).*.cs, $(capture).*.vb", "*.aspx": "$(capture).*.cs, $(capture).*.vb", "*.axaml": "$(capture).axaml.cs", "*.bloc.dart": "$(capture).event.dart, $(capture).state.dart", "*.c": "$(capture).h", "*.cc": "$(capture).hpp, $(capture).h, $(capture).hxx, $(capture).hh", "*.cjs": "$(capture).cjs.map, $(capture).*.cjs, $(capture)_*.cjs", "*.component.ts": "$(capture).component.html, $(capture).component.spec.ts, $(capture).component.css, $(capture).component.scss, $(capture).component.sass, $(capture).component.less", "*.cpp": "$(capture).hpp, $(capture).h, $(capture).hxx, $(capture).hh", "*.cs": "$(capture).*.cs", "*.cshtml": "$(capture).cshtml.cs", "*.csproj": "*.config, *proj.user, appsettings.*, bundleconfig.json", "*.css": "$(capture).css.map, $(capture).*.css", "*.cxx": "$(capture).hpp, $(capture).h, $(capture).hxx, $(capture).hh", "*.dart": "$(capture).freezed.dart, $(capture).g.dart", "*.db": "*.db-shm, *.db-wal", "*.ex": "$(capture).html.eex, $(capture).html.heex, $(capture).html.leex", "*.fs": "$(capture).fs.js, $(capture).fs.js.map, $(capture).fs.jsx, $(capture).fs.ts, $(capture).fs.tsx, $(capture).fs.rs, $(capture).fs.php, $(capture).fs.dart", "*.go": "$(capture)_test.go", "*.java": "$(capture).class", "*.js": "$(capture).js.map, $(capture).*.js, $(capture)_*.js", "*.jsx": "$(capture).js, $(capture).*.jsx, $(capture)_*.js, $(capture)_*.jsx, $(capture).less, 
$(capture).module.less", "*.master": "$(capture).*.cs, $(capture).*.vb", "*.md": "$(capture).*", "*.mjs": "$(capture).mjs.map, $(capture).*.mjs, $(capture)_*.mjs", "*.module.ts": "$(capture).resolver.ts, $(capture).controller.ts, $(capture).service.ts", "*.mts": "$(capture).mts.map, $(capture).*.mts, $(capture)_*.mts", "*.pubxml": "$(capture).pubxml.user", "*.py": "$(capture).pyi", "*.razor": "$(capture).razor.cs, $(capture).razor.css, $(capture).razor.scss", "*.resx": "$(capture).*.resx, $(capture).designer.cs, $(capture).designer.vb", "*.tex": "$(capture).acn, $(capture).acr, $(capture).alg, $(capture).aux, $(capture).bbl, $(capture).blg, $(capture).fdb_latexmk, $(capture).fls, $(capture).glg, $(capture).glo, $(capture).gls, $(capture).idx, $(capture).ind, $(capture).ist, $(capture).lof, $(capture).log, $(capture).lot, $(capture).out, $(capture).pdf, $(capture).synctex.gz, $(capture).toc, $(capture).xdv", "*.ts": "$(capture).js, $(capture).d.ts.map, $(capture).*.ts, $(capture)_*.js, $(capture)_*.ts", "*.tsx": "$(capture).ts, $(capture).*.tsx, $(capture)_*.ts, $(capture)_*.tsx, $(capture).less, $(capture).module.less, $(capture).scss, $(capture).module.scss", "*.vbproj": "*.config, *proj.user, appsettings.*, bundleconfig.json", "*.vue": "$(capture).*.ts, $(capture).*.js, $(capture).story.vue", "*.xaml": "$(capture).xaml.cs" }, Contributing The snippet is generated by script, do not edit the README directly. Instead, go to update.mjs , make changes and then submit a PR. Thanks! License MIT;Config of File Nesting for VS Code;vscode,config
secretflow/secretflow;简体中文 | English SecretFlow is a unified framework for privacy-preserving data intelligence and machine learning. To achieve this goal, it provides: An abstract device layer consisting of plain devices and secret devices which encapsulate various cryptographic protocols. A device flow layer modeling higher algorithms as device object flow and DAG. An algorithm layer to do data analysis and machine learning with horizontally or vertically partitioned data. A workflow layer that seamlessly integrates data processing, model training, and hyperparameter tuning. Documentation SecretFlow Getting Started User Guide API Reference Tutorial SecretFlow Related Projects Kuscia : A lightweight privacy-preserving computing task orchestration framework based on K3s. SCQL : A system that allows multiple distrusting parties to run joint analysis without revealing their private data. SPU : A provable, measurable secure computation device, which provides computation ability while keeping your private data protected. HEU : A high-performance homomorphic encryption algorithm library. YACL : A C++ library that contains cryptography, network and io modules which other SecretFlow code depends on. Install Please check INSTALLATION.md Deployment Please check DEPLOYMENT.md Learn PETs We also provide a curated list of papers and SecretFlow's tutorials on Privacy-Enhancing Technologies (PETs). Please check AWESOME-PETS.md Contributing Please check CONTRIBUTING.md Benchmarks Please check OVERALL_BENCHMARK.md Disclaimer Non-release versions of SecretFlow must not be used in any production environment due to possible bugs, glitches, lack of functionality, security issues or other problems.;A unified framework for privacy-preserving data analysis and machine learning;differential-privacy,homomorphic-encryption,machine-learning,privacy-preserving,private-set-intersection,secure-multiparty-computation,trusted-execution-environment,data-analysis,federated-learning,split-learning
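To make the device layers concrete, here is a minimal simulation-mode sketch in the spirit of the getting-started tutorials. It runs both parties inside one process; the party names, array shapes, and single-host setup are illustrative assumptions (see INSTALLATION.md and DEPLOYMENT.md for real multi-party clusters).

```python
import numpy as np
import secretflow as sf

# Simulate the parties inside one process (debug/simulation mode).
sf.init(parties=["alice", "bob"], address="local")

# Plain devices: a PYU runs ordinary Python on one party's side.
alice, bob = sf.PYU("alice"), sf.PYU("bob")

# Secret device: an SPU encapsulates an MPC protocol between the parties.
spu = sf.SPU(sf.utils.testing.cluster_def(parties=["alice", "bob"]))

# Data is born on each party's plain device...
x = alice(np.random.rand)(3, 4)
y = bob(np.random.rand)(3, 4)

# ...then moved to the secret device and combined without either party
# revealing its input to the other.
z = spu(lambda a, b: a + b)(x.to(spu), y.to(spu))
print(sf.reveal(z))  # reveal() is for debugging only
```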
google-deepmind/mctx;Mctx: MCTS-in-JAX Mctx is a library with a JAX -native implementation of Monte Carlo tree search (MCTS) algorithms such as AlphaZero , MuZero , and Gumbel MuZero . For computation speed-up, the implementation fully supports JIT-compilation. Search algorithms in Mctx are defined for and operate on batches of inputs, in parallel. This makes it possible to get the most out of accelerators and enables the algorithms to work with large learned environment models parameterized by deep neural networks. Installation You can install the latest released version of Mctx from PyPI via: sh pip install mctx or you can install the latest development version from GitHub: sh pip install git+https://github.com/google-deepmind/mctx.git Motivation Learning and search have been important topics since the early days of AI research. In the words of Rich Sutton : One thing that should be learned [...] is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning . Recently, search algorithms have been successfully combined with learned models parameterized by deep neural networks, resulting in some of the most powerful and general reinforcement learning algorithms to date (e.g. MuZero). However, using search algorithms in combination with deep neural networks requires efficient implementations, typically written in fast compiled languages; this can come at the expense of usability and hackability, especially for researchers who are not familiar with C++. In turn, this limits adoption and further research on this critical topic. Through this library, we hope to help researchers everywhere to contribute to such an exciting area of research. We provide JAX-native implementations of core search algorithms such as MCTS that we believe strike a good balance between performance and usability for researchers who want to investigate search-based algorithms in Python. The search methods provided by Mctx are heavily configurable to allow researchers to explore a variety of ideas in this space, and contribute to the next generation of search-based agents. Search in Reinforcement Learning In Reinforcement Learning the agent must learn to interact with the environment in order to maximize a scalar reward signal. On each step the agent selects an action and receives in exchange an observation and a reward. We may call whatever mechanism the agent uses to select the action the agent's policy . Classically, policies are parameterized directly by a function approximator (as in REINFORCE), or policies are inferred by inspecting a set of learned estimates of the value of each action (as in Q-learning). Alternatively, search allows the agent to select actions by constructing, on the fly in each state, a policy or a value function local to the current state, using a learned model of the environment. Exhaustive search over all possible future courses of action is computationally prohibitive in any non-trivial environment, hence we need search algorithms that can make the best use of a finite computational budget. Typically, priors are needed to guide which nodes in the search tree to expand (to reduce the breadth of the tree that we construct), and value functions are used to estimate the value of incomplete paths in the tree that don't reach an episode termination (to reduce the depth of the search tree). 
Quickstart Mctx provides a low-level generic search function and high-level concrete policies: muzero_policy and gumbel_muzero_policy . The user needs to provide several learned components to specify the representation, dynamics and prediction used by MuZero . In the context of the Mctx library, the representation of the root state is specified by a RootFnOutput . The RootFnOutput contains the prior_logits from a policy network, the estimated value of the root state, and any embedding suitable to represent the root state for the environment model. The dynamics environment model needs to be specified by a recurrent_fn . A recurrent_fn(params, rng_key, action, embedding) call takes an action and a state embedding . The call should return a tuple (recurrent_fn_output, new_embedding) with a RecurrentFnOutput and the embedding of the next state. The RecurrentFnOutput contains the reward and discount for the transition, and prior_logits and value for the new state. In examples/visualization_demo.py , you can see calls to a policy: python policy_output = mctx.gumbel_muzero_policy(params, rng_key, root, recurrent_fn, num_simulations=32) The policy_output.action contains the action proposed by the search. That action can be passed to the environment. To improve the policy, the policy_output.action_weights contain targets usable to train the policy probabilities. We recommend using the gumbel_muzero_policy . Gumbel MuZero guarantees a policy improvement if the action values are correctly evaluated. The policy improvement is demonstrated in examples/policy_improvement_demo.py . Example projects The following projects demonstrate Mctx usage: Pgx — A collection of 20+ vectorized JAX environments, including backgammon, chess, shogi, Go, and an AlphaZero example. Basic Learning Demo with Mctx — AlphaZero on random mazes. a0-jax — AlphaZero on Connect Four, Gomoku, and Go. muax — MuZero on gym-style environments (CartPole, LunarLander). Classic MCTS — A simple example on Connect Four. mctx-az — Mctx with AlphaZero subtree persistence. Tell us about your project. Citing Mctx This repository is part of the DeepMind JAX Ecosystem; to cite Mctx, please use the citation: bibtex @software{deepmind2020jax, title = {The {D}eep{M}ind {JAX} {E}cosystem}, author = {DeepMind and Babuschkin, Igor and Baumli, Kate and Bell, Alison and Bhupatiraju, Surya and Bruce, Jake and Buchlovsky, Peter and Budden, David and Cai, Trevor and Clark, Aidan and Danihelka, Ivo and Dedieu, Antoine and Fantacci, Claudio and Godwin, Jonathan and Jones, Chris and Hemsley, Ross and Hennigan, Tom and Hessel, Matteo and Hou, Shaobo and Kapturowski, Steven and Keck, Thomas and Kemaev, Iurii and King, Michael and Kunesch, Markus and Martens, Lena and Merzic, Hamza and Mikulik, Vladimir and Norman, Tamara and Papamakarios, George and Quan, John and Ring, Roman and Ruiz, Francisco and Sanchez, Alvaro and Sartran, Laurent and Schneider, Rosalia and Sezener, Eren and Spencer, Stephen and Srinivasan, Srivatsan and Stanojevi\'{c}, Milo\v{s} and Stokowiec, Wojciech and Wang, Luyu and Zhou, Guangyao and Viola, Fabio}, url = {http://github.com/deepmind}, year = {2020}, };Monte Carlo tree search in JAX;jax,reinforcement-learning,monte-carlo-tree-search
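To make the recurrent_fn contract described in the Quickstart concrete, here is a minimal, self-contained sketch. The counter embedding, uniform priors, and constant rewards are toy placeholders standing in for a learned model, not a real MuZero network:

```python
import jax
import jax.numpy as jnp
import mctx

batch_size, num_actions = 4, 3

def recurrent_fn(params, rng_key, action, embedding):
    # Toy dynamics: the "environment model" just increments a counter.
    new_embedding = embedding + 1
    output = mctx.RecurrentFnOutput(
        reward=jnp.ones(batch_size),                       # reward for this transition
        discount=jnp.full(batch_size, 0.99),               # 0 would signal termination
        prior_logits=jnp.zeros((batch_size, num_actions)), # uniform policy prior
        value=jnp.zeros(batch_size),                       # value estimate of new state
    )
    return output, new_embedding

root = mctx.RootFnOutput(
    prior_logits=jnp.zeros((batch_size, num_actions)),
    value=jnp.zeros(batch_size),
    embedding=jnp.zeros(batch_size),
)

policy_output = mctx.gumbel_muzero_policy(
    params=(), rng_key=jax.random.PRNGKey(0), root=root,
    recurrent_fn=recurrent_fn, num_simulations=32)

print(policy_output.action)          # one proposed action per batch element
print(policy_output.action_weights)  # improved policy targets for training
```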
mito-ds/mito;Mito Monorepo Mito is a spreadsheet that lives inside your Jupyter notebooks, Dash apps, and Streamlit apps. It allows you to edit Pandas dataframes like an Excel file, and generates Python code that corresponds to each of your edits. Mito aims to be the first tool in your data science toolkit and supports: - Point-and-click CSV and XLSX import - Excel-style pivot tables - Graph generation - Filtering and sorting - Merge (lookups) - Excel-style formulas - Column summary statistics - And much more! Mito is an open source tool (look around...), and will always be built by and for our community. See our plans page for more detail about our features, and consider purchasing Mito Pro to help fund development. ⚡️ Quick start To get started, open a terminal, command prompt, or Anaconda Prompt. Then, download the Mito installer: python -m pip install mitoinstaller Then, run the installer. This command may take a few moments to run: python -m mitoinstaller install This will install Mito for classic Jupyter Notebooks and JupyterLab 3.0. More detailed installation instructions can also be found here . If you're interested in Mito Pro, see our plans page . Documentation You can find all Mito documentation available here . Getting Help To get support, join our Discord or Slack . Docker Quick Start Coming soon! MyBinder MyBinder link for the main branch: Contributing This repo is the monorepo for the Mito project, and so contains the mitosheet package, the trymito.io website, and our documentation as well. Mitosheet To see the code for the mitosheet package, see the mitosheet folder. Testing To test the current version of mitosheet that is deployed on Test PyPI, create an empty venv, and run the commands python3 -m pip install mitoinstaller and python3 -m mitoinstaller install --test-pypi Then, launch JLab to test the current version of the mitosheet package on Test PyPI. Mitoinstaller To see the mitoinstaller package, see the mitoinstaller folder. Trymito.io To see the code for our website, see the trymito.io folder. Docs Our docs are hosted on Gitbooks here . You can see and edit the docs in the /docs folder, PRs greatly appreciated!;The mitosheet package, trymito.io, and other public Mito code.;data-science,python,data,data-visualization,data-analysis,jupyter,pandas,streamlit-component
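After installation, a Mito sheet renders from a single call in a notebook cell. A minimal sketch (the toy dataframe is just an example):

```python
import pandas as pd
import mitosheet

# Any pandas dataframe(s) can be passed to the sheet.
df = pd.DataFrame({"zip": [10001, 94107], "city": ["NYC", "SF"]})

# Opens the spreadsheet UI in the notebook output; each edit you make
# generates the equivalent pandas code in the cell below.
mitosheet.sheet(df)
```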
jamiebuilds/tailwindcss-animate;tailwindcss-animate A Tailwind CSS plugin for creating beautiful animations. Installation Install the plugin from npm: sh npm install -D tailwindcss-animate Then add the plugin to your tailwind.config.js file: js // @filename tailwind.config.js module.exports = { theme: { // ... }, plugins: [ require("tailwindcss-animate"), // ... ], } Documentation Basic Usage Changing animation delay Changing animation direction Changing animation duration Changing animation fill mode Changing animation iteration count Changing animation play state Changing animation timing function Prefers-reduced-motion Enter & Exit Animations Adding enter animations Adding exit animations Changing enter animation starting opacity Changing enter animation starting rotation Changing enter animation starting scale Changing enter animation starting translate Changing exit animation ending opacity Changing exit animation ending rotation Changing exit animation ending scale Changing exit animation ending translate Basic Usage Changing animation delay Use the delay-{amount} utilities to control an element’s animation-delay . html <button class="animate-bounce delay-150 duration-300 ...">Button A</button> <button class="animate-bounce delay-300 duration-300 ...">Button B</button> <button class="animate-bounce delay-700 duration-300 ...">Button C</button> Learn more in the animation delay documentation. Changing animation direction Use the direction-{keyword} utilities to control an element’s animation-direction . html <button class="animate-bounce direction-normal ...">Button A</button> <button class="animate-bounce direction-reverse ...">Button B</button> <button class="animate-bounce direction-alternate ...">Button C</button> <button class="animate-bounce direction-alternate-reverse ...">Button D</button> Learn more in the animation direction documentation. Changing animation duration Use the duration-{amount} utilities to control an element’s animation-duration . html <button class="animate-bounce duration-150 ...">Button A</button> <button class="animate-bounce duration-300 ...">Button B</button> <button class="animate-bounce duration-700 ...">Button C</button> Learn more in the animation duration documentation. Changing animation fill mode Use the fill-mode-{keyword} utilities to control an element’s animation-fill-mode . html <button class="animate-bounce fill-mode-none ...">Button A</button> <button class="animate-bounce fill-mode-forwards ...">Button B</button> <button class="animate-bounce fill-mode-backwards ...">Button C</button> <button class="animate-bounce fill-mode-both ...">Button D</button> Learn more in the animation fill mode documentation. Changing animation iteration count Use the repeat-{amount} utilities to control an element’s animation-iteration-count . html <button class="animate-bounce repeat-0 ...">Button A</button> <button class="animate-bounce repeat-1 ...">Button B</button> <button class="animate-bounce repeat-infinite ...">Button C</button> Learn more in the animation iteration count documentation. Changing animation play state Use the running and paused utilities to control an element’s animation-play-state . html <button class="animate-bounce running ...">Button A</button> <button class="animate-bounce paused ...">Button B</button> Learn more in the animation play state documentation. Changing animation timing function Use the ease-{keyword} utilities to control an element’s animation-timing-function . 
html <button class="animate-bounce ease-linear ...">Button A</button> <button class="animate-bounce ease-in ...">Button B</button> <button class="animate-bounce ease-out ...">Button C</button> <button class="animate-bounce ease-in-out ...">Button C</button> Learn more in the animation timing function documentation. Prefers-reduced-motion For situations where the user has specified that they prefer reduced motion, you can conditionally apply animations and transitions using the motion-safe and motion-reduce variants: html <button class="motion-safe:animate-bounce ...">Button B</button> Enter & Exit Animations Adding enter animations To give an element an enter animation, use the animate-in utility, in combination with some fade-in , spin-in , zoom-in , and slide-in-from utilities. html <button class="animate-in fade-in ...">Button A</button> <button class="animate-in spin-in ...">Button B</button> <button class="animate-in zoom-in ...">Button C</button> <button class="animate-in slide-in-from-top ...">Button D</button> <button class="animate-in slide-in-from-left ...">Button E</button> Learn more in the enter animation documentation. Adding exit animations To give an element an exit animation, use the animate-out utility, in combination with some fade-out , spin-out , zoom-out , and slide-out-from utilities. html <button class="animate-out fade-out ...">Button A</button> <button class="animate-out spin-out ...">Button B</button> <button class="animate-out zoom-out ...">Button C</button> <button class="animate-out slide-out-from-top ...">Button D</button> <button class="animate-out slide-out-from-left ...">Button E</button> Learn more in the exit animation documentation. Changing enter animation starting opacity Set the starting opacity of an animation using the fade-in-{amount} utilities. html <button class="animate-in fade-in ...">Button A</button> <button class="animate-in fade-in-25 ...">Button B</button> <button class="animate-in fade-in-50 ...">Button C</button> <button class="animate-in fade-in-75 ...">Button C</button> Learn more in the enter animation opacity documentation. Changing enter animation starting rotation Set the starting rotation of an animation using the spin-in-{amount} utilities. html <button class="animate-in spin-in-1 ...">Button A</button> <button class="animate-in spin-in-6 ...">Button B</button> <button class="animate-in spin-in-75 ...">Button C</button> <button class="animate-in spin-in-90 ...">Button C</button> Learn more in the enter animation rotate documentation. Changing enter animation starting scale Set the starting scale of an animation using the zoom-in-{amount} utilities. html <button class="animate-in zoom-in ...">Button A</button> <button class="animate-in zoom-in-50 ...">Button B</button> <button class="animate-in zoom-in-75 ...">Button C</button> <button class="animate-in zoom-in-95 ...">Button C</button> Learn more in the enter animation scale documentation. Changing enter animation starting translate Set the starting translate of an animation using the slide-in-from-{direction}-{amount} utilities. html <button class="animate-in slide-in-from-top ...">Button A</button> <button class="animate-in slide-in-from-bottom-48 ...">Button B</button> <button class="animate-in slide-in-from-left-72 ...">Button C</button> <button class="animate-in slide-in-from-right-96 ...">Button C</button> Learn more in the enter animation translate documentation. 
Changing exit animation ending opacity Set the ending opacity of an animation using the fade-out-{amount} utilities. html <button class="animate-out fade-out ...">Button A</button> <button class="animate-out fade-out-25 ...">Button B</button> <button class="animate-out fade-out-50 ...">Button C</button> <button class="animate-out fade-out-75 ...">Button D</button> Learn more in the exit animation opacity documentation. Changing exit animation ending rotation Set the ending rotation of an animation using the spin-out-{amount} utilities. html <button class="animate-out spin-out-1 ...">Button A</button> <button class="animate-out spin-out-6 ...">Button B</button> <button class="animate-out spin-out-75 ...">Button C</button> <button class="animate-out spin-out-90 ...">Button D</button> Learn more in the exit animation rotate documentation. Changing exit animation ending scale Set the ending scale of an animation using the zoom-out-{amount} utilities. html <button class="animate-out zoom-out ...">Button A</button> <button class="animate-out zoom-out-50 ...">Button B</button> <button class="animate-out zoom-out-75 ...">Button C</button> <button class="animate-out zoom-out-95 ...">Button D</button> Learn more in the exit animation scale documentation. Changing exit animation ending translate Set the ending translate of an animation using the slide-out-to-{direction}-{amount} utilities. html <button class="animate-out slide-out-to-top ...">Button A</button> <button class="animate-out slide-out-to-bottom-48 ...">Button B</button> <button class="animate-out slide-out-to-left-72 ...">Button C</button> <button class="animate-out slide-out-to-right-96 ...">Button D</button> Learn more in the exit animation translate documentation.;A Tailwind CSS plugin for creating beautiful animations;[]
qingsongedu/time-series-transformers-review;Transformers in Time Series A professionally curated list of awesome resources (paper, code, data, etc.) on Transformers in Time Series , which is the first work to comprehensively and systematically summarize the recent advances of Transformers for modeling time series data to the best of our knowledge. We will continue to update this list with the newest resources. If you find any missing resources (paper/code) or errors, please feel free to open an issue or make a pull request. For general AI for Time Series (AI4TS) Papers, Tutorials, and Surveys at the Top AI Conferences and Journals , please check This Repo . For general Recent AI Advances: Tutorials and Surveys in various areas (DL, ML, DM, CV, NLP, Speech, etc.) at the Top AI Conferences and Journals , please check This Repo . Survey paper Transformers in Time Series: A Survey (IJCAI'23 Survey Track) Qingsong Wen , Tian Zhou, Chaoli Zhang, Weiqi Chen, Ziqing Ma, Junchi Yan and Liang Sun . If you find this repository helpful for your work, please kindly cite our survey paper. bibtex @inproceedings{wen2023transformers, title={Transformers in time series: A survey}, author={Wen, Qingsong and Zhou, Tian and Zhang, Chaoli and Chen, Weiqi and Ma, Ziqing and Yan, Junchi and Sun, Liang}, booktitle={International Joint Conference on Artificial Intelligence(IJCAI)}, year={2023} } Taxonomy of Transformers for time series modeling Application Domains of Time Series Transformers [official code] Transformers in Forecasting Time Series Forecasting CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting, in ICLR 2024. [paper] [official code] Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting, in ICLR 2024. [paper] [official code] GAFormer: Enhancing Timeseries Transformers Through Group-Aware Embeddings, in ICLR 2024. [paper] Transformer-Modulated Diffusion Models for Probabilistic Multivariate Time Series Forecasting, in ICLR 2024. [paper] iTransformer: Inverted Transformers Are Effective for Time Series Forecasting, in ICLR 2024. [paper] Considering Nonstationary within Multivariate Time Series with Variational Hierarchical Transformer for Forecasting, in AAAI 2024. [paper] Latent Diffusion Transformer for Probabilistic Time Series Forecasting, in AAAI 2024. [paper] BasisFormer: Attention-based Time Series Forecasting with Learnable and Interpretable Basis, in NeurIPS 2023. [paper] ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling, in NeurIPS 2023. [paper] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers, in ICLR 2023. [paper] [code] Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting, in ICLR 2023. [paper] Scaleformer: Iterative Multi-scale Refining Transformers for Time Series Forecasting, in ICLR 2023. [paper] Non-stationary Transformers: Rethinking the Stationarity in Time Series Forecasting, in NeurIPS 2022. [paper] Learning to Rotate: Quaternion Transformer for Complicated Periodical Time Series Forecasting, in KDD 2022. [paper] FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting, in ICML 2022. [paper] [official code] TACTiS: Transformer-Attentional Copulas for Time Series, in ICML 2022. [paper] Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting, in ICLR 2022. 
[paper] [official code] Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting, in NeurIPS 2021. [paper] [official code] Informer: Beyond efficient transformer for long sequence time-series forecasting, in AAAI 2021. [paper] [official code] [dataset] Temporal fusion transformers for interpretable multi-horizon time series forecasting, in International Journal of Forecasting 2021. [paper] [code] Probabilistic Transformer For Time Series Analysis, in NeurIPS 2021. [paper] Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case, in arXiv 2020. [paper] Adversarial sparse transformer for time series forecasting, in NeurIPS 2020. [paper] [code] Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting, in NeurIPS 2019. [paper] [code] SSDNet: State Space Decomposition Neural Network for Time Series Forecasting, in ICDM 2021, [paper] From Known to Unknown: Knowledge-guided Transformer for Time-Series Sales Forecasting in Alibaba, in arXiv 2021. [paper] TCCT: Tightly-coupled convolutional transformer on time series forecasting, in Neurocomputing 2022. [paper] Triformer: Triangular, Variable-Specific Attentions for Long Sequence Multivariate Time Series Forecasting, in IJCAI 2022. [paper] #### Spatio-Temporal Forecasting * AirFormer: Predicting Nationwide Air Quality in China with Transformers, in AAAI 2023. [paper] [official code] * Earthformer: Exploring Space-Time Transformers for Earth System Forecasting, in NeurIPS 2022. [paper] [official code] * Bidirectional Spatial-Temporal Adaptive Transformer for Urban Traffic Flow Forecasting, in TNNLS 2022. [paper] * Spatio-temporal graph transformer networks for pedestrian trajectory prediction, in ECCV 2020. [paper] [official code] * Spatial-temporal transformer networks for traffic flow forecasting, in arXiv 2020. [paper] [official code] * Traffic transformer: Capturing the continuity and periodicity of time series for traffic forecasting, in Transactions in GIS 2022. [paper] #### Event Irregular Time Series Modeling * Time Series as Images: Vision Transformer for Irregularly Sampled Time Series,in NeurIPS 2023. [paper] * ContiFormer: Continuous-Time Transformer for Irregular Time Series Modeling,in NeurIPS 2023. [paper] * HYPRO: A Hybridly Normalized Probabilistic Model for Long-Horizon Prediction of Event Sequences,in NeurIPS 2022. [paper] [official code] * Transformer Embeddings of Irregularly Spaced Events and Their Participants, in ICLR 2022. [paper] [official code] * Self-attentive Hawkes process, in ICML 2020. [paper] [official code] * Transformer Hawkes process, in ICML 2020. [paper] [official code] Transformers in Anomaly Detection MEMTO: Memory-guided Transformer for Multivariate Time Series Anomaly Detection,in NeurIPS 2023. [paper] CAT: Beyond Efficient Transformer for Content-Aware Anomaly Detection in Event Sequences, in KDD 2022. [paper] [official code] DCT-GAN: Dilated Convolutional Transformer-based GAN for Time Series Anomaly Detection, in TKDE 2022. [paper] Concept Drift Adaptation for Time Series Anomaly Detection via Transformer, in Neural Processing Letters 2022. [paper] Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy, in ICLR 2022. [paper] [official code] TranAD: Deep Transformer Networks for Anomaly Detection in Multivariate Time Series Data, in VLDB 2022. 
[paper] [official code] Learning graph structures with transformer for multivariate time series anomaly detection in IoT, in IEEE Internet of Things Journal 2021. [paper] [official code] Spacecraft Anomaly Detection via Transformer Reconstruction Error, in ICASSE 2019. [paper] Unsupervised Anomaly Detection in Multivariate Time Series through Transformer-based Variational Autoencoder, in CCDC 2021. [paper] Variational Transformer-based anomaly detection approach for multivariate time series, in Measurement 2022. [paper] Transformers in Classification Time Series as Images: Vision Transformer for Irregularly Sampled Time Series, in NeurIPS 2023. [paper] TrajFormer: Efficient Trajectory Classification with Transformers, in CIKM 2022. [paper] TARNet : Task-Aware Reconstruction for Time-Series Transformer, in KDD 2022. [paper] [official code] A transformer-based framework for multivariate time series representation learning, in KDD 2021. [paper] [official code] Voice2series: Reprogramming acoustic models for time series classification, in ICML 2021. [paper] [official code] Gated Transformer Networks for Multivariate Time Series Classification, in arXiv 2021. [paper] [official code] Self-attention for raw optical satellite time series classification, in ISPRS Journal of Photogrammetry and Remote Sensing 2020. [paper] [official code] Self-supervised pretraining of transformers for satellite image time series classification, in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2020. [paper] Self-Supervised Transformer for Sparse and Irregularly Sampled Multivariate Clinical Time-Series, in ACM TKDD 2022. [paper] [official code] Time Series Related Survey What Can Large Language Models Tell Us about Time Series Analysis, in arXiv 2024. [paper] Large Models for Time Series and Spatio-Temporal Data: A Survey and Outlook, in arXiv 2023. [paper] [Website] Deep Learning for Multivariate Time Series Imputation: A Survey, in arXiv 2024. [paper] [Website] Self-Supervised Learning for Time Series Analysis: Taxonomy, Progress, and Prospects, in arXiv 2023. [paper] [Website] A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection, in arXiv 2023. [paper] [Website] Time series data augmentation for deep learning: a survey, in IJCAI 2021. [paper] Neural temporal point processes: a review, in IJCAI 2021. [paper] Time-series forecasting with deep learning: a survey, in Philosophical Transactions of the Royal Society A 2021. [paper] Deep learning for time series forecasting: a survey, in Big Data 2021. [paper] Neural forecasting: Introduction and literature overview, in arXiv 2020. [paper] Deep learning for anomaly detection in time-series data: review, analysis, and guidelines, in Access 2021. [paper] A review on outlier/anomaly detection in time series data, in ACM Computing Surveys 2021. [paper] A unifying review of deep and shallow anomaly detection, in Proceedings of the IEEE 2021. [paper] Deep learning for time series classification: a review, in Data Mining and Knowledge Discovery 2019. [paper] More related time series surveys, tutorials, and papers can be found at this repo . Transformer/Attention Tutorial/Survey in Other Disciplines Everything You Need to Know about Transformers: Architectures, Optimization, Applications, and Interpretation, in AAAI Tutorial 2023. [link] Transformer Architectures for Multimodal Signal Processing and Decision Making, in ICASSP Tutorial 2022. 
[link] Efficient transformers: A survey, in ACM Computing Surveys 2022. [paper] A survey on visual transformer, in IEEE TPAMI 2022. [paper] A General Survey on Attention Mechanisms in Deep Learning, in IEEE TKDE 2022. [paper] Attention, please! A survey of neural attention models in deep learning, in Artificial Intelligence Review 2022. [paper] Attention mechanisms in computer vision: A survey, in Computational Visual Media 2022. [paper] Survey: Transformer based video-language pre-training, in AI Open 2022. [paper] Transformers in vision: A survey, in ACM Computing Surveys 2021. [paper] Pre-trained models: Past, present and future, in AI Open 2021. [paper] An attentive survey of attention models, in ACM TIST 2021. [paper] Attention in natural language processing, in IEEE TNNLS 2020. [paper] Pre-trained models for natural language processing: A survey, in Science China Technological Sciences 2020. [paper] A review on the attention mechanism of deep learning, in Neurocomputing 2021. [paper] A Survey of Transformers, in arXiv 2021. [paper] A Survey of Vision-Language Pre-Trained Models, in arXiv 2022. [paper] Video Transformers: A Survey, in arXiv 2022. [paper] Transformer for Graphs: An Overview from Architecture Perspective, in arXiv 2022. [paper] Transformers in Medical Imaging: A Survey, in arXiv 2022. [paper] A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models, in arXiv 2022. [paper];A professionally curated list of awesome resources (paper, code, data, etc.) on transformers in time series.;timeseries,transformer,forecasting,anomalydetection,classification,timeseries-analysis,time-series,time-series-forecasting,machine-learning,deep-learning
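For readers new to the area, the building block shared by essentially every model listed above is scaled dot-product self-attention applied across time steps. A deliberately minimal NumPy sketch (untrained, with the learned Q/K/V projections omitted for brevity):

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a (seq_len, d_model) series."""
    d = x.shape[-1]
    q, k, v = x, x, x                       # untrained: Q/K/V projections omitted
    scores = q @ k.T / np.sqrt(d)           # (seq_len, seq_len) pairwise affinities
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)           # softmax over time steps
    return w @ v                            # each step attends to the whole series

series = np.random.randn(96, 8)             # 96 time steps, 8 features
out = self_attention(series)
print(out.shape)                            # (96, 8)
```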
FBlackBox/BlackBox;By now, I think everybody knows about this event, which has affected so many innocent developers. So I have decided to dissolve the Telegram group and delete this project. About this event: - https://github.com/FBlackBox/BlackBox/issues/122 - https://github.com/FBlackBox/BlackBox/issues/121;BlackBox is a virtual engine that can clone and run virtual applications on Android; users don't have to install APK files to run applications on their devices. BlackBox controls all virtual applications, so you can do anything you want by using BlackBox.;blackbox,virtualapp,android,plugin,virtualbox,library,virtual-engine
FBlackBox/BlackBox
pheralb/svgl;Discover ✦ Request logo ✦ Submit logo ✦ Extensions ✦ API ✦ Contributing ![Svelte Badge](https://img.shields.io/badge/Svelte-FF3E00?logo=svelte&logoColor=fff&style=flat) [![Build Status](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Factions-badge.atrox.dev%2Fpheralb%2Fsvgl%2Fbadge%3Fref%3Dmain&style=flat)](https://actions-badge.atrox.dev/pheralb/svgl/goto?ref=main) ![GitHub stars](https://img.shields.io/github/stars/pheralb/svgl) ![GitHub issues](https://img.shields.io/github/issues/pheralb/svgl) ![GitHub forks](https://img.shields.io/github/forks/pheralb/svgl) ![GitHub PRs](https://img.shields.io/github/issues-pr/pheralb/svgl) ![Tailwind CSS Badge](https://img.shields.io/badge/Tailwind%20CSS-06B6D4?logo=tailwindcss&logoColor=fff&style=flat) 🛠️ Stack Sveltekit - Web development, streamlined. Typescript - JavaScript with syntax for types. mdsvex - Markdown for Svelte apps. Shiki - A beautiful Syntax Highlighter. Tailwindcss - A utility-first CSS framework for rapidly building custom designs. bits-ui - A collection of headless components for Svelte. clsx + tailwind-merge inspired by shadcn/ui - A tiny utility for constructing className strings conditionally. Prettier + prettier-plugin-tailwindcss - An opinionated code formatter. Lucide Icons + phosphor-svelte - A clean and friendly icons libraries. svelte-sonner - An opinionated toast component for Svelte. @svgr/core - Node.js utility to transform SVGs into React components. @upstash/redis + @upstash/ratelimit - Serverless Redis for developers. Vitest - Blazing Fast Unit Test Framework. 🚀 Getting Started [!IMPORTANT] Before submitting the SVG, make sure that you have permission or that the license of the SVG allows you to add it to svgl. If you are not sure, please contact the company or author. You will need: Node.js 16+ (recommended 18 LTS) . Git . Fork this repository and clone it locally: bash git clone git@github.com:your_username/svgl.git Install dependencies: ```bash Install pnpm globally if you don't have it: npm install -g pnpm and install dependencies: pnpm install ``` Go to the static/library folder and add your .svg logo. [!WARNING] Remember to optimize SVG for web, you can use SVGOMG . When you optimize the SVG, make sure that the viewBox is not removed. The size limit for each .svg is 20kb . Go to the src/data/svgs.ts and add the information about your logo, following the structure: If the logo is a solid color: json { "title": "Title", "category": "Category", "route": "/library/your_logo.svg", "url": "Website" } If the logo has logo + wordmark version: json { "title": "Title", "category": "Category", "route": "/library/your_logo.svg", "wordmark": "/library/your_logo_wordmark.svg", "url": "Website" } If the logo/wordmark has light and dark mode: json { "title": "Title", "category": "Category", "route": { "light": "/library/your_logo_light.svg", "dark": "/library/your_logo_dark.svg" }, "wordmark": { "light": "/library/your_wordmark-logo_light.svg", "dark": "/library/your_wordmark-logo_dark.svg" }, "url": "Website" } [!NOTE] The list of categories is here: src/types/categories.ts . You can add a new category if you need it. You can add multiple categories to the same logo, for example: "category": ["Social", "Design"] (max 3 categories per logo). And create a pull request with your logo 🚀. (Optional) If you want to run the API locally, you will need to create a .env file in the root of the project with the following variables: Create a Upstash account . Create a Upstash Redis Database . 
bash SVGL_API_REQUESTS = 1 UPSTASH_REDIS_URL = "" UPSTASH_REDIS_TOKEN = "" 📦 Extensions A list of extensions that use the svgl API , created by the community: | | Extension | Description | Created by | Link | | ---------------------------------------------------------------------------------------------- | ---------------- | -------------------------------------------------- | ------------------------------------------------------ | ------------------------------------------------------------------------------------------- | | | svgls | A CLI for easily adding SVG icons to your project. | sujjeee | GitHub Repository | | | SVGL for Figma | Add svgs from svgl to your Figma project. | quilljou | Figma Plugin | | | SVGL for Raycast | Search SVG logos via svgl. | 1weiho | Raycast Store | | | SVGL for VSCode | SVGL directly in your VSCode. | girlazote | VSCode Marketplace | | | SVGL Badge | A beautiful badges with svgl SVG logos. | ridemountainpig | Svgl Badge | ✌️ Contributing 🔑 License MIT .;🧩 A beautiful library with SVG logos. Built with Sveltekit & Tailwind CSS.;logos,open-source,svg-icons,svg-images,svg,hacktoberfest,optimized,svelte,sveltekit,tailwindcss
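For readers extending the library, the JSON shapes above imply a small schema. Here is a hedged TypeScript sketch of that entry type; the names below are illustrative, and the real definitions live under src/types (e.g. src/types/categories.ts):

```ts
// Illustrative sketch only — not the actual svgl source.
type ThemeVariants = {
  light: string; // path under /library for light mode
  dark: string;  // path under /library for dark mode
};

interface SvgEntry {
  title: string;
  category: string | string[];       // up to 3 categories per logo
  route: string | ThemeVariants;     // single file, or light/dark variants
  wordmark?: string | ThemeVariants; // optional logo + wordmark version
  url: string;                       // brand website
}
```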
pheralb/svgl
mtlynch/picoshare;PicoShare Overview PicoShare is a minimalist service that allows you to share files easily. Live demo Why PicoShare? There are a million services for sharing files, but none of them are quite like PicoShare. Here are PicoShare's advantages: Direct download links : PicoShare gives you a direct download link you can share with anyone. They can view or download the file with no ads or signups. No file restrictions : Unlike sites like imgur, Vimeo, or SoundCloud that only allow you to share specific types of files, PicoShare lets you share any file of any size. No resizing/re-encoding : If you upload media like images, video, or audio, PicoShare never forces you to wait on re-encoding. You get a direct download link as soon as you upload the file, and PicoShare never resizes or re-encodes your file. Run PicoShare From source bash PS_SHARED_SECRET=somesecretpass PORT=4001 \ go run cmd/picoshare/main.go From Docker To run PicoShare within a Docker container, mount a volume from your local system to store the PicoShare sqlite database. bash docker run \ --env "PORT=4001" \ --env "PS_SHARED_SECRET=somesecretpass" \ --publish 4001:4001/tcp \ --volume "${PWD}/data:/data" \ --name picoshare \ mtlynch/picoshare From Docker + cloud data replication If you specify settings for a Litestream -compatible cloud storage location, PicoShare will automatically replicate your data. You can kill the container and start it later, and PicoShare will restore your data from the cloud storage location and continue as if there was no interruption. ```bash PORT=4001 PS_SHARED_SECRET="somesecretpass" LITESTREAM_BUCKET=YOUR-LITESTREAM-BUCKET LITESTREAM_ENDPOINT=YOUR-LITESTREAM-ENDPOINT LITESTREAM_ACCESS_KEY_ID=YOUR-ACCESS-ID LITESTREAM_SECRET_ACCESS_KEY=YOUR-SECRET-ACCESS-KEY docker run \ --publish "${PORT}:${PORT}/tcp" \ --env "PORT=${PORT}" \ --env "PS_SHARED_SECRET=${PS_SHARED_SECRET}" \ --env "LITESTREAM_ACCESS_KEY_ID=${LITESTREAM_ACCESS_KEY_ID}" \ --env "LITESTREAM_SECRET_ACCESS_KEY=${LITESTREAM_SECRET_ACCESS_KEY}" \ --env "LITESTREAM_BUCKET=${LITESTREAM_BUCKET}" \ --env "LITESTREAM_ENDPOINT=${LITESTREAM_ENDPOINT}" \ --name picoshare \ mtlynch/picoshare ``` Notes: Only run one Docker container for each Litestream location. PicoShare can't sync writes across multiple instances. Using Docker Compose To run PicoShare under docker-compose, copy the following to a file called docker-compose.yml and then run docker-compose up . yaml version: "3.2" services: picoshare: image: mtlynch/picoshare environment: - PORT=4001 - PS_SHARED_SECRET=dummypass # Change to any password ports: - 4001:4001 command: -db /data/store.db volumes: - ./data:/data Parameters Command-line flags | Flag | Meaning | Default Value | | ----- | ----------------------- | ----------------- | | -db | Path to SQLite database | "data/store.db" | Environment variables | Environment Variable | Meaning | | -------------------- | ------------------------------------------------------------------------------------ | | PORT | TCP port on which to listen for HTTP connections (defaults to 4001). | | PS_BEHIND_PROXY | Set to "true" for better logging when PicoShare is running behind a reverse proxy. | | PS_SHARED_SECRET | (required) Specifies a passphrase for the admin user to log in to PicoShare. 
| Docker environment variables You can adjust behavior of the Docker container by specifying these Docker-specific variables with docker run -e : | Environment Variable | Meaning | | ------------------------------ | ----------------------------------------------------------------------------------------------------- | | LITESTREAM_BUCKET | Litestream-compatible cloud storage bucket where Litestream should replicate data. | | LITESTREAM_ENDPOINT | Litestream-compatible cloud storage endpoint where Litestream should replicate data. | | LITESTREAM_ACCESS_KEY_ID | Litestream-compatible cloud storage access key ID to the bucket where you want to replicate data. | | LITESTREAM_SECRET_ACCESS_KEY | Litestream-compatible cloud storage secret access key to the bucket where you want to replicate data. | | LITESTREAM_RETENTION | The amount of time Litestream snapshots & WAL files will be kept (defaults to 72h). | Docker build args If you rebuild the Docker image from source, you can adjust the build behavior with docker build --build-arg : | Build Arg | Meaning | Default Value | | -------------------- | --------------------------------------------------------------------------- | ------------- | | litestream_version | Version of Litestream to use for data replication | 0.3.9 | PicoShare's scope and future PicoShare is maintained by Michael Lynch as a hobby project. Due to time limitations, I keep PicoShare's scope limited to only the features that fit into my workflows. That unfortunately means that I sometimes reject proposals or contributions for perfectly good features. It's nothing against those features, but I only have bandwidth to maintain features that I use. Deployment PicoShare is easy to deploy to cloud hosting platforms: fly.io Tips and tricks Reclaiming reserved database space Some users find it surprising that when they delete files from PicoShare, they don't gain back free space on their filesystem. When you delete files, PicoShare reserves the space for future uploads. If you'd like to reduce PicoShare's usage of your filesystem, you can manually force PicoShare to give up the space by performing the following steps: Shut down PicoShare. Run sqlite3 data/store.db 'VACUUM' where data/store.db is the path to your PicoShare database. You should find that the data/store.db should shrink in file size, as it relinquishes the space dedicated to previously deleted files. If you start PicoShare again, the System Information screen will show the smaller size of PicoShare files.;A minimalist, easy-to-host service for sharing images and other files;[]
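Putting the reclaim steps above together, here is a hedged shell sketch assuming the Docker setup from this README (container named picoshare, database at ./data/store.db); adjust paths and names to your own deployment:

```bash
# Reclaim space reserved for deleted files (see "Reclaiming reserved database space").
docker stop picoshare            # 1. shut down PicoShare
sqlite3 data/store.db 'VACUUM'   # 2. shrink the database in place
docker start picoshare           # 3. bring PicoShare back up
```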
mtlynch/picoshare
flutter/pinball;I/O Pinball [ ] very_good_analysis_link [ ] license_link A Pinball game built with Flutter and Firebase for Google I/O 2022 . Try it now and learn about how it's made . Built by Very Good Ventures in partnership with Google Created using Very Good CLI 🤖 Getting Started 🚀 To run the desired project either use the launch configuration in VSCode/Android Studio or use the following commands: sh $ flutter run -d chrome *I/O Pinball works on Web for desktop and mobile. Running Tests 🧪 To run all unit and widget tests use the following command: sh $ flutter test --coverage --test-randomize-ordering-seed random To view the generated coverage report you can use lcov . ```sh Generate Coverage Report $ genhtml coverage/lcov.info -o coverage/ Open Coverage Report $ open coverage/index.html ``` Working with Translations 🌐 This project relies on flutter_localizations and follows the official internationalization guide for Flutter . Adding Strings To add a new localizable string, open the app_en.arb file at lib/l10n/arb/app_en.arb . arb { "@@locale": "en", "counterAppBarTitle": "Counter", "@counterAppBarTitle": { "description": "Text shown in the AppBar of the Counter Page" } } Then add a new key/value and description arb { "@@locale": "en", "counterAppBarTitle": "Counter", "@counterAppBarTitle": { "description": "Text shown in the AppBar of the Counter Page" }, "helloWorld": "Hello World", "@helloWorld": { "description": "Hello World Text" } } Use the new string ```dart import 'package:pinball/l10n/l10n.dart'; @override Widget build(BuildContext context) { final l10n = context.l10n; return Text(l10n.helloWorld); } ``` Adding Translations For each supported locale, add a new ARB file in lib/l10n/arb . ├── l10n │ ├── arb │ │ ├── app_en.arb │ │ └── app_es.arb Add the translated strings to each .arb file: app_en.arb arb { "@@locale": "en", "counterAppBarTitle": "Counter", "@counterAppBarTitle": { "description": "Text shown in the AppBar of the Counter Page" } } app_es.arb arb { "@@locale": "es", "counterAppBarTitle": "Contador", "@counterAppBarTitle": { "description": "Texto mostrado en la AppBar de la página del contador" } };Google I/O 2022 Pinball game built with Flutter and Firebase;[]
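For context on how the ARB files above get picked up, here is a hedged sketch of a typical Flutter gen_l10n configuration; this project's actual l10n.yaml (if present) may differ, but the paths follow the lib/l10n/arb layout shown above:

```yaml
# l10n.yaml — gen_l10n configuration (sketch; key names are standard Flutter ones)
arb-dir: lib/l10n/arb
template-arb-file: app_en.arb
output-localization-file: app_localizations.dart
```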
flutter/pinball
techno-tim/k3s-ansible;Automated build of HA k3s Cluster with kube-vip and MetalLB This playbook will build an HA Kubernetes cluster with k3s , kube-vip and MetalLB via ansible . This is based on the work from this fork which is based on the work from k3s-io/k3s-ansible . It uses kube-vip to create a load balancer for the control plane, and metal-lb for its service LoadBalancer . If you want more context on how this works, see: 📄 Documentation (including example commands) 📺 Watch the Video 📖 k3s Ansible Playbook Build a Kubernetes cluster using Ansible with k3s. The goal is to easily install an HA Kubernetes cluster on machines running: [x] Debian (tested on version 11) [x] Ubuntu (tested on version 22.04) [x] Rocky (tested on version 9) on processor architecture: [X] x64 [X] arm64 [X] armhf ✅ System requirements Control Node (the machine you are running ansible commands from) must have Ansible 2.11+ If you need a quick primer on Ansible you can check out my docs and setting up Ansible . You will also need to install collections that this playbook uses by running ansible-galaxy collection install -r ./collections/requirements.yml (important❗) netaddr package must be available to Ansible. If you have installed Ansible via apt, this is already taken care of. If you have installed Ansible via pip , make sure to install netaddr into the respective virtual environment. server and agent nodes should have passwordless SSH access; if not, you can supply arguments to provide credentials --ask-pass --ask-become-pass to each command. 🚀 Getting Started 🍴 Preparation First create a new directory based on the sample directory within the inventory directory: bash cp -R inventory/sample inventory/my-cluster Second, edit inventory/my-cluster/hosts.ini to match the system information gathered above. For example: ```ini [master] 192.168.30.38 192.168.30.39 192.168.30.40 [node] 192.168.30.41 192.168.30.42 [k3s_cluster:children] master node ``` If multiple hosts are in the master group, the playbook will automatically set up k3s in HA mode with etcd . Finally, copy ansible.example.cfg to ansible.cfg and adapt the inventory path to match the files that you just created. This requires at least k3s version 1.19.1; however, the version is configurable by using the k3s_version variable. If needed, you can also edit inventory/my-cluster/group_vars/all.yml to match your environment. ☸️ Create Cluster Start provisioning of the cluster using the following command: bash ansible-playbook site.yml -i inventory/my-cluster/hosts.ini After deployment, the control plane will be accessible via a virtual IP address, which is defined in inventory/group_vars/all.yml as apiserver_endpoint 🔥 Remove k3s cluster bash ansible-playbook reset.yml -i inventory/my-cluster/hosts.ini You should also reboot these nodes, due to the VIP not being destroyed ⚙️ Kube Config To copy your kube config locally so that you can access your Kubernetes cluster run: bash scp debian@master_ip:/etc/rancher/k3s/k3s.yaml ~/.kube/config If you get a file permission denied error, go into the node and temporarily run: bash sudo chmod 777 /etc/rancher/k3s/k3s.yaml Then copy with the scp command and reset the permissions back to: bash sudo chmod 600 /etc/rancher/k3s/k3s.yaml You'll then want to modify the config to point to the master IP by running: bash sudo nano ~/.kube/config Then change server: https://127.0.0.1:6443 to match your master IP: server: https://192.168.1.222:6443 🔨 Testing your cluster See the commands here . 
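To make the two variables mentioned above concrete, here is a hedged sketch of inventory/my-cluster/group_vars/all.yml; the version string is a placeholder and the shipped file contains many more settings than shown:

```yaml
# Minimal group_vars/all.yml sketch — placeholder values, not a full config.
k3s_version: v1.25.9+k3s1            # placeholder; any release >= 1.19.1 per this README
apiserver_endpoint: 192.168.30.222   # virtual IP for the control plane (served by kube-vip)
```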
Troubleshooting Be sure to see this post on how to troubleshoot common problems Testing the playbook using molecule This playbook includes a molecule -based test setup. It is run automatically in CI, but you can also run the tests locally. This might be helpful for quick feedback in a few cases. You can find more information about it here . Pre-commit Hooks This repo uses pre-commit and pre-commit-hooks to lint and fix common style and syntax errors. Be sure to install python packages and then run pre-commit install . For more information, see pre-commit 🌌 Ansible Galaxy This collection can now be used in larger ansible projects. Instructions: create or modify a file collections/requirements.yml in your project yml collections: - name: ansible.utils - name: community.general - name: ansible.posix - name: kubernetes.core - name: https://github.com/techno-tim/k3s-ansible.git type: git version: master install via ansible-galaxy collection install -r ./collections/requirements.yml every role is now available via the prefix techno_tim.k3s_ansible. e.g. techno_tim.k3s_ansible.lxc Thanks 🤝 This repo is really standing on the shoulders of giants. Thank you to all those who have contributed and thanks to these repos for code and ideas: k3s-io/k3s-ansible geerlingguy/turing-pi-cluster 212850a/k3s-ansible;The easiest way to bootstrap a self-hosted High Availability Kubernetes cluster. A fully automated HA k3s etcd install with kube-vip, MetalLB, and more. Build. Destroy. Repeat.;k3s,kubernetes,metallb,kube-vip,etcd,rancher,k8s,k3s-cluster,high-availability
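Tying back to the Ansible Galaxy section above, here is a hedged sketch of consuming a role from the collection in your own playbook; techno_tim.k3s_ansible.lxc is the example role name this README gives:

```yaml
# playbook.yml — assumes the collection was installed via collections/requirements.yml above.
- hosts: k3s_cluster
  become: true
  roles:
    - role: techno_tim.k3s_ansible.lxc
```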
techno-tim/k3s-ansible
yhzhang0128/egos-2000;Vision This project's vision is to help every student read all the code of a teaching operating system. With only 2000 lines of code, egos-2000 implements every component of an operating system for education. It can run on RISC-V boards and the QEMU software emulator. ```shell The cloc utility is used to count the lines of code (LOC). The command below counts the LOC of everything excluding text files. cloc egos-2000 --exclude-ext=md,txt,toml ... github.com/AlDanial/cloc v 1.94 T=0.05 s (949.3 files/s, 62349.4 lines/s) Language files blank comment code C 35 467 601 1536 C/C++ Header 10 70 105 300 Assembly 3 10 47 97 make 1 16 5 67 SUM: 49 563 758 2000 (exactly!) ``` Earth and Grass Operating System The egos part of egos-2000 is named after its three-layer architecture. The earth layer implements hardware-specific abstractions. tty and disk device interface timer, exception and memory management interface The grass layer implements hardware-independent abstractions. process control block and system call interface The application layer implements file system, shell and user commands. The definitions of struct earth and struct grass in this header file specify the layer interface. For compiling and running egos-2000, please read this document . The RISC-V instruction set manual and SiFive FE310 processor manual introduce the privileged ISA and memory map. This document further introduces the teaching plans, software architecture and development history. Acknowledgements Many thanks to Meta for a Facebook fellowship . Many thanks to Robbert van Renesse , Lorenzo Alvisi , Shan Lu , Hakim Weatherspoon and Christopher Batten for their support. Many thanks to all the CS5411/4411 students at Cornell University over the years for helping improve this course. Many thanks to Cheng Tan for providing valuable feedback and using egos-2000 in his CS6640 at Northeastern University . Many thanks to Brandon Fusi for porting to the Allwinner's D1 chip using Sipeed's Lichee RV64 compute module. For any questions, please contact Yunhao Zhang .;Envision a future where every student can read all the code of a teaching operating system.;education,operating-system
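To illustrate the layer-interface idea (the real definitions live in the header file the README links), here is a hedged C sketch; the field names below are invented for illustration and do not match the actual egos-2000 code:

```c
/* Illustrative only: function-pointer tables let the grass layer and
 * applications call earth-layer services without knowing the hardware. */
struct earth {
    int  (*tty_read)(char *buf, int len);     /* tty device interface  */
    int  (*disk_read)(int block, char *dst);  /* disk device interface */
    void (*timer_reset)(void);                /* timer management      */
};

struct grass {
    int  (*proc_alloc)(void);                 /* process control block */
    void (*sys_exit)(int status);             /* system call interface */
};
```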
yhzhang0128/egos-2000
Aidoku/Aidoku;Aidoku A free and open source manga reading application for iOS and iPadOS. Features [x] Ad free [x] Robust WASM source system [x] Online reading through external sources [x] iCloud sync support [x] Downloads [x] Tracker support [ ] Update notifications Installation The latest ipa will always be available from the releases page . For detailed installation instructions, check out the website . To join the TestFlight, you will need to join the Aidoku Discord . Contributing Aidoku is still relatively new, and there are a lot of planned features and fixes. If you're interested in contributing, I'd first recommend checking with me on Discord . Translations Interested in translating Aidoku? We use Weblate to crowdsource translations, so anyone can create an account and contribute!;Free and open source manga reader for iOS and iPadOS;ios,manga,reading,swift
Aidoku/Aidoku
D3Ext/WEF;WEF WiFi Exploitation Framework Coded with 💙 by D3Ext Introduction • Attacks • Features • Installation • Requirements Readme in Spanish Introduction This project started in 2021 as a personal tool to easily audit networks without writing long commands or setting all values one by one, and to automate the whole process. This is not a professional tool; I created this project to learn and for testing purposes. After some time the repository gained a bunch of stars, so I decided to improve it. It's a fully offensive framework to audit wifi networks with different types of attacks for WPA/WPA2, WPS and WEP, automated hash cracking, and much more. If you run into any error, please open an issue (feel free to write it in Spanish). If you have any doubts, contact me via Discord; my username is d3ext Attacks Deauthentication attack WIDS Confusion attack Authentication attack Beacon Flood attack TKIP attack (Michael Shutdown Exploitation) Pixie Dust attack Null Pin attack PIN Bruteforce attack ARP Replay attack HIRTE attack Caffe Latte attack Fake Authentication attack WPA/WPA2 handshake capture attack PMKID attack EvilTwin attack An explanation of the different attacks is available here on the repo's Wiki Features :ballot_box_with_check: WPA/WPA2, WPS and WEP Attacks :ballot_box_with_check: Automatic handshake capture and cracking :ballot_box_with_check: Multiple templates for EvilTwin attack (different languages) :ballot_box_with_check: Enable/disable monitor mode and view interface info (frequencies, chipset, MAC...) :ballot_box_with_check: 2.4 GHz and 5 GHz supported :ballot_box_with_check: Informative attack logs (just done user side) :ballot_box_with_check: Custom wordlist selector when cracking :ballot_box_with_check: English and Spanish supported And much more Installation As root sh git clone https://github.com/D3Ext/WEF cd WEF bash wef Take a look at the Wiki where I have more info about the installation Uninstallation Simply execute this: sh rm -rf /opt/wef \ /usr/bin/wef Usage Common usage of the framework sh wef -i wlan0 # Your interface name might be different Help panel ``` __ _ ___ \ \ / / | | \ \/\/ /| || _| _/_/ | |_| [WEF] WiFi Exploitation Framework 1.3 [*] Interfaces: docker0 ens33 lo Required parameters: -i, --interface) The name of your network adapter interface in managed mode Optional parameters: -h, --help) Show this help panel --version) Print the version and exit ``` See here for more information about how to use the tool and other related topics Demo TODO ~~EvilTwin attack for Enterprise networks~~ ~~More options to crack handshakes~~ ~~Better way to scan APs~~ ~~Identify found devices vendors by their MAC addresses~~ ~~Config file improved with more settings~~ ~~General improvements~~ Test compatibility with other OSes Support WPA3 dictionary attack MANA and KARMA attack In-depth testing of implemented features More general improvements Contributing See CONTRIBUTING.md Changelog See CHANGELOG.md Credits Thanks to ultrazar and ErKbModifier ; they helped me a lot <3 References https://github.com/v1s1t0r1sh3r3/airgeddon https://github.com/FluxionNetwork/fluxion https://github.com/P0cL4bs/wifipumpkin3 https://github.com/s0lst1c3/eaphammer https://github.com/derv82/wifite2 https://github.com/aircrack-ng/mdk4 https://github.com/aircrack-ng/aircrack-ng https://github.com/wifiphisher/wifiphisher https://github.com/ZerBea/hcxtools https://github.com/ZerBea/hcxdumptool https://github.com/Tylous/SniffAir https://github.com/koutto/pi-pwnbox-rogueap 
https://github.com/koutto/pi-pwnbox-rogueap/wiki/01.-WiFi-Basics Disclaimer The creator bears no responsibility for any of the following: Illegal use of the project. Law infringement by third parties or users. Malicious acts capable of causing damage to third parties, carried out by users through this software. License This project is under the MIT license Copyright © 2024, D3Ext;Wi-Fi Exploitation Framework;kali-linux,wifi,bash,wef,oswp,wifi-exploitation-framework
D3Ext/WEF
adrianhajdin/ecommerce_sanity_stripe;Modern Full Stack ECommerce Application with Stripe & Sanity Launch your development career with project-based coaching - https://www.jsmastery.pro Build and Deploy a fully responsive Modern Full Stack Ecommerce application with Payments functionality . It features a modern design, animations, the ability to add and edit products on the go using a CMS, all advanced cart functionalities, and, most importantly, complete integration with Stripe so that you can process REAL payments. This is the best e-commerce website project that you can currently find on YouTube! In this video, you'll learn: - Advanced React Best Practices such as - Folder and file structure, hooks and refs - Advanced State Management of the entire application using React Context API - Next.js Best Practices such as - File-based routing, Data fetching that allows server-side rendering and static generation which makes your websites incredibly optimized (getServerSideProps, getStaticPaths, getStaticProps), and you’ll also learn how to use Next.js as a backend endpoint. - You’ll learn how to integrate Stripe to manage payments, products, shipping rates, and the entire checkout process - And most importantly you’ll learn how to manage the entire content of your app using Sanity. Sanity is the unified content platform that makes building our entire app possible. - Through Sanity, you or your clients will be able to change the store’s homepage and, more importantly, the details of all the products in the store, instantly and on the go! - Sanity allows us to focus on developing the application without having to worry about the content, file storage, and databases. They’ll cover the dirty work for us and allow us to build scalable and modern e-commerce web applications extremely easily.;Modern Full Stack ECommerce Application with Stripe;ecommerce,ecommerce-application,nextjs,react,sanity,sanity-io,stripe,stripe-api
adrianhajdin/ecommerce_sanity_stripe
wjakob/nanobind;nanobind: tiny and efficient C++/Python bindings nanobind is a small binding library that exposes C++ types in Python and vice versa. It is reminiscent of Boost.Python and pybind11 and uses near-identical syntax. In contrast to these existing tools, nanobind is more efficient: bindings compile in a shorter amount of time, produce smaller binaries, and have better runtime performance. More concretely, benchmarks show up to ~4× faster compile time, ~5× smaller binaries, and ~10× lower runtime overheads compared to pybind11. nanobind also outperforms Cython in important metrics ( 3-12× binary size reduction, 1.6-4× compilation time reduction, similar runtime performance). Documentation Please see the following links for tutorial and reference documentation in HTML and PDF formats. License and attribution All material in this repository is licensed under a three-clause BSD license . Please use the following BibTeX template to cite nanobind in scientific discourse: bibtex @misc{nanobind, author = {Wenzel Jakob}, year = {2022}, note = {https://github.com/wjakob/nanobind}, title = {nanobind: tiny and efficient C++/Python bindings} } The nanobind logo was designed by AndoTwin Studio (high-resolution download: light , dark ).;nanobind: tiny and efficient C++/Python bindings;python,bindings,pybind11,cpp17
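As a taste of the pybind11-like syntax mentioned above, here is a minimal binding sketch; the module name my_ext and the add function are illustrative examples, not part of this repository:

```cpp
// Compiles into a Python extension module named "my_ext".
#include <nanobind/nanobind.h>

namespace nb = nanobind;

int add(int a, int b) { return a + b; }

NB_MODULE(my_ext, m) {
    m.def("add", &add, "Add two integers");
}
```

From Python this would then be used as import my_ext; my_ext.add(1, 2).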
wjakob/nanobind
dmunozv04/iSponsorBlockTV;iSponsorBlockTV Skip sponsor segments in YouTube videos playing on a YouTube TV device (see below for compatibility details). This project is written in asynchronous python and should be pretty quick. Installation Check the wiki Warning: docker armv7 builds have been deprecated. Amd64 and arm64 builds are still available. Compatibility Legend: ✅ = Working, ❌ = Not working, ❔ = Not tested Open an issue/pull request if you have tested a device that isn't listed here. | Device | Status | |:-------------------|:------:| | Apple TV | ✅ | | Samsung TV (Tizen) | ✅ | | LG TV (WebOS) | ✅ | | Android TV | ✅ | | Chromecast | ✅ | | Google TV | ✅ | | Roku | ✅ | | Fire TV | ✅ | | CCwGTV | ✅ | | Nintendo Switch | ✅ | | Xbox One/Series | ✅ | | Playstation 4/5 | ✅ | Usage Run iSponsorBlockTV on a computer that has network access. Auto discovery will require the computer to be on the same network as the device during setup. The device can also be manually added to iSponsorBlockTV with a YouTube TV code. This code can be found in the settings page of your YouTube application. It connects to the device, watches its activity and skips any sponsor segment using the SponsorBlock API. It can also skip/mute YouTube ads. Libraries used pyytlounge Used to interact with the device asyncio and aiohttp async-cache Textual Used for the amazing new graphical configurator ssdp Used for auto discovery Projects using this project Home Assistant Addon Contributing Fork it ( https://github.com/dmunozv04/iSponsorBlockTV/fork ) Create your feature branch ( git checkout -b my-new-feature ) Commit your changes ( git commit -am 'Add some feature' ) Push to the branch ( git push origin my-new-feature ) Create a new Pull Request Contributors dmunozv04 - creator and maintainer HaltCatchFire - updated dependencies and improved skip logic Oxixes - added support for channel whitelist and minor improvements License;SponsorBlock client for all YouTube TV clients.;python,appletv,youtube,sponsorblock,chromecast,roku,tizen-tv
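The contribution flow above, collected into runnable commands (the branch name is the README's own placeholder):

```bash
# Fork on GitHub first, then:
git checkout -b my-new-feature     # create your feature branch
git commit -am 'Add some feature'  # commit your changes
git push origin my-new-feature     # push the branch, then open a Pull Request
```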
dmunozv04/iSponsorBlockTV
ProjectOpenSea/seaport;Seaport Seaport is a marketplace protocol for safely and efficiently buying and selling NFTs. Table of Contents Seaport Table of Contents Background Deployments Diagram Docs Install Usage Foundry Tests Linting Audits Contributing License Background Seaport is a marketplace protocol for safely and efficiently buying and selling NFTs. Each listing contains an arbitrary number of items that the offerer is willing to give (the "offer") along with an arbitrary number of items that must be received along with their respective receivers (the "consideration"). See the documentation , the interface , and the full interface documentation for more information on Seaport. This repository is also split into smaller repositories for easier use and integration: seaport-core seaport-types seaport-sol Deployments Canonical Cross-chain Deployment Addresses Contract Canonical Cross-chain Deployment Address Seaport 1.1 0x00000000006c3852cbEf3e08E8dF289169EdE581 Seaport 1.2* 0x00000000000006c7676171937C444f6BDe3D6282 Seaport 1.3* 0x0000000000000aD24e80fd803C6ac37206a45f15 Seaport 1.4* 0x00000000000001ad428e4906aE43D8F9852d0dD6 Seaport 1.5 0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC Seaport 1.6 0x0000000000000068F116a894984e2DB1123eB395 ConduitController 0x00000000F9490004C11Cef243f5400493c00Ad63 SeaportValidator 0x00e5F120f500006757E984F1DED400fc00370000 SeaportNavigator 0x0000f00000627D293Ab4Dfb40082001724dB006F *Note: Seaport 1.2 through 1.4 contain known limitations; proceed with caution if interacting with them, particularly when utilizing restricted or contract orders. Deployments By EVM Chain Network Seaport 1.6 Seaport 1.5 Seaport 1.1 ConduitController SeaportValidator SeaportNavigator Ethereum [0x0000000000000068F116a894984e2DB1123eB395](https://etherscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://etherscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://etherscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://etherscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://etherscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://etherscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Sepolia [0x0000000000000068F116a894984e2DB1123eB395](https://sepolia.etherscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://sepolia.etherscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://sepolia.etherscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://sepolia.etherscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://sepolia.etherscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://sepolia.etherscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Polygon [0x0000000000000068F116a894984e2DB1123eB395](https://polygonscan.com/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://polygonscan.com/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) 
[0x00000000006c3852cbEf3e08E8dF289169EdE581](https://polygonscan.com/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://polygonscan.com/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://polygonscan.com/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://polygonscan.com/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Amoy [0x0000000000000068F116a894984e2DB1123eB395](https://www.oklink.com/amoy/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://www.oklink.com/amoy/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://www.oklink.com/amoy/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://www.oklink.com/amoy/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://www.oklink.com/amoy/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://www.oklink.com/amoy/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Optimism [0x0000000000000068F116a894984e2DB1123eB395](https://optimistic.etherscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://optimistic.etherscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://optimistic.etherscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://optimistic.etherscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://optimistic.etherscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://optimistic.etherscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Optimism Sepolia [0x0000000000000068F116a894984e2DB1123eB395](https://sepolia-optimism.etherscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://sepolia-optimism.etherscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://sepolia-optimism.etherscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://sepolia-optimism.etherscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://sepolia-optimism.etherscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Arbitrum [0x0000000000000068F116a894984e2DB1123eB395](https://arbiscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://arbiscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://arbiscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://arbiscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://arbiscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) 
[0x0000f00000627D293Ab4Dfb40082001724dB006F](https://arbiscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Arbitrum Sepolia [0x0000000000000068F116a894984e2DB1123eB395](https://sepolia.arbiscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://sepolia.arbiscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://sepolia.arbiscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://sepolia.arbiscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://sepolia.arbiscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Arbitrum Nova [0x0000000000000068F116a894984e2DB1123eB395](https://nova.arbiscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://nova.arbiscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://nova.arbiscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://nova.arbiscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://nova.arbiscan.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://nova.arbiscan.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Base [0x0000000000000068F116a894984e2DB1123eB395](https://basescan.org/address/0x0000000000000068F116a894984e2DB1123eB395) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://basescan.org/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://basescan.org/address/0x00000000f9490004c11cef243f5400493c00ad63) [0x00e5F120f500006757E984F1DED400fc00370000](https://basescan.org/address/0x00e5f120f500006757e984f1ded400fc00370000) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://basescan.org/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Base Sepolia [0x0000000000000068F116a894984e2DB1123eB395](https://sepolia.basescan.org/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://sepolia.basescan.org/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://sepolia.basescan.org/address/0x00000000f9490004c11cef243f5400493c00ad63) [0x00e5F120f500006757E984F1DED400fc00370000](https://sepolia.basescan.org/address/0x00e5f120f500006757e984f1ded400fc00370000) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://sepolia.basescan.org/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Avalanche C-Chain [0x0000000000000068F116a894984e2DB1123eB395](https://snowtrace.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://snowtrace.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://snowtrace.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://snowtrace.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://snowtrace.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) 
[0x0000f00000627D293Ab4Dfb40082001724dB006F](https://snowtrace.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Avalanche Fuji [0x0000000000000068F116a894984e2DB1123eB395](https://testnet.snowtrace.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://testnet.snowtrace.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://testnet.snowtrace.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://testnet.snowtrace.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://testnet.snowtrace.io/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://testnet.snowtrace.io/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Gnosis Chain [0x0000000000000068F116a894984e2DB1123eB395](https://gnosisscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://gnosisscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://gnosisscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://gnosisscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://gnosisscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed Chiado [0x0000000000000068F116a894984e2DB1123eB395](https://blockscout.com/gnosis/chiado/address/0x0000000000000068F116a894984e2DB1123eB395/contracts#address-tabs) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://blockscout.com/gnosis/chiado/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC/contracsts#address-tabs) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://blockscout.com/gnosis/chiado/address/0x00000000F9490004C11Cef243f5400493c00Ad63/contracts#address-tabs) Not deployed Not deployed BSC [0x0000000000000068F116a894984e2DB1123eB395](https://bscscan.com/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://bscscan.com/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://bscscan.com/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://bscscan.com/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://bscscan.com/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://bscscan.com/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) BSC Testnet [0x0000000000000068F116a894984e2DB1123eB395](https://testnet.bscscan.com/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://testnet.bscscan.com/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://testnet.bscscan.com/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://testnet.bscscan.com/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://testnet.bscscan.com/address/0x00e5F120f500006757E984F1DED400fc00370000#code) 
[0x0000f00000627D293Ab4Dfb40082001724dB006F](https://testnet.bscscan.com/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Klaytn [0x0000000000000068F116a894984e2DB1123eB395](https://scope.klaytn.com/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://scope.klaytn.com/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://scope.klaytn.com/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://scope.klaytn.com/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://scope.klaytn.com/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://scope.klaytn.com/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Baobab [0x0000000000000068F116a894984e2DB1123eB395](https://baobab.scope.klaytn.com/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://baobab.scope.klaytn.com/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://baobab.scope.klaytn.com/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://baobab.scope.klaytn.com/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) [0x00e5F120f500006757E984F1DED400fc00370000](https://baobab.scope.klaytn.com/address/0x00e5F120f500006757E984F1DED400fc00370000#code) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://baobab.scope.klaytn.com/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Moonbeam [0x0000000000000068F116a894984e2DB1123eB395](https://moonscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://moonscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://moonscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) Not deployed Not deployed Moonriver [0x0000000000000068F116a894984e2DB1123eB395](https://moonriver.moonscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://moonriver.moonscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://moonriver.moonscan.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://moonriver.moonscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) Not deployed Not deployed Canto [0x0000000000000068F116a894984e2DB1123eB395](https://evm.explorer.canto.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://evm.explorer.canto.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC) [0x00000000006c3852cbEf3e08E8dF289169EdE581](https://evm.explorer.canto.io/address/0x00000000006c3852cbEf3e08E8dF289169EdE581#code) [0x00000000F9490004C11Cef243f5400493c00Ad63](https://evm.explorer.canto.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) Not deployed Not deployed Fantom [0x0000000000000068F116a894984e2DB1123eB395](https://ftmscan.com/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://ftmscan.com/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed 
[0x00000000F9490004C11Cef243f5400493c00Ad63](https://ftmscan.com/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) Not deployed Not deployed Celo [0x0000000000000068F116a894984e2DB1123eB395](https://celoscan.io/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://celoscan.io/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://celoscan.io/address/0x00000000F9490004C11Cef243f5400493c00Ad63#code) Not deployed Not deployed Zora [0x0000000000000068F116a894984e2DB1123eB395](https://explorer.zora.energy/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://explorer.zora.energy/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://explorer.zora.energy/address/0x00000000f9490004c11cef243f5400493c00ad63) [0x00e5F120f500006757E984F1DED400fc00370000](https://explorer.zora.energy/address/0x00e5f120f500006757e984f1ded400fc00370000) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://explorer.zora.energy/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) Zora Sepolia [0x0000000000000068F116a894984e2DB1123eB395](https://sepolia.explorer.zora.energy/address/0x0000000000000068F116a894984e2DB1123eB395#code) [0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC](https://sepolia.explorer.zora.energy/address/0x00000000000000ADc04C56Bf30aC9d3c0aAF14dC#code) Not deployed [0x00000000F9490004C11Cef243f5400493c00Ad63](https://sepolia.explorer.zora.energy/address/0x00000000f9490004c11cef243f5400493c00ad63) [0x00e5F120f500006757E984F1DED400fc00370000](https://sepolia.explorer.zora.energy/address/0x00e5f120f500006757e984f1ded400fc00370000) [0x0000f00000627D293Ab4Dfb40082001724dB006F](https://sepolia.explorer.zora.energy/address/0x0000f00000627D293Ab4Dfb40082001724dB006F#code) To deploy to a new EVM chain, follow the steps outlined here . Diagram ```mermaid graph TD Offer & Consideration --> Order zone & conduitKey --> Order subgraph Seaport[ ] Order --> Fulfill & Match Order --> Validate & Cancel end Validate --> Verify Cancel --> OrderStatus Fulfill & Match --> OrderCombiner --> OrderFulfiller OrderCombiner --> BasicOrderFulfiller --> OrderValidator OrderCombiner --> FulfillmentApplier OrderFulfiller --> CriteriaResolution OrderFulfiller --> AmountDeriver OrderFulfiller --> OrderValidator OrderValidator --> ZoneInteraction OrderValidator --> Executor --> TokenTransferrer Executor --> Conduit --> TokenTransferrer Executor --> Verify subgraph Verifiers[ ] Verify --> Time & Signature & OrderStatus end ``` For a more thorough flowchart see Seaport diagram . Docs Seaport Deployment Seaport Documentation Zone Documentation Function Signatures Order Validator Install To install dependencies and compile contracts: bash git clone --recurse-submodules https://github.com/ProjectOpenSea/seaport && cd seaport yarn install yarn build Usage To run hardhat tests written in javascript: bash yarn test yarn coverage Note: artifacts and cache folders may occasionally need to be removed between standard and coverage test runs. To run hardhat tests against reference contracts: bash yarn test:ref yarn coverage:ref To open the generated Hardhat coverage report locally after running yarn coverage or yarn coverage:ref : bash open coverage/index.html To profile gas usage: bash yarn profile Foundry Tests Seaport also includes a suite of fuzzing tests written in Solidity with Foundry. 
Before running these tests, you will need to compile an optimized build by running: bash FOUNDRY_PROFILE=optimized forge build This should create an optimized-out/ directory in your project root. To run tests with full traces and debugging with source, create an .env file with the following line: bash FOUNDRY_PROFILE=debug You may then run tests with forge test , optionally specifying a level of verbosity (anywhere from one to five v 's, eg, -vvv ) This will compile tests and contracts without via-ir enabled, which is much faster, but will not exactly match the deployed bytecode. To run tests against the actual bytecode intended to be deployed on networks, you will need to pre-compile the contracts, and remove the FOUNDRY_PROFILE variable from your .env file. Note that informative error traces may not be available, and the Forge debugger will not show the accompanying source code. bash FOUNDRY_PROFILE=optimized forge build FOUNDRY_PROFILE=reference forge build To run Forge coverage tests and open the generated coverage report locally: bash brew install lcov SEAPORT_COVERAGE=true forge coverage --report summary --report lcov && lcov -o lcov.info --remove lcov.info --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 "test/*" "script/*" && genhtml lcov.info -o html --branch open html/index.html When working on the test suite based around FuzzEngine.sol , using FOUNDRY_PROFILE=moat_debug will cut compile times roughly in half. Note that Forge does not yet ignore specific filepaths when running coverage tests. For information on Foundry, including installation and testing, see the Foundry Book . Linting To run lint checks: bash yarn lint:check Lint checks utilize prettier, prettier-plugin-solidity, and solhint. javascript "prettier": "^2.5.1", "prettier-plugin-solidity": "^1.0.0-beta.19", Audits OpenSea engaged Trail of Bits to audit the security of Seaport. From April 18th to May 12th 2022, a team of Trail of Bits consultants conducted a security review of Seaport. The audit did not uncover significant flaws that could result in the compromise of a smart contract, loss of funds, or unexpected behavior in the target system. Their full report is available here . Contributing Contributions to Seaport are welcome by anyone interested in writing more tests, improving readability, optimizing for gas efficiency, or extending the protocol via new zone contracts or other features. When making a pull request, ensure that: All tests pass. Code coverage remains at 100% (coverage tests must currently be written in hardhat). All new code adheres to the style guide: All lint checks pass. Code is thoroughly commented with natspec where relevant. If making a change to the contracts: Gas snapshots are provided and demonstrate an improvement (or an acceptable deficit given other improvements). Reference contracts are modified correspondingly if relevant. New tests (ideally via foundry) are included for all new features or code paths. If making a modification to third-party dependencies, yarn audit passes. A descriptive summary of the PR has been provided. License MIT Copyright 2023 Ozone Networks, Inc.;Seaport is a marketplace protocol for safely and efficiently buying and selling NFTs.;[]
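To ground the offer/consideration terminology from the Background section, here is a deliberately simplified Solidity sketch; these are not the actual Seaport structs (see the interface documentation linked above for the real definitions):

```solidity
pragma solidity ^0.8.17;

// Simplified illustration: a listing pairs what the offerer gives with
// what must be received (and by whom). Field names are invented for clarity.
struct OfferItem {
    address token;   // item the offerer is willing to give
    uint256 amount;
}

struct ConsiderationItem {
    address token;      // item that must be received...
    uint256 amount;
    address recipient;  // ...by its respective receiver
}

struct Order {
    OfferItem[] offer;
    ConsiderationItem[] consideration;
}
```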
ProjectOpenSea/seaport
PhrozenIO/PowerRemoteDesktop;Power Remote Desktop Welcome to Power Remote Desktop for remote desktop access in pure PowerShell! This module offers a unique solution for remotely controlling one or multiple screens using only PowerShell. Unlike other remote desktop tools that rely on external protocols and software, our module utilizes its own remote desktop protocol. The module consists of both a client and a server component, both of which are written entirely in PowerShell. Our protocol provides secure, encrypted communication using TLS and offers both challenge-based password authentication and certificate-based authentication. In addition to providing full mouse and keyboard control over the remote desktop, our module also replicates the mouse cursor icon for the viewer, synchronizes the clipboard between the local and remote systems, and more. Despite the limitations of PowerShell, we have implemented techniques to optimize network traffic and improve the streaming experience, resulting in a smooth and efficient remote desktop experience. At the time of writing, this is the only known entirely PowerShell-based remote desktop application. We hope you find it useful and we welcome any feedback or suggestions you may have. Tested on: Windows 10 Windows 11 Current version: 4.0.0 Stable Performance For a better streaming performance and overall experience, we recommend using PowerShell 7 instead of PowerShell 5. You can install PowerShell 7 for Windows here Highlighted Features Remote Desktop Streaming: This feature allows you to stream the desktop of the remote computer to your own device. The streaming supports HDPI and scaling, providing a high-quality display on various screens and resolutions. Remote Control: With this feature, you can control the mouse (including moves, clicks, and wheel) and keyboard of the remote computer as if you were sitting in front of it. Secure: To protect the privacy and security of your remote desktop sessions, the module uses TLSv1.2 or 1.3 to encrypt the network traffic. Access to the server is granted through a challenge-based authentication mechanism that requires a user-defined complex password. Network Traffic Encryption: The module supports encrypting the network traffic using either a default X509 certificate (which requires administrator privileges) or your own custom X509 certificate. Server Certificate Fingerprint Validation: To ensure the authenticity of the server, the module allows you to validate the fingerprint of the server certificate and optionally persist this validation between sessions. Clipboard Synchronization: This feature allows you to synchronize the clipboard text between the viewer (your device) and the server (the remote computer). You can easily copy and paste text between the two systems. Mouse Cursor Icon Synchronization: The module also synchronizes the state of the mouse cursor icon between the viewer (virtual desktop) and the server, providing a more seamless and intuitive remote desktop experience. Multi-Screen Support: If the remote computer has more than one desktop screen, you can choose which screen to capture and stream to your device. View Only Mode: This feature allows you to disable remote control abilities and simply view the screen of the remote computer. It can be useful for demonstrations or presentations. Session Concurrency: Multiple viewers can connect to a single server at the same time, allowing multiple users to collaborate on the same remote desktop. 
Sleep Mode Prevention: To ensure that the remote desktop remains active and responsive, the module prevents the remote computer from entering sleep mode while it is waiting for viewers to connect. Streaming Optimization: To improve the streaming speed, the module only sends updated pieces of the desktop to the viewer, reducing the amount of data transmitted over the network. Set up everything in less than a minute (Fast Setup) ````powershell Install-Module -Name PowerRemoteDesktop_Server Invoke-RemoteDesktopServer -CertificateFile " " ```` If you want to avoid using your own certificate and prefer not to go through the process of creating one, you can remove the 'CertificateFile' option and run PowerShell as an administrator instead. ````powershell Install-Module -Name PowerRemoteDesktop_Viewer Invoke-RemoteDesktopViewer -ServerAddress " " -Password " " ```` That's it 😉 Detailed Installation and Instructions There are several ways to use this PowerShell application. The recommended method is to install both the server and viewer components using the PowerShell Gallery. Alternatively, you can install them as modules or import them as scripts manually. Choose the method that best fits your needs and preferences. Install as a PowerShell Module from PowerShell Gallery ( Recommended ) You can install Power Remote Desktop from the PowerShell Gallery, which is similar to Aptitude for Debian or Homebrew for macOS. To do so, run the following commands: ```powershell Install-Module -Name PowerRemoteDesktop_Server Install-Module -Name PowerRemoteDesktop_Viewer ``` AllowPrerelease is mandatory when the current version is marked as a Prerelease. When you run the command, you may see the following warning in your command prompt: Untrusted repository You are installing the modules from an untrusted repository. If you trust this repository, change its InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from 'PSGallery'? Type 'Y' to confirm and proceed with the installation. When the installation is complete, both modules should be available. You can verify this by running the following command: powershell Get-Module -ListAvailable Example Output: ``` PS C:\Users\Phrozen\Desktop> Get-Module -ListAvailable Directory: C:\Users\Phrozen\Documents\WindowsPowerShell\Modules ModuleType Version Name ExportedCommands ---------- ------- ---- ---------------- Manifest 1.0.0 PowerRemoteDesktop_Server Invoke-RemoteDesktopServer Manifest 1.0.0 PowerRemoteDesktop_Viewer Invoke-RemoteDesktopViewer <..snip..> ``` If the modules are not showing up, try running the following commands and then check again: ```powershell Import-Module PowerRemoteDesktop_Server Import-Module PowerRemoteDesktop_Viewer ``` Install as a PowerShell Module (Manually / Unmanaged) In order for a module to be available, it must be located in a registered module path. You can view the registered module paths by running the following command: powershell Write-Output $env:PSModulePath Example Output: C:\Users\Phrozen\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules Clone the PowerRemoteDesktop repository or download a GitHub release package.
git clone https://github.com/DarkCoderSc/PowerRemoteDesktop.git Copy both PowerRemoteDesktop_Viewer and PowerRemoteDesktop_Server folders to the desired module path Example: C:\Users\<USER>\Documents\WindowsPowerShell\Modules Both modules should now be available; you can verify using the command: powershell Get-Module -ListAvailable Example Output: ``` PS C:\Users\Phrozen\Desktop> Get-Module -ListAvailable Directory: C:\Users\Phrozen\Documents\WindowsPowerShell\Modules ModuleType Version Name ExportedCommands ---------- ------- ---- ---------------- Manifest 1.0.0 PowerRemoteDesktop_Server Invoke-RemoteDesktopServer Manifest 1.0.0 PowerRemoteDesktop_Viewer Invoke-RemoteDesktopViewer <..snip..> ``` If you don't see them, run the following commands and check back. ```powershell Import-Module PowerRemoteDesktop_Server Import-Module PowerRemoteDesktop_Viewer ``` Notice: Manifest files are optional ( *.psd1 ) and can be removed. As a PowerShell Script It is not mandatory to install this application as a PowerShell module (even though the file extension is *.psm1 ). You can also load it as a PowerShell script. Multiple methods exist, including: Invoking Commands Using: powershell IEX (Get-Content .\PowerRemoteDesktop_[Server/Viewer].psm1 -Raw) Loading the script from a remote location: powershell IEX (New-Object Net.WebClient).DownloadString('http://127.0.0.1/PowerRemoteDesktop_[Server/Viewer].psm1') etc... Usage Client PowerRemoteDesktop_Viewer.psm1 needs to be imported or installed on the local machine. Available Module Functions powershell Invoke-RemoteDesktopViewer Get-TrustedServers Remove-TrustedServer Clear-TrustedServers Invoke-RemoteDesktopViewer Create a new remote desktop session with a Power Remote Desktop Server. ⚙️ Supported Options: | Parameter | Type | Default | Description | |-------------------------|------------------|------------|--------------| | ServerAddress | String | 127.0.0.1 | Remote server host or address | | ServerPort | Integer | 2801 | Port number for the remote server | | SecurePassword | SecureString | None | SecureString object containing the password used for authenticating with the remote server (recommended) | | Password | String | None | Plain-text password used for authenticating with the remote server (not recommended; use SecurePassword instead) | | DisableVerbosity | Switch | False | If specified, the program will suppress verbosity messages | | UseTLSv1_3 | Switch | False | If specified, the program will use TLS v1.3 instead of TLS v1.2 for encryption (recommended if both systems support it) | | Clipboard | Enum | Both | Specify the clipboard synchronization mode (options include 'Both', 'Disabled', 'Send', and 'Receive'; see below for more detail) | | ImageCompressionQuality | Integer (0-100) | 75 | JPEG compression level ranging from 0 (lowest quality) to 100 (highest quality) | | Resize | Switch | False | If specified, the remote desktop will be resized according to the 'ResizeRatio' option | | ResizeRatio | Integer (30-99) | 90 | Used in conjunction with the 'Resize' option, specify the resize ratio as a percentage | | AlwaysOnTop | Switch | False | If specified, the virtual desktop window will be displayed above all other windows | | PacketSize | Enum | Size9216 | Specify the network packet size for streams. Choose a size that is appropriate for your network constraints. | | BlockSize | Enum | Size64 | Specify the size of the screen grid blocks.
Choose a size that is appropriate for the remote screen size and the computer's resources (such as CPU and network capabilities) | | LogonUI | Switch | False | Request the server to open the LogonUI/Winlogon desktop instead of the default user desktop (requires SYSTEM privilege in the active session) | Clipboard Mode Enum Properties | Value | Description | |-------------------|----------------------------------------------------| | Disabled | Clipboard synchronization is disabled on both the viewer and server sides | | Receive | Only incoming clipboard data is allowed | | Send | Only outgoing clipboard data is allowed | | Both | Clipboard synchronization is allowed on both the viewer and server sides | PacketSize Mode Enum Properties | Value | Description | |-------------------|---------------------| | Size1024 | 1024 Bytes (1KiB) | | Size2048 | 2048 Bytes (2KiB) | | Size4096 | 4096 Bytes (4KiB) | | Size8192 | 8192 Bytes (8KiB) | | Size9216 | 9216 Bytes (9KiB) | | Size12288 | 12288 Bytes (12KiB) | | Size16384 | 16384 Bytes (16KiB) | BlockSize Mode Enum Properties | Value | Description | |-------------------|------------------| | Size32 | 32x32 | | Size64 | 64x64 | | Size96 | 96x96 | | Size128 | 128x128 | | Size256 | 256x256 | | Size512 | 512x512 | ⚠️ Important Notices It is recommended to use SecurePassword instead of a plain-text password, even if the plain-text password is being converted to a SecureString. Example Open a new remote desktop session to '127.0.0.1:2801' using the password 'urCompl3xP@ssw0rd' powershell Invoke-RemoteDesktopViewer -ServerAddress "127.0.0.1" -ServerPort 2801 -SecurePassword (ConvertTo-SecureString -String "urCompl3xP@ssw0rd" -AsPlainText -Force) Enumerate Trusted Servers When connecting to a new remote server for the first time, the viewer will ask if you want to trust the server's fingerprint. If you select the option to 'Always' trust this fingerprint, it will be saved in the local user registry. You can revoke the trust for this fingerprint at any time using the appropriate function. powershell Get-TrustedServers Example output: ```` PS C:\Users\Phrozen\Desktop\Projects\PowerRemoteDesktop> Get-TrustedServers Detail Fingerprint ------ ----------- @{FirstSeen=18/01/2022 19:40:24} D9F4637463445D6BB9F3EFBF08E06BE4C27035AF @{FirstSeen=20/01/2022 15:52:33} 3FCBBFB37CF6A9C225F7F582F14AC4A4181BED53 @{FirstSeen=20/01/2022 16:32:14} EA88AADA402864D1864542F7F2A3C49E56F473B0 @{FirstSeen=21/01/2022 12:24:18} 3441CE337A59FC827466FC954F2530C76A3F8FE4 ```` Permanently Delete a Trusted Server powershell Remove-TrustedServer -Fingerprint "<target_fingerprint>" Permanently Delete all Trusted Servers (Purge) powershell Clear-TrustedServers Server PowerRemoteDesktop_Server.psm1 needs to be imported or installed on the local machine.
Available Module Functions powershell Invoke-RemoteDesktopServer ⚙️ Supported Options: | Parameter | Type | Default | Description | |------------------------|------------------|------------|--------------| | ServerAddress | String | 0.0.0.0 | The local IP address for the server to listen on | | ServerPort | Integer | 2801 | The port number on which to listen for incoming connections | | SecurePassword | SecureString | None | SecureString object containing the password used for authenticating remote viewers (recommended) | | Password | String | None | Plain-text password used for authenticating remote viewers (not recommended; use SecurePassword instead) | | DisableVerbosity | Switch | False | If specified, the program will suppress verbosity messages | | UseTLSv1_3 | Switch | False | If specified, the program will use TLS v1.3 instead of TLS v1.2 for encryption (recommended if both systems support it) | | Clipboard | Enum | Both | Specify the clipboard synchronization mode (options include 'Both', 'Disabled', 'Send', and 'Receive'; see below for more detail) | | CertificateFile | String | None | A file containing valid certificate information (x509) that includes the private key | | EncodedCertificate | String | None | A base64-encoded representation of the entire certificate file, including the private key | | ViewOnly | Switch | False | If specified, the remote viewer will only be able to view the desktop and will not have access to the mouse or keyboard | | PreventComputerToSleep | Switch | False | If specified, this option will prevent the computer from entering sleep mode while the server is active and waiting for new connections | | CertificatePassword | SecureString | None | Specify the password used to access a password-protected x509 certificate provided by the user | Server Address Examples | Value | Description | |-------------------|--------------------------------------------------------------------------| | 127.0.0.1 | Only listen for connections from the localhost (usually for debugging purposes) | | 0.0.0.0 | Listen for connections on all network interfaces, including the local network and the internet | Clipboard Mode Enum Properties | Value | Description | |-------------------|----------------------------------------------------| | Disabled | Clipboard synchronization is disabled on both the viewer and server sides | | Receive | Only incoming clipboard data is allowed | | Send | Only outgoing clipboard data is allowed | | Both | Clipboard synchronization is allowed on both the viewer and server sides | ⚠️ Important Notices It is recommended to use SecurePassword instead of a plain-text password, even if the plain-text password is being converted to a SecureString. If you do not specify a custom certificate using 'CertificateFile' or 'EncodedCertificate', a default self-signed certificate will be generated and installed on the local machine (if one does not already exist). This requires administrator privileges. To run the server with a non-privileged account, you must provide your own certificate location. If you do not specify a SecurePassword or Password, a random, complex password will be generated and displayed in the terminal (this password is temporary).
Examples ```powershell Invoke-RemoteDesktopServer -ListenAddress "0.0.0.0" -ListenPort 2801 -SecurePassword (ConvertTo-SecureString -String "urCompl3xP@ssw0rd" -AsPlainText -Force) Invoke-RemoteDesktopServer -ListenAddress "0.0.0.0" -ListenPort 2801 -SecurePassword (ConvertTo-SecureString -String "urCompl3xP@ssw0rd" -AsPlainText -Force) -CertificateFile "c:\certs\phrozen.p12" ``` How to capture LogonUI As of version 4.0.0, it is possible to capture the LogonUI/Winlogon desktop (UAC Prompt, Windows Login Window, CTRL+ALT+DEL, etc.). However, in order to capture the LogonUI, the server must be run under the context of 'NT AUTHORITY/System' in the current active session. There are multiple methods for spawning a process as the SYSTEM user in the active session (e.g., PsExec, Process Hacker), but for simplicity I recommend using my PowerRunAsSystem project (available on GitHub and installable through the PowerShell Gallery). powershell Install-Module -Name PowerRunAsSystem Then run the command below as Administrator. powershell Invoke-InteractiveSystemPowerShell If you follow the steps above, a new PowerShell terminal should appear on your desktop running as the 'NT AUTHORITY/System' user. From this terminal, you can run the Power Remote Desktop server command and enable the 'LogonUI' option for future Power Remote Desktop viewer connections. It's worth noting that if you don't use your own X509 certificate, you will need administrator privileges to create a new server. However, you can easily create your own X509 certificate using tools such as the OpenSSL command line tool. Generate your Certificate openssl req -x509 -sha512 -nodes -days 365 -newkey rsa:4096 -keyout phrozen.key -out phrozen.crt Then export the new certificate ( must include private key ). openssl pkcs12 -export -out phrozen.p12 -inkey phrozen.key -in phrozen.crt Integrate to server as a file Use CertificateFile . Example: c:\tlscert\phrozen.crt Integrate to server as a base64 representation Encode an existing certificate using PowerShell ```powershell [Convert]::ToBase64String([System.IO.File]::ReadAllBytes("c:\certs\phrozen.p12")) ``` or on Linux / Mac systems base64 -i /tmp/phrozen.p12 You can then pass the base64 output to the EncodedCertificate parameter (as one line) Changelog 11 January 2022 (1.0.1 Beta 2) Desktop images are now transported as raw bytes instead of a base64 string, slightly improving performance. Protocol has drastically changed. It is smoother to read and less prone to errors. TLS v1.3 option added (Might not be supported by some systems). Several code optimizations, refactorings and fixes. Password complexity check implemented to avoid lazy passwords. Possibility to disable verbosity. Server & Viewer version synchronization. Same version must be used between the two. 12 January 2022 (1.0.2 Beta 3) HDPI is completely supported. 12 January 2022 (1.0.3 Beta 4) Possibility to change desktop image quality. Possibility to choose which screen to capture if multiple screens (Monitors) are present on the remote machine. Multi Screen Selection 14 January 2022 (1.0.4 Beta 5) Password is stored as SecureString on Viewer. I don't see the point of implementing SecureString server-side; if you do see the point, please change my mind. Server Fingerprint Validation. Possibility to trust a server for the current PowerShell instance or persistently using local storage.
Possibility to manage trusted servers (List, Remove, Remove All) Fingerprint Validation 18 January 2022 (1.0.5 Beta 6) Multiple code improvements to support incoming / outgoing events. Global cursor state synchronization implemented (the virtual desktop mouse cursor now matches the remote server's). Password Generator algorithm fixed. Virtual keyboard ] and ) correctly sent and interpreted. Clipboard synchronization Viewer <-> Server added. Server supports a new option to only show the desktop (mouse moves, clicks, wheel and keyboard control are disabled in this mode). 21 January 2022 (1.0.6) TransportMode option removed. Desktop streaming performance / speed increased. 28 January 2022 (2.0.0) Protocol was completely revisited; it is now more stable and modular. Session concurrency is now supported; multiple viewers can connect to a server at the same time. Possibility to stop the server using CTRL+C. Image quality is now requested by the viewer. Desktop resize is now made server-side. Desktop resize can now be forced and requested by the viewer. Center virtual desktop glitch fixed. Handshake calls (auth + session / worker negotiation) will now time out to avoid possible deadlocks. Virtual Desktop Form can now be set always on top of other forms. Server finally uses a secure string to handle password authentication. 9 February 2022 (3.0.0) Prevent the computer from sleeping on the server side. Motion Update now supported in its very first version to increase desktop streaming speed. Mouse moves work as expected in certain circumstances. Keyboard simulation improved. Various optimizations and fixes. 10 February 2022 (3.1.0) Code refactoring and improvement. Desktop streaming improvement to gain a few more FPS. Support for password-protected external x509 certificates. 10 March 2022 (4.0.0) Huge desktop streaming optimization, FPS rate increased by 65% (even more when tuning BlockSize). Desktop resize is now made viewer-side and automatically, to simplify the code and improve efficiency. FastResize option is not required anymore. Various code optimizations / fixes. WIN keyboard key supported. Virtual Desktop window opens above the terminal. Server now supports LogonUI / Winlogon (Beta). List of ideas and TODO 🟢 Mutual Authentication for SSL/TLS (Client Certificate) 🟠 Interrupt sessions when local resolution has changed. 🔴 LogonUI Support. 🟢 = Easy 🟠 = Medium 🔴 = Hard Made with ❤️ in 🇫🇷;Remote Desktop entirely coded in PowerShell.;remote-control,powershell,windows,rdp,remote,desktop,windows-10,windows-11
PhrozenIO/PowerRemoteDesktop
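Putting the documented PowerRemoteDesktop parameters together, here is a hedged end-to-end sketch; the certificate path, server address, and password are placeholders, while every parameter name is taken from the option tables above:

```powershell
# Shared secret, kept as a SecureString as the notices above recommend.
$securePwd = ConvertTo-SecureString -String "urCompl3xP@ssw0rd" -AsPlainText -Force

# Server side: a user-provided certificate avoids the admin requirement,
# and -ViewOnly disables remote mouse/keyboard control.
Invoke-RemoteDesktopServer -SecurePassword $securePwd `
    -CertificateFile "c:\certs\phrozen.p12" -ViewOnly

# Viewer side (run on the other machine): receive-only clipboard,
# and shrink the virtual desktop to 80% of the remote resolution.
Invoke-RemoteDesktopViewer -ServerAddress "192.168.1.10" -ServerPort 2801 `
    -SecurePassword $securePwd -Clipboard Receive -Resize -ResizeRatio 80
```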
nvim-neotest/neotest;Neotest A framework for interacting with tests within NeoVim. This is early-stage software. Introduction Installation Supported Runners Configuration Usage Consumers Output Window Summary Window Diagnostic Messages Status Signs Strategies Writing Adapters Parsing tests in a directory Collecting results Introduction See :h neotest for details on how neotest is designed and how to interact with it programmatically. Installation Neotest uses nvim-nio and plenary.nvim . Most adapters will also require nvim-treesitter . Neotest uses the CursorHold event. This uses the updatetime setting, which is by default very high, and lowering this can lead to excessive writes to disk. It's recommended to use https://github.com/antoinemadec/FixCursorHold.nvim which allows detaching updatetime from the frequency of the CursorHold event. The repo claims it is no longer needed, but it is still recommended (See this issue ) Install with your favourite package manager alongside nvim-dap dein : vim call dein#add("nvim-lua/plenary.nvim") call dein#add("antoinemadec/FixCursorHold.nvim") call dein#add("nvim-treesitter/nvim-treesitter") call dein#add("nvim-neotest/nvim-nio") call dein#add("nvim-neotest/neotest") vim-plug vim Plug 'nvim-lua/plenary.nvim' Plug 'antoinemadec/FixCursorHold.nvim' Plug 'nvim-treesitter/nvim-treesitter' Plug 'nvim-neotest/nvim-nio' Plug 'nvim-neotest/neotest' packer.nvim lua use { "nvim-neotest/neotest", requires = { "nvim-neotest/nvim-nio", "nvim-lua/plenary.nvim", "antoinemadec/FixCursorHold.nvim", "nvim-treesitter/nvim-treesitter" } } lazy.nvim lua { "nvim-neotest/neotest", dependencies = { "nvim-neotest/nvim-nio", "nvim-lua/plenary.nvim", "antoinemadec/FixCursorHold.nvim", "nvim-treesitter/nvim-treesitter" } } To get started you will also need to install an adapter for your test runner. See the adapters' documentation for their specific setup instructions. Supported Runners | Test Runner | Adapter | | :---------------- | :------------------------------------------------------------------: | | pytest | neotest-python | | python-unittest | neotest-python | | plenary | neotest-plenary | | go | neotest-go neotest-golang | | jest | neotest-jest | | vitest | neotest-vitest | | stenciljs | neotest-stenciljs | | playwright | neotest-playwright | | rspec | neotest-rspec | | minitest | neotest-minitest | | dart, flutter | neotest-dart | | testthat | neotest-testthat | | phpunit | neotest-phpunit | | pest | neotest-pest | | rust (treesitter) | neotest-rust | | rust (LSP) | rustaceanvim | | elixir | neotest-elixir | | dotnet | neotest-dotnet | | scala | neotest-scala | | haskell | neotest-haskell | | deno | neotest-deno | | java | neotest-java | | foundry | neotest-foundry | | zig | neotest-zig | | c++ (google test) | neotest-gtest | | gradle | neotest-gradle | | bash | neotest-bash | | hardhat | neotest-hardhat | For any runner without an adapter you can use neotest-vim-test which supports any runner that vim-test supports. The vim-test adapter does not support some of the more advanced features such as error locations or per-test output. If you're using the vim-test adapter then install vim-test too. Configuration Provide your adapters and other config to the setup function.
lua require("neotest").setup({ adapters = { require("neotest-python")({ dap = { justMyCode = false }, }), require("neotest-plenary"), require("neotest-vim-test")({ ignore_file_types = { "python", "vim", "lua" }, }), }, }) See :h neotest.Config for configuration options and :h neotest.setup() for the default values. It is highly recommended to use lazydev.nvim to enable type checking for neotest to get type checking, documentation and autocompletion for all API functions. The default icons use codicons . It's recommended to use this fork which fixes alignment issues for the terminal. If your terminal doesn't support font fallback and you need to have icons included in your font, you can patch it via Font Patcher . There is a simple step by step guide here . Usage The interface for using neotest is very simple. Run the nearest test lua require("neotest").run.run() Run the current file lua require("neotest").run.run(vim.fn.expand("%")) Debug the nearest test (requires nvim-dap and adapter support) lua require("neotest").run.run({strategy = "dap"}) See :h neotest.run.run() for parameters. Stop the nearest test, see :h neotest.run.stop() lua require("neotest").run.stop() Attach to the nearest test, see :h neotest.run.attach() lua require("neotest").run.attach() Consumers For extra features neotest provides consumers which interact with the state of the tests and their results. Some consumers will be passive while others can be interacted with. Watch Tests :h neotest.watch Watches files related to tests for changes and re-runs tests https://user-images.githubusercontent.com/24252670/229367494-6775d7f1-a8fb-461b-bbbd-d6124031293e.mp4 Output Window :h neotest.output Displays output of tests Displays per-test output Output Panel :h neotest.output_panel Records all output of tests over time in a single window Summary Window :h neotest.summary Displays test suite structure from project root. Provides mappings for running, attaching, stopping and showing output. Diagnostic Messages :h neotest.diagnostic Use vim.diagnostic to display error messages where they occur while running. Status Signs :h neotest.status Displays the status of a test/namespace beside the beginning of the definition. See the help doc for a list of all consumers and their documentation. Strategies Strategies are methods of running tests. They provide the functionality to attach to running processes and so attaching will mean different things for different strategies. | Name | Description | | :--------: | :---------------------------------------------------------------------------------------------------------- | | integrated | Default strategy that will run a process in the background and allow opening a floating terminal to attach. | | dap | Uses nvim-dap to debug tests (adapter must support providing an nvim-dap configuration) | Custom strategies can implemented by providing a function which takes a neotest.RunSpec and returns an table that fits the neotest.Process interface. Plenary's async library can be used to run asynchronously. Writing Adapters This section is for people wishing to develop their own neotest adapters. The documentation here and the underlying libraries are WIP and open to feedback/change. Please raise issues with any problems understanding or using the this doc. The best place to figure out how to create an adapter is by looking at the existing ones. Adapters must fulfill an interface to run (defined here ). 
Much of the functionality is built around using a custom tree object that defines the structure of the test suite. There are helpers that adapters can use within their code (all defined under neotest.lib ). Adapters must solve three problems: Parse tests Construct test commands Collect results Parsing Tests There are two stages to this: finding files, which is often a simple file name check (it's OK if a test file has no actual tests in it), and parsing test files. For languages supported by nvim-treesitter, the easiest way to parse tests is to use the neotest treesitter wrapper to parse a query to construct a tree structure. The query can define capture groups for tests and namespaces. Each type must have <type>.name and <type>.definition capture groups. They can be used multiple times in the query. Example from neotest-plenary: ```lua local lib = require("neotest.lib") function PlenaryNeotestAdapter.discover_positions(path) local query = [[ ;; describe blocks ((function_call name: (identifier) @func_name (#match? @func_name "^describe$") arguments: (arguments (_) @namespace.name (function_definition)) )) @namespace.definition ;; it blocks ((function_call name: (identifier) @func_name arguments: (arguments (_) @test.name (function_definition)) ) (#match? @func_name "^it$")) @test.definition ;; async it blocks (async.it) ((function_call name: ( dot_index_expression field: (identifier) @func_name ) arguments: (arguments (_) @test.name (function_definition)) ) (#match? @func_name "^it$")) @test.definition ]] return lib.treesitter.parse_positions(path, query, { nested_namespaces = true }) end ``` For languages unsupported by treesitter you can use regexes like neotest-vim-test or hook into the test runner. Constructing Test Commands This is the easiest part of writing an adapter. You need to handle the different types of positions that a user may run (directory, file, namespace and test). If you are hooking into the runner, you may not be running the test runner command directly. neotest-python and neotest-plenary are both examples of this, with a script being used to run each runner to handle parsing results and storing them for result collection later. Collecting Results Collecting results will be the most involved process in the adapter, with complexity depending on the test runner and desired features. For the most basic implementation an adapter can choose to only run tests individually and use the exit code as an indicator of the result (this is how neotest-vim-test works), but this impacts performance and also loses out on more advanced features. If tests can be run together then the adapter must provide results for at least each individual test. Results for namespaces, files and directories will be inferred from their child tests. For collecting test-specific error messages, error locations etc. you'll need to parse output or hook into the runner. See neotest-python and neotest-plenary for examples on how this can be done.;An extensible framework for interacting with tests within NeoVim.;neovim,lua
nvim-neotest/neotest
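To make neotest's three adapter responsibilities concrete, here is a minimal, hypothetical adapter skeleton in Lua. The function names follow the adapter interface described in the Writing Adapters section above, but the runner command, file pattern, and query are invented placeholders; treat this as a sketch rather than a working adapter:

```lua
local lib = require("neotest.lib")

local adapter = { name = "neotest-example" }

-- Locate the project root from a starting directory (marker file is a placeholder).
function adapter.root(dir)
  return lib.files.match_root_pattern(".git")(dir)
end

-- Cheap filename check; a matching file may still contain zero tests.
function adapter.is_test_file(file_path)
  return file_path:match("_test%.ext$") ~= nil
end

-- Problem 1: parse tests, e.g. via the treesitter wrapper shown above.
function adapter.discover_positions(file_path)
  local query = [[ ... ]] -- language-specific query with test.name / test.definition captures
  return lib.treesitter.parse_positions(file_path, query, {})
end

-- Problem 2: construct the command for a directory/file/namespace/test position.
function adapter.build_spec(args)
  local position = args.tree:data()
  return { command = { "my-test-runner", position.path } }
end

-- Problem 3: collect results; this naive version only uses the exit code,
-- with the performance and feature trade-offs noted above.
function adapter.results(spec, result, tree)
  local results = {}
  for _, pos in tree:iter() do
    if pos.type == "test" then
      results[pos.id] = { status = result.code == 0 and "passed" or "failed" }
    end
  end
  return results
end

return adapter
```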
niXman/mingw-builds-binaries;MinGW-W64-binaries MinGW-W64 compiler binaries MinGW-W64 online installer ( VirusTotal ), ( sources ). The online installer provides a GUI for selecting the parameters of the build you need and extracts the archive into a selected directory. It also creates a shortcut in the Start menu that runs a terminal with the compiler directory added to PATH.;MinGW-W64 compiler binaries;[]
niXman/mingw-builds-binaries
potatoqualitee/eol-dr;EOL DR / End-of-life Disaster Response Back in 2012, I moved to Belgium with my wife and started working with a bunch of techies who eventually became life-long friends. Our VDI guy, Andy, was one of my favorites. He was grumpy, always tucked his shirt in, kept his desk Type A clean and was just so principled. He was into VMware VDI and I supported the SQL Servers in his Horizon environment. Even after Andy and I both left Belgium, we stayed in touch, sharing stories of our current employment and talking about the current state of our setup. He still got to use PowerShell at work and I did too. I always thought he'd be there and was devastated when I found out he died unexpectedly. What about his homelab? "What about his homelab?" I thought. "Will his wife's wifi devices even be able to get an IP address if his DHCP server goes down?" I reached out to her to see how she was doing and she told me that, six months on, she avoids his office at all costs. She worries what will happen when her TV no longer works, when her wifi no longer works. She knows people will help, but the idea of calling them is torturous. Heartbreaking. Immediately after reading her email, I reached out to my and Andy's former colleagues who lived near her and they offered to drop by to help her figure out both the short-term and long-term tech plans. I asked Andy's widow for her email and phone number so she wouldn't have to dread calling; someone else would place that call for her. That got us all thinking -- what would Andy have wanted for his homelab? What would our own spouses do if we suddenly weren't there? Who would close our Azure accounts? Who should get the PureStorage array? For those of us who are The Bill Payers, how would our spouses know which bill is paid by what bank account? I put together an initial draft to answer these questions for my own wife, and then crowdsourced the rest. So many of my tech friends suggested stuff I hadn't thought of and I'm sure there's more. Initially, I was going to make it a gist, but a friend suggested putting it on GitHub, which would make PRs possible. checklist.md -> checklist.docx Within hours of this interaction, I created a Word document, printed it out, filled in a couple passwords manually, and then stored it in a fireproof bag. Here is a sanitized list that you can use for your own purposes. If anything is missing or you have suggestions, please feel free to submit a PR. Upon approval, the Word doc will be regenerated for others. So here is the checklist: In markdown format: checklist.md In docx format, generated by the GitHub Action: checklist.docx You may also be interested in... In Case You Get Hit by a Bus;A crowd-sourced guide to help techs help their non-tech spouses / partners / parents / kids when we are at the end-of-life;disaster-response,disaster-management
potatoqualitee/eol-dr
xuebinqin/DIS;Highly Accurate Dichotomous Image Segmentation (ECCV 2022) Xuebin Qin , Hang Dai , Xiaobin Hu , Deng-Ping Fan* , Ling Shao , Luc Van Gool . This is the official repo for our newly formulated DIS task: Project Page , Arxiv , 中文 . Currently, only a few sample images of our DIS V2.0 dataset are included in this repo. The complete DIS V2.0 dataset and model have NOT been released yet! (quick response to many emails regarding the DIS V2.0.) We are trying our best to release that as early as possible! Updates !!! ** (2022-Aug.-17)** The optimized IS-Net model for general use is now released: isnet-general-use.pth (for general use, this is NOT DIS V2.0.) from (Google Drive) or (Baidu Pan 提取码:6jh2) , please feel free to try it with the newly created simple inference.py code on your own datasets. ** (2022-Jul.-30)** Thanks to AK391 for the implementation of a Web Demo: Integrated into Hugging Face Spaces 🤗 using Gradio . Try out the Web Demo . Notes for official DIS group: Currently, the released DIS deep model is the academic version that was trained with DIS V1.0, which includes very few animals, humans, cars, etc. So it may not work well on these targets. We will release another version for general use and test. In addition, our DIS V2.0 will cover more categories with extremely well-annotated samples. Please stay tuned. ** (2022-Jul.-17)** Our paper, code and dataset are now officially released!!! Please check our project page for more details: Project Page . ** (2022-Jul.-5)** Our DIS work is now accepted by ECCV 2022; the code and dataset will be released before July 17th, 2022. Please watch for our updates. 1. Our Dichotomous Image Segmentation (DIS) Dataset 1.1 DIS dataset V1.0: DIS5K Download: Google Drive or Baidu Pan 提取码:rtgw 1.2 DIS dataset V2.0 Although our DIS5K V1.0 includes samples from more than 200 categories, many categories, such as humans, animals, cars and so on, in the real world are not included. So the current version (v1.0) of our dataset may limit the robustness of the trained models. To build a comprehensive and large-scale highly accurate dichotomous image segmentation dataset, we are building our DIS dataset V2.0. The V2.0 will be released soon. Please stay tuned. Samples from DIS dataset V2.0. 2. APPLICATIONS of Our DIS5K Dataset 3D Modeling Image Editing Art Design Materials Still Image Animation AR 3D Rendering 3. Architecture of Our IS-Net 4. Human Correction Efforts (HCE) 5. Experimental Results Predicted Maps, (Google Drive) , (Baidu Pan 提取码:ph1d) , of Our IS-Net and Other SOTAs Qualitative Comparisons Against SOTAs Quantitative Comparisons Against SOTAs 6. Run Our Code (1) Clone this repo git clone https://github.com/xuebinqin/DIS.git (2) Configure the environment: go to the DIS/ISNet folder and run conda env create -f pytorch18.yml Or you can check the requirements.txt to configure the dependencies.
(3) Activate the conda environment by conda activate pytorch18 (4) Train: (a) Open train_valid_inference_main.py , set the path of your to-be-inferenced train_datasets and valid_datasets , e.g., valid_datasets=[dataset_vd] (b) Set the hypar["mode"] to "train" (c) Create a new folder your_model_weights in the directory saved_models and set it as the hypar["model_path"] ="../saved_models/your_model_weights" and make sure hypar["valid_out_dir"] (line 668) is set to "" , otherwise the prediction maps of the validation stage will be saved to that directory, which will slow the training speed down (d) Run python train_valid_inference_main.py (5) Inference Download the pre-trained weights (for fair academic comparisons) isnet.pth from (Google Drive) or (Baidu Pan 提取码:xbfk) OR the optimized model weights isnet-general-use.pth (for general use) from (Google Drive) or (Baidu Pan 提取码:6jh2) , and store them in saved_models/IS-Net I. Simple inference code for your own dataset without ground truth: (a) Open \ISNet\inference.py and configure your input and output directories (b) Run python inference.py II. Inference for datasets with/without ground truth (a) Open train_valid_inference_main.py , set the path of your to-be-inferenced valid_datasets , e.g., valid_datasets=[dataset_te1, dataset_te2, dataset_te3, dataset_te4] (b) Set the hypar["mode"] to "valid" (c) Set the output directory of your predicted maps, e.g., hypar["valid_out_dir"] = "../DIS5K-Results-test" (d) Run python train_valid_inference_main.py (6) Use of our Human Correction Efforts (HCE) metric Set the ground truth directory gt_root and the prediction directory pred_root . To reduce the time cost of computing HCE, the skeleton of the DIS5K dataset can be pre-computed and stored in gt_ske_root . If gt_ske_root="" , the HCE code will compute the skeleton online, which usually takes a lot of time for large-size ground truth. Then, run python hce_metric_main.py . Other metrics are evaluated based on the SOCToolbox . 7. Term of Use Our code and evaluation metric use Apache License 2.0. The Terms of Use for our DIS5K dataset are provided as DIS5K-Dataset-Terms-of-Use.pdf . Acknowledgements We would like to thank Dr. Ibrahim Almakky for his help in implementing the dataloader cache mechanism for loading large-size training samples and Jiayi Zhu for his efforts in re-organizing our code and dataset. Citation @InProceedings{qin2022, author={Xuebin Qin and Hang Dai and Xiaobin Hu and Deng-Ping Fan and Ling Shao and Luc Van Gool}, title={Highly Accurate Dichotomous Image Segmentation}, booktitle={ECCV}, year={2022} } Our Previous Works: U 2 -Net , BASNet .
``` @article{Qin_2020_PR, title = {U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection}, author = {Qin, Xuebin and Zhang, Zichen and Huang, Chenyang and Dehghan, Masood and Zaiane, Osmar and Jagersand, Martin}, journal = {Pattern Recognition}, volume = {106}, pages = {107404}, year = {2020} } @InProceedings{Qin_2019_CVPR, author = {Qin, Xuebin and Zhang, Zichen and Huang, Chenyang and Gao, Chao and Dehghan, Masood and Jagersand, Martin}, title = {BASNet: Boundary-Aware Salient Object Detection}, booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2019} } @article{qin2021boundary, title={Boundary-aware segmentation network for mobile and web applications}, author={Qin, Xuebin and Fan, Deng-Ping and Huang, Chenyang and Diagne, Cyril and Zhang, Zichen and Sant'Anna, Adri{`a} Cabeza and Suarez, Albert and Jagersand, Martin and Shao, Ling}, journal={arXiv preprint arXiv:2101.04704}, year={2021} } ```;This is the repo for our new project Highly Accurate Dichotomous Image Segmentation;background-removal,deep-learning,dichotomous-image-segmentation,computer-vision,u-2-net
xuebinqin/DIS
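As a rough illustration of the DIS inference step described above, here is a hedged single-image sketch in Python. The model import path, 1024x1024 input size, normalization values, and output indexing are all assumptions on my part; defer to the repo's ISNet/inference.py for the authoritative version:

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import normalize
from skimage import io

from models.isnet import ISNetDIS  # assumed import path inside the DIS repo

net = ISNetDIS()
net.load_state_dict(torch.load("saved_models/IS-Net/isnet-general-use.pth",
                               map_location="cpu"))
net.eval()

im = io.imread("my_image.jpg")  # H x W x 3
x = torch.tensor(im, dtype=torch.float32).permute(2, 0, 1).unsqueeze(0)
x = F.interpolate(x, size=(1024, 1024), mode="bilinear")    # assumed input size
x = normalize(x / 255.0, [0.5, 0.5, 0.5], [1.0, 1.0, 1.0])  # assumed stats

with torch.no_grad():
    pred = net(x)[0][0]  # assumed: first side output of the first head, 1x1xHxW

pred = F.interpolate(pred, size=im.shape[:2], mode="bilinear").squeeze()
pred = (pred - pred.min()) / (pred.max() - pred.min() + 1e-8)  # min-max to [0, 1]
io.imsave("my_image_mask.png", (pred.numpy() * 255).astype("uint8"))
```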
apache/paimon;Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations. Paimon innovatively combines lake format and LSM structure, bringing realtime streaming updates into the lake architecture. Background and documentation are available at https://paimon.apache.org Paimon 's former name was Flink Table Store , developed from the Flink community. The architecture refers to some design concepts of Iceberg. Thanks to Apache Flink and Apache Iceberg. Collaboration Paimon tracks issues in GitHub and prefers to receive contributions as pull requests. Mailing Lists Name Subscribe Digest Unsubscribe Post Archive user @paimon.apache.org User support and questions mailing list Subscribe Subscribe Unsubscribe Post Archives dev @paimon.apache.org Development related discussions Subscribe Subscribe Unsubscribe Post Archives Please make sure you are subscribed to the mailing list you are posting to! If you are not subscribed to the mailing list, your message will either be rejected (dev@ list) or you won't receive the response (user@ list). Slack You can join the Paimon community on Slack. Paimon channel is in ASF Slack workspace. Anyone with an @apache.org email address can become a full member of the ASF Slack workspace. Search Paimon channel and join it. If you don't have an @apache.org email address, you can email to user@paimon.apache.org to apply for an ASF Slack invitation . Then join Paimon channel . Building JDK 8/11 is required for building the project. Maven version >=3.3.1. Run the mvn clean install -DskipTests command to build the project. Run the mvn spotless:apply to format the project (both Java and Scala). IDE: Mark paimon-common/target/generated-sources/antlr4 as Sources Root. How to Contribute Contribution Guide . License The code in this repository is licensed under the Apache Software License 2 .;Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations.;big-data,data-ingestion,flink,paimon,real-time-analytics,spark,table-store,streaming-datalake
apache/paimon
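To ground Paimon's "streaming and batch with Flink" claim in something concrete, here is a hedged Flink SQL sketch following the common quickstart pattern; the catalog options and warehouse path shown are assumptions, so check the official documentation for the authoritative syntax:

```sql
-- Register a Paimon catalog backed by a warehouse path (placeholder path).
CREATE CATALOG my_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'file:/tmp/paimon'
);
USE CATALOG my_catalog;

-- A primary-keyed table enables realtime streaming upserts on the lake.
CREATE TABLE word_count (
    word STRING PRIMARY KEY NOT ENFORCED,
    cnt  BIGINT
);

-- The same table serves batch queries and continuous streaming reads.
SELECT * FROM word_count;
```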
n0-computer/iroh;https://iroh.computer Bytes, Distributed. Docs Site | Rust Docs | Releases Iroh is a protocol for syncing & moving bytes. Bytes of any size, on any device. At its core, it's a peer-2-peer network built on a magic socket that establishes QUIC connections between peers. Peers request and provide blobs of opaque bytes that are incrementally verified by their BLAKE3 hash during transfer. Using Iroh Iroh is delivered as a Rust library and a CLI. Library Run cargo add iroh to add iroh to your project. CLI Check out https://iroh.computer/docs/install to get started. The implementation lives in the iroh-cli crate. License Copyright 2024 N0, INC. This project is licensed under either of Apache License, Version 2.0, ( LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0) MIT license ( LICENSE-MIT or http://opensource.org/licenses/MIT) at your option. Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.;A toolkit for building distributed applications;rust,content-addressed,memes,object-store,realtime,sync,tags,does-anyone-read-these,tagsoftags
n0-computer/iroh
evilmartians/mono;Martian Mono Martian Mono is a monospaced version of the Martian Grotesk font for code style design. It inherits Grotesk’s brutal and eye-catching aesthetics as well as all of its benefits—metrics equilibrium, readability and intelligibility, and convenience for web developers and designers who believe in a systematic approach to design. 👉 Get your Martian Grotesk free trial on Gumroad or buy it on MyFonts to support Mono development. The typeface features a tall x-height, and it has vertical metrics which guarantee equal space is present above the cap height and under the baseline. The latter makes this typeface an on-screen workhorse: it is evenly placed on buttons, inputs, lists, and forms. When coupled together, all the above features make Martian Mono a reasonable choice for any user interface design. When choosing a font format, prefer ttf for variable and otf for static on macOS, and ttf for Windows. Download Download the latest package from the releases page or embed the font from Google Fonts . Styles Martian Mono consists of a variable font and 28 styles: Condensed to Semi Wide, Thin to Extra Bold. The font has 4 styles on the width axis: | Short name | Full name | CSS percentage | CSS keyword | | --- | --- | ---: | --- | | sWd | Semi Wide | 112.5% | semi-expanded | | Std | Standard | 100% | normal | | Nr | Narrow | 87.5% | semi-condensed | | Cn | Condensed | 75% | condensed | And 7 weights: | Short name | Full name | CSS numeric | CSS keyword | | --- | --- | ---: | --- | | Th | Thin | 100 | | | xLt | Extra Light | 200 | | | Lt | Light | 300 | | | Rg | Regular | 400 | normal | | Md | Medium | 500 | | | Bd | Bold | 700 | bold | | xBd | Extra Bold | 800 | | OpenType features Font size and legibility Originally designed for the screen, the glyph heights stick to the pixel grid on commonly used font sizes. In addition, it comes equipped with OpenType and TrueType hinting, and Martian Mono appears legible on most platforms, even when being rendered in small sizes. For the best results, use the following pairs of size / line height: 7.5 / 10 (or 14, 18, etc.) px 10 / 12 (or 16, 20, etc.) px 12.5 / 14 (or 18, 22, etc.) px 15 / 20 (or 24, 28, etc.) px Usage You are welcome to add more hints on usage (especially on the desktop) via pull requests. On the Web Download the woff2 package from the releases page to get the variable font in WOFF2 format ( see WOFF2 support matrix between browsers ). Consult the following articles from Evil Martians' blog on how to use variable fonts: Variable fonts in real life: how to use and love them The joy of Variable Fonts: getting started on the Frontend On the Desktop: choosing a variant For better compatibility with various terminal emulators and text editors on the desktop, it is a good idea to install the font not as a single variable font but as several different fonts. Manually See the releases page , and download otf or ttf files. Install the fonts. Windows See the releases page , and download ttf files. Install the fonts. macOS See the releases page , and download otf . Install the fonts. Or, use Homebrew : shell brew tap homebrew/cask-fonts brew install --cask font-martian-mono Next, if your application has a font picker, just choose Martian Mono and the variant you require. If the configuration is done using a text file, use Martian Mono for the default font variant ( Martian Mono Std Rg ), or try specifying the font name like MartianMono-NrRg for the Nr Rg variant. 
Choosing a variant for a dark background When choosing a font variant for a darker (or pitch-black) background for your terminal or text editor, consider choosing a "lighter" variant if the font looks "too bold" to you. White font on a dark background can have that effect; see here for details . For example, go for Std Lt instead of Std Rg . On the Desktop: line spacing Once you install the font and start using it, you might notice that the picture looks quite confined: Instead, you might want to opt for something more readable and easier on the eyes: The difference is line spacing . Learn how to set it up below, and consult the Font size and legibility chapter to learn about the best setting. Or, experiment yourself by setting different percentages ( 120% , 140% ) or paddings in pixels ( 1 , 2 , 4 , and so on). Terminal emulators Terminal (macOS) Preferences → Profiles → (choose a profile) → Text → Font → [Change]. You will be met with a font picker dialog that has the Line Spacing property. iTerm 2 (macOS) Preferences → Profiles → (choose a profile) → Text. Look for the n/n symbol that looks like a fraction. That's your line spacing, in percentage (100% is the default). kitty Open the config file ( ~/.config/kitty/kitty.conf ). Look for the adjust_line_height property and see the documentation. Text editors VS Code To specify values for variable axes, use editor.fontVariations : jsonc // settings.json { "editor.fontFamily": "Martian Mono", "editor.fontVariations": "'wdth' 87.5, 'wght' 450", } Consider switching the font aliasing method to auto for improved rendering on displays with high DPI: jsonc // settings.json { "workbench.fontAliasing": "auto", } Finally, fine-tune line height ( editor.lineHeight ): jsonc // settings.json { "editor.fontFamily": "Martian Mono", "editor.fontSize": 12.5, "editor.lineHeight": 20, } vim For setting line spacing in GUI versions of vim, see linespace / lsp . Sublime Text Open your preferences. Add the line_padding_top and line_padding_bottom parameters. Both set the padding for a line of text in pixels. Roadmap Coding ligatures (work in progress) Cyrillic script for Bulgarian, Serbian, and Macedonian (work in progress) Powerline symbols (not sure) Italics (not sure) Support My name is Roman Shamin , and I work on Martian Mono in my spare time. If you want to support Martian Mono, oklch.com , and other free and open-source fonts , there are a few things you can do. Spread the word on social media by using the hashtag #MartianMono Buy Martian Grotesk on MyFonts or Gumroad Consider becoming a patron I'm sincerely grateful for any support!;Free and open-source monospaced font from Evil Martians;[]
evilmartians/mono
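For the web usage of Martian Mono described above, here is a minimal CSS sketch; the woff2 file name is a placeholder, while the axis names, ranges, and the size/line-height pairing come from the tables in the entry:

```css
/* Load the variable font (file name is a placeholder from the woff2 package). */
@font-face {
  font-family: "Martian Mono";
  src: url("MartianMono-Variable.woff2") format("woff2");
  font-weight: 100 800;      /* Thin through Extra Bold */
  font-stretch: 75% 112.5%;  /* Condensed through Semi Wide */
}

code, pre {
  font-family: "Martian Mono", monospace;
  font-variation-settings: "wdth" 87.5, "wght" 400; /* Narrow Regular */
  font-size: 15px;    /* one of the recommended size/line-height pairs */
  line-height: 20px;
}
```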
LinkStackOrg/LinkStack;Open-Source Linktree Alternative LinkStack is a highly customizable link sharing platform with an intuitive, easy-to-use interface. Function • About • Instances • Themes • Installation • Docker Version • Updating • Discord • Fork structure • License • Supporters • Special thanks • Additional credit Function LinkStack: The Ultimate Link Management Solution LinkStack is a unique platform that provides an efficient solution for managing and sharing links online. Our platform offers a website similar to Linktree, which allows users to overcome the limitation of only being able to add one link on social media platforms. With LinkStack, users can easily link to their own custom page and provide their followers with access to all the links they need in one convenient location. What sets LinkStack apart from other link management platforms is its flexibility, which allows users to host their links on their own web server or web hosting provider. This provides users with complete control over their online presence and ensures that their links are easily accessible. Additionally, LinkStack allows other users to register and create their own links, making it an ideal solution for businesses and organizations looking to manage multiple links. With our user-friendly Admin Panel, managing and accessing other users' links is easy. About With LinkStack, our mission is to provide users with a free and privacy-focused solution for managing and sharing links online. We believe that everyone should have access to a customizable link-sharing platform without sacrificing their privacy and control over their data. To achieve this mission, we offer a self-hosted option for users who want complete control over their data without having it sold to third-party companies. Our platform can be easily implemented through a simple drag-and-drop process, eliminating the need for complex terminal commands or source code manipulation. For those who may not have the technical expertise to self-host, we also offer free instances of our platform while still prioritizing their privacy. Our platform offers many of the same features and options as commercial link-sharing platforms while maintaining the values of privacy and autonomy. Our goal is to provide a free version of a link-sharing service, similar to Linktree, while empowering users to take ownership of their data. We will never sell user data and believe in providing a trustworthy and transparent solution for managing and sharing links online. Instances Find the right instance for you Our community instance program provides users with the opportunity to register on hosted instances and use LinkStack for free. Members of our community have generously provided their resources to host instances, allowing us to expand the reach of LinkStack and give back to the community. ## Themes Custom Themes Customize the look of your LinkStack instance with themes. Themes allow you to change the look and feel of your site with a few clicks. Users can submit themes they created for everyone to download and use. Contribute by designing your own themes. You can read more about contributing below.
|![preview1](https://raw.githubusercontent.com/LinkStackOrg/stargazer/main/preview.png) |![preview2](https://raw.githubusercontent.com/LinkStackOrg/Magic-Kingdom/main/preview.png)| | ------------- |-------------| |![preview3](https://raw.githubusercontent.com/LinkStackOrg/polygon/main/preview.png)|![preview4](https://raw.githubusercontent.com/LinkStackOrg/PolySleek/main/preview.png)| You can find all available Themes here: [linkstack.org/themes](https://linkstack.org/themes) ### How to add themes #### How to add themes to your LinkStack instance You can add your downloaded themes to your LinkStack instance on the Admin Panel. Navigate to the 'Themes' tab and scroll to the bottom of the page. Click on 'Choose file' and select your downloaded theme zip file. Then click on 'Upload theme', and you should be able to select the uploaded theme. ### **Themes are envisioned to be made by users for users.** If you know a bit about CSS, consider making your own theme and adding it to the public directory. Everything is documented in the dedicated GitHub repository. [github.com/LinkStackOrg/linkstack-themes/tree/main/contributing](https://github.com/LinkStackOrg/linkstack-themes/tree/main/contributing) ## Installation ### Downloading and installing steps: * **[Download](https://github.com/linkstackorg/linkstack/releases)** the latest release of LinkStack and simply place the folder 'linkstack' or the contents of this folder in the root directory of your website. ### That's it! No coding, no command-line setup, just plug and play. #### Go through the first setup page: When accessing your instance for the first time, you will be greeted by the first setup page. ## Docker The official docker version of [LinkStack](https://github.com/linkstackorg/linkstack). This docker image is a simple-to-set-up solution, containing everything you need to run LinkStack. The docker version of LinkStack retains all the features and customization options of the [original version](https://github.com/linkstackorg/linkstack). This docker is based on [Alpine Linux](https://www.alpinelinux.org), a Linux distribution designed to be small, simple and secure. The web server is running [Apache2](https://www.apache.org), free and open-source cross-platform web server software. The docker comes with [PHP 8.0](https://www.php.net/releases/8.0/en.php) for high compatibility and performance. #### Using the docker is as simple as pulling and deploying. #### Pull `docker pull linkstackorg/linkstack` #### [Learn more about the Docker version](https://github.com/LinkStackOrg/linkstack-docker) ## Updating When a **new version** is released, you will get an update notification on your Admin Panel. ### Automatic one-click Updater This updater allows you to update your installation with just one click. **How to use the Automatic Updater:** - To update your instance, click on the update notification on your Admin Panel. - Click on “Update automatically” and the updater will take care of the rest. You can still download updates manually. New versions are still uploaded to the GitHub repository as usual. Before updating, the updater will create a backup. Your instance won’t save more than two backups at a time. You can find these backups in the created folder: `backups\updater-backups`. If you switched your database to MySQL, your database will not be included in the backup.
## Discord ## License [![License: AGPL v3](https://img.lss.ovh/badge/License-AGPL%20v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0) As of version 4.0.0, the license for this project has been updated to the GNU Affero General Public License v3.0, which explicitly requires that any modifications made to the project must be made public. This license also requires that a copyright notice and license notice be included in any copies or derivative works of the project. Additionally, any changes made to the project must be clearly stated, and the source code for the modified version must be made available to anyone who receives the modified version. Network use of the project is also considered distribution, and as such, any network use of the project must comply with the terms of the license. Finally, any derivative works of the project must be licensed under the same license terms as the original project. [Read more here](https://www.gnu.org/licenses/agpl-3.0) ## Supporters You can support LinkStack [here](https://linkstack.org/sponsor). **💖 Thank you:** - Stephen Marshall - [Jascha Urbach](https://github.com/jaschaurbach) - [LeoColman](https://github.com/LeoColman) - [Eric Chung](https://github.com/erickchung) - [Daltz](https://github.com/Daltz) - [Jan Klomp](https://github.com/escuco) - [AnhDOS](https://github.com/AnhDOS) - [MrSpuddy](https://github.com/MrSpuddy) - [Chih Wang](https://github.com/dozod-c) - [kigordid](https://github.com/kigordid) - [Ariq Naufal](https://github.com/naufdotal) - [Molleman-De-Coster-BV](https://github.com/Molleman-De-Coster-BV) - [RogueThorn](https://github.com/DunklerPhoenix) - [sachacalibre](https://github.com/sachacalibre) - [John Francis Sukamto](https://github.com/bigbadmonster17) - [Add Your Name](https://linkstack.org/sponsor) ### Contributors Thank you for improving LinkStack! ### Beta Testers Thank you for all your efforts! [Become a beta tester](https://linkstack.org/beta-tester) ### Stargazers ## Additional-credit - [laravel](https://github.com/laravel/laravel) - [forked from](https://github.com/khzg/littlelink-admin) - [default theme](https://github.com/sethcottle/littlelink) - [dashboard template](https://github.com/iqonicdesignofficial/hope-ui-laravel-dashboard) - [general animations](https://github.com/animate-css/animate.css) - [config editor](https://github.com/GeoSot/Laravel-EnvEditor) - [text editor (admin)](https://github.com/ckeditor/ckeditor4) - [text editor (user)](https://github.com/ckeditor/ckeditor5) - [updater backend](https://github.com/codedge/laravel-selfupdater) - [backup backend](https://github.com/spatie/laravel-backup) - [Vcard backend](https://github.com/jeroendesloovere/vcard) - [QR code backend](https://github.com/Bacon/BaconQrCode);LinkStack - the ultimate solution for creating a personalized & professional profile page. Showcase all your important links in one place, forget the limitation of one link on social media. Set up your personal site on your own server with just a few clicks.;php,blade,laravel,self-hosted,webapp,open-source,personal-website,linktree-alternative,linktree,privacy
LinkStackOrg/LinkStack
IDEA-Research/DINO;DINO This is the official implementation of the paper " DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection ". (DINO pronounced `daɪnoʊ' as in dinosaur) Authors: Hao Zhang *, Feng Li *, Shilong Liu *, Lei Zhang , Hang Su , Jun Zhu , Lionel M. Ni , Heung-Yeung Shum News [2023/7/10] We release Semantic-SAM , a universal image segmentation model that can segment and recognize anything at any desired granularity. Code and checkpoint are available! [2023/4/28]: We release a strong open-set object detection and segmentation model OpenSeeD that achieves the best results on open-set object segmentation tasks. Code and checkpoints are available here . [2023/4/26]: DINO is shining again! We release Stable-DINO , which is built upon DINO and the FocalNet-Huge backbone and achieves 64.8 AP on COCO test-dev. [2023/4/22]: With better hyper-params, our DINO-4scale model achieves 49.8 AP under the 12-epoch setting; please check detrex: DINO for more details. [2023/3/13]: We release a strong open-set object detection model Grounding DINO that achieves the best results on open-set object detection tasks. It achieves 52.5 zero-shot AP on COCO detection, without any COCO training data! It achieves 63.0 AP on COCO after fine-tuning. Code and checkpoints will be available here . [2023/1/23]: DINO has been accepted to ICLR 2023! [2022/12/02]: Code for Mask DINO is released (also in detrex )! Mask DINO further achieves 51.7 and 59.0 box AP on COCO with a ResNet-50 and SwinL without extra detection data, outperforming DINO under the same setting! [2022/9/22]: We release a toolbox detrex that provides state-of-the-art Transformer-based detection algorithms. It includes DINO with better performance . You are welcome to use it! - Now supports: DETR , Deformable DETR , Conditional DETR , DAB-DETR , DN-DETR , DINO . [2022/9/18]: We organize ECCV Workshop Computer Vision in the Wild (CVinW) , where two challenges are hosted to evaluate the zero-shot, few-shot and full-shot performance of pre-trained vision models in downstream tasks: `` Image Classification in the Wild (ICinW) '' Challenge evaluates on 20 image classification tasks. `` Object Detection in the Wild (ODinW) '' Challenge evaluates on 35 object detection tasks. [Workshop] [IC Challenge] [OD Challenge] [2022/8/6]: We update Swin-L model results without techniques such as O365 pre-training, large image size, and multi-scale test. We also upload the corresponding checkpoints to Google Drive. Our 5-scale model without any tricks obtains 58.5 AP on COCO val. [2022/7/14]: We release the code with Swin-L and Convnext backbones. [2022/7/10]: We release the code and checkpoints with the ResNet-50 backbone. [2022/6/7]: We release a unified detection and segmentation model Mask DINO that achieves the best results on all three segmentation tasks ( 54.7 AP on COCO instance leaderboard , 59.5 PQ on COCO panoptic leaderboard , and 60.8 mIoU on ADE20K semantic leaderboard )! Code will be available here . [2022/5/28] Code for DN-DETR is available here . [2022/4/10]: Code for DAB-DETR is available here . [2022/3/8]: We reached the SOTA on the MS-COCO leaderboard with 63.3 AP! [2022/3/9]: We built a repo Awesome Detection Transformer to collect papers about transformers for detection and segmentation. We welcome your attention! 
Introduction We present DINO ( D ETR with I mproved de N oising anch O r boxes) with: State-of-the-art & end-to-end : DINO achieves 63.2 AP on COCO Val and 63.3 AP on COCO test-dev with a more than ten times smaller model size and data size than previous best models. Fast-converging : With the ResNet-50 backbone, DINO with 5 scales achieves 49.4 AP in 12 epochs and 51.3 AP in 24 epochs. Our 4-scale model achieves similar performance and runs at 23 FPS. Methods Model Zoo We have put our model checkpoints here [model zoo in Google Drive] [model zoo in 百度网盘] (access code "DINO"), where checkpoint{x}_{y}scale.pth denotes the checkpoint of the y-scale model trained for x epochs.

12 epoch setting

| # | name | backbone | box AP | Checkpoint | Where in Our Paper |
|---|------|----------|--------|------------|--------------------|
| 1 | DINO-4scale | R50 | 49.0 | Google Drive / BaiDu | Table 1 |
| 2 | DINO-5scale | R50 | 49.4 | Google Drive / BaiDu | Table 1 |
| 3 | DINO-4scale | Swin-L | 56.8 | Google Drive | |
| 4 | DINO-5scale | Swin-L | 57.3 | Google Drive | |

24 epoch setting

| # | name | backbone | box AP | Checkpoint | Where in Our Paper |
|---|------|----------|--------|------------|--------------------|
| 1 | DINO-4scale | R50 | 50.4 | Google Drive / BaiDu | Table 2 |
| 2 | DINO-5scale | R50 | 51.3 | Google Drive / BaiDu | Table 2 |

36 epoch setting

| # | name | backbone | box AP | Checkpoint | Where in Our Paper |
|---|------|----------|--------|------------|--------------------|
| 1 | DINO-4scale | R50 | 50.9 | Google Drive / BaiDu | Table 2 |
| 2 | DINO-5scale | R50 | 51.2 | Google Drive / BaiDu | Table 2 |
| 3 | DINO-4scale | Swin-L | 58.0 | Google Drive | |
| 4 | DINO-5scale | Swin-L | 58.5 | Google Drive | |

Installation We use the same environment as DAB-DETR and DN-DETR to run DINO. If you have run DN-DETR or DAB-DETR, you can skip this step. We test our models under ```python=3.7.3,pytorch=1.9.0,cuda=11.1```. Other versions might be available as well. Click the `Details` below for more details. 1. Clone this repo ```sh git clone https://github.com/IDEA-Research/DINO.git cd DINO ``` 2. Install Pytorch and torchvision Follow the instruction on https://pytorch.org/get-started/locally/. ```sh # an example: conda install -c pytorch pytorch torchvision ``` 3. Install other needed packages ```sh pip install -r requirements.txt ``` 4. Compiling CUDA operators ```sh cd models/dino/ops python setup.py build install # unit test (should see all checking is True) python test.py cd ../../.. ``` Data Please download the [COCO 2017](https://cocodataset.org/) dataset and organize it as follows: ``` COCODIR/ ├── train2017/ ├── val2017/ └── annotations/ ├── instances_train2017.json └── instances_val2017.json ``` Run 1. Eval our pretrained models Download our DINO model checkpoint "checkpoint0011_4scale.pth" from [this link](https://drive.google.com/drive/folders/1qD5m1NmK0kjE5hh-G17XUX751WsEG-h_?usp=sharing) and perform the command below. You can expect to get a final AP of about 49.0. ```sh bash scripts/DINO_eval.sh /path/to/your/COCODIR /path/to/your/checkpoint ``` 2. Inference and Visualizations For inference and visualizations, we provide a [notebook](inference_and_visualization.ipynb) as an example. 3. Train a 4-scale model for 12 epochs We use the DINO 4-scale model trained for 12 epochs as an example to demonstrate how to evaluate and train our model. You can also train our model on a single process: ```sh bash scripts/DINO_train.sh /path/to/your/COCODIR ``` 4. Support for Swin Transformer To train the Swin-L model, you need to first download the checkpoint of the Swin-L backbone from [link](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth) and specify the dir of the pre-trained backbone when running the scripts. Here is an example. 
``` bash scripts/DINO_train_submitit_swin.sh /path/to/your/COCODIR /path/to/your/pretrained_backbone ``` 5. Distributed Run As the training is time-consuming, we suggest training the model on multiple devices. If you plan to train the models **on a cluster with Slurm**, here is an example command for training: ```sh # for DINO-4scale: 49.0 bash scripts/DINO_train_submitit.sh /path/to/your/COCODIR # for DINO-5scale: 49.4 bash scripts/DINO_train_submitit_5scale.sh /path/to/your/COCODIR ``` Notes: The results are sensitive to the batch size. We use 16 (2 images per GPU x 8 GPUs for DINO-4scale and 1 image per GPU x 16 GPUs for DINO-5scale) by default. Or run with **multi-processes on a single node**: ```sh # for DINO-4scale: 49.0 bash scripts/DINO_train_dist.sh /path/to/your/COCODIR ``` 6. Training/Fine-tuning a DINO on your custom dataset To train a DINO on a custom dataset **from scratch**, you need to tune two parameters in a config file: - Tune `num_classes` to the number of classes to detect in your dataset. - Tune the parameter `dn_labebook_size` to ensure that `dn_labebook_size >= num_classes + 1` To **leverage our pre-trained models** for model fine-tuning, we suggest adding two more flags to your training command: - `--pretrain_model_path /path/to/a/pretrained/model`. Specify a pre-trained model. - `--finetune_ignore label_enc.weight class_embed`. Ignore some inconsistent parameters. Links Our model is based on DAB-DETR and DN-DETR . DN-DETR: Accelerate DETR Training by Introducing Query DeNoising. Feng Li*, Hao Zhang*, Shilong Liu, Jian Guo, Lionel M. Ni, Lei Zhang. IEEE Conference on Computer Vision and Pattern Recognition ( CVPR ) 2022. [paper] [code] [中文解读] DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR. Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang. International Conference on Learning Representations ( ICLR ) 2022. [paper] [code] We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, etc. More related work is available at Awesome Detection Transformer . LICENSE DINO is released under the Apache 2.0 license. Please see the LICENSE file for more information. Copyright (c) IDEA. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use these files except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Bibtex If you find our work helpful for your research, please consider citing the following BibTeX entry. ```bibtex @misc{zhang2022dino, title={DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection}, author={Hao Zhang and Feng Li and Shilong Liu and Lei Zhang and Hang Su and Jun Zhu and Lionel M. 
Ni and Heung-Yeung Shum}, year={2022}, eprint={2203.03605}, archivePrefix={arXiv}, primaryClass={cs.CV} } @inproceedings{li2022dn, title={Dn-detr: Accelerate detr training by introducing query denoising}, author={Li, Feng and Zhang, Hao and Liu, Shilong and Guo, Jian and Ni, Lionel M and Zhang, Lei}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={13619--13627}, year={2022} } @inproceedings{ liu2022dabdetr, title={{DAB}-{DETR}: Dynamic Anchor Boxes are Better Queries for {DETR}}, author={Shilong Liu and Feng Li and Hao Zhang and Xiao Yang and Xianbiao Qi and Hang Su and Jun Zhu and Lei Zhang}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=oMI9PjOb9Jl} } ```;[ICLR 2023] Official implementation of the paper "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection";object-detection,computer-vision,deep-learning
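As a worked illustration of the fine-tuning flags described in step 6 above, here is a hedged sketch of a complete command. It assumes the launcher script forwards extra arguments to the Python entry point, and the checkpoint path is a placeholder for a downloaded model:

```bash
# Fine-tune DINO-4scale from a released checkpoint on a custom COCO-format dataset.
# Remember to also set num_classes (and dn_labebook_size >= num_classes + 1) in the config file.
bash scripts/DINO_train.sh /path/to/your/COCODIR \
  --pretrain_model_path /path/to/checkpoint0011_4scale.pth \
  --finetune_ignore label_enc.weight class_embed
```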
IDEA-Research/DINO
megaease/easeprobe;EaseProbe EaseProbe is a simple, standalone, and lightweight tool that can do health/status checking, written in Go. Table of Contents 1. Introduction 1.1 Probe 1.2 Notification 1.3 Report \& Metrics 2. Getting Started 2.1 Build 2.2 Configure 2.3 Run 3. Deployment 4. User Manual 5. Benchmark 6. Contributing 7. Community 8. License 1. Introduction EaseProbe is designed to do three kinds of work - Probe , Notify , and Report . 1.1 Probe EaseProbe supports a variety of methods to perform its probes such as: HTTP . Checking the HTTP status code, Support mTLS, HTTP Basic Auth, setting Request Header/Body, and XPath response evaluation. ( HTTP Probe Manual ) TCP . Check whether a TCP connection can be established or not. ( TCP Probe Manual ) Ping . Ping a host to see if it is reachable or not. ( Ping Probe Manual ) Shell . Run a Shell command and check the result. ( Shell Command Probe Manual ) SSH . Run a remote command via SSH and check the result. Support the bastion/jump server ( SSH Command Probe Manual ) TLS . Connect to a given port using TLS and (optionally) validate for revoked or expired certificates ( TLS Probe Manual ) Host . Run an SSH command on a remote host and check the CPU, Memory, and Disk usage. ( Host Load Probe Manual ) Client . The following native clients are supported. They all support mTLS and data checking. ( Native Client Probe Manual ) MySQL . Connect to a MySQL server and run the SHOW STATUS SQL. Redis . Connect to a Redis server and run the PING command. Memcache . Connect to a Memcache server and run the version command or validate a given key/value pair. MongoDB . Connect to a MongoDB server and perform a ping. Kafka . Connect to a Kafka server and perform a list of all topics. PostgreSQL . Connect to a PostgreSQL server and run SELECT 1 SQL. Zookeeper . Connect to a Zookeeper server and run get / command. 1.2 Notification EaseProbe supports notification delivery to the following: Slack . Using Slack Webhook for notification delivery Discord . Using Discord Webhook for notification delivery Telegram . Using Telegram Bot for notification delivery Teams . Support the Microsoft Teams notification delivery Email . Support email notification delivery to one or more email addresses AWS SNS . Support the AWS Simple Notification Service WeChat Work . Support Enterprise WeChat Work notification delivery DingTalk . Support the DingTalk notification delivery Lark . Support the Lark(Feishu) notification delivery SMS . SMS notification delivery with support for multiple SMS service providers Twilio Vonage(Nexmo) YunPian Log . Write the notification into a log file or Syslog. Shell . Run a shell command to deliver the notification (see example ) RingCentral . Using RingCentral Webhook for notification delivery Note : 1) The notification is Edge-Triggered Mode by default, if you want to config it as Level-Triggered Mode with different interval and max notification, please refer to the manual - Alerting Interval . 2) Windows platforms do not support syslog as notification method. Check the Notification Manual to see how to configure it. 1.3 Report & Metrics EaseProbe supports the following report and metrics: SLA Report Notify . EaseProbe would send the daily, weekly, or monthly SLA report using the defined notify: methods. SLA Live Report . The EaseProbe would listen on the 0.0.0.0:8181 port by default. 
By accessing this service you will be provided with a live SLA report either as HTML at http://localhost:8181/ or as JSON at http://localhost:8181/api/v1/sla SLA Data Persistence . The SLA data will be persisted in $CWD/data/data.yaml by default. You can configure this path by editing the settings section of your configuration file. For more information, please check the Global Setting Configuration Prometheus Metrics . The EaseProbe would listen on the 8181 port by default. By accessing this service you will be provided with Prometheus metrics at http://easeprobe:8181/metrics . The metrics are prefixed with easeprobe_ and are documented in Prometheus Metrics Exporter 2. Getting Started You can get started with EaseProbe, by any of the following methods: * Download the release for your platform from https://github.com/megaease/easeprobe/releases * Use the available EaseProbe docker image docker run -it megaease/easeprobe * Build easeprobe from sources 2.1 Build Compiler Go 1.21+ (Generics Programming Support), checking the Go Installation to see how to install Go on your platform. Use make to build and produce the easeprobe binary file. The executable is produced under the build/bin directory. shell $ make 2.2 Configure Read the User Manual for detailed instructions on how to configure all EaseProbe parameters. Create a configuration file (eg. $CWD/config.yaml ) using the configuration template at ./resources/config.yaml , which includes the complete list of configuration parameters. The following simple configuration example can be used to get started: YAML http: # http probes - name: EaseProbe Github url: https://github.com/megaease/easeprobe notify: log: - name: log file # local log file file: /var/log/easeprobe.log settings: probe: timeout: 30s # the time out for all probes interval: 1m # probe every minute for all probes You can check the EaseProbe JSON Schema section to use a JSON Scheme file to make your life easier when you edit the configuration file. 2.3 Run You can run the following command to start EaseProbe once built shell $ build/bin/easeprobe -f config.yaml * -f configuration file or URL or path for multiple files which will be automatically merged into one. Can also be achieved by setting the environment variable PROBE_CONFIG * -d dry run. Can also be achieved by setting the environment variable PROBE_DRY 3. Deployment EaseProbe can be deployed by Systemd, Docker, Docker-Compose, & Kubernetes. You can find the details in Deployment Guide 4. User Manual For detailed instructions and features please refer to the User Manual 5. Benchmark We have performed an extensive benchmark on EaseProbe. For the benchmark results please refer to - Benchmark Report 6. Contributing If you're interested in contributing to the project, please spare a moment to read our CONTRIBUTING Guide 7. Community Join Slack Workspace for requirements, issues, and development. MegaEase on Twitter 8. License EaseProbe is under the Apache 2.0 license. See the LICENSE file for details.;A simple, standalone, and lightweight tool that can do health/status checking, written in Go.;probe,golang,alerting,go,monitoring,notifications,prometheus
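To make the configuration step above more concrete, here is a hedged sketch that extends the minimal example with a TCP probe and a Slack notification. The `tcp` and `slack` field names follow the patterns used in the bundled `./resources/config.yaml` template and the linked manuals, and should be verified against them; the Redis host and webhook URL are placeholders:

```yaml
http:                     # http probes
  - name: EaseProbe Github
    url: https://github.com/megaease/easeprobe
tcp:                      # tcp probe: checks that a TCP connection can be established
  - name: Redis Port
    host: 127.0.0.1:6379
notify:
  log:
    - name: log file      # local log file
      file: /var/log/easeprobe.log
  slack:                  # field names assumed from the config template
    - name: team channel
      webhook: https://hooks.slack.com/services/XXX/YYY/ZZZ
settings:
  probe:
    timeout: 30s          # the time out for all probes
    interval: 1m          # probe every minute for all probes
```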
megaease/easeprobe
adrianhajdin/project_professional_portfolio;Micael - The Ultimate Web Development Portfolio 🌟 Become a top 1% Next.js 13 developer in only one course 🚀 Land your dream programming job in 6 months Introduction This is a code repository for the corresponding video tutorial. Do you know the best way to show your skills to employers or potential clients? Stand out from the crowd by presenting a well-digitalized flexible portfolio and get your dream job. Stay up to date with new projects New major projects coming soon, subscribe to the mailing list to stay up to date https://resource.jsmasterypro.com/newsletter;This is a code repository for the corresponding YouTube video. In this tutorial we are going to build and deploy a real time chat application. Covered topics: React.js, SCSS, Framer Motion, Sanity;reactjs,sanity-io,framer-motion
adrianhajdin/project_professional_portfolio
nextest-rs/nextest;Nextest Nextest is a next-generation test runner for Rust. For more information, check out the website . This repository contains the source code for: cargo-nextest : a new, faster Cargo test runner libraries used by cargo-nextest: nextest-runner : core logic for cargo-nextest nextest-metadata : library for calling cargo-nextest over the command line nextest-filtering : parser and evaluator for filtersets Minimum supported Rust version The minimum supported Rust version to run nextest with is Rust 1.41. Nextest is not tested against versions that are that old, but it should work with any version of Rust released in the past year. (Please report a bug if not!) The minimum supported Rust version to build nextest with is Rust 1.74. For building, at least the last 3 versions of stable Rust are supported at any given time. See the stability policy for more details. While a crate is in pre-release status (0.x.x), it may have its MSRV bumped in a patch release. Once a crate has reached 1.x, any MSRV bump will be accompanied by a new minor version. Contributing See the CONTRIBUTING file for how to help out. Looking to contribute to nextest and don't know where to get started? Check out the list of good first issues . License Nextest is Free Software. This project is available under the terms of either the Apache 2.0 license or the MIT license . Like all Free Software, nextest is a gift. Nextest is provided on an "AS IS" basis and there is NO WARRANTY attached to it. As a user, please treat the authors and contributors to this project as if you were treating the giver of a gift. In particular, you're asked to follow the code of conduct . This project is derived from diem-devtools . Upstream source code is used under the terms of the Apache 2.0 license and the MIT license . macOS support macOS is supported through the MacStadium Open Source Developer Program.;A next-generation test runner for Rust.;rust,testing,flaky-tests,cargo-plugin,cargo-subcommand,junit,nextest
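For orientation, typical invocations look like this (a minimal sketch; see the website for the full command reference):

```bash
# Install the cargo plugin from crates.io (pre-built binaries are also available)
cargo install cargo-nextest

# List the tests nextest discovers in the current workspace
cargo nextest list

# Build and run all tests
cargo nextest run
```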
nextest-rs/nextest
silverbulletmd/silverbullet;SilverBullet SilverBullet is a note-taking application optimized for people with a hacker mindset . We all take notes. There’s a million note taking applications out there. Literally . Wouldn’t it be nice to have one where your notes are more than plain text files? Where your notes essentially become a database that you can query; that you can build custom knowledge applications on top of? A hackable notebook , if you will? This is what SilverBullet aims to be. Absolutely. You use SilverBullet to quickly jot things down. It’s a notes app after all. However, this is just the beginning. Gradually, you start to annotate your notes using Frontmatter . You realize: “Hey, this note represents a person , let me tag it as such.” Before you know it, you’re turning your notes into Objects . Then you learn that in SilverBullet you can Live Query these objects. Your queries grow into reusable Templates written using a powerful Template Language . You find more and more uses for these templates, for instance to create new pages , or widgets automatically added to your pages. And then, before you know it — you realize you’re effectively building applications in your notes app. End-User Programming , y’all. It’s cool. You may have been told there is no such thing as a silver bullet . You were told wrong. Features SilverBullet... * Runs in any modern browser (including on mobile) as a PWA in two Client Modes (online and synced mode), where the synced mode enables 100% offline operation , keeping a copy of content in the browser and syncing back to the server when a network connection is available. * Provides an enjoyable markdown writing experience with a clean UI, rendering text using Live Preview, further reducing visual noise while still providing direct access to the underlying markdown syntax. * Supports wiki-style page linking using the [[page link]] syntax. Incoming links are indexed and appear as “Linked Mentions” at the bottom of the pages linked to, thereby providing bi-directional linking . * Optimized for keyboard-based operation : * Quickly navigate between pages using the page switcher (triggered with Cmd-k on Mac or Ctrl-k on Linux and Windows). * Run commands via their keyboard shortcuts or the command palette (triggered with Cmd-/ on Mac or Ctrl-/ on Linux and Windows). * Use Slash Commands to perform common text editing operations. * Provides a platform for end-user programming through its support for Objects, Live Queries and Live Templates. * Robust extension mechanism using plugs. * Self-hosted : you own your data. All content is stored as plain files in a folder on disk. Back up, sync, edit, publish, script with any additional tools you like. * SilverBullet is open source, MIT licensed software. Installing SilverBullet Check out the instructions . Developing SilverBullet SilverBullet is written in TypeScript and built on top of the excellent CodeMirror 6 editor component. Additional UI is built using Preact . ESBuild is used to build both the front-end and back-end bundles. The server backend runs as an HTTP server on Deno and is written using Oak . 
To prepare the initial web and plug build run: shell deno task build To symlink silverbullet to your locally checked-out version, run: shell deno task install You can then run the server in “watch mode” (automatically restarting when you change source files) with: shell deno task watch-server <PATH-TO-YOUR-SPACE> After this initial build, it's convenient to run three commands in parallel (in separate terminals): shell deno task watch-web deno task watch-server <PATH-TO-YOUR-SPACE> deno task watch-plugs To typecheck the entire codebase (recommended before submitting PR): shell deno task check To run unit tests: shell deno task test Feedback If you (hypothetically) find bugs or have feature requests, post them in our issue tracker . Would you like to contribute? Check out the code , and the issue tracker as well for ideas on what to work on. Also be sure to check out our Discourse community .;The hackable notebook;knowledge-management,markdown,personal-knowledge-management,note-taking,end-user-programming
silverbulletmd/silverbullet
alibaba/EasyNLP;EasyNLP is a Comprehensive and Easy-to-use NLP Toolkit [![website online](https://cdn.nlark.com/yuque/0/2020/svg/2480469/1600310258840-bfe6302e-d934-409d-917c-8eab455675c1.svg)](https://www.yuque.com/easyx/easynlp/iobg30) [![Open in PAI-DSW](https://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/UI/PAI-DSW.svg)](https://dsw-dev.data.aliyun.com/#/?fileUrl=https://raw.githubusercontent.com/alibaba/EasyTransfer/master/examples/easytransfer-quick_start.ipynb&fileName=easytransfer-quick_start.ipynb) [![open issues](http://isitmaintained.com/badge/open/alibaba/EasyNLP.svg)](https://github.com/alibaba/EasyNLP/issues) [![GitHub pull-requests](https://img.shields.io/github/issues-pr/alibaba/EasyNLP.svg)](https://GitHub.com/alibaba/EasyNLP/pull/) [![GitHub latest commit](https://badgen.net/github/last-commit/alibaba/EasyNLP)](https://GitHub.com/alibaba/EasyNLP/commit/) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) # EasyNLP [中文介绍](https://github.com/alibaba/EasyNLP/blob/master/README.cn.md) EasyNLP is an easy-to-use NLP development and application toolkit in PyTorch, first released inside Alibaba in 2021. It is built with scalable distributed training strategies and supports a comprehensive suite of NLP algorithms for various NLP applications. EasyNLP integrates knowledge distillation and few-shot learning for landing large pre-trained models, together with various popular multi-modality pre-trained models. It provides a unified framework of model training, inference, and deployment for real-world applications. It has powered more than 10 BUs and more than 20 business scenarios within the Alibaba group. It is seamlessly integrated with [Platform of AI (PAI)](https://www.aliyun.com/product/bigdata/product/learn) products, including PAI-DSW for development, PAI-DLC for cloud-native training, PAI-EAS for serving, and PAI-Designer for zero-code model training. # Main Features - **Easy to use and highly customizable:** In addition to providing easy-to-use and concise commands to call cutting-edge models, it also abstracts certain custom modules such as AppZoo and ModelZoo to make it easy to build NLP applications. It is equipped with the PAI PyTorch distributed training framework TorchAccelerator to speed up distributed training. - **Compatible with open-source libraries:** EasyNLP has APIs to support the training of models from Huggingface/Transformers with the PAI distributed framework. It also supports the pre-trained models in [EasyTransfer](https://github.com/alibaba/EasyTransfer) ModelZoo. - **Knowledge-injected pre-training:** The PAI team has done extensive research on knowledge-injected pre-training and has built a knowledge-injected model that won first place in the CCF knowledge pre-training competition. EasyNLP integrates these cutting-edge knowledge pre-trained models, including DKPLM and KGBERT. - **Landing large pre-trained models:** EasyNLP provides few-shot learning capabilities, allowing users to finetune large models with only a few samples to achieve good results. At the same time, it provides knowledge distillation functions to help quickly distill large models to a small and efficient model to facilitate online deployment. - **Multi-modality pre-trained models:** EasyNLP is not about NLP only. It also supports various popular multi-modality pre-trained models to support vision-language tasks that require visual knowledge. 
For example, it is equipped with CLIP-style models for text-image matching and DALLE-style models for text-to-image generation. # Technical Articles We have a series of technical articles on the functionalities of EasyNLP. - [BeautifulPrompt:PAI推出自研Prompt美化器,赋能AIGC一键出美图](https://zhuanlan.zhihu.com/p/636546340) - [PAI-Diffusion中文模型全面升级,海量高清艺术大图一键生成](https://zhuanlan.zhihu.com/p/632031092) - [EasyNLP集成K-Global Pointer算法,支持中文信息抽取](https://zhuanlan.zhihu.com/p/608560954) - [阿里云PAI-Diffusion功能再升级,全链路支持模型调优,平均推理速度提升75%以上](https://zhuanlan.zhihu.com/p/604483551) - [PAI-Diffusion模型来了!阿里云机器学习团队带您徜徉中文艺术海洋](https://zhuanlan.zhihu.com/p/590020134) - [模型精度再被提升,统一跨任务小样本学习算法 UPT 给出解法!](https://zhuanlan.zhihu.com/p/590611518) - [Span抽取和元学习能碰撞出怎样的新火花,小样本实体识别来告诉你!](https://zhuanlan.zhihu.com/p/590297824) - [算法 KECP 被顶会 EMNLP 收录,极少训练数据就能实现机器阅读理解](https://zhuanlan.zhihu.com/p/590024650) - [当大火的文图生成模型遇见知识图谱,AI画像趋近于真实世界](https://zhuanlan.zhihu.com/p/581870071) - [EasyNLP发布融合语言学和事实知识的中文预训练模型CKBERT](https://zhuanlan.zhihu.com/p/574853281) - [EasyNLP带你实现中英文机器阅读理解](https://zhuanlan.zhihu.com/p/568890245) - [跨模态学习能力再升级,EasyNLP电商文图检索效果刷新SOTA](https://zhuanlan.zhihu.com/p/568512230) - [EasyNLP玩转文本摘要(新闻标题)生成](https://zhuanlan.zhihu.com/p/566607127) - [中文稀疏GPT大模型落地 — 通往低成本&高性能多任务通用自然语言理解的关键里程碑](https://zhuanlan.zhihu.com/p/561320982) - [EasyNLP集成K-BERT算法,借助知识图谱实现更优Finetune](https://zhuanlan.zhihu.com/p/553816104) - [EasyNLP中文文图生成模型带你秒变艺术家](https://zhuanlan.zhihu.com/p/547063102) - [面向长代码序列的Transformer模型优化方法,提升长代码场景性能](https://zhuanlan.zhihu.com/p/540060701) - [EasyNLP带你玩转CLIP图文检索](https://zhuanlan.zhihu.com/p/528476134) - [阿里云机器学习PAI开源中文NLP算法框架EasyNLP,助力NLP大模型落地](https://zhuanlan.zhihu.com/p/505785399) - [预训练知识度量比赛夺冠!阿里云PAI发布知识预训练工具](https://zhuanlan.zhihu.com/p/449487792) # Installation You can setup from the source: ```bash $ git clone https://github.com/alibaba/EasyNLP.git $ cd EasyNLP $ python setup.py install ``` This repo is tested on Python 3.6, PyTorch >= 1.8. # Quick Start Now let's show how to use just a few lines of code to build a text classification model based on BERT. 
```python from easynlp.appzoo import ClassificationDataset from easynlp.appzoo import get_application_model, get_application_evaluator from easynlp.core import Trainer from easynlp.utils import initialize_easynlp, get_args from easynlp.utils.global_vars import parse_user_defined_parameters from easynlp.utils import get_pretrain_model_path initialize_easynlp() args = get_args() user_defined_parameters = parse_user_defined_parameters(args.user_defined_parameters) pretrained_model_name_or_path = get_pretrain_model_path(user_defined_parameters.get('pretrain_model_name_or_path', None)) train_dataset = ClassificationDataset( pretrained_model_name_or_path=pretrained_model_name_or_path, data_file=args.tables.split(",")[0], max_seq_length=args.sequence_length, input_schema=args.input_schema, first_sequence=args.first_sequence, second_sequence=args.second_sequence, label_name=args.label_name, label_enumerate_values=args.label_enumerate_values, user_defined_parameters=user_defined_parameters, is_training=True) valid_dataset = ClassificationDataset( pretrained_model_name_or_path=pretrained_model_name_or_path, data_file=args.tables.split(",")[-1], max_seq_length=args.sequence_length, input_schema=args.input_schema, first_sequence=args.first_sequence, second_sequence=args.second_sequence, label_name=args.label_name, label_enumerate_values=args.label_enumerate_values, user_defined_parameters=user_defined_parameters, is_training=False) model = get_application_model(app_name=args.app_name, pretrained_model_name_or_path=pretrained_model_name_or_path, num_labels=len(valid_dataset.label_enumerate_values), user_defined_parameters=user_defined_parameters) trainer = Trainer(model=model, train_dataset=train_dataset,user_defined_parameters=user_defined_parameters, evaluator=get_application_evaluator(app_name=args.app_name, valid_dataset=valid_dataset,user_defined_parameters=user_defined_parameters, eval_batch_size=args.micro_batch_size)) trainer.train() ``` The complete example can be found [here](https://github.com/alibaba/EasyNLP/blob/master/examples/appzoo_tutorials/sequence_classification/bert_classify/run_train_eval_predict_user_defined_local.sh). You can also use AppZoo Command Line Tools to quickly train an App model. Take text classification on SST-2 dataset as an example. First you can download the [train.tsv](http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/classification/train.tsv), and [dev.tsv](http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/classification/dev.tsv), then start training: ```bash $ easynlp \ --mode=train \ --worker_gpu=1 \ --tables=train.tsv,dev.tsv \ --input_schema=label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1 \ --first_sequence=sent1 \ --label_name=label \ --label_enumerate_values=0,1 \ --checkpoint_dir=./classification_model \ --epoch_num=1 \ --sequence_length=128 \ --app_name=text_classify \ --user_defined_parameters='pretrain_model_name_or_path=bert-small-uncased' ``` And then predict: ```bash $ easynlp \ --mode=predict \ --tables=dev.tsv \ --outputs=dev.pred.tsv \ --input_schema=label:str:1,sid1:str:1,sid2:str:1,sent1:str:1,sent2:str:1 \ --output_schema=predictions,probabilities,logits,output \ --append_cols=label \ --first_sequence=sent1 \ --checkpoint_path=./classification_model \ --app_name=text_classify ``` To learn more about the usage of AppZoo, please refer to our [documentation](https://www.yuque.com/easyx/easynlp/kkhkai). # ModelZoo EasyNLP currently provides the following models in ModelZoo: 1. 
PAI-BERT-zh (from Alibaba PAI): pre-trained BERT models with a large Chinese corpus. 2. DKPLM (from Alibaba PAI): released with the paper [DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding](https://arxiv.org/pdf/2112.01047.pdf) by Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He and Jun Huang. 3. KGBERT (from Alibaba Damo Academy & PAI): pre-train BERT models with knowledge graph embeddings injected. 4. BERT (from Google): released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://aclanthology.org/N19-1423.pdf) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 5. RoBERTa (from Facebook): released with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer and Veselin Stoyanov. 6. Chinese RoBERTa (from HFL): the Chinese version of RoBERTa. 7. MacBERT (from HFL): released with the paper [Revisiting Pre-trained Models for Chinese Natural Language Processing](https://aclanthology.org/2020.findings-emnlp.58.pdf) by Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang and Guoping Hu. 8. WOBERT (from ZhuiyiTechnology): the word-based BERT for the Chinese language. 9. FashionBERT (from Alibaba PAI & ICBU): in progress. 10. GEEP (from Alibaba PAI): in progress. 11. Mengzi (from Langboat): released with the paper [Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/pdf/2110.06696.pdf) by Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang and Ming Zhou. 12. Erlangshen (from IDEA): released from the [repo](https://github.com/IDEA-CCNL/Fengshenbang-LM). Please refer to this [readme](https://github.com/alibaba/EasyNLP/blob/master/easynlp/modelzoo/README.md) for the usage of these models in EasyNLP. Meanwhile, EasyNLP supports to load pretrained models from Huggingface/Transformers, please refer to [this tutorial](https://www.yuque.com/easyx/easynlp/qmq8wh) for details. # EasyNLP Goes Multi-modal EasyNLP also supports various popular multi-modality pre-trained models to support vision-language tasks that require visual knowledge. For example, it is equipped with CLIP-style models for text-image matching and DALLE-style models for text-to-image generation. 1. [Text-image Matching](https://github.com/alibaba/EasyNLP/blob/master/examples/clip_retrieval/run_clip_local.sh) 2. [Text-to-image Generation](https://github.com/alibaba/EasyNLP/blob/master/examples/text2image_generation/run_appzoo_cli_local.sh) 3. [Image-to-text Generation](https://github.com/alibaba/EasyNLP/blob/master/examples/image2text_generation/run_appzoo_cli_local_clip.sh) # Landing Large Pre-trained Models EasyNLP provide few-shot learning and knowledge distillation to help land large pre-trained models. 1. [PET](https://github.com/alibaba/EasyNLP/blob/master/examples/fewshot_learning/run_fewshot_pet.sh) (from LMU Munich and Sulzer GmbH): released with the paper [Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference](https://aclanthology.org/2021.eacl-main.20.pdf) by Timo Schick and Hinrich Schutze. We have made some slight modifications to make the algorithm suitable for the Chinese language. 2. 
[P-Tuning](https://github.com/alibaba/EasyNLP/blob/master/examples/fewshot_learning/run_fewshot_ptuning.sh) (from Tsinghua University, Beijing Academy of AI, MIT and Recurrent AI, Ltd.): released with the paper [GPT Understands, Too](https://arxiv.org/pdf/2103.10385.pdf) by Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang and Jie Tang. We have made some slight modifications to make the algorithm suitable for the Chinese language. 3. [CP-Tuning](https://github.com/alibaba/EasyNLP/blob/master/examples/fewshot_learning/run_fewshot_cpt.sh) (from Alibaba PAI): released with the paper [Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning](https://arxiv.org/pdf/2204.00166.pdf) by Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang and Jun Huang. 4. [Vanilla KD](https://github.com/alibaba/EasyNLP/tree/master/examples/knowledge_distillation) (from Alibaba PAI): distilling the logits of large BERT-style models to smaller ones. 5. [Meta KD](https://github.com/alibaba/EasyNLP/tree/master/examples/knowledge_distillation) (from Alibaba PAI): released with the paper [Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains](https://aclanthology.org/2021.acl-long.236.pdf) by Haojie Pan, Chengyu Wang, Minghui Qiu, Yichang Zhang, Yaliang Li and Jun Huang. 6. [Data Augmentation](https://github.com/alibaba/EasyNLP/tree/master/examples/knowledge_distillation/test_data_aug.sh) (from Alibaba PAI): augmenting the data based on the MLM head of pre-trained language models. # [CLUE Benchmark](https://www.cluebenchmarks.com/) EasyNLP provides [a simple toolkit](https://github.com/alibaba/EasyNLP/tree/master/benchmarks/clue) to benchmark CLUE datasets. You can use just this command to benchmark a CLUE dataset. ```bash # Format: bash run_clue.sh device_id train/predict dataset # e.g.: bash run_clue.sh 0 train csl ``` We've tested Chinese BERT and RoBERTa models on the datasets; the results on the dev sets are:

(1) bert-base-chinese:

| Task | AFQMC | CMNLI | CSL | IFLYTEK | OCNLI | TNEWS | WSC |
|------|--------|--------|--------|---------|--------|--------|--------|
| P | 72.17% | 75.74% | 80.93% | 60.22% | 78.31% | 57.52% | 75.33% |
| F1 | 52.96% | 75.74% | 81.71% | 60.22% | 78.30% | 57.52% | 80.82% |

(2) chinese-roberta-wwm-ext:

| Task | AFQMC | CMNLI | CSL | IFLYTEK | OCNLI | TNEWS | WSC |
|------|--------|--------|--------|---------|--------|--------|--------|
| P | 73.10% | 80.75% | 80.07% | 60.98% | 80.75% | 57.93% | 86.84% |
| F1 | 56.04% | 80.75% | 81.50% | 60.98% | 80.75% | 57.93% | 89.58% |

Here is the detailed [CLUE benchmark example](https://github.com/alibaba/EasyNLP/tree/master/benchmarks/clue). 
# Tutorials - [自定义文本分类示例](https://www.yuque.com/easyx/easynlp/ds35qn) - [QuickStart-文本分类](https://www.yuque.com/easyx/easynlp/rxne07) - [QuickStart-PAI DSW](https://www.yuque.com/easyx/easynlp/gvat1o) - [QuickStart-MaxCompute/ODPS数据](https://www.yuque.com/easyx/easynlp/vgwe7f) - [AppZoo-文本向量化](https://www.yuque.com/easyx/easynlp/ts4czl) - [AppZoo-文本分类/匹配](https://www.yuque.com/easyx/easynlp/vgbopy) - [AppZoo-序列标注](https://www.yuque.com/easyx/easynlp/qkwqmb) - [AppZoo-GEEP文本分类](https://www.yuque.com/easyx/easynlp/lepm0q) - [AppZoo-文本生成](https://www.yuque.com/easyx/easynlp/svde6x) - [基础预训练实践](https://www.yuque.com/easyx/easynlp/lm1a5t) - [知识预训练实践](https://www.yuque.com/easyx/easynlp/za7ywp) - [知识蒸馏实践](https://www.yuque.com/easyx/easynlp/ffu6p9) - [跨任务知识蒸馏实践](https://www.yuque.com/easyx/easynlp/izbfqt) - [小样本学习实践](https://www.yuque.com/easyx/easynlp/ochmnf) - [Rapidformer模型训练加速实践](https://www.yuque.com/easyx/easynlp/bi6nzc) - API docs: [http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/easynlp/easynlp_docs/html/index.html](http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/easynlp/easynlp_docs/html/index.html) # License This project is licensed under the [Apache License (Version 2.0)](https://github.com/alibaba/EasyNLP/blob/master/LICENSE). This toolkit also contains some code modified from other repos under other open-source licenses. See the [NOTICE](https://github.com/alibaba/EasyNLP/blob/master/NOTICE) file for more information. # ChangeLog - EasyNLP v0.0.3 was released in 01/04/2022. Please refer to [tag_v0.0.3](https://github.com/alibaba/EasyNLP/releases/tag/v0.0.3) for more details and history. # Contact Us Scan the following QR codes to join Dingtalk discussion group. The group discussions are mostly in Chinese, but English is also welcomed. # Reference - DKPLM: https://paperswithcode.com/paper/dkplm-decomposable-knowledge-enhanced-pre - MetaKD: https://paperswithcode.com/paper/meta-kd-a-meta-knowledge-distillation - CP-Tuning: https://paperswithcode.com/paper/making-pre-trained-language-models-end-to-end-1 - FashionBERT: https://paperswithcode.com/paper/fashionbert-text-and-image-matching-with We have [an arxiv paper](https://paperswithcode.com/paper/easynlp-a-comprehensive-and-easy-to-use) for you to cite for the EasyNLP library: ``` @article{easynlp, doi = {10.48550/ARXIV.2205.00258}, url = {https://arxiv.org/abs/2205.00258}, author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei}, title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing}, publisher = {arXiv}, year = {2022} } ```;EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit;transformers,bert,nlp,pretrained-models,deep-learning,pytorch,fewshot-learning,knowledge-distillation,knowledge-pretraining,text-image-retrieval
alibaba/EasyNLP
blueagler/DeepL-Crack;DeepL Crack (Chromium Extension) This project is for research only. You should delete it within 24 hours. All tokens come from the internet. The author is not responsible for any action you take in using it. Preview https://user-images.githubusercontent.com/61572188/221816073-67c11553-1387-43a1-803c-bbc0692333d7.mov Features Bypass the free translator's limit of 5,000 characters Remove edit restriction (available for docx, doc, ppt, pptx, pdf) Remove DeepL Pro Banner for docx, doc, ppt, pptx files Use DeepL Pro Account Cookies/DeepL Api Free Token to translate (this can help you bypass the frequency limitations of the web api) Unlock Formal/informal tone Clean cookie and randomize User Agent Limitations DeepL may ban your IP due to a high frequency of requests to the web api. There are two solutions: (1) use DeepL Pro Account Cookies or a DeepL Api Free Token to translate; (2) first use a proxy to change your IP, then click the clean cookie button. The file translation quota and the 5 MB maximum upload size are not cracked, due to server-side verification. Edge users should disable Advanced Security for deepl.com so that this extension can unlock PDFs. Installation tutorial Go to the release page and download the latest version (e.g. DeepL Crack v1.1.8.zip) Decompress this zip file Go to Chrome's plug-in settings page Enable developer mode Click to load the decompressed plug-in Select the decompressed folder How it works This extension is made with Preact and material-ui. It hijacks XMLHttpRequest. It uses WebAssembly to unlock PDF files. Support me: Telegram Channel & Group;Bypass 5,000 characters, Remove edit restriction, Use DeepL Pro Account Cookies/DeepL Api Free Token to translate, Unlock Formal/informal tone, Randomize fingerprint;chrome-extension,chrome-extensions,crack,deepl
blueagler/DeepL-Crack
themesberg/flowbite-svelte;FLOWBITE-SVELTE ⚠️ Flowbite Svelte is currently in early development and APIs and packages are likely to change quite often. Build websites even faster with Svelte components on top of Tailwind CSS Flowbite Svelte is an official Flowbite UI component library for Svelte. All interactivity is handled by Svelte. Visualize this repo's codebase Installation Getting started Introduction Types How to contribute License Documentation For full documentation, visit flowbite-svelte.com . Components Alert Badge Breadcrumb Button Button group Card Dropdown Forms List group Typography Modal Tabs Navbar Pagination Timeline Progress bar Table Toast Tooltip Datepicker Spinner Footer Accordion Sidebar Carousel Avatar Rating Input Field File Input Search Input Select Textarea Checkbox Radio Toggle Range Slider Floating Label Mega Menu Skeleton KBD (keyboard) Drawer (offcanvas) Popover Video Heading Paragraph Blockquote Image List Link Text Horizontal line (HR) Speed Dial Stepper(TBA) Indicators Bottom Navigation Sticky Banner Gallery (Masonry) Community If you need help or just want to discuss the library, join the community on GitHub: ⌨️ Discuss Flowbite on GitHub For casual chatting with others using the library: 💬 Join the Flowbite Discord Server Contribute Please read how to contribute if you'd like to be part of the Flowbite community of contributors. Changelog View the full changelog on this page. License Flowbite Svelte is open-source under the MIT License .;Official Svelte components built for Flowbite and Tailwind CSS;svelte,components,sveltejs,ui-components,accordion,tabs,timelines,tooltips,spinners,sidebar
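As a rough sketch of typical usage in a Tailwind-enabled Svelte project (the package name is the one on npm and `Button` is one of the components listed above, but check the documentation for the exact setup steps):

```bash
npm i -D flowbite-svelte flowbite
```

```svelte
<script>
  // Import a component from the library
  import { Button } from 'flowbite-svelte';
</script>

<Button>Click me</Button>
```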
themesberg/flowbite-svelte
microsoft/Codex-CLI;Codex CLI - Natural Language Command Line Interface This project uses GPT-3 Codex to convert natural language commands into commands in PowerShell, Z shell and Bash. The Command Line Interface (CLI) was the first major User Interface we used to interact with machines. It's incredibly powerful, you can do almost anything with a CLI, but it requires the user to express their intent extremely precisely. The user needs to know the language of the computer . With the advent of Large Language Models (LLMs), particularly those that have been trained on code, it's possible to interact with a CLI using Natural Language (NL). In effect, these models understand natural language and code well enough that they can translate from one to the other. This project aims to offer a cross-shell NL->Code experience to allow users to interact with their favorite CLI using NL. The user enters a command, like "what's my IP address", hits Ctrl + G and gets a suggestion for a command idiomatic to the shell they're using. The project uses the GPT-3 Codex model off-the-shelf, meaning the model has not been explicitly trained for the task. Instead we rely on a discipline called prompt engineering (see section below) to coax the right commands from Codex. Note: The model can still make mistakes! Don't run a command if you don't understand it. If you're not sure what a command does, hit Ctrl + C to cancel it. This project took technical inspiration from the zsh_codex project, extending its functionality to span multiple shells and to customize the prompts passed to the model (see prompt engineering section below). Statement of Purpose This repository aims to grow the understanding of using Codex in applications by providing an example implementation and references to support the Microsoft Build conference in 2022 . It is not intended to be a released product. Therefore, this repository is not for discussing the OpenAI API or requesting new features. Requirements Python 3.7.1+ [Windows]: Python is added to PATH. An OpenAI account OpenAI API Key . OpenAI Organization Id . If you have multiple organizations, please update your default organization to the one that has access to codex engines before getting the organization Id. OpenAI Engine Id . It provides access to a model. For example, code-davinci-002 or code-cushman-001 . See here for checking available engines. Installation Please follow the installation instructions for PowerShell, bash or zsh from here . Usage Once configured for your shell of preference, you can use the Codex CLI by writing a comment (starting with # ) into your shell, and then hitting Ctrl + G . The Codex CLI supports two primary modes: single-turn and multi-turn. By default, multi-turn mode is off. It can be toggled on and off using the # start multi-turn and # stop multi-turn commands. If the multi-turn mode is on, the Codex CLI will "remember" past interactions with the model, allowing you to refer back to previous actions and entities. If, for example, you asked the Codex CLI to change your time zone to mountain, and then said "change it back to pacific", the model would have the context from the previous interaction to know that "it" is the user's timezone: ```powershell # change my timezone to mountain tzutil /s "Mountain Standard Time" # change it back to pacific tzutil /s "Pacific Standard Time" ``` The tool creates a current_context.txt file that keeps track of past interactions, and passes them to the model on each subsequent command. 
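To illustrate, after the multi-turn exchange above, current_context.txt would conceptually hold the accumulated comment/command pairs, roughly like this (illustrative contents, not an exact file format):

```powershell
# change my timezone to mountain
tzutil /s "Mountain Standard Time"
# change it back to pacific
tzutil /s "Pacific Standard Time"
```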
When multi-turn mode is off, this tool will not keep track of interaction history. There are tradeoffs to using multi-turn mode - though it enables compelling context resolution, it also increases overhead. If, for example, the model produces the wrong script for the job, the user will want to remove that from the context, otherwise future conversation turns will be more likely to produce the wrong script again. With multi-turn mode off, the model will behave completely deterministically - the same command will always produce the same output. Any time the model seems to output consistently incorrect commands, you can use the # stop multi-turn command to stop the model from remembering past interactions and load in your default context. Alternatively, the # default context command does the same while preserving the multi-turn mode as on. Commands

| Command | Description |
|--|--|
| start multi-turn | Starts a multi-turn experience |
| stop multi-turn | Stops a multi-turn experience and loads default context |
| load context <filename> | Loads the context file from the contexts folder |
| default context | Loads default shell context |
| view context | Opens the context file in a text editor |
| save context <filename> | Saves the context file to the contexts folder; if no name is specified, uses the current date-time |
| show config | Shows the current configuration of your interaction with the model |
| set <config-key> <config-value> | Sets the configuration of your interaction with the model |

Feel free to improve your experience by changing the token limit, engine id and temperature using the set command. For example, # set engine cushman-codex , # set temperature 0.5 , # set max_tokens 50 . Prompt Engineering and Context Files This project uses a discipline called prompt engineering to coax GPT-3 Codex to generate commands from natural language. Specifically, we pass the model a series of examples of NL->Commands, to give it a sense of the kind of code it should be writing, and also to nudge it towards generating commands idiomatic to the shell you're using. These examples live in the contexts directory. See a snippet from the PowerShell context below: ```powershell # what's the weather in New York? (Invoke-WebRequest -uri "wttr.in/NewYork").Content # make a git ignore with node modules and src in it "node_modules
src" | Out-File .gitignore # open it in notepad notepad .gitignore ``` Note that this project models natural language commands as comments, and provides examples of the kind of PowerShell scripts we expect the model to write. These examples include single line completions, multi-line completions, and multi-turn completions (the "open it in notepad" example refers to the .gitignore file generated on the previous turn). When a user enters a new command (say "what's my IP address"), we simply append that command onto the context (as a comment) and ask Codex to generate the code that should follow it. Having seen the examples above, Codex will know that it should write a short PowerShell script that satisfies the comment. Building your own Contexts This project comes pre-loaded with contexts for each shell, along with some bonus contexts with other capabilities. Beyond these, you can build your own contexts to coax other behaviors out of the model. 
For example, if you want the Codex CLI to produce Kubernetes scripts, you can create a new context with examples of commands and the kubectl script the model might produce: ```bash # make a K8s cluster IP called my-cs running on 5678:8080 kubectl create service clusterip my-cs --tcp=5678:8080 ``` Add your context to the contexts folder and run load context <filename> to load it. You can also change the default context to your context file inside src\prompt_file.py . Note that Codex will often produce correct scripts without any examples. Having been trained on a large corpus of code, it frequently knows how to produce specific commands. That said, building your own contexts helps coax the specific kind of script you're looking for - whether it's long or short, whether it declares variables or not, whether it refers back to previous commands, etc. You can also provide examples of your own CLI commands and scripts, to show Codex other tools it should consider using. One important thing to consider is that if you add a new context, keep the multi-turn mode on to avoid our automatic defaulting (which was added to keep faulty contexts from breaking your experience). As an example, we have added a cognitive services context which uses the cognitive services API to provide text-to-speech type responses. Troubleshooting Use DEBUG_MODE to use a terminal input instead of stdin and debug the code. This is useful when adding new commands and understanding why the tool is unresponsive. Sometimes the openai package will throw errors that aren't caught by the tool; you can add a catch block at the end of codex_query.py for that exception and print a custom error message. FAQ What OpenAI engines are available to me? You might have access to different OpenAI engines per OpenAI organization. To check what engines are available to you, you can query the List engines API. See the following commands: Shell curl https://api.openai.com/v1/engines \ -H 'Authorization: Bearer YOUR_API_KEY' \ -H 'OpenAI-Organization: YOUR_ORG_ID' PowerShell PowerShell v5 (the default one that comes with Windows) powershell (Invoke-WebRequest -Uri https://api.openai.com/v1/engines -Headers @{"Authorization" = "Bearer YOUR_API_KEY"; "OpenAI-Organization" = "YOUR_ORG_ID"}).Content PowerShell v7 powershell (Invoke-WebRequest -Uri https://api.openai.com/v1/engines -Authentication Bearer -Token (ConvertTo-SecureString "YOUR_API_KEY" -AsPlainText -Force) -Headers @{"OpenAI-Organization" = "YOUR_ORG_ID"}).Content Can I run the sample on Azure? The sample code can currently be used with Codex on OpenAI’s API. In the coming months, the sample will be updated so you can also use it with the Azure OpenAI Service .;CLI tool that uses Codex to turn natural language commands into their Bash/ZShell/PowerShell equivalents;[]
microsoft/Codex-CLI
pytorch/rl;TorchRL Documentation | TensorDict | Features | Examples, tutorials and demos | Citation | Installation | Asking a question | Contributing TorchRL is an open-source Reinforcement Learning (RL) library for PyTorch. It provides pytorch- and python-first, low- and high-level abstractions for RL that are intended to be efficient, modular, documented and properly tested. The code is aimed at supporting research in RL. Most of it is written in python in a highly modular way, such that researchers can easily swap components, transform them or write new ones with little effort. This repo attempts to align with the existing pytorch ecosystem libraries in that it has a dataset pillar ( torchrl/envs ), transforms, models, data utilities (e.g. collectors and containers), etc. TorchRL aims at having as few dependencies as possible (python standard library, numpy and pytorch). Common environment libraries (e.g. OpenAI gym) are only optional. On the low-level end, torchrl comes with a set of highly re-usable functionals for cost functions, returns and data processing. TorchRL aims at (1) high modularity and (2) good runtime performance. Read the full paper for a more curated description of the library. Getting started Check our Getting Started tutorials to quickly ramp up with the basic features of the library! Documentation and knowledge base The TorchRL documentation can be found here . It contains tutorials and the API reference. TorchRL also provides a RL knowledge base to help you debug your code, or simply learn the basics of RL. Check it out here . We have some introductory videos for you to get to know the library better, check them out: TorchRL intro at PyTorch day 2022 PyTorch 2.0 Q&A: TorchRL Writing simplified and portable RL codebase with TensorDict RL algorithms are very heterogeneous, and it can be hard to recycle a codebase across settings (e.g. from online to offline, from state-based to pixel-based learning). TorchRL solves this problem through TensorDict , a convenient data structure (1) that can be used to streamline one's RL codebase. With this tool, one can write a complete PPO training script in less than 100 lines of code ! 
Code ```python import torch from tensordict.nn import TensorDictModule from tensordict.nn.distributions import NormalParamExtractor from torch import nn from torchrl.collectors import SyncDataCollector from torchrl.data.replay_buffers import TensorDictReplayBuffer, \ LazyTensorStorage, SamplerWithoutReplacement from torchrl.envs.libs.gym import GymEnv from torchrl.modules import ProbabilisticActor, ValueOperator, TanhNormal from torchrl.objectives import ClipPPOLoss from torchrl.objectives.value import GAE env = GymEnv("Pendulum-v1") model = TensorDictModule( nn.Sequential( nn.Linear(3, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 2), NormalParamExtractor() ), in_keys=["observation"], out_keys=["loc", "scale"] ) critic = ValueOperator( nn.Sequential( nn.Linear(3, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 1), ), in_keys=["observation"], ) actor = ProbabilisticActor( model, in_keys=["loc", "scale"], distribution_class=TanhNormal, distribution_kwargs={"min": -1.0, "max": 1.0}, return_log_prob=True ) buffer = TensorDictReplayBuffer( LazyTensorStorage(1000), SamplerWithoutReplacement() ) collector = SyncDataCollector( env, actor, frames_per_batch=1000, total_frames=1_000_000 ) loss_fn = ClipPPOLoss(actor, critic, gamma=0.99) optim = torch.optim.Adam(loss_fn.parameters(), lr=2e-4) adv_fn = GAE(value_network=critic, gamma=0.99, lmbda=0.95, average_gae=True) for data in collector: # collect data for epoch in range(10): adv_fn(data) # compute advantage buffer.extend(data.view(-1)) for i in range(20): # consume data sample = buffer.sample(50) # mini-batch loss_vals = loss_fn(sample) loss_val = sum( value for key, value in loss_vals.items() if key.startswith("loss") ) loss_val.backward() optim.step() optim.zero_grad() print(f"avg reward: {data['next', 'reward'].mean().item(): 4.4f}") ``` Here is an example of how the environment API relies on tensordict to carry data from one function to another during a rollout execution: TensorDict makes it easy to re-use pieces of code across environments, models and algorithms. Code For instance, here's how to code a rollout in TorchRL: ```diff - obs, done = env.reset() + tensordict = env.reset() policy = SafeModule( model, in_keys=["observation_pixels", "observation_vector"], out_keys=["action"], ) out = [] for i in range(n_steps): - action, log_prob = policy(obs) - next_obs, reward, done, info = env.step(action) - out.append((obs, next_obs, action, log_prob, reward, done)) - obs = next_obs + tensordict = policy(tensordict) + tensordict = env.step(tensordict) + out.append(tensordict) + tensordict = step_mdp(tensordict) # renames next_observation_* keys to observation_* - obs, next_obs, action, log_prob, reward, done = [torch.stack(vals, 0) for vals in zip(*out)] + out = torch.stack(out, 0) # TensorDict supports multiple tensor operations ``` Using this, TorchRL abstracts away the input / output signatures of the modules, env, collectors, replay buffers and losses of the library, allowing all primitives to be easily recycled across settings. 
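To make the data-carrier idea above concrete, here is a minimal sketch (assuming gym's Pendulum-v1 is installed) that runs a random rollout and reads nested keys out of the resulting TensorDict; the ("next", "reward") key convention is the same one used in the snippets throughout this README:

```python
from torchrl.envs.libs.gym import GymEnv

env = GymEnv("Pendulum-v1")
# A rollout with no policy samples random actions and returns a single
# TensorDict whose batch size is the number of steps taken.
rollout = env.rollout(max_steps=10)
print(rollout.batch_size)                # torch.Size([10])
print(rollout["action"].shape)           # one action per step
print(rollout["next", "reward"].mean())  # nested key access
```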
Code Here's another example of an off-policy training loop in TorchRL (assuming that a data collector, a replay buffer, a loss and an optimizer have been instantiated): ```diff - for i, (obs, next_obs, action, hidden_state, reward, done) in enumerate(collector): + for i, tensordict in enumerate(collector): - replay_buffer.add((obs, next_obs, action, log_prob, reward, done)) + replay_buffer.add(tensordict) for j in range(num_optim_steps): - obs, next_obs, action, hidden_state, reward, done = replay_buffer.sample(batch_size) - loss = loss_fn(obs, next_obs, action, hidden_state, reward, done) + tensordict = replay_buffer.sample(batch_size) + loss = loss_fn(tensordict) loss.backward() optim.step() optim.zero_grad() ``` This training loop can be re-used across algorithms as it makes a minimal number of assumptions about the structure of the data. TensorDict supports multiple tensor operations on its device and shape (the shape of a TensorDict, or its batch size, is the N leading dimensions shared by all its contained tensors): Code ```python # stack and cat tensordict = torch.stack(list_of_tensordicts, 0) tensordict = torch.cat(list_of_tensordicts, 0) # reshape tensordict = tensordict.view(-1) tensordict = tensordict.permute(0, 2, 1) tensordict = tensordict.unsqueeze(-1) tensordict = tensordict.squeeze(-1) # indexing tensordict = tensordict[:2] tensordict[:, 2] = sub_tensordict # device and memory location tensordict.cuda() tensordict.to("cuda:1") tensordict.share_memory_() ``` TensorDict comes with a dedicated tensordict.nn module that contains everything you might need to write your model with it. And it is functorch and torch.compile compatible! Code ```diff transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12) + td_module = SafeModule(transformer_model, in_keys=["src", "tgt"], out_keys=["out"]) src = torch.rand((10, 32, 512)) tgt = torch.rand((20, 32, 512)) + tensordict = TensorDict({"src": src, "tgt": tgt}, batch_size=[20, 32]) - out = transformer_model(src, tgt) + td_module(tensordict) + out = tensordict["out"] ``` The `TensorDictSequential` class makes it possible to branch sequences of `nn.Module` instances in a highly modular way. For instance, here is an implementation of a transformer using the encoder and decoder blocks: ```python encoder_module = TransformerEncoder(...) encoder = TensorDictSequential(encoder_module, in_keys=["src", "src_mask"], out_keys=["memory"]) decoder_module = TransformerDecoder(...) decoder = TensorDictModule(decoder_module, in_keys=["tgt", "memory"], out_keys=["output"]) transformer = TensorDictSequential(encoder, decoder) assert transformer.in_keys == ["src", "src_mask", "tgt"] assert transformer.out_keys == ["memory", "output"] ``` `TensorDictSequential` lets you isolate subgraphs by querying a set of desired input / output keys: ```python transformer.select_subsequence(out_keys=["memory"]) # returns the encoder transformer.select_subsequence(in_keys=["tgt", "memory"]) # returns the decoder ``` Check TensorDict tutorials to learn more! Features A common interface for environments which supports common libraries (OpenAI gym, deepmind control lab, etc.) (1) and state-less execution (e.g. Model-based environments). The batched environments containers allow parallel execution (2) . A common PyTorch-first tensor-specification class is also provided. TorchRL's environments API is simple but stringent and specific. Check the documentation and tutorial to learn more!
Code ```python env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True) env_parallel = ParallelEnv(4, env_make) # creates 4 envs in parallel tensordict = env_parallel.rollout(max_steps=20, policy=None) # random rollout (no policy given) assert tensordict.shape == [4, 20] # 4 envs, 20 steps rollout env_parallel.action_spec.is_in(tensordict["action"]) # spec check returns True ``` multiprocess and distributed data collectors (2) that work synchronously or asynchronously. Through the use of TensorDict, TorchRL's training loops are made very similar to regular training loops in supervised learning (although the "dataloader" -- read data collector -- is modified on-the-fly): Code ```python env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True) collector = MultiaSyncDataCollector( [env_make, env_make], policy=policy, devices=["cuda:0", "cuda:0"], total_frames=10000, frames_per_batch=50, ... ) for i, tensordict_data in enumerate(collector): loss = loss_module(tensordict_data) loss.backward() optim.step() optim.zero_grad() collector.update_policy_weights_() ``` Check our distributed collector examples to learn more about ultra-fast data collection with TorchRL. efficient (2) and generic (1) replay buffers with modularized storage: Code ```python storage = LazyMemmapStorage( # memory-mapped (physical) storage cfg.buffer_size, scratch_dir="/tmp/" ) buffer = TensorDictPrioritizedReplayBuffer( alpha=0.7, beta=0.5, collate_fn=lambda x: x, pin_memory=device != torch.device("cpu"), prefetch=10, # multi-threaded sampling storage=storage ) ``` Replay buffers are also offered as wrappers around common datasets for offline RL : Code ```python from torchrl.data.replay_buffers import SamplerWithoutReplacement from torchrl.data.datasets.d4rl import D4RLExperienceReplay data = D4RLExperienceReplay( "maze2d-open-v0", split_trajs=True, batch_size=128, sampler=SamplerWithoutReplacement(drop_last=True), ) for sample in data: # or alternatively sample = data.sample() fun(sample) ``` cross-library environment transforms (1) , executed on device and in a vectorized fashion (2) , which process and prepare the data coming out of the environments to be used by the agent: Code ```python env_make = lambda: GymEnv("Pendulum-v1", from_pixels=True) env_base = ParallelEnv(4, env_make, device="cuda:0") # creates 4 envs in parallel env = TransformedEnv( env_base, Compose( ToTensorImage(), ObservationNorm(loc=0.5, scale=1.0)), # executes the transforms once and on device ) tensordict = env.reset() assert tensordict.device == torch.device("cuda:0") ``` Other transforms include: reward scaling (`RewardScaling`), shape operations (concatenation of tensors, unsqueezing etc.), concatenation of successive operations (`CatFrames`), resizing (`Resize`) and many more. Unlike other libraries, the transforms are stacked as a list (and not wrapped in each other), which makes it easy to add and remove them at will: ```python env.insert_transform(0, NoopResetEnv()) # inserts the NoopResetEnv transform at the index 0 ``` Nevertheless, transforms can access and execute operations on the parent environment: ```python transform = env.transform[1] # gathers the second transform of the list parent_env = transform.parent # returns the base environment of the second transform, i.e. the base env + the first transform ``` various tools for distributed learning (e.g. memory mapped tensors ) (2) ; various architectures and models (e.g. 
actor-critic ) (1) : Code ```python # create an nn.Module common_module = ConvNet( bias_last_layer=True, depth=None, num_cells=[32, 64, 64], kernel_sizes=[8, 4, 3], strides=[4, 2, 1], ) # Wrap it in a SafeModule, indicating what key to read in and where to # write out the output common_module = SafeModule( common_module, in_keys=["pixels"], out_keys=["hidden"], ) # Wrap the policy module in NormalParamsWrapper, such that the output # tensor is split in loc and scale, and scale is mapped onto a positive space policy_module = SafeModule( NormalParamsWrapper( MLP(num_cells=[64, 64], out_features=32, activation=nn.ELU) ), in_keys=["hidden"], out_keys=["loc", "scale"], ) # Use a SafeProbabilisticTensorDictSequential to combine the SafeModule with a # SafeProbabilisticModule, indicating how to build the # torch.distribution.Distribution object and what to do with it policy_module = SafeProbabilisticTensorDictSequential( # stochastic policy policy_module, SafeProbabilisticModule( in_keys=["loc", "scale"], out_keys="action", distribution_class=TanhNormal, ), ) value_module = MLP( num_cells=[64, 64], out_features=1, activation=nn.ELU, ) # Wrap the policy and value function in a common module actor_value = ActorValueOperator(common_module, policy_module, value_module) # standalone policy from this standalone_policy = actor_value.get_policy_operator() ``` exploration wrappers and modules to easily swap between exploration and exploitation (1) : Code ```python policy_explore = EGreedyWrapper(policy) with set_exploration_type(ExplorationType.RANDOM): tensordict = policy_explore(tensordict) # will use eps-greedy with set_exploration_type(ExplorationType.MODE): tensordict = policy_explore(tensordict) # will not use eps-greedy ``` A series of efficient loss modules and highly vectorized functional return and advantage computation. Code ### Loss modules ```python from torchrl.objectives import DQNLoss loss_module = DQNLoss(value_network=value_network, gamma=0.99) tensordict = replay_buffer.sample(batch_size) loss = loss_module(tensordict) ``` ### Advantage computation ```python from torchrl.objectives.value.functional import vec_td_lambda_return_estimate advantage = vec_td_lambda_return_estimate(gamma, lmbda, next_state_value, reward, done, terminated) ``` a generic trainer class (1) that executes the aforementioned training loop. Through a hooking mechanism, it also supports any logging or data transformation operation at any given time. various recipes to build models that correspond to the environment being deployed. If you feel a feature is missing from the library, please submit an issue! If you would like to contribute to new features, check our call for contributions and our contribution page. Examples, tutorials and demos A series of examples are provided for illustrative purposes: - DQN - DDPG - IQL - CQL - TD3 - A2C - PPO - SAC - REDQ - Dreamer - Decision Transformers - RLHF and many more to come! Check the examples directory for more details about handling the various configuration settings. We also provide tutorials and demos that give a sense of what the library can do.
Citation If you're using TorchRL, please refer to this BibTeX entry to cite this work: @misc{bou2023torchrl, title={TorchRL: A data-driven decision-making library for PyTorch}, author={Albert Bou and Matteo Bettini and Sebastian Dittert and Vikash Kumar and Shagun Sodhani and Xiaomeng Yang and Gianni De Fabritiis and Vincent Moens}, year={2023}, eprint={2306.00577}, archivePrefix={arXiv}, primaryClass={cs.LG} } Installation Create a conda environment where the packages will be installed. conda create --name torch_rl python=3.9 conda activate torch_rl PyTorch Depending on how you plan to use functorch, you may want to install the latest (nightly) PyTorch release or the latest stable version of PyTorch. See here for a detailed list of commands, including pip3 or other special installation instructions. Torchrl You can install the latest stable release by using pip3 install torchrl This should work on Linux, Windows 10 and macOS (Intel or Apple Silicon chips). On certain Windows machines (Windows 11), one should install the library locally (see below). The nightly build can be installed via pip install torchrl-nightly which we currently only ship for Linux and macOS (Intel) machines. Importantly, the nightly builds require the nightly builds of PyTorch too. To install extra dependencies, call pip3 install "torchrl[atari,dm_control,gym_continuous,rendering,tests,utils,marl,checkpointing]" or a subset of these. One may also desire to install the library locally. Three main reasons can motivate this: - the nightly/stable release isn't available for one's platform (e.g., Windows 11, nightlies for Apple Silicon, etc.); - contributing to the code; - install torchrl with a previous version of PyTorch (note that this should also be doable via a regular install followed by a downgrade to a previous pytorch version -- but the C++ binaries will not be available.) To install the library locally, start by cloning the repo: git clone https://github.com/pytorch/rl Go to the directory where you have cloned the torchrl repo and install it (after installing ninja ) cd /path/to/torchrl/ pip install ninja -U python setup.py develop (unfortunately, pip install -e . will not work). On M1 machines, this should work out-of-the-box with the nightly build of PyTorch. If building this artifact on macOS M1 doesn't work correctly, or if the message (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')) appears at runtime, then try ARCHFLAGS="-arch arm64" python setup.py develop To run a quick sanity check, leave that directory (e.g. by executing cd ~/ ) and try to import the library. python -c "import torchrl" This should not return any warning or error. Optional dependencies The following libraries can be installed depending on the usage one wants to make of torchrl: ``` # diverse pip3 install tqdm tensorboard "hydra-core>=1.1" hydra-submitit-launcher # rendering pip3 install moviepy # deepmind control suite pip3 install dm_control # gym, atari games pip3 install "gym[atari]" "gym[accept-rom-license]" pygame # tests pip3 install pytest pyyaml pytest-instafail # tensorboard pip3 install tensorboard # wandb pip3 install wandb ``` Troubleshooting If a ModuleNotFoundError: No module named ‘torchrl._torchrl error occurs (or a warning indicating that the C++ binaries could not be loaded), it means that the C++ extensions were not installed or not found. One common reason might be that you are trying to import torchrl from within the git repo location.
The following code snippet should return an error if torchrl has not been installed in develop mode: cd ~/path/to/rl/repo python -c 'from torchrl.envs.libs.gym import GymEnv' If this is the case, consider executing torchrl from another location. If you're not importing torchrl from within its repo location, it could be caused by a problem during the local installation. Check the log after the python setup.py develop . One common cause is a g++/C++ version discrepancy and/or a problem with the ninja library. If the problem persists, feel free to open an issue on the topic in the repo, we'll do our best to help! On macOS , we recommend installing Xcode first. With Apple Silicon M1 chips, make sure you are using the arm64-built python (e.g. here ). Running the following lines of code wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py python collect_env.py should display OS: macOS *** (arm64) and not OS: macOS **** (x86_64) Versioning issues can cause error messages of the type undefined symbol and such. For these, refer to the versioning issues document for a complete explanation and proposed workarounds. Asking a question If you spot a bug in the library, please raise an issue in this repo. If you have a more generic question regarding RL in PyTorch, post it on the PyTorch forum . Contributing Internal collaborations to torchrl are welcome! Feel free to fork, submit issues and PRs. You can checkout the detailed contribution guide here . As mentioned above, a list of open contributions can be found here . Contributors are recommended to install pre-commit hooks (using pre-commit install ). pre-commit will check for linting related issues when the code is committed locally. You can disable the check by appending -n to your commit command: git commit -m <commit message> -n Disclaimer This library is released as a PyTorch beta feature. BC-breaking changes are likely to happen but they will be introduced with a deprecation warranty after a few release cycles. License TorchRL is licensed under the MIT License. See LICENSE for details.;A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.;ai,control,decision-making,distributed-computing,machine-learning,marl,model-based-reinforcement-learning,multi-agent-reinforcement-learning,pytorch,reinforcement-learning
pytorch/rl
instill-ai/instill-core;instill-core Explore 🔮 Instill Core , a full-stack AI infrastructure tool for data, model and pipeline orchestration, designed to streamline every aspect of building versatile AI-first applications. Accessing 🔮 Instill Core is straightforward, whether you opt for ☁️ Instill Cloud or self-hosting via the instill-core repository. Please consult the documentation for more details. 💧 Instill VDP - Pipeline orchestration for unstructured data ETL **💧 Instill VDP**, also known as **VDP (Versatile Data Pipeline)**, serves as a powerful pipeline orchestration tool tailored to address unstructured data ETL challenges. ⚗️ Instill Model - Model orchestration for MLOps/LLMOps **⚗️ Instill Model** is an advanced MLOps/LLMOps platform focused on seamless model serving, fine-tuning, and monitoring for persistent performance in unstructured data ETL. 💾 Instill Artifact (coming soon) - Data orchestration for unified unstructured data representation **💾 Instill Artifact** orchestrates unstructured data to transform documents (e.g., HTML, PDF, CSV, PPTX, DOC), images (e.g., JPG, PNG, TIFF), audio (e.g., WAV, MP3) and video (e.g., MP4, MOV) into a unified AI-ready format. It ensures your data is clean, curated, and ready for extracting insights and building your Knowledge Base. ⚙️ Instill Component - An extensible integration framework for 💧 Instill VDP **⚙️ Instill Component** enhances **💧 Instill VDP**, unlocking limitless possibilities. Please visit the [component](https://github.com/instill-ai/component) repository for details. ☁️ Instill Cloud Not quite into self-hosting? We've got you covered with ☁️ Instill Cloud . It's a fully managed public cloud service, providing you with access to all the features of 🔮 Instill Core without the burden of infrastructure management. All you need to do is sign up with one click to start building your AI-first applications. Prerequisites macOS or Linux - 🔮 Instill Core works on macOS or Linux, but does not support Windows yet. Docker and Docker Compose - 🔮 Instill Core requires Docker Engine v25 or later and Docker Compose v2 or later to run all services locally. Please install the latest stable Docker and Docker Compose . Quick Start Use stable release version Execute the following commands to pull pre-built images with all the dependencies to launch: ```bash $ git clone -b v0.34.0-beta https://github.com/instill-ai/instill-core.git && cd instill-core # Launch all services $ make all ``` [!NOTE] We have restructured our project repositories. If you need to access 🔮 Instill Core projects up to version v0.13.0-beta , please refer to the instill-ai/deprecated-core repository. Use the latest version for local development Execute the following commands to build images with all the dependencies to launch: ```bash $ git clone https://github.com/instill-ai/instill-core.git && cd instill-core # Launch all services $ make latest PROFILE=all ``` [!IMPORTANT] Code in the main branch tracks under-development progress towards the next release and may not work as expected. If you are looking for a stable alpha version, please use the latest release . 🚀 That's it! Once all the services are up with health status, the UI is ready to go at http://localhost:3000. Please find the default login credentials in the documentation . To shut down all running services: $ make down Explore the documentation to discover all available deployment options.
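Once `make all` has started the stack, a quick way to confirm the deployment is up is to poll the console endpoint until it responds. This is a generic sketch using only the `requests` library; the port (3000) comes from the Quick Start above, while the retry counts and timeouts are arbitrary assumptions, not part of Instill Core itself:

```python
import time
import requests

CONSOLE_URL = "http://localhost:3000"  # default console port from the Quick Start

def wait_for_console(url: str, attempts: int = 30, delay_s: float = 2.0) -> bool:
    """Poll the console URL until it answers, or give up after `attempts` tries."""
    for _ in range(attempts):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass  # stack is likely still starting up
        time.sleep(delay_s)
    return False

if __name__ == "__main__":
    print("console is up" if wait_for_console(CONSOLE_URL) else "console did not come up")
```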
Client Access To access 🔮 Instill Core and ☁️ Instill Cloud , you have a few options: - 📺 Instill Console - ⌨️ Instill CLI - 📦 Instill SDK : - Python SDK - TypeScript SDK - Stay tuned, as more SDKs are on the way! Documentation For comprehensive guidance and resources, explore our documentation website and delve into our API reference . Contributing We welcome contributions from the community! Whether you're a developer, designer, writer, or user, there are multiple ways to contribute: Issue Guidelines We foster a friendly and inclusive environment for issue reporting. Before creating an issue, check if it already exists. Use clear language and provide reproducible steps for bugs. Accurately tag the issue (bug, improvement, question, etc.). Code Contributions Please refer to the Contributing Guidelines for more details. Your code-driven innovations are more than welcome! Community We are committed to providing a respectful and welcoming atmosphere for all contributors. Please review our Code of Conduct to understand our standards. Efficient Triage Process We have implemented a streamlined Issues Triage Process aimed at swiftly categorizing new issues and pull requests (PRs), allowing us to take prompt and appropriate actions. Engage in Dynamic Discussions and Seek Support Head over to our Discussions for engaging conversations: General : Chat about anything related to our projects. Polls : Participate in community polls. Q&A : Seek help or ask questions; our community members and maintainers are here to assist. Show and Tell : Showcase projects you've created using our tools. Alternatively, you can also join our vibrant Discord community and direct your queries to the #ask-for-help channel. We're dedicated to supporting you every step of the way. Contributors ✨ Thanks goes to these wonderful people ( emoji key ): Vibhor Bhatt Miguel Ortiz Sajda Kabir Henry Chen Hari Bhandari Shiva Gaire Zubeen ShihChun-H Ikko Eltociear Ashimine Farookh Zaheer Siddiqui Brian Gallagher hairyputtar David Marx Deniz Parlak Po-Yu Chen Po Chun Chiu Sarthak HR Wu phelan Chang, Hui-Tang Xiaofei Du Ping-Lin Chang Tony Wang Pratik date Juan Vallés Naman Anand totuslink Praharsh Jain Utsav Paul CaCaBlocker Rafael Melo This project follows the all-contributors specification. Contributions of any kind welcome! License See the LICENSE file for licensing information.;🔮 Instill Core is a full-stack AI infrastructure tool for data, model and pipeline orchestration, designed to streamline every aspect of building versatile AI-first applications;unstructured-data,low-code,developer-tools,etl,no-code,open-source,hacktoberfest,ai,api,cli
instill-ai/instill-core
StudioCherno/Walnut;Walnut Walnut is a simple application framework built with Dear ImGui and designed to be used with Vulkan - basically this means you can seamlessly blend real-time Vulkan rendering with a great UI library to build desktop applications. The plan is to expand Walnut to include common utilities to make immediate-mode desktop apps and simple Vulkan applications. Currently supports Windows - with macOS and Linux support planned. Setup scripts support Visual Studio 2022 by default. Forest Launcher - an application made with Walnut Requirements Visual Studio 2022 (not strictly required, however included setup scripts only support this) Vulkan SDK (preferably a recent version) Getting Started Once you've cloned, run scripts/Setup.bat to generate Visual Studio 2022 solution/project files. Once you've opened the solution, you can run the WalnutApp project to see a basic example (code in WalnutApp.cpp ). I recommend modifying that WalnutApp project to create your own application, as everything should be set up and ready to go. 3rd party libraries Dear ImGui GLFW stb_image GLM (included for convenience) Additional Walnut uses the Roboto font ( Apache License, Version 2.0 );Walnut is a simple application framework for Vulkan and Dear ImGui apps;[]
StudioCherno/Walnut
virginiakm1988/ML2022-Spring;機器學習 Machine Learning 2022 Spring by National Taiwan University This repository contains the code and slides for the 15 homeworks of Machine Learning, taught by 李宏毅 (Hung-yi Lee). All the information about this course can be found on the course website . 15 Homeworks HW1 : Regression [Video] [Code] [Slide] HW2 : Classification [Video] [Code] [Slide] HW3 : CNN [Video] [Code] [Slide] HW4 : Self-Attention [Video] [Code] [Slide] HW5 : Transformer [Code] [Slide] HW6 : GAN [Code] [Slide] HW7 : BERT [Code] [Slide] HW8 : Autoencoder [Code] [Slide] HW9 : Explainable AI [Code] [Slide] HW10 : Adversarial Attack [Code] [Slide] HW11 : Adaptation [Code] [Slide] HW12 : Reinforcement Learning [Code] [Slide] HW13 : Network Compression [Code] [Slide] HW14 : Life-Long Learning [Code] [Slide] HW15 : Meta Learning [Code] [Slide] Lecture Videos The lecture videos are available on Hung-yi Lee's YouTube channel .;**Official** 李宏毅 (Hung-yi Lee) 機器學習 Machine Learning 2022 Spring;machine-learning,deep-learning
virginiakm1988/ML2022-Spring
google-research/big_vision;Big Vision This codebase is designed for training large-scale vision models using Cloud TPU VMs or GPU machines. It is based on Jax / Flax libraries, and uses tf.data and TensorFlow Datasets for scalable and reproducible input pipelines. The open-sourcing of this codebase has two main purposes: 1. Publishing the code of research projects developed in this codebase (see a list below). 2. Providing a strong starting point for running large-scale vision experiments on GPU machines and Google Cloud TPUs, which should scale seamlessly and out-of-the-box from a single TPU core to a distributed setup with up to 2048 TPU cores. big_vision aims to support research projects at Google. We are unlikely to work on feature requests or accept external contributions, unless they were pre-approved (ask in an issue first). For a well-supported transfer-only codebase, see also vision_transformer . Note that big_vision is a quite dynamic codebase and, while we intend to keep the core code fully-functional at all times, we cannot guarantee timely updates of the project-specific code that lives in the .../proj/... subfolders. However, we provide a table with last known commits where specific projects were known to work. The following research projects were originally conducted in the big_vision codebase: Architecture research An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale , by Alexey Dosovitskiy , Lucas Beyer , Alexander Kolesnikov , Dirk Weissenborn , Xiaohua Zhai , Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby Scaling Vision Transformers , by Xiaohua Zhai , Alexander Kolesnikov , Neil Houlsby, and Lucas Beyer*\ Resources: config . How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers , by Andreas Steiner , Alexander Kolesnikov , Xiaohua Zhai , Ross Wightman, Jakob Uszkoreit, and Lucas Beyer MLP-Mixer: An all-MLP Architecture for Vision , by Ilya Tolstikhin , Neil Houlsby , Alexander Kolesnikov , Lucas Beyer , Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy\ Resources: config . Better plain ViT baselines for ImageNet-1k , by Lucas Beyer, Xiaohua Zhai, Alexander Kolesnikov\ Resources: config UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes , by Alexander Kolesnikov^ , André Susano Pinto^ , Lucas Beyer , Xiaohua Zhai , Jeremiah Harmsen , Neil Houlsby \ Resources: readme , configs , colabs . FlexiViT: One Model for All Patch Sizes , by Lucas Beyer , Pavel Izmailov , Alexander Kolesnikov , Mathilde Caron , Simon Kornblith , Xiaohua Zhai , Matthias Minderer , Michael Tschannen , Ibrahim Alabdulmohsin , Filip Pavetic \ Resources: readme , configs . Dual PatchNorm , by Manoj Kumar, Mostafa Dehghani, Neil Houlsby. Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design , by Ibrahim Alabdulmohsin , Xiaohua Zhai , Alexander Kolesnikov, Lucas Beyer*. (partial) Scaling Vision Transformers to 22 Billion Parameters , by Mostafa Dehghani , Josip Djolonga , Basil Mustafa , Piotr Padlewski , Jonathan Heek , wow many middle authors , Neil Houlsby . (partial) Finite Scalar Quantization: VQ-VAE Made Simple , by Fabian Mentzer, David Minnen, Eirikur Agustsson, Michael Tschannen. GIVT: Generative Infinite-Vocabulary Transformers , by Michael Tschannen, Cian Eastwood, Fabian Mentzer\ Resources: readme , config , colab .
Multimodal research LiT: Zero-Shot Transfer with Locked-image Text Tuning , by Xiaohua Zhai , Xiao Wang , Basil Mustafa , Andreas Steiner , Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer*\ Resources: trainer , config , colab . Image-and-Language Understanding from Pixels Only , by Michael Tschannen, Basil Mustafa, Neil Houlsby\ Resources: readme , config , colab . Sigmoid Loss for Language Image Pre-Training , by Xiaohua Zhai , Basil Mustafa, Alexander Kolesnikov, Lucas Beyer \ Resources: colab and models , code TODO. A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision , by Lucas Beyer , Bo Wan , Gagan Madan , Filip Pavetic , Andreas Steiner , Alexander Kolesnikov, André Susano Pinto, Emanuele Bugliarello, Xiao Wang, Qihang Yu, Liang-Chieh Chen, Xiaohua Zhai . Image Captioners Are Scalable Vision Learners Too , by Michael Tschannen , Manoj Kumar , Andreas Steiner , Xiaohua Zhai, Neil Houlsby, Lucas Beyer .\ Resources: readme , config , model . Three Towers: Flexible Contrastive Learning with Pretrained Image Models , by Jannik Kossen, Mark Collier, Basil Mustafa, Xiao Wang, Xiaohua Zhai, Lucas Beyer, Andreas Steiner, Jesse Berent, Rodolphe Jenatton, Efi Kokiopoulou. (partial) PaLI: A Jointly-Scaled Multilingual Language-Image Model , by Xi Chen, Xiao Wang, Soravit Changpinyo, wow so many middle authors , Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut. (partial) PaLI-3 Vision Language Models: Smaller, Faster, Stronger , by Xi Chen, Xiao Wang, Lucas Beyer, Alexander Kolesnikov, Jialin Wu, Paul Voigtlaender, Basil Mustafa, Sebastian Goodman, Ibrahim Alabdulmohsin, Piotr Padlewski, Daniel Salz, Xi Xiong, Daniel Vlasic, Filip Pavetic, Keran Rong, Tianli Yu, Daniel Keysers, Xiaohua Zhai, Radu Soricut. Training Knowledge distillation: A good teacher is patient and consistent , by Lucas Beyer , Xiaohua Zhai , Amélie Royer , Larisa Markeeva , Rohan Anil, and Alexander Kolesnikov*\ Resources: README , trainer , colab . Sharpness-Aware Minimization for Efficiently Improving Generalization , by Pierre Foret, Ariel Kleiner, Hossein Mobahi, Behnam Neyshabur Surrogate Gap Minimization Improves Sharpness-Aware Training , by Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha Dvornek, Sekhar Tatikonda, James Duncan and Ting Liu \ Resources: trainer , config reproduced results Tuning computer vision models with task rewards , by André Susano Pinto , Alexander Kolesnikov , Yuge Shi, Lucas Beyer, Xiaohua Zhai. (partial) VeLO: Training Versatile Learned Optimizers by Scaling Up by Luke Metz, James Harrison, C. Daniel Freeman, Amil Merchant, Lucas Beyer, James Bradbury, Naman Agrawal, Ben Poole, Igor Mordatch, Adam Roberts, Jascha Sohl-Dickstein. Misc Are we done with ImageNet? , by Lucas Beyer , Olivier J. Hénaff , Alexander Kolesnikov , Xiaohua Zhai , and Aäron van den Oord* Codebase high-level organization and principles in a nutshell The main entry point is a trainer module, which typically does all the boilerplate related to creating a model and an optimizer, loading the data, checkpointing and training/evaluating the model inside a loop. We provide the canonical trainer train.py in the root folder. Normally, individual projects within big_vision fork and customize this trainer. All models, evaluators and preprocessing operations live in the corresponding subdirectories and can often be reused between different projects. 
We encourage compatible APIs within these directories to facilitate reusability, but it is not strictly enforced, as individual projects may need to introduce their custom APIs. We have a powerful configuration system, with the configs living in the configs/ directory. Custom trainers and modules can directly extend/modify the configuration options. Project-specific code resides in the .../proj/... namespace. It is not always possible to keep project-specific code in sync with the core big_vision libraries. Below we provide the last known commit for each project where the project code is expected to work. Training jobs are robust to interruptions and will resume seamlessly from the last saved checkpoint (assuming a user provides the correct --workdir path). Each configuration file contains a comment at the top with a COMMAND snippet to run it, and some hint of expected runtime and results. See below for more details, but generally speaking, running on a GPU machine involves calling python -m COMMAND, while running on TPUs, including multi-host, involves gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=all --command "bash big_vision/run_tpu.sh COMMAND" See instructions below for more details on how to run big_vision code on a GPU machine or Google Cloud TPU. By default we write checkpoints and logfiles. The logfiles are a list of JSON objects, and we provide a short and straightforward example colab to read and display the logs and checkpoints . Current and future contents The first release contains the core part of pre-training, transferring, and evaluating classification models at scale on Cloud TPU VMs. We have since added the following key features and projects: - Contrastive Image-Text model training and evaluation as in LiT and CLIP. - Patient and consistent distillation. - Scaling ViT. - MLP-Mixer. - UViM. Features and projects we plan to release in the near future, in no particular order: - ImageNet-21k in TFDS. - Loading misc public models used in our publications (NFNet, MoCov3, DINO). - Memory-efficient Polyak-averaging implementation. - Advanced JAX compute and memory profiling. We are using internal tools for this, but may eventually add support for the publicly available ones. We will continue releasing code of our future publications developed within big_vision here. Non-content The following exist in the internal variant of this codebase, and there is no plan for their release: - Regular regression tests for both quality and speed. They rely heavily on internal infrastructure. - Advanced logging, monitoring, and plotting of experiments. This also relies heavily on internal infrastructure. However, we are open to ideas on this and may add some in the future, especially if implemented in a self-contained manner. - Not yet published, ongoing research projects. GPU Setup We first discuss how to set up and run big_vision on a (local) GPU machine, and then discuss the setup for Cloud TPUs. Note that the data preparation step for the (local) GPU setup can be largely reused for the Cloud TPU setup. While the instructions skip this for brevity, we highly recommend using a virtual environment when installing python dependencies.
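To give a flavor of the configuration system mentioned above, big_vision configs are plain Python files that expose a get_config() function returning an ml_collections.ConfigDict. The sketch below is a minimal, hypothetical config: the field names are illustrative assumptions, not a drop-in replacement for any config shipped in configs/:

```python
# my_config.py -- a hypothetical minimal config in the style of big_vision
import ml_collections as mlc

def get_config():
    config = mlc.ConfigDict()
    config.seed = 0
    config.total_epochs = 90
    config.batch_size = 1024

    # Input pipeline section (field names are illustrative).
    config.input = mlc.ConfigDict()
    config.input.data = dict(name="imagenet2012", split="train")

    # Model and optimizer sections; trainers look these up by name.
    config.model_name = "vit"
    config.model = mlc.ConfigDict(dict(variant="S/16"))
    config.optax_name = "scale_by_adam"
    config.lr = 1e-3
    return config
```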
Setting up python packages The first step is to checkout big_vision and install relevant python dependencies: git clone https://github.com/google-research/big_vision cd big_vision/ pip3 install --upgrade pip pip3 install -r big_vision/requirements.txt The latest version of the jax library can be fetched as pip3 install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html You may need a different jax package, depending on the CUDA and cuDNN libraries installed on your machine. Please consult the official jax documentation for more information. Preparing tfds data For unified and reproducible access to standard datasets we opted to use the tensorflow_datasets ( tfds ) library. It requires each dataset to be downloaded, preprocessed and then stored on a hard drive (or, if you use Google Cloud, preferably in a GCP bucket). Many datasets can be downloaded and preprocessed automatically when used for the first time. Nevertheless, we intentionally disable this feature and recommend doing the dataset preparation step separately, ahead of the first run. This makes debugging easier if problems arise, and some datasets, like imagenet2012 , require manually downloaded data. Most of the datasets, e.g. cifar100 , oxford_iiit_pet or imagenet_v2 can be fully automatically downloaded and prepared by running cd big_vision/ python3 -m big_vision.tools.download_tfds_datasets cifar100 oxford_iiit_pet imagenet_v2 A full list of datasets is available at this link . Some datasets, like imagenet2012 or imagenet2012_real , require the data to be downloaded manually and placed into $TFDS_DATA_DIR/downloads/manual/ , which defaults to ~/tensorflow_datasets/downloads/manual/ . For example, for imagenet2012 and imagenet2012_real one needs to place the official ILSVRC2012_img_train.tar and ILSVRC2012_img_val.tar files in that directory and then run python3 -m big_vision.tools.download_tfds_datasets imagenet2012 imagenet2012_real (which may take ~1 hour). If you use Google Cloud, and TPUs in particular, you can then upload the preprocessed data (stored in $TFDS_DATA_DIR ) to a Google Cloud bucket and use the bucket on any of your (TPU) virtual machines to access the data. Running on a GPU machine Finally, after installing all python dependencies and preparing tfds data, the user can run the job using a config of their choice, e.g. to train a ViT-S/16 model on ImageNet data, one should run the following command: python3 -m big_vision.train --config big_vision/configs/vit_s16_i1k.py --workdir workdirs/`date '+%m-%d_%H%M'` or to train MLP-Mixer-B/16, run (note the gpu8 config param that reduces the default batch size and epoch count): python3 -m big_vision.train --config big_vision/configs/mlp_mixer_i1k.py:gpu8 --workdir workdirs/`date '+%m-%d_%H%M'` Cloud TPU VM setup Create TPU VMs To create a single machine with 8 TPU cores, follow the following Cloud TPU JAX document: https://cloud.google.com/tpu/docs/run-calculation-jax To support large-scale vision research, more cores with multiple hosts are recommended. Below we provide instructions on how to do it. First, create some useful variables, which will be reused: export NAME=<a name of the TPU deployment, e.g. my-tpu-machine> export ZONE=<GCP geographical zone, e.g. europe-west4-a> export GS_BUCKET_NAME=<Name of the storage bucket, e.g. my_bucket> The following command line will create TPU VMs with 32 cores, 4 hosts.
gcloud compute tpus tpu-vm create $NAME --zone $ZONE --accelerator-type v3-32 --version tpu-ubuntu2204-base Install big_vision on TPU VMs Fetch the big_vision repository, copy it to all TPU VM hosts, and install dependencies. git clone https://github.com/google-research/big_vision gcloud compute tpus tpu-vm scp --recurse big_vision/big_vision $NAME: --zone=$ZONE --worker=all gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=all --command "bash big_vision/run_tpu.sh" Download and prepare TFDS datasets We recommend preparing tfds data locally as described above and then uploading the data to a Google Cloud bucket. However, if you prefer, the datasets which do not require manual downloads can be prepared automatically using a TPU machine as described below. Note that TPU machines have only 100 GB of disk space, and multihost TPU slices do not allow for external disks to be attached in a write mode, so the instructions below may not work for preparing large datasets. As yet another alternative, we provide instructions on how to prepare tfds data on a CPU-only GCP machine . Specifically, the seven TFDS datasets used during evaluations will be generated under ~/tensorflow_datasets on the TPU machine with this command: gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=0 --command "TFDS_DATA_DIR=~/tensorflow_datasets bash big_vision/run_tpu.sh big_vision.tools.download_tfds_datasets cifar10 cifar100 oxford_iiit_pet oxford_flowers102 cars196 dtd uc_merced" You can then copy the datasets to a GS bucket, to make them accessible to all TPU workers. gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=0 --command "rm -r ~/tensorflow_datasets/downloads && gsutil cp -r ~/tensorflow_datasets gs://$GS_BUCKET_NAME" If you want to integrate other public or custom datasets, e.g. imagenet2012, please follow the official guideline . Pre-trained models For the full list of pre-trained models check out the load function defined in the same module as the model code. And for an example config on how to use these models, see configs/transfer.py . Run the transfer script on TPU VMs The following command line fine-tunes a pre-trained vit-i21k-augreg-b/32 model on the cifar10 dataset. gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=all --command "TFDS_DATA_DIR=gs://$GS_BUCKET_NAME/tensorflow_datasets bash big_vision/run_tpu.sh big_vision.train --config big_vision/configs/transfer.py:model=vit-i21k-augreg-b/32,dataset=cifar10,crop=resmall_crop --workdir gs://$GS_BUCKET_NAME/big_vision/workdir/`date '+%m-%d_%H%M'` --config.lr=0.03" Run the train script on TPU VMs To train your own big_vision models on a large dataset, e.g. imagenet2012 ( prepare the TFDS dataset ), run the following command line. gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=all --command "TFDS_DATA_DIR=gs://$GS_BUCKET_NAME/tensorflow_datasets bash big_vision/run_tpu.sh big_vision.train --config big_vision/configs/bit_i1k.py --workdir gs://$GS_BUCKET_NAME/big_vision/workdir/`date '+%m-%d_%H%M'`" FSDP training. big_vision supports flexible parameter and model sharding strategies. Currently, we support a popular FSDP sharding via a simple config change, see this config example .
For example, to run FSDP finetuning of a pretrained ViT-L model, run the following command (possibly adjusting the batch size depending on your hardware): gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=all --command "TFDS_DATA_DIR=gs://$GS_BUCKET_NAME/tensorflow_datasets bash big_vision/run_tpu.sh big_vision.train --config big_vision/configs/transfer.py:model=vit-i21k-augreg-l/16,dataset=oxford_iiit_pet,crop=resmall_crop,fsdp=True,batch_size=256 --workdir gs://$GS_BUCKET_NAME/big_vision/workdir/`date '+%m-%d_%H%M'` --config.lr=0.03" Image-text training with SigLIP. A minimal example that uses public coco captions data: gcloud compute tpus tpu-vm ssh $NAME --zone=$ZONE --worker=all --command "TFDS_DATA_DIR=gs://$GS_BUCKET_NAME/tensorflow_datasets bash big_vision/run_tpu.sh big_vision.trainers.proj.image_text.siglip --config big_vision/configs/proj/image_text/siglip_lit_coco.py --workdir gs://$GS_BUCKET_NAME/big_vision/`date '+%Y-%m-%d_%H%M'`" Sometimes useful gcloud commands Destroy the TPU machines: gcloud compute tpus tpu-vm delete $NAME --zone $ZONE Remove all big_vision-related folders on all hosts: gcloud compute tpus tpu-vm ssh $NAME --zone $ZONE --worker=all --command 'rm -rf ~/big_vision ~/bv_venv' Preparing tfds data on a standalone GCP CPU machine. First create a new machine and a disk (feel free to adjust the exact machine type and disk settings/capacity): export NAME_CPU_HOST=<A name of a CPU-only machine> export NAME_DISK=<A name of a disk> gcloud compute instances create $NAME_CPU_HOST --machine-type c3-standard-22 --zone $ZONE --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud gcloud compute disks create $NAME_DISK --size 1000GB --zone $ZONE --type pd-balanced Now attach the disk to the newly created machine: gcloud compute instances attach-disk $NAME_CPU_HOST --disk $NAME_DISK --zone $ZONE Next, ssh to the machine gcloud compute ssh $NAME_CPU_HOST --zone=$ZONE and follow the instructions to format and mount the disk . Let's assume it was mounted to /mnt/disks/tfds . Almost there, now clone and set up big_vision : gcloud compute ssh $NAME_CPU_HOST --zone=$ZONE --command "git clone https://github.com/google-research/big_vision.git && cd big_vision && sh big_vision/run_tpu.sh" Finally, prepare the dataset (e.g. coco_captions ) using the utility script and copy the result to your google cloud bucket: gcloud compute ssh $NAME_CPU_HOST --zone=$ZONE --command "cd big_vision && TFDS_DATA_DIR=/mnt/disks/tfds/tensorflow_datasets bash big_vision/run_tpu.sh big_vision.tools.download_tfds_datasets coco_captions" gcloud compute ssh $NAME_CPU_HOST --zone=$ZONE --command "rm -rf /mnt/disks/tfds/tensorflow_datasets/downloads && gsutil cp -r /mnt/disks/tfds/tensorflow_datasets gs://$GS_BUCKET_NAME" ViT baseline We provide a well-tuned ViT-S/16 baseline in the config file named vit_s16_i1k.py . It achieves 76.5% accuracy on the ImageNet validation split in 90 epochs of training, being a strong and simple starting point for research on the ViT models. Please see our arXiv note for more details, and if this baseline happens to be useful for your research, consider citing @article{vit_baseline, url = {https://arxiv.org/abs/2205.01580}, author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander}, title = {Better plain ViT baselines for ImageNet-1k}, journal={arXiv preprint arXiv:2205.01580}, year = {2022}, } Project specific commits The last known commit where the specific project code is expected to work. The core code and configs are expected to work at head.
| Project | Commit | |------------|-----------------------------------------------------------------------------------------------| | UViM | https://github.com/google-research/big_vision/commit/21bd6ebe253f070f584d8b777ad76f4abce51bef | | image_text | https://github.com/google-research/big_vision/commit/8921d5141504390a8a4f7b2dacb3b3c042237290 | | distill | https://github.com/google-research/big_vision/commit/2f3f493af048dbfd97555ff6060f31a0e686f17f | | GSAM | WIP | | CLIPPO | https://github.com/google-research/big_vision/commit/fd2d3bd2efc9d89ea959f16cd2f58ae8a495cd44 | | CapPa | https://github.com/google-research/big_vision/commit/7ace659452dee4b68547575352c022a2eef587a5 | | GIVT | https://github.com/google-research/big_vision/commit/0cb70881dd33b3343b769347dc19793c4994b8cb | Citing the codebase If you found this codebase useful for your research, please consider using the following BibTEX to cite it: @misc{big_vision, author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander}, title = {Big Vision}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/google-research/big_vision}} } Disclaimer This is not an official Google Product. License Unless explicitly noted otherwise, everything in the big_vision codebase (including models and colabs) is released under the Apache2 license. See the LICENSE file for the full license text.;Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.;[]
google-research/big_vision
LayerZero-Labs/LayerZero;LayerZero - an Omnichain Interoperability Protocol This repository contains the smart contracts for LayerZero Endpoints. For developers looking to build on top of LayerZero please refer to the docs Overview LayerZero is an Omnichain Interoperability Protocol designed for lightweight message passing across chains. LayerZero provides authentic and guaranteed message delivery with configurable trustlessness. The protocol is implemented as a set of gas-efficient, non-upgradable smart contracts. Development Interfaces add this to your package.json "@layerzerolabs/contracts": "latest", Setup copy .env.example to .env and fill in the variables yarn install Testing yarn test Single Test File yarn test test/Endpoint.test.js Gas Usage yarn test:gas Coverage yarn test:coverage Lint yarn lint only lints .js/.ts files Deployment Deploy networks are generated based on tags. Hardhat yarn dev spins up a local environment and deploys the contracts Development hardhat --network rinkeby-testnet deploy hardhat --network rinkeby-sandbox deploy Production hardhat --network ethereum deploy Adding a new network Update the hardhat config with the network; refer to STAGING_MAP for the supported staging environments Update endpoints.json with the network. Make sure that the key in endpoints.json matches the network name in hardhat Example: One LayerZero Network ``` //hardhat.config.ts ethereum: { url: `{rpc address}`, chainId: 1, //chainlist id } //endpoints.json "production": { ... "ethereum": { "id": 1 //layerzero chain id } } ``` Example: More than one LayerZero Network on the same chain (using expandNetwork) ``` //hardhat.config.ts ...expandNetwork({ ropsten: { url: `{rpc address}`, chainId: 3, //chainlist id } }, ["testnet", "sandbox"]), //endpoints.json "development": { ... "ropsten": { "id": 4 //layerzero chain id } } ``` Acknowledgments Thank you to the core development team for building the LayerZero Endpoints: Ryan Zarick, Isaac Zhang, Caleb Banister, Carmen Cheng and T. Riley Schwarz LICENSING The primary license for LayerZero is the Business Source License 1.1 (BUSL-1.1). See LICENSE .;An Omnichain Interoperability Protocol;[]
LayerZero-Labs/LayerZero
dlt-hub/dlt;data load tool (dlt) — the open-source Python library for data loading Be it a Google Colab notebook, AWS Lambda function, an Airflow DAG, your local laptop, or a GPT-4 assisted development playground, dlt can be dropped in anywhere. 🚀 Join our thriving community of likeminded developers and build the future together! Installation dlt supports Python 3.8+. sh pip install dlt More options: Install via Conda or Pixi Quick Start Load chess game data from the chess.com API and save it in DuckDB: ```python import dlt from dlt.sources.helpers import requests # Create a dlt pipeline that will load chess player data to the DuckDB destination pipeline = dlt.pipeline( pipeline_name='chess_pipeline', destination='duckdb', dataset_name='player_data' ) # Grab some player data from the Chess.com API data = [] for player in ['magnuscarlsen', 'rpragchess']: response = requests.get(f'https://api.chess.com/pub/player/{player}') response.raise_for_status() data.append(response.json()) # Extract, normalize, and load the data pipeline.run(data, table_name='player') ``` Try it out in our Colab Demo Features Automatic Schema: Data structure inspection and schema creation for the destination. Data Normalization: Consistent and verified data before loading. Seamless Integration: Colab, AWS Lambda, Airflow, and local environments. Scalable: Adapts to growing data needs in production. Easy Maintenance: Clear data pipeline structure for updates. Rapid Exploration: Quickly explore and gain insights from new data sources. Versatile Usage: Suitable for ad-hoc exploration to advanced loading infrastructures. Start in Seconds with CLI: Powerful CLI for managing, deploying and inspecting local pipelines. Incremental Loading: Load only new or changed data and avoid loading old records again. Open Source: Free and Apache 2.0 Licensed. Ready to use Sources and Destinations Explore ready to use sources (e.g. Google Sheets) in the Verified Sources docs and supported destinations (e.g. DuckDB) in the Destinations docs . Documentation For detailed usage and configuration, please refer to the official documentation . Examples You can find examples for various use cases in the examples folder. Adding as dependency dlt follows semantic versioning with the MAJOR.MINOR.PATCH pattern. Currently, we are using pre-release versioning with the major version being 0. A minor version change means breaking changes; a patch version change means new features that should be backward compatible; any suffix change, e.g., post10 -> post11 , is considered a patch. We suggest that you allow only patch level updates automatically: * Using the Compatible Release Specifier . For example dlt~=0.3.10 allows only versions >=0.3.10 and <0.4 * Poetry caret requirements . For example ^0.3.10 allows only versions >=0.3.10 to <0.4 Get Involved The dlt project is quickly growing, and we're excited to have you join our community! Here's how you can get involved: Connect with the Community : Join other dlt users and contributors on our Slack Report issues and suggest features : Please use the GitHub Issues to report bugs or suggest new features. Before creating a new issue, make sure to search the tracker for possible duplicates and add a comment if you find one. Track progress of our work and our plans : Please check out our public Github project Contribute Verified Sources : Contribute your custom sources to the dlt-hub/verified-sources to help other folks in handling their data tasks.
Contribute code : Check out our contributing guidelines for information on how to make a pull request. Improve documentation : Help us enhance the dlt documentation. License dlt is released under the Apache 2.0 License .;data load tool (dlt) is an open source Python library that makes data loading easy 🛠️ ;data,python,data-engineering,data-lake,data-loading,data-warehouse,elt,extract,load,transform
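The feature list above mentions incremental loading; here is a brief sketch of what that looks like with dlt's resource decorator. It reuses the same chess.com endpoints as the Quick Start, with the month hard-coded for brevity; the `dlt.sources.incremental` helper persists the cursor value between runs so already-loaded rows can be skipped on the next execution:

```python
import dlt
from dlt.sources.helpers import requests

@dlt.resource(write_disposition="append")
def player_games(updated=dlt.sources.incremental("end_time", initial_value=0)):
    # Fetch one month of games; `updated.last_value` is remembered across runs.
    url = "https://api.chess.com/pub/player/magnuscarlsen/games/2022/11"
    response = requests.get(url)
    response.raise_for_status()
    for game in response.json()["games"]:
        if game["end_time"] > updated.last_value:
            yield game

pipeline = dlt.pipeline(
    pipeline_name="chess_incremental", destination="duckdb", dataset_name="games"
)
print(pipeline.run(player_games()))
```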
dlt-hub/dlt
mywalkb/LSPosed_mod;LSPosed Framework Introduction LSPosed is a great Xposed framework, but it has a big problem: only the manager can manage scope. The LSPosed team doesn't accept PRs for a CLI or a module API; the TODO issues are more than a year old and were never completed. More tellingly, the GUI has changed many times, but the CLI and module API have not. In my fork, the module API and CLI are implemented. The CLI requires the root user because it must access files readable only by root. A Riru / Zygisk module trying to provide an ART hooking framework which delivers APIs consistent with the OG Xposed, leveraging the LSPlant hooking framework. Xposed is a framework for modules that can change the behavior of the system and apps without touching any APKs. That's great because it means that modules can work for different versions and even ROMs without any changes (as long as the original code was not changed too much). It's also easy to undo. As all changes are done in memory, you just need to deactivate the module and reboot to get your original system back. There are many other advantages, but here is just one more: multiple modules can make changes to the same part of the system or app. With modified APKs, you have to choose one. There is no way to combine them, unless the author builds multiple APKs with different combinations. Supported Versions Android 8.1 ~ 15 Beta 2.1 Install Install Magisk v24+ (For Zygisk flavor) or Magisk 23 (For Riru flavor) (For Riru flavor) Install Riru v26.1.7+ Download and install LSPosed in the Magisk app Reboot Open the LSPosed manager from the notification Have fun :) Download For stable releases, please go to the Github Releases page For canary builds, please check Github Actions Note: debug builds are only available in Github Actions. Migration You can install LSPosed_mod on top of an official LSPosed installation. If the manager app is installed and not parasitic, the app must be reinstalled from the apk distributed with LSPosed_mod. Get Help Only bug reports from THE LATEST DEBUG BUILD will be accepted. - GitHub issues: Issues - Wiki - (For Chinese speakers) This project only accepts issues with English titles. If you don't understand English, please use a translation tool. For Developers Developers are welcome to write Xposed modules with hooks based on the LSPosed Framework. A module based on the LSPosed framework is fully compatible with the original Xposed Framework, and vice versa: an Xposed Framework-based module will work well with the LSPosed framework too. Xposed Framework API We use our own module repository. We welcome developers to submit modules to our repository, and then modules can be downloaded in LSPosed. LSPosed Module Repository Translation Contributing You can contribute translation here . Credits LSPosed : fork source (makes all these possible) Magisk : makes all these possible Riru : provides a way to inject code into the zygote process XposedBridge : the OG Xposed framework APIs Dobby : used for inline hooking LSPlant : the core ART hooking framework EdXposed : fork source XZ Embedded : for decompressing the debug_info section of stripped libraries ~ SandHook : ART hooking framework for SandHook variant~ ~ YAHFA : previous ART hooking framework~ ~ dexmaker and dalvikdx : to dynamically generate YAHFA hooker classes~ ~ DexBuilder : to dynamically generate YAHFA hooker classes~ License LSPosed is licensed under the GNU General Public License v3 (GPL-3) (http://www.gnu.org/copyleft/gpl.html).;My changes to LSPosed;[]
mywalkb/LSPosed_mod
Bugswriter/notflix;NOTFLIX f@#k netflix, use notflix, a tool which searches for magnet links and streams them with peerflix Watch this video to understand - bugswriter's notflix How does this work? This is a shell script. It scrapes 1337x and gets the magnet link. After this, it uses peerflix to stream the video from the magnet link. For scraping, the script uses simple GNU utils like sed, awk, paste, and cut. (A rough Python sketch of this flow is included after this entry.) Requirements peerflix - A tool to stream torrents. sudo npm install peerflix -g Installation cURL cURL notflix to your $PATH and give it execute permissions. sh $ sudo curl -sL "https://raw.githubusercontent.com/Bugswriter/notflix/master/notflix" -o /usr/local/bin/notflix $ sudo chmod +x /usr/local/bin/notflix - To update, just do curl again; no need to chmod anymore. - To uninstall, simply remove notflix from your $PATH , for example sudo rm -f /usr/local/bin/notflix . License This project is licensed under GPL-3.0 .;Notflix is a shell script to search and stream torrent.;[]
Bugswriter/notflix
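The search-scrape-stream flow described above can be approximated in a few lines of Python. This is only a rough sketch, not the project's actual logic: the 1337x URL pattern and the assumption that a magnet URI can be pulled straight from the fetched page are hypothetical (the real script builds its pipeline from sed, awk, paste, and cut), and peerflix is assumed to be on your PATH.

```python
import re
import subprocess
import sys
import urllib.request

# Hypothetical search URL; the site's real page structure may differ.
SEARCH_URL = "https://1337x.to/search/{query}/1/"

def first_magnet(page: str):
    # Magnet URIs share a standard prefix, so a generic regex suffices here.
    match = re.search(r"magnet:\?xt=urn:btih:[^\"'\s]+", page)
    return match.group(0) if match else None

def main():
    if len(sys.argv) < 2:
        sys.exit("usage: notflix.py <search terms>")
    query = "+".join(sys.argv[1:])
    page = urllib.request.urlopen(SEARCH_URL.format(query=query)).read().decode("utf-8", "replace")
    magnet = first_magnet(page)
    if magnet is None:
        sys.exit("no magnet link found")
    # Hand the magnet link to peerflix, just like the shell script does.
    subprocess.run(["peerflix", magnet])

if __name__ == "__main__":
    main()
```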
xioacd99/study-is-wonderful;study-is-wonderful Since the course list is long, it is recommended to install the Chrome extension Smart TOC to improve the reading experience (it automatically generates a clickable table of contents at the right edge of the page). An example of Smart TOC is shown below: Awesome public courses, welcome to share other wonderful learning resources by issue or PR. This project is mainly aimed at Chinese speakers and collects some good course resources; let's roll up our sleeves and get to work. Resources are collected from the following platforms, thanks to them. (Just being polite.) OK, let's start studying 🙌 Math 斯坦福 数学思维课 凸优化 傅里叶变换及其应用 矩阵论与应用 CMU 凸优化 最优化进阶与随机方法 离散微分几何 加州伯克利 最优化方法 数值分析 随机过程 牛津 数值方法 哈佛 概率论 抽象代数 MIT 概率论 应用统计 离散随机过程 傅里叶分析 线性代数 线性代数 离散数学 复分析 微分方程 图论与可加组合学 黎曼几何 耶鲁 博弈论 Computer Science general CMU·Great Ideas in Theoretical Computer Science MIT·The Missing Semester of Your CS Education 哈佛·CS50X 计算机入门 斯坦福·编程方法学 斯坦福·CS101 MIT·Introduction to Computer Science and Programming in Python Algorithms and Data Structures MIT·经典算法 MIT·算法导论 MIT·高级数据结构 MIT·高级算法 MIT·数据结构与算法设计 伯克利·CS 61B 斯坦福·CS106B Programming Abstractions CSE 373 "The Algorithm Design Manual" System and Architecture CMU·计算机系统介绍 加州伯克利·Operating System and Systems Programming MIT·操作系统 MIT·分布式系统 MIT·计算机系统安全 伯克利·CS 61C CMU·15-213: B 站翻译 , 课程网页 斯坦福·CS 107 南京大学·操作系统 Artificial Intelligence (general) Two uploaders deserve a special recommendation, 跟李沐学AI and shuhuai008 ; the former covers deep learning and the latter statistical learning, and both are excellent 🎉. CMU·人工智能 (研究生) 李宏毅·Deep Learning Theory 李宏毅·Next Step of Machine Learning 李宏毅·Advanced Topics in Deep Learning 李宏毅·机器学习-v1 , 李宏毅·机器学习-v2 , 李宏毅·机器学习-v1 , 李宏毅·机器学习-TA 补充课 MIT·深度学习 MIT·机器学习 斯坦福·人工智能原理与技术 伯克利·CS 188 伯克利·CS 189 Computer Vision (CV) I have watched Fei-Fei Li's cs231n and didn't find it great, so it's not included. 李宏毅·GAN-2018 , 李宏毅·GAN-2017 Natural Language Processing (NLP) CMU·自然语言处理 斯坦福·深度自然语言处理 CMU·高级自然语言处理 李宏毅·Deep Learning for Human Language Processing 李宏毅·Deep Reinforcement Learning Reinforcement Learning (RL) 斯坦福·强化学习 MIT·识别,估计和学习 加州伯克利·深度强化学习 Statistical Learning (SL) 加州伯克利·统计机器学习 斯坦福·统计学习 Other Topics CMU·结构化数据机器学习 加州理工·机器学习与统计推断基础 李宏毅·Structured Learning 斯坦福·图机器学习 斯坦福·机器人学 MIT·医疗机器学习 Data Science CMU·应用数据科学 加州伯克利·数据科学原理与技术 加州理工·数据驱动算法设计 斯坦福·大数据概论 哈佛·大数据算法 斯坦福·CS 246 Database CMU·数据库系统介绍 CMU·数据库系统 加州伯克利·数据库导论 Parallel Computing CMU·并行计算结构与编程 加州伯克利·并行计算应用 加州伯克利·并行程序设计导论 Software Engineering 康奈尔·软件工程 CMU·智能系统软件工程 Compiler 斯坦福·编译原理 康奈尔·高级编译原理 Computer Network CMU·计算机网络 斯坦福·计算机网络 斯坦福·网络安全 Computer Graphics CMU·计算机图形学 闫令琪·现代计算机图形学入门 MIT·计算机图形学 Others CMU·人机交互系列讲座 普林斯顿·比特币与加密技术 斯坦福·密码学 斯坦福·C++ 中的抽象编程 MIT·计算机程序的构造与解释 MIT·信号与系统 奥本海姆·MIT·信号与系统 MIT·计算结构 MIT·python 编程 MIT·区块链与金钱 Economics 耶鲁·金融市场 MIT·微观经济学 MIT·金融理论 MIT·行为经济学 Physics 斯坦福·量子力学 斯坦福·广义相对论 斯坦福·弦理论和 M 理论 Psychology 加州伯克利·心理统计学 加州伯克利·社会认知心理学 耶鲁·心理学导论 哈佛·积极心理学 哈佛·心理学导论 哈佛·幸福课 Metaphysics 耶鲁·现代社会理论基础 耶鲁·资本主义 耶鲁·政治哲学导论 斯坦福·人类行为生物学 剑桥·美学 Others MIT·几何折叠算法 斯坦福·如何创业 斯坦福·SCI 论文写作课程 欧丽娟说红楼梦 罗翔讲刑法 耶鲁·聆听音乐 耶鲁·文学理论导论 耶鲁·如何管理情绪 耶鲁·谈判概论;awesome public courses and wonderful study resource;awesome-list,study,learing,courses
xioacd99/study-is-wonderful
DioxusLabs/taffy;Taffy Taffy is a flexible, high-performance, cross-platform UI layout library written in Rust . It currently implements the CSS Block , Flexbox and CSS Grid layout algorithms. Support for other paradigms is planned. For more information on this and other future development plans see the roadmap issue . This crate is a collaborative, cross-team project, and is designed to be used as a dependency for other UI and GUI libraries. Right now, it powers: Dioxus : a React-like library for building fast, portable, and beautiful user interfaces with Rust Bevy : an ergonomic, ECS-first Rust game engine The Lapce text editor via the Floem UI framework The Zed text editor via the GPUI UI framework Usage ```rust use taffy::prelude::*; // First create an instance of TaffyTree let mut tree : TaffyTree<()> = TaffyTree::new(); // Create a tree of nodes using TaffyTree.new_leaf and TaffyTree.new_with_children . // These functions both return a node id which can be used to refer to that node // The Style struct is used to specify styling information let header_node = tree .new_leaf( Style { size: Size { width: length(800.0), height: length(100.0) }, ..Default::default() }, ).unwrap(); let body_node = tree .new_leaf( Style { size: Size { width: length(800.0), height: auto() }, flex_grow: 1.0, ..Default::default() }, ).unwrap(); let root_node = tree .new_with_children( Style { flex_direction: FlexDirection::Column, size: Size { width: length(800.0), height: length(600.0) }, ..Default::default() }, &[header_node, body_node], ) .unwrap(); // Call compute_layout on the root of your tree to run the layout algorithm tree.compute_layout(root_node, Size::MAX_CONTENT).unwrap(); // Inspect the computed layout using TaffyTree.layout assert_eq!(tree.layout(root_node).unwrap().size.width, 800.0); assert_eq!(tree.layout(root_node).unwrap().size.height, 600.0); assert_eq!(tree.layout(header_node).unwrap().size.width, 800.0); assert_eq!(tree.layout(header_node).unwrap().size.height, 100.0); assert_eq!(tree.layout(body_node).unwrap().size.width, 800.0); assert_eq!(tree.layout(body_node).unwrap().size.height, 500.0); // This value was not set explicitly, but was computed by Taffy ``` Bindings to other languages Python via stretchable WIP C bindings WIP WASM bindings Learning Resources Taffy implements the Flexbox and CSS Grid specifications faithfully, so documentation designed for the web should translate cleanly to Taffy's implementation. For reference documentation on individual style properties we recommend the MDN documentation (for example this page on the width property). Such pages can usually be found by searching for "MDN property-name" using a search engine. If you are interested in guide-level documentation on CSS layout, then we recommend the following resources: Flexbox Flexbox Froggy . This is an interactive tutorial/game that allows you to learn the essential parts of Flexbox in a fun, engaging way. A Complete Guide To Flexbox by CSS Tricks. This is a detailed guide with illustrations and a comprehensive written explanation of the different Flexbox properties and how they work. CSS Grid CSS Grid Garden . This is an interactive tutorial/game that allows you to learn the essential parts of CSS Grid in a fun, engaging way. A Complete Guide To CSS Grid by CSS Tricks. This is a detailed guide with illustrations and a comprehensive written explanation of the different CSS Grid properties and how they work. Benchmarks (vs. 
Yoga ) Run on a 2021 MacBook Pro with M1 Pro processor using criterion The benchmarks measure layout computation only. They do not measure tree creation. Yoga benchmarks were run via the yoga crate (Rust bindings) Most popular websites seem to have between 3,000 and 10,000 nodes (although they also require text layout, which neither yoga nor taffy implement). Note that the table below contains multiple different units (milliseconds vs. microseconds) | Benchmark | Node Count | Depth | Yoga ( ba27f9d ) | Taffy ( 71027a8 ) | | --- | --- | --- | --- | --- | | yoga 'huge nested' | 1,000 | 3 | 364.60 µs | 329.04 µs | | yoga 'huge nested' | 10,000 | 4 | 4.1988 ms | 4.3486 ms | | yoga 'huge nested' | 100,000 | 5 | 45.804 ms | 38.559 ms | | big trees (wide) | 1,000 | 1 | 737.77 µs | 505.99 µs | | big trees (wide) | 10,000 | 1 | 7.1007 ms | 8.3395 ms | | big trees (wide) | 100,000 | 1 | 135.78 ms | 247.42 ms | | big trees (deep) | 4,000 | 12 | 2.2333 ms | 1.7400 ms | | big trees (deep) | 10,000 | 14 | 5.9477 ms | 4.4445 ms | | big trees (deep) | 100,000 | 17 | 76.755 ms | 63.778 ms | | super deep | 1,000 | 1,000 | 555.32 µs | 472.85 µs | Contributions Contributions welcome : if you'd like to use, improve or build taffy , feel free to join the conversation, open an issue or submit a PR . If you have questions about how to use taffy , open a discussion so we can answer your questions in a way that others can find.;A high performance rust-powered UI layout library;flexbox,ui,hacktoberfest,rust,css-grid,layout
DioxusLabs/taffy
huggingface/evaluate;🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. It currently contains: implementations of dozens of popular metrics : the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics for datasets. With a simple command like accuracy = load("accuracy") , get any of these metrics ready to use for evaluating an ML model in any framework (Numpy/Pandas/PyTorch/TensorFlow/JAX). comparisons and measurements : comparisons are used to measure the difference between models and measurements are tools to evaluate datasets. an easy way of adding new evaluation modules to the 🤗 Hub : you can create new evaluation modules and push them to a dedicated Space in the 🤗 Hub with evaluate-cli create [metric name] , which allows you to easily compare different metrics and their outputs for the same sets of references and predictions. 🎓 Documentation 🔎 Find a metric , comparison , measurement on the Hub 🌟 Add a new evaluation module 🤗 Evaluate also has lots of useful features like: Type checking : the input types are checked to make sure that you are using the right input formats for each metric Metric cards : each metric comes with a card that describes the values, limitations and their ranges, as well as providing examples of their usage and usefulness. Community metrics: Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others. Installation With pip 🤗 Evaluate can be installed from PyPi and has to be installed in a virtual environment (venv or conda for instance) bash pip install evaluate Usage 🤗 Evaluate's main methods are: evaluate.list_evaluation_modules() to list the available metrics, comparisons and measurements evaluate.load(module_name, **kwargs) to instantiate an evaluation module results = module.compute(**kwargs) to compute the result of an evaluation module (a minimal usage sketch is included after this entry) Adding a new evaluation module First install the necessary dependencies to create a new metric with the following command: bash pip install evaluate[template] Then you can get started with the following command which will create a new folder for your metric and display the necessary steps: bash evaluate-cli create "Awesome Metric" See this step-by-step guide in the documentation for detailed instructions. Credits Thanks to @marella for letting us use the evaluate namespace on PyPi previously used by his library .;🤗 Evaluate: A library for easily evaluating machine learning models and datasets.;evaluation,machine-learning
huggingface/evaluate
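A minimal sketch of those three methods in action; the printed numbers are illustrative, and the module catalog depends on what is currently on the Hub:

```python
import evaluate

# Instantiate an evaluation module from the Hub (downloaded on first use).
accuracy = evaluate.load("accuracy")

# Compute the metric over predictions and references.
results = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(results)  # {'accuracy': 0.75}

# Discover the available metrics, comparisons and measurements.
modules = evaluate.list_evaluation_modules()
print(len(modules), "evaluation modules available")
```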
nathanhoad/godot_dialogue_manager;Dialogue Manager for Godot 4 Dialogue Manager is an addon for Godot 4 (there's a Godot 3 version too ) that provides a stateless branching dialogue editor and runtime. Write your dialogue in a script-like way and easily integrate it into your game. You can install it via the Asset Library or by downloading a copy from GitHub. Documentation FAQ Writing Dialogue Settings Using dialogue in your game Example balloons Translations API C# wrapper Example Projects Wishlist my game Video Guides Contributors Dialogue Manager is made by Nathan Hoad with help from these cool people . License Licensed under the MIT license, see LICENSE for more information.;A powerful nonlinear dialogue system for Godot;godot,dialogue,editor,runtime,addon,gdscript,godot4,godotengine,csharp
nathanhoad/godot_dialogue_manager
Timothyxxx/Chain-of-ThoughtsPapers;Chain-of-ThoughtsPapers A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models". Check Tool use LLMs and Environment Interactive LLMs for the newest good direction we are doing! Papers Chain of Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou [ pdf ] 2022.1 Self-Consistency Improves Chain of Thought Reasoning in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou [ pdf ] 2022.3 STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning. Eric Zelikman, Yuhuai Wu, Noah D. Goodman [ pdf ], [ code ] 2022.3 PaLM: Scaling Language Modeling with Pathways. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel [ pdf ] 2022.4 Can language models learn from explanations in context?. Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill [ pdf ] 2022.4 Inferring Implicit Relations with Language Models. Uri Katz, Mor Geva, Jonathan Berant [ pdf ] 2022.4 The Unreliability of Explanations in Few-Shot In-Context Learning. Xi Ye, Greg Durrett [ pdf ] 2022.5 Large Language Models are Zero-Shot Reasoners. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa [ pdf ], [ code ] 2022.5 Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi [ pdf ] 2022.5 Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning. Antonia Creswell, Murray Shanahan, Irina Higgins [ pdf ] 2022.5 On the Advance of Making Language Models Better Reasoners. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen [ pdf ] 2022.6 Emergent Abilities of Large Language Models. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus [ pdf ] 2022.6 Minerva: Solving Quantitative Reasoning Problems with Language Models. Posted by Ethan Dyer and Guy Gur-Ari, Research Scientists, Google Research, Blueshift Team [ blog ] 2022.6 JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding. 
Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen [ pdf ], [ code ] 2022.6 A Dataset and Benchmark for Automatically Answering and Generating Machine Learning Final Exams Sarah Zhang, Reece Shuttleworth, Derek Austin, Yann Hicke, Leonard Tang, Sathwik Karnik, Darnell Granberry, Iddo Drori [ pdf ] 2022.6 Rationale-Augmented Ensembles in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou [ pdf ] 2022.7 Language Model Cascades. David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton [ pdf ], [ code ] 2022.7 Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango. Aman Madaan, Amir Yazdanbakhsh [ pdf ] 2022.9 Compositional Semantic Parsing with Large Language Models. Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou [ pdf ] 2022.9 Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan [ pdf ] 2022.9 Language Models are Multilingual Chain-of-Thought Reasoners. Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei [ pdf ], [ code ] 2022.10 Automatic Chain of Thought Prompting in Large Language Models. Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola [ pdf ], [ code ] 2022.10 Binding Language Models in Symbolic Languages. Zhoujun Cheng , Tianbao Xie , Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu [ pdf ], [ code ] 2022.10 ReAct: Synergizing Reasoning and Acting in Language Models. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao [ pdf ], [ code ] 2022.10 Ask Me Anything: A simple strategy for prompting language models. Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré [ pdf ], [ code ] 2022.10 Language Models of Code are Few-Shot Commonsense Learners. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig [ pdf ], [ code ] 2022.10 Large Language Models Can Self-Improve. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han [ pdf ] 2022.10 Large Language Models are few(1)-shot Table Reasoners. Wenhu Chen [ pdf ], [ code ] 2022.10 PAL: Program-aided Language Models. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig [ pdf ] 2022.11 Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen [ pdf ], [ code ] 2022.11 Self-Prompting Large Language Models for Zero-Shot Open-Domain QA. Junlong Li, Zhuosheng Zhang, Hai Zhao [ pdf ] 2022.12 Reasoning with Language Model Prompting: A Survey. Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen [ pdf ], [ code ] 2022.12 Towards Reasoning in Large Language Models: A Survey. Jie Huang, Kevin Chen-Chuan Chang [ pdf ], [ code ] 2022.12 Large Language Models are reasoners with Self-Verification. 
Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao [ pdf ] [ code ] 2022.12 Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun [ pdf ], [ code ] 2022.12 Large Language Models Are Reasoning Teachers. Namgyu Ho, Laura Schmid, Se-Young Yun [ pdf ] [ code ] 2022.12 Faithful Chain-of-Thought Reasoning Qing Lyu*, Shreya Havaldar*, Adam Stein*, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch [ pdf ], [ code ] 2023.01 Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li [ pdf ], [ code ] 2023.02 Multimodal Chain-of-Thought Reasoning in Language Models Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola [ pdf ], [ code ] 2023.02 Large Language Models Can Be Easily Distracted by Irrelevant Context Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou [ pdf ], [ code ] 2023.02 Active Prompting with Chain-of-Thought for Large Language Models Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang [ pdf ], [ code ] 2023.02 MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, Lijuan Wang [ pdf ], [ code ] 2023.03 Exploring Human-Like Translation Strategy with Large Language Models Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang [ pdf ], [ code ] 2023.05 Reasoning Implicit Sentiment with Chain-of-Thought Prompting Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua [ pdf ], [ code ] 2023.05 Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method Yiming Wang, Zhuosheng Zhang, Rui Wang [ pdf ], [ code ] 2023.05 Chain Of Thought Prompting Under Streaming Batch: A Case Study Yuxin Tang [ pdf ] 2023.06 Tab-CoT: Zero-shot Tabular Chain of Thought Ziqi Jin and Wei Lu [ pdf ], [ code ] 2023.06 Reasoning with Language Model is Planning with World Model Shibo Hao*, Yi Gu*, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu [ pdf ], [ code ] 2023.05 Recursion of Thought: A Divide and Conquer Approach to Multi-Context Reasoning with Language Models Soochan Lee and Gunhee Kim [ pdf ], [ code ], [ poster ] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning Seungone Kim, Se June Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo [ pdf ] Cumulative Reasoning with Large Language Models Yifan Zhang*, Jingqin Yang*, Yang Yuan, Andrew Chi-Chih Yao [ pdf ], [ code ] 2023.08;A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models".;chain-of-thought,large-language-models,prompt-learning,palm,codex,gpt-3,in-context-learning
Timothyxxx/Chain-of-ThoughtsPapers
antfu/vscode-settings;Anthony's VS Code Settings .vscode/settings.json .vscode/extensions.json .vscode/global.code-snippets Preview Theme | Vitesse Theme Font | Input Mono File Icons | Catppuccin Icons Product Icons | Carbon LICENSE MIT;My VS Code settings and extensions ;vscode,vscode-settings,vscode-extension,vscode-settings-json
antfu/vscode-settings
deepfence/PacketStreamer;PacketStreamer Deepfence PacketStreamer is a high-performance remote packet capture and collection tool. It is used by Deepfence's ThreatStryker security observability platform to gather network traffic on demand from cloud workloads for forensic analysis. Primary design goals: Stay light, capture and stream, no additional processing Portability, works across virtual machines, Kubernetes and AWS Fargate . Linux and Windows PacketStreamer sensors are started on the target servers. Sensors capture traffic, apply filters, and then stream the traffic to a central receiver. Traffic streams may be compressed and/or encrypted using TLS. The PacketStreamer receiver accepts PacketStreamer streams from multiple remote sensors, and writes the packets to a local pcap capture file PacketStreamer sensors collect raw network packets on remote hosts. They select packets to capture using a BPF filter, and forward them to a central receiver process where they are written in pcap format. Sensors are very lightweight and impose little performance impact on the remote hosts. PacketStreamer sensors can be run on bare-metal servers, on Docker hosts, and on Kubernetes nodes. The PacketStreamer receiver accepts network traffic from multiple sensors, collecting it into a single, central `pcap` file. You can then process the pcap file or live feed the traffic to the tooling of your choice, such as `Zeek`, `Wireshark`, `Suricata`, or as a live stream for Machine Learning models. (A short post-processing sketch in Python is included after this entry.) ## When to use PacketStreamer PacketStreamer meets more general use cases than existing alternatives. For example, use PacketStreamer if you need a lightweight, efficient method to collect raw network data from multiple machines for central logging and analysis. ## Quick Start ![PacketStreamer QuickStart](docs/docs/packetstreamer/img/packetstreamer.svg) For full instructions, refer to the [PacketStreamer Documentation](https://docs.deepfence.io/packetstreamer/). You will need to install the golang toolchain and `libpcap-dev` before building PacketStreamer. ```shell script # Pre-requisites (Ubuntu): sudo apt install golang-go libpcap-dev git clone https://github.com/deepfence/PacketStreamer.git cd PacketStreamer/ make ``` Run a PacketStreamer receiver, listening on port **8081** and writing pcap output to **/tmp/dump_file** (see [receiver.yaml](contrib/config/receiver.yaml)): ```shell script ./packetstreamer receiver --config ./contrib/config/receiver.yaml ``` Run one or more PacketStreamer sensors on local and remote hosts. Edit the **server address** in [sensor.yaml](contrib/config/sensor-local.yaml): ```shell script # run on the target hosts to capture and forward traffic # copy and edit the sample sensor-local.yaml file, and add the address of the receiver host cp ./contrib/config/sensor-local.yaml ./contrib/config/sensor.yaml ./packetstreamer sensor --config ./contrib/config/sensor.yaml ``` ## Who uses PacketStreamer? * Deepfence [ThreatStryker](https://deepfence.io/threatstryker/) uses PacketStreamer to capture traffic from production platforms for forensics and anomaly detection. ## Get in touch Thank you for using PacketStreamer. * [ ](https://docs.deepfence.io/packetstreamer/) Start with the documentation * [ ](https://join.slack.com/t/deepfence-community/shared_invite/zt-podmzle9-5X~qYx8wMaLt9bGWwkSdgQ) Got a question, need some help? 
Find the Deepfence team on Slack * [![GitHub issues](https://img.shields.io/github/issues/deepfence/PacketStreamer)](https://github.com/deepfence/PacketStreamer/issues) Got a feature request or found a bug? Raise an issue * [productsecurity *at* deepfence *dot* io](SECURITY.md): Found a security issue? Share it in confidence * Find out more at [deepfence.io](https://deepfence.io/) ## Security and Support For any security-related issues in the PacketStreamer project, contact [productsecurity *at* deepfence *dot* io](SECURITY.md). Please file GitHub issues as needed, and join the Deepfence Community [Slack channel](https://join.slack.com/t/deepfence-community/shared_invite/zt-podmzle9-5X~qYx8wMaLt9bGWwkSdgQ). ## License The Deepfence PacketStreamer project (this repository) is offered under the [Apache2 license](https://www.apache.org/licenses/LICENSE-2.0). [Contributions](CONTRIBUTING.md) to Deepfence PacketStreamer project are similarly accepted under the Apache2 license, as per [GitHub's inbound=outbound policy](https://docs.github.com/en/github/site-policy/github-terms-of-service#6-contributions-under-repository-license).;:star: :star: Distributed tcpdump for cloud native environments :star: :star:;soc,network-analysis,tcpdump-like,packet-capture,packet-sniffer,observability,security-tools,snort,zeek,suricata
deepfence/PacketStreamer
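As a sketch of post-processing the collected capture, the snippet below reads the receiver's output file with scapy. This is an assumption on my part: the project itself does not depend on scapy, and any pcap-aware tool such as tshark or Zeek works equally well. The file path matches the example receiver config above.

```python
from collections import Counter

from scapy.all import rdpcap          # pip install scapy
from scapy.layers.inet import IP

# Path taken from the example receiver.yaml above; adjust to your config.
packets = rdpcap("/tmp/dump_file")

# Tally source -> destination pairs across the merged traffic from all sensors.
flows = Counter()
for pkt in packets:
    if IP in pkt:
        flows[(pkt[IP].src, pkt[IP].dst)] += 1

for (src, dst), count in flows.most_common(10):
    print(f"{src} -> {dst}: {count} packets")
```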
cider-security-research/cicd-goat;Deliberately vulnerable CI/CD environment. Hack CI/CD pipelines, capture the flags. :triangular_flag_on_post: Created by Cider Security (Acquired by Palo Alto Networks) . Table of Contents Description Download & Run Linux & Mac Windows (Powershell) Usage Instructions Take the challenge Troubleshooting Solutions Contributing Description The CI/CD Goat project allows engineers and security practitioners to learn and practice CI/CD security through a set of 11 challenges, enacted against a real, full-blown CI/CD environment. The scenarios are of varying difficulty levels, with each scenario focusing on one primary attack vector. The challenges cover the Top 10 CI/CD Security Risks , including Insufficient Flow Control Mechanisms, PPE (Poisoned Pipeline Execution), Dependency Chain Abuse, PBAC (Pipeline-Based Access Controls), and more.\ The different challenges are inspired by Alice in Wonderland; each one is themed as a different character. The project’s environment is based on Docker containers and can be run locally. These containers are: 1. Gitea (minimal git server) 2. Jenkins 3. Jenkins agent 4. LocalStack (cloud service emulator that runs in a single container) 5. Prod - contains Docker in Docker and Lighttpd service 6. CTFd (Capture The Flag framework) 7. GitLab 8. GitLab runner 9. Docker in Docker The images are configured to interconnect in a way that creates fully functional pipelines. Download & Run There's no need to clone the repository. Linux & Mac sh curl -o cicd-goat/docker-compose.yaml --create-dirs https://raw.githubusercontent.com/cider-security-research/cicd-goat/main/docker-compose.yaml cd cicd-goat && docker compose up -d Windows (Powershell) PowerShell mkdir cicd-goat; cd cicd-goat curl -o docker-compose.yaml https://raw.githubusercontent.com/cider-security-research/cicd-goat/main/docker-compose.yaml get-content docker-compose.yaml | %{$_ -replace "bridge","nat"} docker compose up -d Usage Instructions Spoiler alert! Avoid browsing the repository files as they contain spoilers. To configure your git client for accessing private repositories, we suggest cloning using the HTTP URL. In each challenge, find the flag - in the format of flag# (e.g. flag2 ), or another format if mentioned specifically. Each challenge stands on its own. Do not use access gained in one challenge to solve another challenge. If needed, use the hints on CTFd. There is no need to exploit CVEs. No need to hijack admin accounts of Gitea or Jenkins (named "admin" or "red-queen"). Take the challenge After starting the containers, it might take up to 5 minutes until the containers' configuration process is complete. Log in to CTFd at http://localhost:8000 to view the challenges: Username: alice Password: alice Hack: Jenkins http://localhost:8080 Username: alice Password: alice Gitea http://localhost:3000 Username: thealice Password: thealice GitLab http://localhost:4000 Username: alice Password: alice1234 Insert the flags on CTFd and find out if you got it right. Troubleshooting If Gitea shows a blank page, refresh the page. When forking a repository, don't change the name of the forked repository. If any of the services doesn't start or is not configured correctly, try adding more CPU and memory to the Docker engine and updating it to the latest version. Solutions Warning: Spoilers! :see_no_evil: See Solutions . 
BSidesLV talk: Climbing the Production Mountain: Practical CI/CD Attacks Using CI/CD Goat - Featuring solutions to the Caterpillar, Mock Turtle and Dormouse challenges. Contributing See Contributing .;A deliberately vulnerable CI/CD environment. Learn CI/CD security through multiple challenges.;appsec,cicd,devops,infosec,devsecops,security,jenkins,ctf,gitlab
cider-security-research/cicd-goat
penk/penkesu;Penkesu Computer - A Homebrew Retro-style Handheld PC The Penkēsu (Japanese: ペンケース ) is a retro-style handheld device powered by a Raspberry Pi Zero 2 W, a 7.9 inch widescreen display (1280 x 400 resolution), and a 48-key ortholinear mechanical keyboard. The Design The enclosure of the Penkesu Computer is designed around the display and keyboard to ensure (relatively) compact physical dimensions. Repurposed Gameboy Advance SP hinges and a ribbon cable for HDMI are used to keep the hinge design thin, yet they hold the weight of the display so that it won't tip over. Electronics are intentionally kept minimal (3 internal components) and most of the parts are either 3D printable or available as off-the-shelf products. | | | |-----------------------------|-----------------------------| | | | | | | See also: the keyboard sound test video . Open Source Hardware Ever since the CutiePi tablet was successfully funded and started shipping, I felt the need to work on a new project; something that I didn't need to worry too much about (i.e. commercial viability), and to remind myself why I started tinkering. A "rebound" project, so to speak. And since there are no immediate plans on selling kits or making the Penkesu Computer mass producible, I wanted to publish all the designs and plans so there's enough information for anyone interested in making one. Bill of Materials Display Waveshare 7.9inch Capacitive Touch Screen Adafruit DIY HDMI Cable Parts - Right Angle adapter , Mini HDMI adapter , and 20cm Ribbon Cable Case Gameboy Advance SP Replacement Hinges 3D printed parts ( STL files and STEP file ) M2x6mm screws x 6 (8 if intending to secure the keyboard to the bottom tray. See part 2 below for more info.) M2x6mm threaded heat-set inserts x 6 (8 if intending to secure the keyboard to the bottom tray. See part 2 below for more info.) Electronics Raspberry Pi Zero 2 W 3.7V 606090 (or similar size) Li-Po battery Adafruit PowerBoost 1000C Keyboard Kailh Low Profile Choc v1 Switches x 48 MBK Choc Low Profile Keycaps x 48 1N4148 Diode x 48 Arduino Pro Micro x 1 PCB x 1 ( gerber file and QMK firmware ) Links are not affiliate links, and only provided as references. Notes on the Keyboard About the keyboard: The keyboard is called Koda , which was originally designed by larrbo and released under the Creative Commons BY-NC-SA 4.0 License. I've tweaked the layout so that it better fits my needs and looks closer to the Planck . More on this below. If one wishes to use a different 40% keyboard for the build, it can be done by editing the STEP file and adjusting the compartment size in the chassis. A thin metal sheet was glued to the base as the counterweight; your mileage may vary depending on how you like the weight distribution For the keycaps: The legends on the keycaps were printed with a laser engraver , for which I used black dip powder for nails as the pigment. More information about this method can be found with the keywords laser dye-sub keycaps There are custom printing services for keycaps, e.g. yushakobo.jp , if one does not have access to a laser engraver. Keyboard Layout: The lower key activates a layer that primarily has number keys from ` to 1 - 0 across the top row (excluding the top right key, which is the backspace key in all layers). The raise key activates a layer that has the shifted version of all of the numeral keys from the lower layer, as well as function keys using the tab,a,s,d,f,g and shift,z,x,c,v,b keys for F1-F6 and F7-F12 respectively. 
Pushing the func key down and holding it activates a mouse layer. The mouse layer uses an accelerated mode but allows one to temporarily activate the constant mode using an additional key. As you might have guessed, when using the accelerated mode the speed of the cursor is initially slow but increases over time. This mode is active as soon as mouse mode is entered (by holding down the func key). Your w,a,s,d keys are your cursor movement keys. Your left, right, and middle mouse buttons are j,k, and l respectively. Your scroll wheel uses the t,f,g,h keys. Finally, mouse cursor speed can be toggled by tapping or by holding. If tapped, the keys change the speed of acceleration. If held, the keys will activate constant mode at the equivalent mousing speed. There are 3 overall speeds: 0, 1, and 2, with 0 being the slowest and most precise, and 2 being the fastest and least accurate. You access speed 0 using the v key, speed 1 using the b key, and the fastest speed (2) using the n key. The Assembly Glue the two hinges to the chassis (my 3D printer is not accurate enough to print a functional hinge lock, so I simply glued them with 5-minute epoxy). You want to make sure that the hinge is still able to turn after the epoxy has set. Add heat-set threaded inserts (M2x6mm) to the 4 corners of the screen bezel, and 2 to the hinge cover. You may also use heat-set inserts at the front corners of the keyboard tray. Just note that placing these inserts is very difficult, and not entirely necessary. For ease of access you may wish to not use them at all. Wrap the ribbon cable twice and pull it out through the hinge cover. If you use a toothpick, it might make it easier to ensure you do this cleanly through the display cover. Wiring: | Component | Pin | |-----------|--------| | battery positive | PowerBoost Bat pin | | battery negative | PowerBoost GND pin | | switch 1 pin | PowerBoost GND pin | | switch 2 pin | PowerBoost EN pin | | PowerBoost 5V OUT | display and Pi Zero's VCC | | PowerBoost GND | display and Pi Zero's GND | Connect the keyboard's micro USB and the display cable into the mini HDMI port of the Pi Zero 2 W; insert the micro SD card into the Pi Zero 2 W. Fasten all components with M2x6mm screws. If you made it this far, you are welcome to check out my other project, the CutiePi tablet , which is also 100% open source hardware! :-) Copyright and License Copyright (c) 2022 Penk Chen. All rights reserved. All files are licensed under the MIT license; see the LICENSE for more information.;Penkesu Computer - A Homebrew Retro-style Handheld PC;[]
penk/penkesu
j-hui/fidget.nvim;💫 Fidget Extensible UI for Neovim notifications and LSP progress messages. Demo setup _Note that this demo may not always reflect the exact behavior of the latest release._ This screen recording was taken as I opened a Rust file I'm working on, triggering `rust-analyzer` to send me some LSP progress messages. As those messages are ongoing, I trigger some notifications with the following: ```lua local fidget = require("fidget") vim.keymap.set("n", "A", function() fidget.notify("This is from fidget.notify().") end) vim.keymap.set("n", "B", function() fidget.notify("This is also from fidget.notify().", vim.log.levels.WARN) end) vim.keymap.set("n", "C", function() fidget.notify("fidget.notify() supports annotations...", nil, { annote = "MY NOTE", key = "foobar" }) end) vim.keymap.set("n", "D", function() fidget.notify(nil, vim.log.levels.ERROR, { annote = "bottom text", key = "foobar" }) fidget.notify("... and overwriting notifications.", vim.log.levels.WARN, { annote = "YOUR AD HERE" }) end) ``` (I use normal mode keymaps to avoid going into ex mode, which would pause Fidget rendering and make the demo look glitchy...) Visible elements: - Terminal + font: [Kitty](https://sw.kovidgoyal.net/kitty/) + [Comic Shanns Mono](https://github.com/shannpersand/comic-shanns) - Editor: [Neovim v0.9.4](https://github.com/neovim/neovim/tree/v0.9.4) - Theme: [catppuccin/nvim (mocha, dark)](https://github.com/catppuccin/nvim) - Status line: [nvim-lualine/lualine.nvim](https://github.com/nvim-lualine/lualine.nvim) - Color columns: `:set colorcolumn=81,121,+1,+2` (sorry) - Scrollbar: [petertriho/nvim-scrollbar](https://github.com/petertriho/nvim-scrollbar) Why? Fidget is an unintrusive window in the corner of your editor that manages its own lifetime. Its goals are: to provide a UI for Neovim's $/progress handler to provide a configurable vim.notify() backend to support basic ASCII animations (Fidget spinners!) to indicate signs of life to be easy to configure, sane to maintain, and fun to hack on There's only so much information one can stash into the status line. Besides, who doesn't love a little bit of terminal eye candy, as a treat? Getting Started Requirements Fidget requires Neovim v0.9.0+. If you would like to see progress notifications, you must have configured Neovim with an LSP server that uses the $/progress handler. For an up-to-date list of LSP servers this plugin is known to work with, see this Wiki page . Installation Install this plugin using your favorite plugin manager. See the documentation for setup() options. Lazy lua { "j-hui/fidget.nvim", opts = { -- options }, } vim-plug ```vim Plug 'j-hui/fidget.nvim' " Make sure the plugin is installed using :PlugInstall. Then, somewhere after plug#end(): lua <<EOF require("fidget").setup { -- options } EOF ``` rocks.nvim vim :Rocks install fidget.nvim Versioning Fidget is actively developed on the main branch, and may occasionally undergo breaking changes. If you would like to ensure configuration/API stability, you can pin your tag to one of the release tags . For instance, using Lazy : lua { "j-hui/fidget.nvim", tag = "v1.0.0", -- Make sure to update this to something recent! opts = { -- options }, } Options Fidget can be configured by passing a table of options to the setup() . 
Available options are shown below: ```lua { -- Options related to LSP progress subsystem progress = { poll_rate = 0, -- How and when to poll for progress messages suppress_on_insert = false, -- Suppress new messages while in insert mode ignore_done_already = false, -- Ignore new tasks that are already complete ignore_empty_message = false, -- Ignore new tasks that don't contain a message clear_on_detach = -- Clear notification group when LSP server detaches function(client_id) local client = vim.lsp.get_client_by_id(client_id) return client and client.name or nil end, notification_group = -- How to get a progress message's notification group key function(msg) return msg.lsp_client.name end, ignore = {}, -- List of LSP servers to ignore -- Options related to how LSP progress messages are displayed as notifications display = { render_limit = 16, -- How many LSP messages to show at once done_ttl = 3, -- How long a message should persist after completion done_icon = "✔", -- Icon shown when all LSP progress tasks are complete done_style = "Constant", -- Highlight group for completed LSP tasks progress_ttl = math.huge, -- How long a message should persist when in progress progress_icon = -- Icon shown when LSP progress tasks are in progress { pattern = "dots", period = 1 }, progress_style = -- Highlight group for in-progress LSP tasks "WarningMsg", group_style = "Title", -- Highlight group for group name (LSP server name) icon_style = "Question", -- Highlight group for group icons priority = 30, -- Ordering priority for LSP notification group skip_history = true, -- Whether progress notifications should be omitted from history format_message = -- How to format a progress message require("fidget.progress.display").default_format_message, format_annote = -- How to format a progress annotation function(msg) return msg.title end, format_group_name = -- How to format a progress notification group's name function(group) return tostring(group) end, overrides = { -- Override options from the default notification config rust_analyzer = { name = "rust-analyzer" }, }, }, -- Options related to Neovim's built-in LSP client lsp = { progress_ringbuf_size = 0, -- Configure the nvim's LSP progress ring buffer size log_handler = false, -- Log `$/progress` handler invocations (for debugging) }, }, -- Options related to notification subsystem notification = { poll_rate = 10, -- How frequently to update and render notifications filter = vim.log.levels.INFO, -- Minimum notifications level history_size = 128, -- Number of removed messages to retain in history override_vim_notify = false, -- Automatically override vim.notify() with Fidget configs = -- How to configure notification groups when instantiated { default = require("fidget.notification").default_config }, redirect = -- Conditionally redirect notifications to another backend function(msg, level, opts) if opts and opts.on_open then return require("fidget.integration.nvim-notify").delegate(msg, level, opts) end end, -- Options related to how notifications are rendered as text view = { stack_upwards = true, -- Display notification items from bottom to top icon_separator = " ", -- Separator between group name and icon group_separator = "---", -- Separator between notification groups group_separator_hl = -- Highlight group used for group separator "Comment", render_message = -- How to render notification messages function(msg, cnt) return cnt == 1 and msg or string.format("(%dx) %s", cnt, msg) end, }, -- Options related to the notification window and buffer window 
= { normal_hl = "Comment", -- Base highlight group in the notification window winblend = 100, -- Background color opacity in the notification window border = "none", -- Border around the notification window zindex = 45, -- Stacking priority of the notification window max_width = 0, -- Maximum width of the notification window max_height = 0, -- Maximum height of the notification window x_padding = 1, -- Padding from right edge of window boundary y_padding = 0, -- Padding from bottom edge of window boundary align = "bottom", -- How to align the notification window relative = "editor", -- What the notification window position is relative to }, }, -- Options related to integrating with other plugins integration = { ["nvim-tree"] = { enable = true, -- Integrate with nvim-tree/nvim-tree.lua (if installed) }, ["xcodebuild-nvim"] = { enable = true, -- Integrate with wojciech-kulik/xcodebuild.nvim (if installed) }, }, -- Options related to logging logger = { level = vim.log.levels.WARN, -- Minimum logging level max_size = 10000, -- Maximum log file size, in KB float_precision = 0.01, -- Limit the number of decimals displayed for floats path = -- Where Fidget writes its logs to string.format("%s/fidget.nvim.log", vim.fn.stdpath("cache")), }, } ``` For more details, see fidget-option.txt . Lua API Fidget has a Lua API, with documentation generated from source code. You are encouraged to hack around with that. Commands Fidget exposes some of its Lua API functions through :Fidget sub-commands (e.g., :Fidget clear ), which support shell-like arguments and completion. These sub-commands are documented below. :Fidget sub-commands :Fidget clear Clear active notifications Arguments Positional arguments: - **`{group_key}`**: _`(any)`_ group to clear :Fidget clear_history Clear notifications history Arguments Flags: - **`--before {seconds}`**: _`(number)`_ filter history for items updated at least this long ago - **`--group_key {group_key}`**: _`(any)`_ clear history by group key - **`--include_active {true|false}`**: _`(boolean)`_ whether to clear items that have not been removed (default: true) - **`--include_removed {true|false}`**: _`(boolean)`_ whether to clear items that have have been removed (default: true) - **`--since {seconds}`**: _`(number)`_ filter history for items updated at most this long ago Positional arguments: - **`{group_key}`**: _`(any)`_ clear history by group key :Fidget history Show notifications history Arguments Flags: - **`--before {seconds}`**: _`(number)`_ filter history for items updated at least this long ago - **`--group_key {group_key}`**: _`(any)`_ filter history by group key - **`--include_active {true|false}`**: _`(boolean)`_ whether to clear items that have not been removed (default: `true`) - **`--include_removed {true|false}`**: _`(boolean)`_ whether to clear items that have have been removed (default: `true`) - **`--since {seconds}`**: _`(number)`_ filter history for items updated at most this long ago Positional arguments: - **`{group_key}`**: _`(any)`_ filter history by group key :Fidget lsp_suppress Suppress LSP progress notifications Arguments Positional arguments: - **`{suppress}`**: _`(boolean)`_ whether to suppress (omitting this argument toggles suppression) :Fidget suppress Suppress notification window Arguments Positional arguments: - **`{suppress}`**: _`(boolean)`_ whether to suppress (omitting this argument toggles suppression) Highlights Rather than defining its own highlights, Fidget's default configuration uses built-in highlight groups that are 
typically overridden by custom Vim color schemes. With any luck, these will look reasonable when rendered, but the visual outcome will really depend on what your color scheme decided to do with those highlight groups. You can override these highlight groups (e.g., icon_style ) using the :h fidget-options shown above. Related Work rcarriga/nvim-notify is first and foremost a vim.notify() backend, and it also supports LSP progress notifications (the integration seems to have been packaged up in mrded/nvim-lsp-notify ). vigoux/notifier.nvim is a vim.notify() backend that comes with first-class LSP notification support. neoclide/coc.nvim provides a nice LSP progress UI in the status line, which first inspired my desire to have this feature for nvim-lsp. arkav/lualine-lsp-progress was the original inspiration for Fidget, and funnels LSP progress messages into nvim-lualine/lualine.nvim . I once borrowed some of its code (though much of that code has since been rewritten). nvim-lua/lsp-status.nvim also supports showing progress text, though it requires some configuration to integrate that into their status line. Acknowledgements Most of the Fidget spinner patterns were adapted from the npm package sindresorhus/cli-spinners .;💫 Extensible UI for Neovim notifications and LSP progress messages.;neovim,neovim-plugin
j-hui/fidget.nvim
akutz/go-generics-the-hard-way;Go generics the hard way I started using Go back around 2015 and was immediately surprised by the lack of a generic type system. Sure, the empty interface{} existed, but that was hardly the same. At first I thought I ~~wanted~~ needed generics in Go, but over time I began appreciating the simplicity of the language. Therefore I was ambivalent at best when I learned of discussions to introduce generics in Go 2.0, and once the timetable was accelerated to 1.18, I decided it was time to dig into the proposal. After a while, I gained an appreciation for how generics are implemented with the same elegance as Golang itself, and this moved me to share my experience. Go generics the hard way is a culmination of the time I spent playing with this new feature and provides a hands-on approach to learning all about generics in Go. Labs : a hands-on approach to learning Go generics FAQ : answers to some of the most frequently asked questions regarding Go generics Links : links to related reference material and projects that use generics Labs Prerequisites : how to install the prerequisites required to run the examples in this repository Hello world : a simple example using generics Getting started : an introduction to Go generics Getting going : basic concepts explored Internals : how generics are implemented in golang Benchmarks : basic benchmarks for common patterns using generics Lessons learned : lessons learned from digging into generics FAQ How are you using generics in the Go playground? What is T ? What is this any I keep seeing everywhere? What does the tilde ~ do? Do Go generics use type erasure ? How are you using generics in the Go playground? We can use the Go playground in “Go dev branch” mode to edit and run programs with generics. What is T ? The symbol T is often used when discussing generic types because T is the first letter of the word t ype . That is really all there is to it. Just like x or i are often the go-to variable names for loops, T is the go-to symbol for generic types. For what it's worth, K is often used when there is more than one generic type, e.g. T, K . What is this any I keep seeing everywhere? The word any is a new, predeclared identifier and is equivalent to the empty interface in all ways . Simply put, writing and reading any is just more user friendly than interface{} :smiley:. What does the tilde ~ do? The ~ symbol is used to express that T may be satisfied by a defined or named type directly or by a type definition that has the same underlying type as another defined or named type. To learn more about type constraints and the ~ symbol, please refer to the section Tilde ~ . Do Go generics use type erasure ? Generics in Go are not implemented with type erasure. Please jump to Internals for more information. Links Additional reading Type parameter proposal : the accepted proposal for introducing generics to Go Getting started with generics : a tutorial from the authors of Go for getting started with generics Go language specification : the reference manual for the Go language Go FAQ : frequently asked questions for Go Projects using generics Controller-runtime : a write-up and patchset for implementing conditions logic, patch helpers, and simple reconcilers using generics Go collections : generic utility functions for dealing with collections in Go go-generics-example : examples using generics;A hands-on approach to getting started with Go generics.;[]
akutz/go-generics-the-hard-way
ibeatai/apm;APM What will happen when an agent has a mind? APM ( Agent plus Mind ) will give you the final answer.;What will happen when agent has mind? APM ( Agent plus Mind ) will give you the final answer. ;[]
ibeatai/apm
opencsapp/opencsapp.github.io;Welcome to Open CS Application opencs.app | csmsapp.org | opencsapp.github.io The content of this site is open source and collaboratively created; we look forward to your contributions . If you like this website, please give us a :star:Star to support us! Welcome to join the community discussions: - ~~QQ Group ①: 466094878~~ ( full ) - QQ 24Fall Group: 832875166 - QQ Group ②: 544855574 - Discord: https://discord.gg/HeB9QXZdFR Contributed By OpenCSApp follows the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License;Open CS Application | 开源CS申请;application,graduate-application,master,phd,phd-application,computer,computer-science
opencsapp/opencsapp.github.io
antfu/vue-starport;Shared Vue component across routes with animations Live Demo English | 简体中文 Note : With the View Transitions API coming to browsers, you may not need this library anymore (even though it's not a 1:1 replacement, as View Transition does not preserve DOM and state). Why? It's quite common to have the same component used in different routes (pages) with slightly different sizes and positions. Sometimes you might want to animate them when the user navigates between routes to provide a smooth UX. While such animations are commonly seen in native apps, they can be a bit challenging to achieve on the Web. Vue's component structure is presented as a tree , and the child components are in different branches with their own instances. This means that when users navigate between routes, the components are not shared across routes. Because of that, you can't directly animate the changes, because they are in two different instances. The good news is, there is a technique called FLIP to animate the transitions between them. However, FLIP only solves the problem of transitions; the components are still not the same. During the navigation, the internal state of the component will be lost. Thus I started this new approach, Starport, to experiment with a better solution that fits this requirement. How? So since we can't share the components across different branches in the component tree, we could actually hoist them to the root so they become independent from the routes. To allow each page to still have control of the components, we introduced a Proxy component to present the expected size and position of that component. The proxy will pass the props and position information to the actual component and let it "fly over" the proxy with animations. When the transition ends and it arrives at the expected position, it will then "land down" onto the actual component using the <Teleport/> component. With this "landing" mechanism, the DOM tree will be preserved just as it would be with the original tree structure. When navigating to another route, the component will then "lift off" back to the floating state, "fly" to the new proxy's position and "land" again. This is very similar to Terran buildings in StarCraft (able to leave the ground and fly to new locations). It's also the source of inspiration for the project name Starport . Install ⚗️ Experimental npm i vue-starport Vue Starport only works for Vue 3 Usage Add the <StarportCarrier> component from vue-starport to your root component ( app.vue ). All <Starport> usage should be inside the <StarportCarrier> component. ```html ``` In routes, wrap the component with the <Starport> component. ```html ``` On the other page, we do the same thing with the same port id to identify the instance. ```html ``` Note that you might need to apply some styles to <Starport> to make it have a defined size indicating the area for the "floating starcraft" to land. Check out the Playground for more examples. Register Components Globally ```ts // main.ts import StarportPlugin from 'vue-starport' app.use(StarportPlugin()) ``` And then you can use the Starport and StarportCarrier components without importing. Keep Alive By default, when navigating to a page without a corresponding <Starport> proxy to land on, the component will be destroyed. If you want to keep the component alive even when it's not present in the current route, you can set keepAlive to true for that specific instance. 
html <Starport keep-alive port="my-id"> <MyComponent /> </Starport> To configure it globally, you can pass options to the plugin: ```ts // main.ts import StarportPlugin from 'vue-starport' app.use(StarportPlugin({ keepAlive: true })) ``` Debug To debug what happens during the transition, you can add the following CSS to highlight the parts css [data-starport-craft] { background: #0805; } [data-starport-proxy]:not([data-starport-landed]) { background: #8005; } Special Thanks Thanks to @hangsman who helped provide the initial solution of properly teleporting the element and made this idea viable. Also thanks to the viewers of my live-streaming on Bilibili , those who spent time with me working on this idea and provided useful feedback during the live streams. You can check the recordings of my live-streams (in Chinese) , where I wrote this project from scratch. Sponsors License MIT License © 2022 Anthony Fu;🛰 Shared component across routes with animations;[]
antfu/vue-starport
p0dalirius/Awesome-RCE-techniques;Awesome RCE techniques Awesome list of techniques to achieve Remote Code Execution (RCE) on various apps! Goal of this project The goal of this project is to provide an open-source knowledge database of all the techniques to achieve Remote Code Execution (RCE) on various applications. All of these techniques also come with a test environment (usually a Docker image) for you to practice them. Techniques Content-Management-Systems-(CMS) Drupal : (3 techniques) FuelCMS : (1 technique) Joomla : (1 technique) SweetRice : (2 techniques) Typo3 : (1 technique) Wordpress : (3 techniques) Frameworks Apache-Tomcat : (2 techniques) JBoss : (1 technique) JoGet : (1 technique) WildFly : (1 technique) Learning-Management-Systems-(LMS) Moodle : (1 technique) Other GiTea : (1 technique) Gitlab : (1 technique) Jenkins : (1 technique) LimeSurvey : (1 technique) PHP : (1 technique) Rocket.Chat : (1 technique) Webmin : (1 technique) Work in progress These techniques are a work in progress. You can help us finish them by opening a pull request! Troubleshooting Report any issues at https://github.com/p0dalirius/Awesome-RCE-techniques/issues. Contributors Pull requests are welcome. Feel free to open an issue if you want to add other Remote Code Execution (RCE) techniques.;Awesome list of step by step techniques to achieve Remote Code Execution on various apps!;rce,framework,awesome-list,code,execution,exploit,bugbounty,cms
p0dalirius/Awesome-RCE-techniques
Helium314/HeliBoard;HeliBoard HeliBoard is a privacy-conscious and customizable open-source keyboard, based on AOSP / OpenBoard. Does not use internet permission, and thus is 100% offline. Table of Contents Features FAQ / Common Issues Hidden Functionality Contributing Reporting Issues Translations Dictionary Creation Code Contribution To-do License Credits Features Add dictionaries for suggestions and spell check: build your own, or get them here , or in the experimental section (quality may vary) additional dictionaries for emojis or scientific symbols can be used to provide suggestions (similar to "emoji search") note that for Korean layouts, suggestions only work using this dictionary ; the tools in the dictionary repository are not able to create working dictionaries Customize keyboard themes (style, colors and background image) can follow the system's day/night setting on Android 10+ (and on some versions of Android 9) can follow dynamic colors for Android 12+ Customize keyboard layouts (only available when disabling use system languages ) Customize special layouts, like symbols, number, or functional key layout Multilingual typing Glide typing ( only with closed source library ☹️) library not included in the app, as there is no compatible open source library available can be extracted from GApps packages (" swypelibs "), or downloaded here (click on the file and then "raw" or the tiny download button) Clipboard history One-handed mode Split keyboard (only available if the screen is large enough) Number pad Backup and restore your settings and learned word / history data FAQ / Common Issues Add a dictionary : First download the dictionary file, e.g. from here . Then go to language settings, click on the language, then on + next to dictionary to add one, and select the file. Alternatively you can open a .dict file in a file explorer with HeliBoard and then select the language. Note that the latter method does not work with all file explorers. Emoji search : You can get addon dictionaries for emoji suggestions in the dictionaries repo . An actual search function does not exist yet. Cannot switch / choose layout : This is only possible when use system languages is disabled. You can select the layout when tapping on the language. How to customize layout : Go to layout selection and use the + button, then you can add a custom layout, either from a file or you can copy and edit an existing layout. No suggestions for some language : Check the dictionaries repo to see whether a dictionary is available. If there is one, download it and add it in the language settings for this language. No suggestions in some app / text field : This app respects the no suggestions flag set by some input fields, i.e. the developer does not want you to see suggestions here. It's best to open an issue report for that app if you think this behavior is wrong. Alternatively you can enable the always show suggestions setting that overrides the no suggestions flag. Multilingual typing (type in multiple languages without switching manually): Enable in Languages & Layouts , select the main language and tap the + button next to multilingual typing to add a language. Note that the selection is limited to languages with the same script as the main language, and to languages that have a dictionary (see above for how to add). 
How to enable glide typing : There is no glide typing built into this app, but you can load compatible libraries: Go to advanced settings -> load gesture typing library and point to a file (setting not available in nouserlib version). You can extract the file from GApps packages (" swypelibs "), or download one here . Make sure to use the correct version (app will tell you in the dialog to load the library). Glide typing is not working after loading a library : Possibly the download was corrupted, or you downloaded the wrong file. If you get an " unknown file " confirmation popup, it is likely you are not using the correct file (or you might be using a different version of the library). In rare cases, there might be crashes when the file is not in internal storage, or there may be some Samsung-specific problems . German layout with / without umlauts : German (Germany) layout has umlauts, German layout doesn't. Spell checker is not checking all languages in multilingual typing : Make sure you actually enabled the HeliBoard spell checker. Usually it can be found in System Settings -> System -> Languages -> Advanced -> Spell Checker, but this may depend on Android version. Words added to Gboard dictionary are not suggested : Gboard uses its own dictionary instead of the system's personal dictionary. See here for how to export the words. What is the nouserlib version? : The normal version ( release ) allows the user to provide a library for glide typing, while the nouserlib version does not. Running code that isn't supplied with the app is dynamic code loading , which is a security risk. Android Studio warns about this: Dynamically loading code from locations other than the application's library directory or the Android platform's built-in library directories is dangerous, as there is an increased risk that the code could have been tampered with. Applications should use loadLibrary when possible, which provides increased assurance that libraries are loaded from one of these safer locations. Application developers should use the features of their development environment to place application native libraries into the lib directory of their compiled APKs. The app checks the SHA256 checksum of the library and warns the user if it doesn't match with known library versions. A mismatch indicates the library was modified, but may also occur if the user intentionally provides a different library than expected (e.g. a self-built variant). Note that if the app is installed as a system app, both versions have access to the system glide typing library (if it is installed). * App crashing when using as system app : This happens if you do not install the app, but just copy the APK. Then the app's own library is not extracted from the APK, and not accessible to the app. You will need to either install the app over itself, or provide a library. Hidden Functionality Features that may go unnoticed, and further potentially useful information * Long-pressing toolbar keys results in additional functionality: clipboard -> paste, move left/right -> move full left/right, move up/down -> page up/down, copy -> copy all, select word -> select all, undo <-> redo * Long-press the Comma-key to access Clipboard View, Emoji View, One-handed Mode, Settings, or Switch Language: * Emoji View and Language Switch will disappear if you have the corresponding key enabled; * For some layouts it's not the Comma-key, but the key at the same position (e.g. it's q for Dvorak layout). 
* When incognito mode is enabled, no words will be learned, and no emojis will be added to recents. * Sliding key input: Swipe from shift or symbol key to another key. This will enter a single uppercase key or symbol and return to the previous keyboard. * Hold shift or symbol key, press one or more keys, and then release shift or symbol key to return to the previous keyboard. * Long-press a suggestion in the suggestion strip to show more suggestions, and a delete button to remove this suggestion. * Swipe up from a suggestion to open more suggestions, and release on the suggestion to select it. * Long-press an entry in the clipboard history to pin it (keep it in clipboard until you unpin). * Swipe left in clipboard view to remove an entry (except when it's pinned) * Select text and press shift to switch between uppercase, lowercase and capitalized words * You can add dictionaries by opening the file * This only works with content-uris and not with file-uris , meaning that it may not work with some file explorers. * Not really a feature, but you can restart the keyboard by going to the settings and swiping it away from recents * Debug mode / debug APK * Long-press a suggestion in the suggestion strip twice to show the source dictionary. * When using debug APK, you can find Debug Settings within the Advanced Preferences , though the usefulness is limited except for dumping dictionaries into the log. * For a release APK, you need to tap the version in About several times, then you can find debug settings in Advanced Preferences . * When enabling Show suggestion infos , suggestions will have some tiny numbers on top showing some internal score and source dictionary. * In the event of an application crash, you will be prompted whether you want the crash logs when you open the Settings. * When using multilingual typing, the space bar will show a confidence value used for determining the currently used language. * For users doing manual backups with root access: Starting at Android 7, some files and the main shared preferences file are not in the default location, because the app is using device protected storage . This is necessary so the settings and layout files can be read before the device is unlocked, e.g. at boot. The files are usually located in /data/user_de/0/<package_id>/ , though the location may depend on the device and Android version. Contributing ❤ Reporting Issues Whether you encountered a bug, or want to see a new feature in HeliBoard, you can contribute to the project by opening a new issue here . Your help is always welcome! Before opening a new issue, be sure to check the following: - Does the issue already exist? Make sure a similar issue has not been reported by browsing existing issues . Please search open and closed issues. - Is the issue still relevant? Make sure your issue is not already fixed in the latest version of HeliBoard. - Is it a single topic? If you want to suggest multiple things, open multiple issues. - Did you use the issue template? It is important to make the life of our kind contributors easier by avoiding issues that miss key information needed for their resolution. Note that issues that ignore part of the issue template will likely get treated with very low priority, as often they are needlessly hard to read or understand (e.g. huge screenshots, not providing a proper description, or addressing multiple topics). 
If you're interested, you can read the following useful text about effective bug reporting (a bit longer read): https://www.chiark.greenend.org.uk/~sgtatham/bugs.html Translations Translations can be added using Weblate . You will need an account to update translations and add languages. Add the language you want to translate to in Languages -> Manage translated languages in the top menu bar. Updating translations in a PR will not be accepted, as it may cause conflicts with Weblate translations. Dictionary Creation There will not be any further dictionaries bundled in this app. However, you can add dictionaries to the dictionaries repository . To create or update a dictionary for your language, you can use this tool . You will need a wordlist, as described here and in the repository readme. Code Contribution See Contribution Guidelines To-do Planned features and improvements: * Improve support for modifier keys ( alt , ctrl , meta and fn ), some ideas: * keep modifier keys on with long press * keep modifier keys on until the next key press * use sliding input * Less complicated addition of new keyboard languages (e.g. #519) * Additional and customizable key swipe functionality * Some functionality will not be possible when using glide typing * Ability to enter all emojis independent of Android version (optional, #297) * Add and enable emoji dictionaries by default (if available for language) * Clearer / more intuitive arrangement of settings * Maybe hide some less used settings by default (similar to color customization) * Customizable currency keys * Ability to export/import (share) custom colors * Make use of the .com key in URL fields (currently only available for tablets) * With language-dependent TLDs * Internal cleanup (a lot of over-complicated and convoluted code) * Bug fixes What will not be added: * Material 3 (not worth adding 1.5 MB to app size) * Dictionaries for more languages (you can still download them) * Anything that requires additional permissions, unless there is a very good reason License HeliBoard (as a fork of OpenBoard) is licensed under GNU General Public License v3.0. Permissions of this strong copyleft license are conditioned on making available complete source code of licensed works and modifications, which include larger works using a licensed work, under the same license. Copyright and license notices must be preserved. Contributors provide an express grant of patent rights. See repo's LICENSE file. Since the app is based on Apache 2.0 licensed AOSP Keyboard, an Apache 2.0 license file is provided. The icon is licensed under Creative Commons BY-SA 4.0 . A license file is also included. Credits Icon by Fabian OvrWrt with contributions from The Eclectic Dyslexic OpenBoard AOSP Keyboard LineageOS Simple Keyboard Indic Keyboard FlorisBoard Our contributors;Customizable and privacy-conscious open-source keyboard;[]
Helium314/HeliBoard
dotnet-presentations/dotnet-maui-workshop;.NET MAUI - Workshop Today we will build a .NET MAUI application that will display a list of Monkeys from around the world. We will start by building the business logic backend that pulls down json-encoded data from a RESTful endpoint. We will then leverage .NET MAUI to find the closest monkey to us and also show the monkey on a map. We will also see how to display data in many different ways and then finally fully theme the application. Languages This workshop is available in the following languages: * English - default README files * Chinese (Simplified) - README files ending with .zh-cn.md (Translated by Kinfey Lo ) * Chinese (Traditional) - README files ending with .zh-tw.md (Translated by James Tsai ) Setup Guide Hey there! This will be a hands-on, bring-your-own-device workshop. You can develop on PC or Mac, and all you will need to do is install Visual Studio 2022 or Visual Studio for Mac 2022 with the .NET MAUI workload. It is built on .NET 8, which means you will need version 17.9 of Visual Studio 2022 or newer. See full installation guide for .NET MAUI for more information. Before starting the workshop, I recommend going through the quick 10 minute .NET MAUI Tutorial that will guide you through installation and also ensure everything is configured correctly. If you are new to mobile development, we recommend deploying to a physical Android device, which can be set up in just a few steps. If you don't have a device, don't worry as you can set up an Android emulator with hardware acceleration . If you don't have time to set this up ahead of time, don't worry as we are here to help during the workshop. Beyond that you will be good to go for the workshop! Agenda I have also put together an abstract of what you can expect for the day-long workshop: Part 0 - 30 Min Session - Introduction to .NET MAUI Session & Setup Help Part 1 - Single Page List of Data Part 2 - MVVM & Data Binding Part 3 - Navigation Part 4 - Implementing Platform Features Part 5 - CollectionView & Beyond Part 6 - Theming the app To get started open the Part 1 - Displaying Data folder and open MonkeyFinder.sln . You can use this throughout the workshop. Each part has a README file with directions for that part. If you came in late, you can open any of the folders and there is a starting project for that section. Video Walkthrough James recorded a full 4-hour walkthrough end-to-end on his YouTube ! More links and resources: .NET MAUI Website .NET MAUI on Microsoft Learn .NET MAUI Documentation .NET MAUI on GitHub .NET Beginner Series Videos If you have any questions please reach out to me on Twitter @JamesMontemagno .;A full day workshop (.NET MAUI Workshop in a Box) on how to build apps with .NET MAUI for iOS, Android, macOS, and Windows;dotnet-maui,dotnet,dotnetmaui
dotnet-presentations/dotnet-maui-workshop
SwiftcordApp/Swiftcord;Swiftcord Native Discord client for macOS built in Swift [!WARNING] I have fully moved my development time and attention to the next generation of Swiftcord, which means I will not be frequently monitoring this repository and its issues. Read this discussion to find out more! We are very near to release, and I can't wait to let everyone experience the future of Swiftcord! This image doesn't animate properly in Safari, unfortunately. Click on it to view the original video. Swiftcord is beautiful, follows the design principles of the official client while keeping the macOS look and feel that you love, and most importantly, it's (really) fast! Powered by DiscordKit , a Swift Discord implementation built from the ground up. If you like this project, please smash the star button and be one of my stargazers 🌟! It motivates me to continue investing time into Swiftcord. Supporters Supporters get feature releases 2 weeks before they are made public! Become a supporter to support me and this project's future! Perfect if you'd like to contribute but don't have the skills or time required! It's a great way of thanking me for my work. I'll be eternally grateful! Contents Motivation Releases FAQ Roadmap Copyright Notice Motivation Swiftcord was created to offer a Discord-like UI and experience while having the performance and memory benefits of native apps. The idea started brewing when I was tight on RAM and noticed Discord using 600+ MB of it. I then realized that was the perfect opportunity to explore SwiftUI, since it was relatively new to me at that time. Hence, Swiftcord was born! Releases You'll need macOS Monterey and above (>= 12.0) to run Swiftcord. Releases from the channels below are universal bundles, and run natively on both Apple Silicon and Intel. Nightly Builds (Latest fixes/features, built from the latest commit on main , might be unstable) For the latest features and fixes, a pre-built version of the latest commit is available here Alpha (More stable, less updated) Alpha releases are available at GitHub Releases Homebrew Swiftcord is also available on homebrew as a cask: brew install swiftcord . Versions are kept in lockstep with GitHub releases. TestFlight Coming soon! FAQ Covers a few common questions I have encountered; click on the question to expand the answer Will I get banned for using Swiftcord/Is using Swiftcord illegal? Nobody really knows what Discord's official stance on unofficial clients is. However, hundreds of people and I have been using Swiftcord for quite a while, and nobody has been banned to date. I do not take any responsibility for account bans due to the use of Swiftcord, whether direct or indirect, although there's a very low possibility of that occurring. I recommend trying Swiftcord with an alt if possible. Feature x is missing! When will y be implemented? Swiftcord is currently in the alpha stage, and hasn't achieved feature parity with the official Discord client yet (it's quite far behind). Many features are planned, but I do not currently have a timeline for them. Development is progressing at a fast pace, but sometimes bugs may take an unexpectedly long time to fix. I appreciate contributions, bug reports, and suggestions :) Swiftcord just crashed! Although I'm aiming for 0 crashes (which is made easier by Swift), sometimes the unexpected happens xD. If you experience a crash, please open an issue with appropriate information like the line the error occurs on, relevant logs, and what you were doing that might have caused the crash. 
If you can solve the bug causing the crash, that's even better! Roadmap Take a look at Swiftcord's GitHub Projects board to get a rough idea of what's brewing! Copyright Notice Copyright (c) 2023 Vincent Kwok & Swiftcord Contributors This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. The above copyright notice, this permission notice, and its license shall be included in all copies or substantial portions of the Software. You can find a copy of the GNU General Public License v3 in LICENSE or https://www.gnu.org/licenses/. I ❤️ Open Source;A fully native Discord client for macOS built 100% in Swift!;discord,swift,swiftui,macos,native,native-apps
SwiftcordApp/Swiftcord
aolofsson/awesome-opensource-hardware;awesome-opensource-hardware A curated list of awesome open source hardware tools, generators, and reusable designs. Categorized Alphabetical (per category) Requirements link should be to source code repository open source projects only working projects only (not WIP/rusty) One tag line sentence per project R = Recommended Table of Contents PDKs Manufacturable PDKs Virtual PDKs Compilers Build systems Circuit compilers FPGA compilers Layout compilers Project Documentation Design and Verification Tools Benchmarks Board design Digital design FPGA design Formal verification Linters Register design Schematics Simulators Verification frameworks Physics Waveform Viewers Designs & Generators Accelerators Analog circuits Chip packaging Boards Connectivity CPUs FPGA architectures Libraries Memory Systems Education Analog design ASIC design Digital design FPGA design PDKs Manufacturable PDKs gf180 GlobalFoundries 180nm CMOS PDK sg13g2 IHP 130nm BiCMOS PDK sky130 Skywater 130nm CMOS PDK Virtual PDKs asap7 Predictive 7nm PDK freepdk45 Predictive 45nm PDK probe3.0 Process/design DTCO path finding technology Compilers Build Systems bazelhdl Bazel based hdl build system bender Dependency management tool for hardware projects. chipyard Agile RISC-V SoC Design Framework. cocoon Infrastructure for integrated EDA edalize Abstraction library for interfacing EDA tools. flgen Generate a filelist for EDA tools fusesoc Package manager and build abstraction tool for FPGA/ASIC development. hammer Agile physical design component part of UC Berkeley Chipyard framework. hwtbuildsystem Library of utils for interaction with the vendor tools. legohdl Command line HDL package manager and development tool. mflowgen Build-system generator for ASIC and FPGA design-space exploration. 
siliconcompiler (R) :star: Modular distributed build system for hardware Circuit Compilers abc (R) :star: System for sequential logic synthesis and formal verification act Asynchronous circuit compiler tools aihwkit IBM Analog Hardware Acceleration Kit amaranth Python based hardware design framework bigspicy Tool for merging circuit descriptions bsc Compiler, simulator, and tools for the Bluespec Hardware Description Language calyx Intermediate language and compilers that generate custom hardware accelerators chisel (R) :star: Scala based hardware description language circt Circuit IR Compilers and Tools circuitgraph Tools for working with circuits as graphs in Python circuitops Infrastructure for dataset generation and model deployment in Generative AI clash Haskell to VHDL/Verilog/SystemVerilog compiler coreir LLVM-style hardware compiler with first class support for generators dfiant Dataflow Hardware Description Language fault Design-for-testing (DFT) Solution finn Dataflow compiler for QNN inference firrtl Intermediate Representation for RTL gamma Optimizes mapping of DNN models on DNN Accelerators gamora Graph Learning based Symbolic Reasoning for Large-Scale Boolean Networks ghdl-yosys-plugin VHDL synthesis (based on ghdl) halide Language for fast, portable data-parallel computation halide-to-hardware Hardware generator combining halide and coreir hastlayer VHDL generator from .NET languages (C#, F#, and others) and FPGA framework for .NET hardware acceleration hdl21 Hardware Description Library hdlconvertor Verilog/VHDL parser preprocessor and code generator for C++/Python based on ANTLR4 hs-to-coq Convert Haskell source code to Coq source code ipyxact Python-based IP-XACT parser livehd Infrastructure for live interactive synthesis and simulation llhd Intermediate representation for digital circuit descriptions lsoracle Framework built on EPFL logic synthesis libraries. 
lstools Showcase examples for EPFL logic synthesis libraries kami Platform for High-Level Parametric Hardware Specification and Verification magma Python based hardware design language matchlib Synthesizable SystemC/C++ library of commonly-used hardware functions matchlib_connections Synthesizable SystemC library implementing latency-insensitive channels mockturtle C++ logic network library myhdl Python based hardware description and verification language naja Structural Netlist API for EDA post synthesis flow development netlist-paths A library and command-line tool for querying a Verilog netlist panda-bambu High level synthesis (HLS) C/C++ framework pipelinec C-like hardware description language (HDL) with automatic pipelining pygears Python based hardware design framework pymtl3 Python hardware generation, simulation, and verification framework pyrtl Python integrated design and simulation framework pysysc Python package to make SystemC usable from Python pyverilog Python design toolkit for Verilog HDL rohd Dart based framework for describing and verifying hardware scip Solving Constraint Integer Problems silice Language that simplifies prototyping and writing algorithms on FPGA architectures skidl SKiDL is a module that extends Python with the ability to design electronic circuits slang Library for lexing, parsing, type checking, and elaborating SystemVerilog code sodaopt (R) :star: Optimizer leveraging mlir to extract, optimize, translate HLS into LLVM IR spinalhdl Scala based HDL spydrnet Framework for analyzing and transforming Verilog netlists surelog (R) :star: SystemVerilog IEEE 2017 Pre-processor, Parser, Elaborator, UHDM Compiler sv-parser SystemVerilog IEEE 1800-2017 parser library sv2v (R) :star: SystemVerilog to Verilog conversion systemc (R) :star: SystemC system design and verification language that spans hardware and software systemc-compiler Translates synthesizable SystemC to synthesizable Verilog synlig SystemVerilog support for Yosys tapasco Heterogeneous system composer tce Application-specific instruction-set processor (ASIP) toolset uhdm Universal object model for IEEE SystemVerilog designs verible SystemVerilog developer tools, including a parser, style-linter, and formatter veriloggen Mixed-Paradigm Hardware Construction Framework veryl Modern Hardware Description Language based on Rust/SV verik Kotlin based hardware description language vlsir Interchange formats for chip design xls Google framework for hardware synthesis yosys (R) :star: Yosys Open SYnthesis Suite FPGA Compilers amf-placer Timing-driven analytical mixed-size FPGA placer dreamplacefpga Analytical Placer for Large Scale Heterogeneous FPGA flowtune FPGA synthesis and PNR optimizer nextpnr FPGA place and route tool vtr (R) :star: FPGA place and route tool Layout Compilers align Automatic layout generator for analog circuits autodmp Automated DREAMPlace-based Macro Placement bag Berkeley analog layout generator coriolis RTL2GDS toolchain for mature nodes dreamplace Deep learning toolkit-enabled VLSI placement gdsfactory Platform for chip design and layout gds3d Render GDS files in 3D gdsiistl Converts GDSII files to STL files gdspy Python module for creating GDSII stream files, usually CAD layouts. 
ieda RTL2GDS infrastructure klayout (R) :star: Layout viewer kweb Klayout Web Viewer lclayout Layout generator for CMOS standard-cells layout21 Integrated Circuit Layout magic Magic VLSI layout tool magical Machine Generated Analog IC Layout openroad (R) :star: Complete RTL2GDS platform phidl Python GDS layout and CAD geometry creation Design and Verification Tools Benchmarks big-doe-openroad Framework for launching massive RTL2GDS experiments bringup-bench Collection of minimal programs useful for system bringup bsg_pipeclean_suite Collection of designs used to stress test new CAD flows corescore Benchmark for FPGAs and their synthesis/P&R tools epfl-benchmarks Combinational Benchmark Suite for logic synthesis fpga-tool-perf FPGA tool performance profiling opdb Princeton design benchmark generators rdf-2020 IEEE CEDA eda benchmark flow sv-tests SystemVerilog compliance test suite verilog-eval Verilog evaluation benchmark for large language models Board Design boardview Reads KiCAD PCB layout files and writes ASCII Boardview files cuflow Experimental procedural PCB layout program datasheet-scrubber Utility that scrubs PDF datasheets/documents in order to extract key circuit information freecad (R) :star: 3D parametric CAD for building models of components for KiCad 3D preview (also enclosures) freerouting PCB auto-router kicad (R) :star: Board design framework kicanvas KiCAD web viewer librepcb Board design framework pcbflow Python based Printed Circuit Board (PCB) layout and design package based on CuFlow Digital Design digital Digital logic designer and circuit simulator DigSim An interactive digital logic simulator with verilog support (Yosys) verilog-mode Popular free Verilog mode for Emacs vsrtl Visual Simulation of Register Transfer Logic vscode-systemverilog SystemVerilog support in VS Code vscode-teroshdl Full IDE for RTL development in VS Code Documentation elk Eclipse Layout Kernel - Automatic layout for Java applications. graphviz Python library for graph creation and rendering in DOT language gds3d Reads GDSII layout and renders in 3D hdelk Web-based HDL diagramming tool kythe Verible based SystemVerilog source file indexer memory-layout-diagram Diagrams for memory map layouts netlistsvg Draws an SVG schematic from a JSON netlist netlist-viewer SPICE netlist visualizer nn-svg Publication-ready NN-architecture schematics pcbdraw Convert KiCAD board into 2D drawing suitable for pinout diagrams pinion Generate interactive Diagrams for your PCBs pinout Python package that generates hardware pinout diagrams as SVG images sphinx Document builder sphinx-verilog-domain Sphinx domain to allow integration of Verilog / SystemVerilog documentation into Sphinx. sphinxcontrib-hdl-diagrams Sphinx plugin to automatically generate diagrams from RTL. 
symbolator HDL symbol generator undulate Python compatible wavedrom module with extensions and console rendering support wavedrom (R) :star: Digital timing diagram rendering engine wavedrompy Python-compatible Wavedrom module FPGA Design byteman Bitstream relocation and manipulation tool icestudio Visual editor for open FPGA boards f4fpga FPGA toolchain foedag Framework Open EDA Gui logik FPGA toolchain openfpgaloader (R) :star: Universal utility for programming FPGA rphax Automation flow to develop and prototype hardware accelerators on Xilinx FPGAs Formal Verification boolector SMT solver for the theories of fixed-size bit-vectors, arrays and uninterpreted functions cvc5 SMT automatic theorem prover ilang Princeton modeling and Verification Platform for SoCs using ILAs autosva Generates FV testbenches and SVA properties for RTL modules based on interface annotations + GPT4 autocc A frontend for JG/SBY to automatically discover covert channels in time-shared hardware pono Extensible SMT-based model checker implemented in C++. sby Front-end for Yosys-based formal verification flows. z3 Microsoft Research theorem prover Linters svlint SystemVerilog linter svlint-action GitHub action for svlint verible SystemVerilog developer tools, including a parser, style-linter, and formatter verilator (R) :star: SystemVerilog simulator and lint system Register Design gen_registers Python based tool for generating hardware registers and their associated files rggen Configuration and status register generator open-register-design-tool Generate register RTL, models, and docs using SystemRDL or JSpec input peakrdl SystemRDL based control & status register (CSR) toolchain systemrdl Generic compiler front-end for Accellera's SystemRDL 2.0 register description language Schematics d3-hwschematics Schematic visualizer kactus2dev Graphical EDA tool based on the IP-XACT standard openplc_editor IDE capable of creating programs for the OpenPLC Runtime oregano Schematic capture and circuit simulator qucs_s Integrated circuit simulator with Graphical User Interface hdl21schematics Hdl21 Schematics xschem Schematic editor for VLSI/Asic/Analog custom designs Simulators champsim Trace-based simulator for a microarchitecture study dromajo RISC-V RV64GC functional emulator eesim Browser-based SPICE circuit simulator essent High-performance FIRRTL (Chisel) simulator firesim FPGA-accelerated Cycle-accurate Hardware Simulation in the Cloud gem5 Modular simulator platform for computer-system architecture research muchisim Cycle-level simulator for PPA and cost analysis of distributed multi-chiplet tile-based manycore designs. 
ghdl (R) :star: VHDL 2008/93/87 simulator icarus (R) :star: Verilog IEEE-1364 simulator irsim Switch-level simulator for digital circuits libsystemctlm-soc (R) :star: SystemC/TLM-2.0 Co-simulation framework logisim-evolution Digital logic design tool and simulator lwtr4sc Transaction recording for SystemC ngspice (R) :star: Spice simulator noxim Network on Chip Simulator nvc VHDL compiler and simulator pysysc Python package to make SystemC usable from Python qemu (R) :star: Generic and open source machine & userspace emulator and virtualizer ramulator2 Cycle accurate DRAM simulator renode Generic and open source machine emulator sax S-parameter based frequency domain circuit simulation simulide SimulIDE is a simple real-time electronic circuit simulator systemc-components SystemC simulation productivity library tiny-five Lightweight RISC-V emulator and assembler written entirely in Python with examples for AI/ML xictools Circuit simulation package xyce (R) :star: Parallel spice simulator from Sandia National Labs verilator (R) :star: SystemVerilog simulator and lint system Verification Frameworks adc-eval Python tools for ADC performance analysis awsteria_infra Middleware for AWS hosted FPGA applications anasymod Framework for FPGA emulation of mixed-signal systems cocotb Coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python cocotbext-axi AXI interface modules for Cocotb cocotbext-pcie PCI express simulation framework for Cocotb constrainedrandom Python package for creating and solving constrained randomization problems cvc CVC: Circuit Validity Checker core-v-verif Functional verification project for the CORE-V family of RISC-V cores ddr5_phy UVM testbench for DDR5 PHY fault Python package for testing hardware force-riscv Instruction Set Generator for RISC-V frame Fast Roofline Analytical Modeling and Estimation fstdumper Verilog VPI module to dump FST (Fast Signal Trace) databases lctime Library cell characterization maestro Analytical cost model evaluating DNN mappings (dataflows and tiling) msdsl Automatic generation of real number models from analog circuits netgen LVS tool for comparing SPICE or verilog netlists openplc_v3 OpenPLC Runtime version 3 opensta (R) :star: Signoff quality STA engine used by OpenRoad opentimer High performance static timing analysis openvaf Next generation Verilog-A compiler osvvm A VHDL verification framework pcievhost PCIe (1.0a to 2.0) Virtual host model for verilog pyspice Python interface for ngspice and xyce pyucis Python API to Unified Coverage Interoperability Standard (UCIS) Data pyuvm SystemVerilog UVM written in Python pyvsc Python packages for SystemVerilog UVM style Verification Stimulus and Coverage raft Rapid Abstraction FPGA Toolbox riscv-dv Random instruction generator for RISC-V processor verification rohd-cosim Framework for cosimulation between the ROHD simulator and SystemVerilog simulators. rohd-vf ROHD-based verification and testbench framework in Dart. switchboard (R) :star: Communication framework for RTL simulation and emulation svreal Synthesizable real number library in SystemVerilog (fixed & floating point formats) systemctlm-cosim-demo Demo system for libsystemctlm-soc library sv_waveterm Generate text waves in simulation log file tvip-apb UVM based AMBA APB VIP tvip-axi UVM based AMBA AXI VIP uvvm Library for making very structured VHDL-based testbenches. 
v2k-top Parser/simulation framework for Verilog & C++ vidbo Virtual development board vunit Unit testing framework for VHDL/SystemVerilog Physics devsim TCAD Semiconductor Device Simulator elmer Finite Element Solver femwell Finite element based simulation tool for integrated circuits, electric and photonic hotspot Thermal modeling tool for use in architectural studies meep Finite-difference-time-domain (FDTD) electromagnetic simulation paraview Data Analysis and Visualization Application pact Thermal simulator scikit-rf RF and Microwave Engineering Scikit Waveform Viewers scviewer Eclipse plugins to display VCD (e.g. created by SystemC VCD trace). d3wave D3.js based wave (signal) visualizer gtkwave (R) :star: GTK+ based VCD waveform viewer iio-oscilloscope GTK+ based oscilloscope application for interfacing with various IIO devices konata Instruction pipeline visualizer for Gem5 npTDMS Python module for reading TDMS files produced by LabView scopy Software oscilloscope and signal analysis toolset sigrok Portable, signal analysis software suite (logic analyzers, scopes, multimeters, and more) simview Text-based SystemVerilog design browser and waveform viewer sootty Command-line tool for displaying vcd waveforms spyci Python package to parse spice raw data files verilog-vcd-parser Parser for Value Change Dump (VCD) files wavebin Oscilloscope waveform capture viewer and converter waveforms-live Browser based analog waveform viewer Designs & Generators Accelerators aes Symmetric block cipher AES (Advanced Encryption Standard) ara Vector Unit, compatible with the RISC-V Vector Extension bfg Compiler for Reduced-Complexity Reconfigurable Fabrics bismo Chisel-based bit-serial matrix multiplication accelerator generator finn Quantized NN to FPGA dataflow accelerator generator fftgenerator MMIO-Based FFT Generator fpu Synthesizable IEEE 754 floating point library in verilog garnet CGRA generator gemmini Berkeley Spatial Array Generator gplgpu GPL v3 2D/3D graphics engine in verilog core_jpeg High throughput JPEG decoder in Verilog for FPGA fftgenerator Chisel based FFT generator h265-encoder-rtl H.265 Video Encoder IP Core logicnets Train and generate LUT-based neural networks nngen Fully-Customizable Hardware Synthesis Compiler for Deep Neural Network nvdla NVIDIA Deep Learning Accelerator (NVDLA) nyuziprocessor GPGPU microprocessor architecture opencgra Parametrizable Coarse-Grained Reconfigurable Array (CGRA) Generator openofdm 802.11 OFDM PHY decoder openspike Spiking neural network accelerator project-zipline Zipline lossless compression implementation pyfda Python Filter Design Analysis Tool ranc Reconfigurable architecture for neuromorphic computing sha256 SHA-256 hash function (NIST FIPS 180-4) sha512 SHA-512 hash function (NIST FIPS 180-4) sha3 Berkeley SHA3 ROCC Accelerator serpens HBM FPGA based SpMV Accelerator sextans FPGA accelerator for Sparse-Matrix Dense-Matrix Multiplication (SpMM) spiral Spiral based FFT generator tvm-vta Open, modular, deep learning accelerator verigood-ml Verilog Generator, Optimized for Designs for Machine Learning verigpu OpenSource GPU, loosely based on RISC-V ISA verilog-lfsr Parametrizable combinatorial parallel LFSR/CRC module vortex Full-system RISCV-based GPGPU processor Analog Circuits ams_kgd Repository for Known Good Analog Designs (KGDs) analog_blocks Basic building blocks (OTA, BandGap and LDO) in Skywater 130nm. 
openfasoc Automated Mixed Signal SoC Synthesis Framework open-pmic Current mode buck converter on the SKY130 PDK Chip Packaging bsg_packaging Open-Source Hardware Accelerator Packages and Sockets Boards bsg_motherboards BaseJump Hardware Accelerator Motherboards gmm7550 CologneChip GateMate FPGA Module: GMM-7550 google-coral-baseboard Open hardware baseboard for the Google Coral i.MX8 + Edge TPU SoM hardware-components Collection of KiCad components parallella-hw Collection of open source boards from Adapteva Connectivity aib Advanced Interface Bus (AIB) die to die hardware aib-protocols Advanced Interface Bus (AIB) Protocol IP axi AXI SystemVerilog synthesizable IP axi4_aib_bridge AXI4/AIB Bridge RTL bsg_ddr3_io BaseJump DDR3 I/O Design core_ddr3_controller DDR3 memory controller in Verilog for various FPGAs ctucanfd_ip_core CAN with Flexible Data-rate IP Core developed at Department of Measurement of FEE CTU hdmi Send video/audio over HDMI on an FPGA i2c Fully featured implementation of Inter-IC (I2C) bus master idma Modular, parametrizable, and highly flexible Data Movement Accelerator io-gen IO cell generator litedram Small footprint and configurable DRAM (litex) liteeth Small footprint and configurable Ethernet core litescope Small footprint and configurable embedded FPGA logic analyzer litepcie Small footprint and configurable PCIe core nocrouter Network-on-Chip Router omi_device_ice Open memory interface example device opencapi_accel OpenCAPI acceleration framework opencapi_client OpenCAPI client reference design openserdes Digitally synthesizable architecture for SerDes using Skywater130 pymtl3-net Cornell parameterizable OCN (on-chip network) generator ravenoc Configurable HDL NoC (Network-On-Chip) tnoc Network on Chip Implementation written in SystemVerilog usb3_camera USB C Industrial Camera Project usb_cdc Minimal USB CDC (ACM) implementation in verilog usb_dfu Verilog implementation of the USB Device Class Specification for Device Firmware Upgrade (DFU), version 1.1 umi (R) :star: Universal Memory Interface verilog-axis Verilog AXI stream components for FPGA implementation verilog-ethernet Verilog Ethernet components for FPGA implementation verilog-uart Verilog UART verilog-pcie Verilog PCI express components verilog-wishbone Verilog wishbone components vis4mesh Visualization tool for designing mesh Network-on-Chips vivado-library IP cores and interface definitions compatible with Xilinx Vivado IP Catalog wav-d2d-hw 8lane Wlink with D2D and a single AXI Target/Initiator wav-lpddr-hw DDR (WDDR) Physical interface (PHY) Hardware wav-slink-hw Chiplet link wav-wlink-hw Chiplet link CPUs a2i A2I POWER processor core RTL (VHDL) ara 64-bit Vector unit coprocessor to cva6 black-parrot Linux-capable RISC-V multicore cfu-playground Framework for playing with custom opcodes to accelerate TensorFlow Lite for Microcontrollers cores-swerv SweRV EH1 RISC-V core cores-swerv-el2 SweRV EL2 RISC-V Core core-v-verif Functional verification project for the CORE-V family of RISC-V cores cva6 (R) :star: Linux capable RISC-V CPU cve2 Small two-stage 32 bit RISC-V CPU core (RV32IMC/EMC) cv32e40s RV32IMFCX RISC-V 4-stage secure RISC-V CPU cv32e40x RV32IMFCX RISC-V 4-stage compute RISC-V CPU cvw Configurable RISC-V Processor for RISC-V System-on-Chip Design textbook. 
ibex (R) :star: Small 32 bit RISC-V CPU core lizard Cornell modular RV64IM Out-of-Order Processor Built with PyMTL microwatt Open POWER ISA softcore written in VHDL 2008 minimax A Compressed-First, Microcoded RISC-V CPU muntjac Simple 64-bit RISC-V multicore processor neorv32 Customizable and highly extensible MCU-class 32-bit RISC-V (VHDL) openxiangshan Open-source high-performance RISC-V processor picorv32 (R) :star: Size-Optimized RISC-V CPU rocket-chip (R) :star: Linux capable RISC-V Rocket Chip Generator rioschip Out of order RISC-V core serv SErial RISC-V CPU snitch Lean but mean RISC-V system veer 32-bit integer machine-mode RISC-V CPU vroom High performance RISC-V CPU FPGA Architectures fabulous Fabric generator and CAD tools fabric_team Simple Berkeley FPGA generator class project openfpga FPGA IP Generator prga Open-source FPGA research and prototyping framework Libraries basejump_stl Library of SystemVerilog components basic_verilog Library of SystemVerilog components berkeley-hardfloat Berkeley hardware floating point units common_cells Library of SystemVerilog components cvfpu Parametric floating-point unit hdl Library of Analog Devices-specific components lambdalib (R) :star: Hardware abstraction library lambdapdk (R) :star: Library of open source Process Design Kits (PDKs) libsv Parameterized SystemVerilog digital hardware library mathlib SystemVerilog MathLib oh (R) :star: Library of Verilog components pztb-core Collection of class libraries for testbench development pzbcm Basic common modules rohd-hcl Library of reusable & configurable hardware components developed with ROHD vlsiffra Fast and efficient standard cell based adders, multipliers and multiply-adders Memory core_axi_cache 128KB AXI cache (32-bit in, 256-bit out) cv-hpdcache High-Performance L1 Dcache bsg_fakeram Fake RAM generator huancun Open-source high-performance non-blocking cache openram Static random access memory (SRAM) compiler. 
lake Synthesizable memory generator Systems caliptra Caliptra Root of Trust Architecture caliptra-rtl Caliptra Root of Trust (RTL) beagle_sdr_gps KiwiSDR: BeagleBone web-accessible GPS/SDR bsg_manycore Tile based architecture designed for computing efficiency, scalability cep RISC-V based Common Evaluation Platform (CEP) esp Heterogeneous SoC architecture and IP design platform falcon Fast Analysis of LTE Control channels hero FPGA-based research platform for heterogeneous design litex SoC builder framework openfasoc Open Source FASOC generators openpiton General purpose, multithreaded manycore processor opentitan Open source silicon root of trust openwifi-hw IEEE 802.11 WiFi baseband FPGA (chip) design pulp Multicore RISC-V based SoC pulpissimo Single core RISC-V based SoC rose Unified simulation platform for robotic systems senseq Mixed-signal system on chip for nanopore-based DNA sequencing verilogboy Game Boy compatible machine with Verilog wulpus Wearable low-power ultrasound probe x-heep Extendable and configurable RISC-V SoC Boards artix-dc-scm Antmicro OCP data center secure control module arty-mpw-tester Antmicro Caravel fanout board fomu Tiny USB FPGA board icebreaker Low cost FPGA development board lpddr5-testbed Antmicro lpddr5 testbed PicoEVB M.2 80mm Artix FPGA evaluation board qomu-dev-board Quicklogic efpga USB dev board scalenode-cm4-baseboard Antmicro baseboard for RPI CM4 sodimm-ddr5-tester Antmicro ddr5 tester board Education Analog Design book-on-mos-stages Analysis and Design of Elementary MOS Amplifier Stages SiliWiz Browser based interactive circuit design tool. Board Design Digital Design cornell-ece4750 ECE 4750 Computer Architecture cornell-ece5745 ECE 5745 Complex Digital ASIC Design stanford-ee272a EE272A Design Projects in VLSI Systems I stanford-ee272b EE272B Design Projects in VLSI Systems II FPGA Design Other Awesome Lists ben-marshall Hardware verification computer-engineering-resources A curated list of Computer Engineering/Architecture resources delftopenhardware Open hardware materials drom HDL languages hdl Hardware description resources kicad-3rd-party-tools List of 3rd party KiCad software packages mattvenn ASIC resources pkuzjx Open source EDA resources semiconductor-startups Semiconductor startups;List of awesome open source hardware tools, generators, and reusable designs;[]
aolofsson/awesome-opensource-hardware
bellingcat/octosuite;A framework for gathering open-source intelligence on GitHub users, repositories and organisations Wiki Refer to the Wiki for installation instructions, in addition to all other documentation. Features [x] Fetches an organisation's profile information [x] Fetches an organisation's events [x] Returns an organisation's repositories [x] Returns an organisation's public members [x] Fetches a repository's information [x] Returns a repository's contributors [x] Returns a repository's languages [x] Fetches a repository's stargazers [x] Fetches a repository's forks [x] Fetches a repository's releases [x] Returns a list of files in a specified path of a repository [x] Fetches a user's profile information [x] Returns a user's gists [x] Returns organisations that a user owns/belongs to [x] Fetches a user's events [x] Fetches a list of users followed by the target [x] Fetches a user's followers [x] Checks if user A follows user B [x] Checks if a user is a public member of an organisation [x] Gets a user's subscriptions [x] Searches users [x] Searches repositories [x] Searches topics [x] Searches issues [x] Searches commits [x] Automatically logs network/user activity (.logs folder) [x] User can manage logs (view, read, delete) [x] Results can be saved to a .csv file (varies) [x] User can manage csv files (view, read, delete) [x] All the above can be used with command-line arguments (PyPI Package only) [x] And more... TODO [ ] Rewrite the GUI in Visual Basic .NET (in progress) Note Octosuite automatically logs network and user activity of each session; the logs are saved by date and time in the .logs folder License Credits The code used for finding emails from usernames is taken from Somdev Sangwan 's Zen Donations If you like OctoSuite and would like to show support, you can Buy A Coffee for the developer using the button below. Your support will be much appreciated😊;GitHub Data Analysis Framework.;github,data-analysis
bellingcat/octosuite
open-mmlab/mmrotate;OpenMMLab website HOT OpenMMLab platform TRY IT OUT [![PyPI](https://img.shields.io/pypi/v/mmrotate)](https://pypi.org/project/mmrotate) [![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmrotate.readthedocs.io/en/latest/) [![badge](https://github.com/open-mmlab/mmrotate/workflows/build/badge.svg)](https://github.com/open-mmlab/mmrotate/actions) [![codecov](https://codecov.io/gh/open-mmlab/mmrotate/branch/main/graph/badge.svg?token=RSXSIMEY4V)](https://codecov.io/gh/open-mmlab/mmrotate) [![license](https://img.shields.io/github/license/open-mmlab/mmrotate.svg)](https://github.com/open-mmlab/mmrotate/blob/main/LICENSE) [![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmrotate.svg)](https://github.com/open-mmlab/mmrotate/issues) [![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmrotate.svg)](https://github.com/open-mmlab/mmrotate/issues) [📘Documentation](https://mmrotate.readthedocs.io/en/latest/) | [🛠️Installation](https://mmrotate.readthedocs.io/en/latest/install.html) | [👀Model Zoo](https://mmrotate.readthedocs.io/en/latest/model_zoo.html) | [🆕Update News](https://mmrotate.readthedocs.io/en/latest/changelog.html) | [🚀Ongoing Projects](https://github.com/open-mmlab/mmrotate/projects) | [🤔Reporting Issues](https://github.com/open-mmlab/mmrotate/issues/new/choose) English | [简体中文](README_zh-CN.md) Introduction MMRotate is an open-source toolbox for rotated object detection based on PyTorch. It is a part of the OpenMMLab project . The master branch works with PyTorch 1.6+ . https://user-images.githubusercontent.com/10410257/154433305-416d129b-60c8-44c7-9ebb-5ba106d3e9d5.MP4 Major Features - **Support multiple angle representations** MMRotate provides three mainstream angle representations to meet different paper settings. - **Modular Design** We decompose the rotated object detection framework into different components, which makes it much easier and more flexible to build a new model by combining different modules. - **Strong baseline and State of the art** The toolbox provides strong baselines and state-of-the-art methods in rotated object detection. What's New Highlight We are excited to announce our latest work on real-time object recognition tasks, RTMDet , a family of fully convolutional single-stage detectors. RTMDet not only achieves the best parameter-accuracy trade-off on object detection from tiny to extra-large model sizes but also obtains new state-of-the-art performance on instance segmentation and rotated object detection tasks. Details can be found in the technical report . Pre-trained models are here . | Task | Dataset | AP | FPS(TRT FP16 BS1 3090) | | ------------------------ | ------- | ------------------------------------ | ---------------------- | | Object Detection | COCO | 52.8 | 322 | | Instance Segmentation | COCO | 44.6 | 188 | | Rotated Object Detection | DOTA | 78.9(single-scale)/81.3(multi-scale) | 121 | 0.3.4 was released in 01/02/2023: Fix compatibility with numpy, scikit-learn, and e2cnn. Support empty patch in Rotate Transform. Use iof for RRandomCrop validation. Please refer to changelog.md for details and release history. Installation MMRotate depends on PyTorch , MMCV and MMDetection . Below are quick steps for installation. Please refer to Install Guide for more detailed instructions. 
shell conda create -n open-mmlab python=3.7 pytorch==1.7.0 cudatoolkit=10.1 torchvision -c pytorch -y conda activate open-mmlab pip install openmim mim install mmcv-full mim install mmdet git clone https://github.com/open-mmlab/mmrotate.git cd mmrotate pip install -r requirements/build.txt pip install -v -e . Get Started Please see get_started.md for the basic usage of MMRotate. We provide a colab tutorial , and other tutorials for: learn the basics learn the config customize dataset customize model useful tools Model Zoo Results and models are available in the README.md of each method's config directory. A summary can be found in the Model Zoo page. Supported algorithms: - [x] [Rotated RetinaNet-OBB/HBB](configs/rotated_retinanet/README.md) (ICCV'2017) - [x] [Rotated FasterRCNN-OBB](configs/rotated_faster_rcnn/README.md) (TPAMI'2017) - [x] [Rotated RepPoints-OBB](configs/rotated_reppoints/README.md) (ICCV'2019) - [x] [Rotated FCOS](configs/rotated_fcos/README.md) (ICCV'2019) - [x] [RoI Transformer](configs/roi_trans/README.md) (CVPR'2019) - [x] [Gliding Vertex](configs/gliding_vertex/README.md) (TPAMI'2020) - [x] [Rotated ATSS-OBB](configs/rotated_atss/README.md) (CVPR'2020) - [x] [CSL](configs/csl/README.md) (ECCV'2020) - [x] [R 3 Det](configs/r3det/README.md) (AAAI'2021) - [x] [S 2 A-Net](configs/s2anet/README.md) (TGRS'2021) - [x] [ReDet](configs/redet/README.md) (CVPR'2021) - [x] [Beyond Bounding-Box](configs/cfa/README.md) (CVPR'2021) - [x] [Oriented R-CNN](configs/oriented_rcnn/README.md) (ICCV'2021) - [x] [GWD](configs/gwd/README.md) (ICML'2021) - [x] [KLD](configs/kld/README.md) (NeurIPS'2021) - [x] [SASM](configs/sasm_reppoints/README.md) (AAAI'2022) - [x] [Oriented RepPoints](configs/oriented_reppoints/README.md) (CVPR'2022) - [x] [KFIoU](configs/kfiou/README.md) (arXiv) - [x] [G-Rep](configs/g_reppoints/README.md) (stay tuned) Data Preparation Please refer to data_preparation.md to prepare the data. FAQ Please refer to FAQ for frequently asked questions. Contributing We appreciate all contributions to improve MMRotate. Please refer to CONTRIBUTING.md for the contributing guideline. Acknowledgement MMRotate is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new methods. Citation If you use this toolbox or benchmark in your research, please cite this project. bibtex @inproceedings{zhou2022mmrotate, title = {MMRotate: A Rotated Object Detection Benchmark using PyTorch}, author = {Zhou, Yue and Yang, Xue and Zhang, Gefan and Wang, Jiabao and Liu, Yanyi and Hou, Liping and Jiang, Xue and Liu, Xingzhao and Yan, Junchi and Lyu, Chengqi and Zhang, Wenwei and Chen, Kai}, booktitle={Proceedings of the 30th ACM International Conference on Multimedia}, year={2022} } License This project is released under the Apache 2.0 license . Projects in OpenMMLab MMCV : OpenMMLab foundational library for computer vision. MIM : MIM installs OpenMMLab packages. MMClassification : OpenMMLab image classification toolbox and benchmark. MMDetection : OpenMMLab detection toolbox and benchmark. MMDetection3D : OpenMMLab's next-generation platform for general 3D object detection. MMRotate : OpenMMLab rotated object detection toolbox and benchmark. 
MMSegmentation : OpenMMLab semantic segmentation toolbox and benchmark. MMOCR : OpenMMLab text detection, recognition, and understanding toolbox. MMPose : OpenMMLab pose estimation toolbox and benchmark. MMHuman3D : OpenMMLab 3D human parametric model toolbox and benchmark. MMSelfSup : OpenMMLab self-supervised learning toolbox and benchmark. MMRazor : OpenMMLab model compression toolbox and benchmark. MMFewShot : OpenMMLab fewshot learning toolbox and benchmark. MMAction2 : OpenMMLab's next-generation action understanding toolbox and benchmark. MMTracking : OpenMMLab video perception toolbox and benchmark. MMFlow : OpenMMLab optical flow toolbox and benchmark. MMEditing : OpenMMLab image and video editing toolbox. MMGeneration : OpenMMLab image and video generative models toolbox. MMDeploy : OpenMMLab model deployment framework.;OpenMMLab Rotated Object Detection Toolbox and Benchmark;rotated-object,pytorch,openmmlab,detection
open-mmlab/mmrotate
pojntfx/weron;weron Overlay networks based on WebRTC. ⚠️ weron has not yet been audited! While we try to make weron as secure as possible, it has not yet undergone a formal security audit by a third party. Please keep this in mind if you use it for security-critical applications. ⚠️ Overview weron provides lean, fast & secure overlay networks based on WebRTC. It enables you to ... Access nodes behind NAT : Because weron uses WebRTC to establish connections between nodes, it can easily traverse corporate firewalls and NATs using STUN, or even use a TURN server to tunnel traffic. This can be very useful, for example, for SSHing into your homelab without forwarding any ports on your router. Secure your home network : Due to the relatively low overhead of WebRTC in low-latency networks, weron can be used to secure traffic between nodes in a LAN without a significant performance hit. Join local nodes into a cloud network : If you run, for example, a Kubernetes cluster with nodes based on cloud instances but also want to join your on-prem nodes into it, you can use weron to create a trusted network. Bypass censorship : The underlying WebRTC suite, which is what popular videoconferencing tools such as Zoom, Teams and Meet are built on, is hard to block on a network level, making it a valuable addition to your toolbox for bypassing state or corporate censorship. Write your own peer-to-peer protocols : The simple API makes writing distributed applications with automatic reconnects, multiple datachannels, etc. easy. Installation Containerized You can get the OCI image like so: shell $ podman pull ghcr.io/pojntfx/weron Natively Static binaries are available on GitHub releases . On Linux, you can install them like so: shell $ curl -L -o /tmp/weron "https://github.com/pojntfx/weron/releases/latest/download/weron.linux-$(uname -m)" $ sudo install /tmp/weron /usr/local/bin $ sudo setcap cap_net_admin+ep /usr/local/bin/weron # This allows rootless execution On macOS, you can use the following: shell $ curl -L -o /tmp/weron "https://github.com/pojntfx/weron/releases/latest/download/weron.darwin-$(uname -m)" $ sudo install /tmp/weron /usr/local/bin On Windows, the following should work (using PowerShell as administrator): shell PS> Invoke-WebRequest https://github.com/pojntfx/weron/releases/latest/download/weron.windows-x86_64.exe -OutFile \Windows\System32\weron.exe You can find binaries for more operating systems and architectures on GitHub releases . Usage TL;DR: Join a layer 3 (IP) overlay network on the hosted signaling server with sudo weron vpn ip --community mycommunity --password mypassword --key mykey --ips 2001:db8::1/32,192.0.2.1/24 and a layer 2 (Ethernet) overlay network with sudo weron vpn ethernet --community mycommunity --password mypassword --key mykey 1. Start a Signaling Server with weron signaler The signaling server connects peers with each other by exchanging connection information between them. It also manages access to communities through the --password flag of clients and can maintain persistent communities even after all peers have disconnected. While it is possible and reasonably private (in addition to TLS, connection information is encrypted using the --key flag of clients) to use the hosted signaling server at wss://weron.up.railway.app/ , hosting it yourself has many benefits, such as lower latency and even better privacy.
The signaling server can use an in-process broker with an in-memory database or Redis and PostgreSQL; for production use, the latter configuration is strongly recommended, as it allows you to easily scale the signaling server horizontally. This is particularly important if you want to scale your server infrastructure across multiple continents, as intra-cloud backbones usually have lower latency than residential connections, which reduces the amount of time required to connect peers with each other. Expand containerized instructions ```shell $ sudo podman network create weron $ sudo podman run -d --restart=always --label "io.containers.autoupdate=image" --name weron-postgres --network weron -e POSTGRES_HOST_AUTH_METHOD=trust -e POSTGRES_DB=weron_communities postgres $ sudo podman generate systemd --new weron-postgres | sudo tee /lib/systemd/system/weron-postgres.service $ sudo podman run -d --restart=always --label "io.containers.autoupdate=image" --name weron-redis --network weron redis $ sudo podman generate systemd --new weron-redis | sudo tee /lib/systemd/system/weron-redis.service $ sudo podman run -d --restart=always --label "io.containers.autoupdate=image" --name weron-signaler --network weron -p 1337:1337 -e DATABASE_URL='postgres://postgres@weron-postgres:5432/weron_communities?sslmode=disable' -e REDIS_URL='redis://weron-redis:6379/1' -e API_PASSWORD='myapipassword' ghcr.io/pojntfx/weron:unstable weron signaler $ sudo podman generate systemd --new weron-signaler | sudo tee /lib/systemd/system/weron-signaler.service $ sudo systemctl daemon-reload $ sudo systemctl enable --now weron-postgres $ sudo systemctl enable --now weron-redis $ sudo systemctl enable --now weron-signaler $ sudo firewall-cmd --permanent --add-port=1337/tcp $ sudo firewall-cmd --reload ``` Expand native instructions ```shell sudo podman run -d --restart=always --label "io.containers.autoupdate=image" --name weron-postgres -e POSTGRES_HOST_AUTH_METHOD=trust -e POSTGRES_DB=weron_communities -p 127.0.0.1:5432:5432 postgres sudo podman generate systemd --new weron-postgres | sudo tee /lib/systemd/system/weron-postgres.service sudo podman run -d --restart=always --label "io.containers.autoupdate=image" --name weron-redis -p 127.0.0.1:6379:6379 redis sudo podman generate systemd --new weron-redis | sudo tee /lib/systemd/system/weron-redis.service sudo tee /etc/systemd/system/weron-signaler.service<<'EOT' [Unit] Description=weron Signaling Server After=weron-postgres.service weron-redis.service [Service] ExecStart=/usr/local/bin/weron signaler --verbose=7 Environment="DATABASE_URL=postgres://postgres@localhost:5432/weron_communities?sslmode=disable" Environment="REDIS_URL=redis://localhost:6379/1" Environment="API_PASSWORD=myapipassword" [Install] WantedBy=multi-user.target EOT sudo systemctl daemon-reload sudo systemctl restart weron-postgres sudo systemctl restart weron-redis sudo systemctl restart weron-signaler sudo firewall-cmd --permanent --add-port=1337/tcp sudo firewall-cmd --reload ``` It should now be reachable on ws://localhost:1337/ . To use it in production, put this signaling server behind a TLS-enabled reverse proxy such as Caddy or Traefik . You may also want to either keep API_PASSWORD empty to disable the management API completely or use OpenID Connect to authenticate instead; for more information, see the signaling server reference . You can also embed the signaling server in your own application using its Go API . 2.
Manage Communities with weron manager While it is possible to create ephemeral communities on a signaling server without any kind of authorization, you probably want to create a persistent community for most applications. Ephemeral communities get created and deleted automatically as clients join or leave, persistent communities will never get deleted automatically. You can manage these communities using the manager CLI. If you want to work on your self-hosted signaling server, first set the remote address: shell $ export WERON_RADDR='http://localhost:1337/' Next, set the API password using the API_PASSWORD env variable: shell $ export API_PASSWORD='myapipassword' If you use OIDC to authenticate, you can instead set the API password using goit like so: shell $ export OIDC_CLIENT_ID='Ab7OLrQibhXUzKHGWYDFieLa2KqZmFzb' OIDC_ISSUER='https://pojntfx.eu.auth0.com/' OIDC_REDIRECT_URL='http://localhost:11337' $ export API_KEY="$(goit)" If we now list the communities, we see that none currently exist: shell $ weron manager list id,clients,persistent We can create a persistent community using weron create : shell $ weron manager create --community mycommunity --password mypassword id,clients,persistent mycommunity,0,true It is also possible to delete communities using weron delete , which will also disconnect all joined peers: shell $ weron manager delete --community mycommunity For more information, see the manager reference . You can also embed the manager in your own application using its Go API . 3. Test the System with weron chat If you want to work on your self-hosted signaling server, first set the remote address: shell $ export WERON_RADDR='ws://localhost:1337/' The chat is an easy way to test if everything is working correctly. To join a chatroom, run the following: shell $ weron chat --community mycommunity --password mypassword --key mykey --names user1,user2,user3 --channels one,two,three On another peer, run the following (if your signaling server is public, you can run this anywhere on the planet): shell $ weron chat --community mycommunity --password mypassword --key mykey --names user1,user2,user3 --channels one,two,three .wss://weron.up.railway.app/ user2! +user1@one +user1@two +user1@three user2> You can now start sending and receiving messages or add new peers to your chatroom to test the network. For more information, see the chat reference . You can also embed the chat in your own application using its Go API . 4. Measure Latency with weron utility latency An insightful metric of your network is its latency, which you can measure with this utility; think of this as ping , but for WebRTC. First, start the latency measurement server like so: shell $ weron utility latency --community mycommunity --password mypassword --key mykey --server On another peer, launch the client, which should start measuring the latency immediately; press CTRL C to stop it and get the total statistics: ```shell $ weron utility latency --community mycommunity --password mypassword --key mykey ... 128 B written and acknowledged in 110.111µs 128 B written and acknowledged in 386.12µs 128 B written and acknowledged in 310.458µs 128 B written and acknowledged in 335.341µs 128 B written and acknowledged in 264.149µs ^CAverage latency: 281.235µs (5 packets written) Min: 110.111µs Max: 386.12µs ``` For more information, see the latency measurement utility reference . You can also embed the utility in your own application using its Go API . 5. 
Measure Throughput with weron utility throughput If you want to transfer large amounts of data, your network's throughput is a key characteristic. This utility allows you to measure this metric between two nodes; think of it as iperf , but for WebRTC. First, start the throughput measurement server like so: shell $ weron utility throughput --community mycommunity --password mypassword --key mykey --server On another peer, launch the client, which should start measuring the throughput immediately; press CTRL C to stop it and get the total statistics: ```shell $ weron utility throughput --community mycommunity --password mypassword --key mykey ... 97.907 MB/s (783.253 Mb/s) (50 MB read in 510.690403ms) 64.844 MB/s (518.755 Mb/s) (50 MB read in 771.076908ms) 103.360 MB/s (826.881 Mb/s) (50 MB read in 483.745832ms) 89.335 MB/s (714.678 Mb/s) (50 MB read in 559.692495ms) 85.582 MB/s (684.657 Mb/s) (50 MB read in 584.233931ms) ^CAverage throughput: 74.295 MB/s (594.359 Mb/s) (250 MB written in 3.364971672s) Min: 64.844 MB/s Max: 103.360 MB/s ``` For more information, see the throughput measurement utility reference . You can also embed the utility in your own application using its Go API . 6. Create a Layer 3 (IP) Overlay Network with weron vpn ip If you want to join multiple nodes into an overlay network, the IP VPN is the best choice. It works similarly to e.g. Tailscale/WireGuard and can either dynamically allocate an IP address from a network given in CIDR notation or statically assign one for you. On Windows, make sure to install TAP-Windows first. Also note that due to technical limitations, only one IPv4 or IPv6 network and only one VPN instance at a time is supported on Windows; on macOS, only IPv6 networks are supported and IPv4 networks are ignored. To get started, launch the VPN on the first peer: shell $ sudo weron vpn ip --community mycommunity --password mypassword --key mykey --ips 2001:db8::1/64,192.0.2.1/24 {"level":"info","addr":"wss://weron.up.railway.app/","time":"2022-05-06T22:20:51+02:00","message":"Connecting to signaler"} {"level":"info","id":"[\"2001:db8::6a/64\",\"192.0.2.107/24\"]","time":"2022-05-06T22:20:56+02:00","message":"Connected to signaler"} On another peer, launch the VPN as well: shell $ sudo weron vpn ip --community mycommunity --password mypassword --key mykey --ips 2001:db8::1/64,192.0.2.1/24 {"level":"info","addr":"wss://weron.up.railway.app/","time":"2022-05-06T22:22:30+02:00","message":"Connecting to signaler"} {"level":"info","id":"[\"2001:db8::b9/64\",\"192.0.2.186/24\"]","time":"2022-05-06T22:22:36+02:00","message":"Connected to signaler"} {"level":"info","id":"[\"2001:db8::6a/64\",\"192.0.2.107/24\"]","time":"2022-05-06T22:22:36+02:00","message":"Connected to peer"} You can now communicate between the peers: shell $ ping 2001:db8::b9 PING 2001:db8::b9(2001:db8::b9) 56 data bytes 64 bytes from 2001:db8::b9: icmp_seq=1 ttl=64 time=1.07 ms 64 bytes from 2001:db8::b9: icmp_seq=2 ttl=64 time=1.36 ms 64 bytes from 2001:db8::b9: icmp_seq=3 ttl=64 time=1.20 ms 64 bytes from 2001:db8::b9: icmp_seq=4 ttl=64 time=1.10 ms ^C --- 2001:db8::b9 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3002ms rtt min/avg/max/mdev = 1.066/1.180/1.361/0.114 ms If you temporarily lose the network connection, the network topology changes, etc., it will automatically reconnect. For more information and limitations on proprietary operating systems like macOS, see the IP VPN reference . You can also embed the utility in your own application using its Go API . 7.
Create a Layer 2 (Ethernet) Overlay Network with weron vpn ethernet If you want more flexibility or work on non-IP networks, the Ethernet VPN is a good choice. It works similarly to n2n or ZeroTier. Due to API restrictions, this VPN type is not available on macOS ; use Asahi Linux , a computer that respects your freedoms or the layer 3 (IP) VPN instead. To get started, launch the VPN on the first peer: shell $ sudo weron vpn ethernet --community mycommunity --password mypassword --key mykey {"level":"info","addr":"wss://weron.up.railway.app/","time":"2022-05-06T22:42:10+02:00","message":"Connecting to signaler"} {"level":"info","id":"fe:60:a5:8b:81:36","time":"2022-05-06T22:42:11+02:00","message":"Connected to signaler"} If you want to add an IP address to the TAP interface, do so with iproute2 or your OS tools: shell $ sudo ip addr add 192.0.2.1/24 dev tap0 $ sudo ip addr add 2001:db8::1/32 dev tap0 On another peer, launch the VPN as well: shell $ sudo weron vpn ethernet --community mycommunity --password mypassword --key mykey {"level":"info","addr":"wss://weron.up.railway.app/","time":"2022-05-06T22:52:56+02:00","message":"Connecting to signaler"} {"level":"info","id":"b2:ac:ae:b6:32:8c","time":"2022-05-06T22:52:57+02:00","message":"Connected to signaler"} {"level":"info","id":"fe:60:a5:8b:81:36","time":"2022-05-06T22:52:57+02:00","message":"Connected to peer"} And add the IP addresses: shell $ sudo ip addr add 192.0.2.2/24 dev tap0 $ sudo ip addr add 2001:db8::2/32 dev tap0 You can now communicate between the peers: shell $ ping 2001:db8::2 PING 2001:db8::2(2001:db8::2) 56 data bytes 64 bytes from 2001:db8::2: icmp_seq=1 ttl=64 time=1.20 ms 64 bytes from 2001:db8::2: icmp_seq=2 ttl=64 time=1.14 ms 64 bytes from 2001:db8::2: icmp_seq=3 ttl=64 time=1.24 ms ^C --- 2001:db8::2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2002ms rtt min/avg/max/mdev = 1.136/1.193/1.239/0.042 ms If you temporarily lose the network connection, the network topology changes etc. it will automatically reconnect. You can also embed the utility in your own application using its Go API . 8. Write your own protocol with wrtcconn It is almost trivial to build your own distributed applications with weron, similarly to how PeerJS works. Here is the core logic behind a simple echo example: ```go // ... for { select { case id := <-ids: log.Println("Connected to signaler", id) case peer := <-adapter.Accept(): log.Println("Connected to peer", peer.PeerID, "and channel", peer.ChannelID) go func() { defer func() { log.Println("Disconnected from peer", peer.PeerID, "and channel", peer.ChannelID) }() reader := bufio.NewScanner(peer.Conn) for reader.Scan() { log.Printf("%s", reader.Bytes()) } }() go func() { for { if _, err := peer.Conn.Write([]byte("Hello!\n")); err != nil { return } time.Sleep(time.Second) } }() } } ``` You can either use the minimal adapter or the named adapter ; the latter negotiates a username between the peers, while the former does not check for duplicates. For more information, check out the Go API and take a look at the provided examples , utilities and services in the package for examples. 🚀 That's it! We hope you enjoy using weron. Reference Command Line Arguments ```shell $ weron --help Overlay networks based on WebRTC. 
Find more information at: https://github.com/pojntfx/weron Usage: weron [command] Available Commands: chat Chat over the overlay network completion Generate the autocompletion script for the specified shell help Help about any command manager Manage a signaling server signaler Start a signaling server utility Utilities for overlay networks vpn Join virtual private networks built on overlay networks Flags: -h, --help help for weron -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) Use "weron [command] --help" for more information about a command. ``` Expand subcommand reference #### Signaling Server ```shell $ weron signaler --help Start a signaling server Usage: weron signaler [flags] Aliases: signaler, sgl, s Flags: --api-password string Password for the management API (can also be set using the API_PASSWORD env variable). Ignored if any of the OIDC parameters are set. --api-username string Username for the management API (can also be set using the API_USERNAME env variable). Ignored if any of the OIDC parameters are set. (default "admin") --cleanup (Warning: Only enable this after stopping all other servers accessing the database!) Remove all ephemeral communities from database and reset client counts before starting --ephemeral-communities Enable the creation of ephemeral communities (default true) --heartbeat duration Time to wait for heartbeats (default 10s) -h, --help help for signaler --laddr string Listening address (can also be set using the PORT env variable) (default ":1337") --oidc-client-id string OIDC Client ID (i.e. myoidcclientid) (can also be set using the OIDC_CLIENT_ID env variable) --oidc-issuer string OIDC Issuer (i.e. https://pojntfx.eu.auth0.com/) (can also be set using the OIDC_ISSUER env variable) --postgres-url string URL of PostgreSQL database to use (i.e. postgres://myuser:mypassword@myhost:myport/mydatabase) (can also be set using the DATABASE_URL env variable). If empty, a in-memory database will be used. --redis-url string URL of Redis database to use (i.e. redis://myuser:mypassword@myhost:myport/1) (can also be set using the REDIS_URL env variable). If empty, a in-process broker will be used. Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) ``` #### Manager ```shell $ weron manager --help Manage a signaling server Usage: weron manager [command] Aliases: manager, mgr, m Available Commands: create Create a persistent community delete Delete a persistent or ephemeral community list List persistent and ephemeral communities Flags: -h, --help help for manager Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) Use "weron manager [command] --help" for more information about a command. ``` #### Chat ```shell $ weron chat --help Chat over the overlay network Usage: weron chat [flags] Aliases: chat, cht, c Flags: --channels strings Comma-separated list of channels in community to join (default [weron/chat/primary]) --community string ID of community to join --force-relay Force usage of TURN servers -h, --help help for chat --ice strings Comma-separated list of STUN servers (in format stun:host:port) and TURN servers to use (in format username:credential@turn:host:port) (i.e. 
username:credential@turn:global.turn.twilio.com:3478?transport=tcp) (default [stun:stun.l.google.com:19302]) --id-channel string Channel to use to negotiate names (default "weron/chat/id") --key string Encryption key for community --kicks duration Time to wait for kicks (default 5s) --names strings Comma-separated list of names to try and claim one from --password string Password for community --raddr string Remote address (default "wss://weron.up.railway.app/") --timeout duration Time to wait for connections (default 10s) Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) ``` #### Latency Measurement Utility ```shell $ weron utility latency --help Measure the latency of the overlay network Usage: weron utility latency [flags] Aliases: latency, ltc, l Flags: --community string ID of community to join --force-relay Force usage of TURN servers -h, --help help for latency --ice strings Comma-separated list of STUN servers (in format stun:host:port) and TURN servers to use (in format username:credential@turn:host:port) (i.e. username:credential@turn:global.turn.twilio.com:3478?transport=tcp) (default [stun:stun.l.google.com:19302]) --key string Encryption key for community --packet-length int Size of packet to send and acknowledge (default 128) --password string Password for community --pause duration Time to wait before sending next packet (default 1s) --raddr string Remote address (default "wss://weron.up.railway.app/") --server Act as a server --timeout duration Time to wait for connections (default 10s) Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) ``` #### Throughput Measurement Utility ```shell $ weron utility throughput --help Measure the throughput of the overlay network Usage: weron utility throughput [flags] Aliases: throughput, thr, t Flags: --community string ID of community to join --force-relay Force usage of TURN servers -h, --help help for throughput --ice strings Comma-separated list of STUN servers (in format stun:host:port) and TURN servers to use (in format username:credential@turn:host:port) (i.e. username:credential@turn:global.turn.twilio.com:3478?transport=tcp) (default [stun:stun.l.google.com:19302]) --key string Encryption key for community --packet-count int Amount of packets to send before waiting for acknowledgement (default 1000) --packet-length int Size of packet to send (default 50000) --password string Password for community --raddr string Remote address (default "wss://weron.up.railway.app/") --server Act as a server --timeout duration Time to wait for connections (default 10s) Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) ``` #### Layer 3 (IP) Overlay Networks ```shell $ weron vpn ip --help Join a layer 3 overlay network Usage: weron vpn ip [flags] Aliases: ip, i Flags: --community string ID of community to join --dev string Name to give to the TUN device (i.e. weron0) (default is auto-generated; only supported on Linux) --force-relay Force usage of TURN servers -h, --help help for ip --ice strings Comma-separated list of STUN servers (in format stun:host:port) and TURN servers to use (in format username:credential@turn:host:port) (i.e. 
username:credential@turn:global.turn.twilio.com:3478?transport=tcp) (default [stun:stun.l.google.com:19302]) --id-channel string Channel to use to negotiate names (default "weron/ip/id") --ips strings Comma-separated list of IP networks to claim an IP address from and and give to the TUN device (i.e. 2001:db8::1/32,192.0.2.1/24) (on Windows, only one IP network (either IPv4 or IPv6) is supported; on macOS, IPv4 networks are ignored) --key string Encryption key for community --kicks duration Time to wait for kicks (default 5s) --max-retries int Maximum amount of times to try and claim an IP address (default 200) --parallel int Amount of threads to use to decode frames (default 20) --password string Password for community --raddr string Remote address (default "wss://weron.up.railway.app/") --static Try to claim the exact IPs specified in the --ips flag statically instead of selecting a random one from the specified network --timeout duration Time to wait for connections (default 10s) Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) ``` #### Layer 2 (Ethernet) Overlay Networks ```shell $ weron vpn ethernet --help Join a layer 2 overlay network Usage: weron vpn ethernet [flags] Aliases: ethernet, eth, e Flags: --community string ID of community to join --dev string Name to give to the TAP device (i.e. weron0) (default is auto-generated; only supported on Linux and macOS) --force-relay Force usage of TURN servers -h, --help help for ethernet --ice strings Comma-separated list of STUN servers (in format stun:host:port) and TURN servers to use (in format username:credential@turn:host:port) (i.e. username:credential@turn:global.turn.twilio.com:3478?transport=tcp) (default [stun:stun.l.google.com:19302]) --key string Encryption key for community --mac string MAC address to give to the TAP device (i.e. 3a:f8:de:7b:ef:52) (default is auto-generated; only supported on Linux) --parallel int Amount of threads to use to decode frames (default 20) --password string Password for community --raddr string Remote address (default "wss://weron.up.railway.app/") --timeout duration Time to wait for connections (default 10s) Global Flags: -v, --verbose int Verbosity level (0 is disabled, default is info, 7 is trace) (default 5) ``` Environment Variables All command line arguments described above can also be set using environment variables; for example, to set --max-retries to 300 with an environment variable, use WERON_MAX_RETRIES=300 . Acknowledgements songgao/water provides the TUN/TAP device library for weron. pion/webrtc provides the WebRTC functionality. Contributing To contribute, please use the GitHub flow and follow our Code of Conduct . To build and start a development version of weron locally, run the following: ```shell $ git clone https://github.com/pojntfx/weron.git $ cd weron $ make depend $ make && sudo make install $ weron signal # Starts the signaling server In another terminal $ weron chat --raddr ws://localhost:1337 --community mycommunity --password mypassword --key mykey --names user1,user2,user3 --channels one,two,three In another terminal $ weron chat --raddr ws://localhost:1337 --community mycommunity --password mypassword --key mykey --names user1,user2,user3 --channels one,two,three ``` Of course, you can also contribute to the utilities and VPNs like this. Have any questions or need help? Chat with us on Matrix ! 
License weron (c) 2023 Felicitas Pojtinger and contributors SPDX-License-Identifier: AGPL-3.0;Overlay networks based on WebRTC.;golang,nat,networking,overlay-network,p2p,pion,tuntap,vpn,webrtc
pojntfx/weron
editablejs/editable;Editable Editable is an extensible rich text editor framework that focuses on stability, controllability, and performance. To achieve this, we did not use the native editable attribute ~~contenteditable~~ , but instead used a custom renderer that allows us to better control the editor's behavior. From now on, you no longer have to worry about cross-platform and browser compatibility issues (such as Selection , Input ), just focus on your business logic. preview You can see a demo here: https://docs.editablejs.com/playground Why not use canvas rendering? Although canvas rendering may be faster than DOM rendering in terms of performance, the development experience of canvas is not good and requires writing more code. Why use React for rendering? React makes plugins more flexible and has a good ecosystem. However, React's performance is not as good as native DOM. My ideal frontend framework for rich text would look like this: No virtual DOM No diff algorithm No proxy object Therefore, I compared frontend frameworks such as Vue , Solid-js , and SvelteJS and found that Solid-js meets the first two criteria, but each property is wrapped in a proxy , which may cause problems when comparing with pure JS objects using === during extension development. To improve performance, we are likely to refactor it for native DOM rendering in future development. Currently, React meets the first two of the following four standards: [x] Development experience [x] Plugin extensibility [ ] Cross-frontend compatibility [ ] Rendering performance In the subsequent refactoring selection, we will try to balance these four standards as much as possible. Quick Start Currently, you still need to use it with React, but we will refactor it for native DOM rendering in future versions. Install @editablejs/models and @editablejs/editor dependencies: bash npm i --save @editablejs/models @editablejs/editor Here's a minimal text editor that you can edit: ```tsx import * as React from 'react' import { createEditor } from '@editablejs/models' import { EditableProvider, ContentEditable, withEditable } from '@editablejs/editor' const App = () => { const editor = React.useMemo(() => withEditable(createEditor()), []) return ( <EditableProvider editor={editor}> <ContentEditable /> </EditableProvider> ) } ``` Data Model @editablejs/models provides a data model for describing the state of the editor and operations on the editor state. ts { type: 'paragraph', children: [ { type: 'text', text: 'Hello World' } ] } As you can see, its structure is very similar to Slate , and we did not create a new data model, but directly used Slate's data model and extended it (added Grid , List related data structures and operations). Relying on these mature and excellent data structures makes our editor more stable. We have encapsulated all of Slate's APIs into @editablejs/models , so you can find all of Slate's APIs in @editablejs/models. If you are not familiar with Slate, you can refer to its documentation: https://docs.slatejs.org/ Plugins Currently, we provide some out-of-the-box plugins that not only implement basic functionality, but also provide support for keyboard shortcuts , Markdown syntax , Markdown serialization , Markdown deserialization , HTML serialization , and HTML deserialization . Common Plugins @editablejs/plugin-context-menu provides a right-click menu. Since we do not use some of the functionality of the native contenteditable menu, we need to define our own right-click menu functionality.
@editablejs/plugin-align for text alignment @editablejs/plugin-blockquote for block quotes @editablejs/plugin-codeblock for code blocks @editablejs/plugin-font includes font color, background color, and font size @editablejs/plugin-heading for headings @editablejs/plugin-hr for horizontal lines @editablejs/plugin-image for images @editablejs/plugin-indent for indentation @editablejs/plugin-leading for line spacing @editablejs/plugin-link for links @editablejs/plugin-list includes ordered lists, unordered lists, and task lists @editablejs/plugin-mark includes bold , italic , strikethrough , underline , superscript , subscript , and code @editablejs/plugin-mention for mentions @editablejs/plugin-table for tables Using a single plugin, taking plugin-mark as an example: ```tsx import { withMark } from '@editablejs/plugin-mark' const editor = React.useMemo(() => { const editor = withEditable(createEditor()) return withMark(editor) }, []) ``` You can also use the following method to quickly use the above common plugins via withPlugins in @editablejs/plugins : ```tsx import { withPlugins } from '@editablejs/plugins' const editor = React.useMemo(() => { const editor = withEditable(createEditor()) return withPlugins(editor) }, []) ``` History Plugin The @editablejs/plugin-history plugin provides undo and redo functionality. ```tsx import { withHistory } from '@editablejs/plugin-history' const editor = React.useMemo(() => { const editor = withEditable(createEditor()) return withHistory(editor) }, []) ``` Title Plugin When developing document or blog applications, we usually have a separate title and main content, which is often implemented using an input or textarea outside of the editor. In a collaborative environment, since such a title is independent of the editor, additional work is required to synchronize it in real time. The @editablejs/plugin-title plugin solves this problem by using the editor's first child node as the title, integrating it into the editor's entire data structure so that it can have the same features as the editor. tsx import { withTitle } from '@editablejs/plugin-title' const editor = React.useMemo(() => { const editor = withEditable(createEditor()) return withTitle(editor) }, []) It also has a separate placeholder property for setting the placeholder for the title. tsx return withTitle(editor, { placeholder: 'Please enter a title' }) Yjs Plugin The @editablejs/plugin-yjs plugin provides support for Yjs, which can synchronize the editor's data in real-time to other clients. You need to install the following dependencies: yjs The core library of Yjs @editablejs/yjs-websocket Yjs websocket communication library In addition, @editablejs/yjs-websocket provides a Node.js server implementation, which you can use to set up a yjs service: ```ts import startServer from '@editablejs/yjs-websocket/server' startServer() ``` - @editablejs/plugin-yjs Yjs plugin used with the editor bash npm i yjs @editablejs/yjs-websocket @editablejs/plugin-yjs Instructions: ```tsx import * as Y from 'yjs' import { withYHistory, withYjs, YjsEditor, withYCursors, CursorData, useRemoteStates } from '@editablejs/plugin-yjs' import { WebsocketProvider } from '@editablejs/yjs-websocket' // Create a yjs document const document = React.useMemo(() => new Y.Doc(), []) // Create a websocket provider const provider = React.useMemo(() => { return typeof window === 'undefined' ?
null : new WebsocketProvider(yjsServiceAddress, 'editable', document, { connect: false, }) }, [document]) // Create an editor const editor = React.useMemo(() => { // Get the content field from yjs document, which is of type XmlText const sharedType = document.get('content', Y.XmlText) as Y.XmlText let editor = withYjs(withEditable(createEditor()), sharedType, { autoConnect: false }) if (provider) { // Synchronize cursors with other clients editor = withYCursors(editor, provider.awareness, { data: { name: 'Test User', color: '#f00', }, }) } // History record editor = withHistory(editor) // yjs history record editor = withYHistory(editor) return editor }, [provider]) // Connect to yjs service React.useEffect(() => { provider?.connect() return () => { provider?.disconnect() } }, [provider]) ``` Custom Plugin Creating a custom plugin is very simple. We just need to intercept the renderElement method, and then determine if the current node is the one we need. If it is, we will render our custom component. An example of a custom plugin: ```tsx import { Editable } from '@editablejs/editor' import { Element, Editor } from '@editablejs/models' // Define the type of the plugin export interface MyPlugin extends Element { type: 'my-plugin' // ... You can also define other properties } export const MyPlugin = { // Determine if a node is a plugin for MyPlugin isMyPlugin(editor: Editor, element: Element): element is MyPlugin { return Element.isElement(element) && element.type === 'my-plugin' } } export const withMyPlugin = <T extends Editable>(editor: T) => { const { isVoid, renderElement } = editor // Intercept the isVoid method. If it is a node for MyPlugin, return true // Besides the isVoid method, there are also methods such as `isBlock` `isInline`, which can be intercepted as needed. editor.isVoid = element => { return MyPlugin.isMyPlugin(editor, element) || isVoid(element) } // Intercept the renderElement method. If it is a node for MyPlugin, render the custom component // attributes are the attributes of the node, we need to pass them to the custom component // children are the child nodes of the current node. We must render them // element is the current node, and you can find your custom properties in it editor.renderElement = ({ attributes, children, element }) => { if (MyPlugin.isMyPlugin(editor, element)) { return <div {...attributes}>My Plugin {children}</div> } return renderElement({ attributes, children, element }) } return editor } ``` Serialization @editablejs/serializer provides a serializer that can serialize editor data into html , text , and markdown formats. The serialization transformers for the plugins provided have already been implemented, so you can use them directly.
HTML Serialization ```tsx // html serializer import { HTMLSerializer } from '@editablejs/serializer/html' // import the HTML serializer transformer of the plugin-mark plugin, and other plugins are the same import { withMarkHTMLSerializerTransform } from '@editablejs/plugin-mark/serializer/html' // use the transformer HTMLSerializer.withEditor(editor, withMarkHTMLSerializerTransform, {}) // serialize to HTML const html = HTMLSerializer.transformWithEditor(editor, { type: 'paragraph', children: [{ text: 'hello', bold: true }] }) // output: <p><strong>hello</strong></p> ``` Text Serialization ```tsx // text serializer import { TextSerializer } from '@editablejs/serializer/text' // import the Text serializer transformer of the plugin-mention plugin import { withMentionTextSerializerTransform } from '@editablejs/plugin-mention/serializer/text' // use the transformer TextSerializer.withEditor(editor, withMentionTextSerializerTransform, {}) // serialize to Text const text = TextSerializer.transformWithEditor(editor, { type: 'paragraph', children: [{ text: 'hello' }, { type: 'mention', children: [{ text: '' }], user: { name: 'User', id: '1', }, }] }) // output: hello @User ``` Markdown Serialization ```tsx // markdown serializer import { MarkdownSerializer } from '@editablejs/serializer/markdown' // import the Markdown serializer transformer of the plugin-mark plugin import { withMarkMarkdownSerializerTransform } from '@editablejs/plugin-mark/serializer/markdown' // use the transformer MarkdownSerializer.withEditor(editor, withMarkMarkdownSerializerTransform, {}) // serialize to Markdown const markdown = MarkdownSerializer.transformWithEditor(editor, { type: 'paragraph', children: [{ text: 'hello', bold: true }] }) // output: **hello** ``` Every plugin requires importing its own serialization converter, which is cumbersome, so we provide the serialization converters for all built-in plugins in @editablejs/plugins . ```tsx import { withHTMLSerializerTransform } from '@editablejs/plugins/serializer/html' import { withTextSerializerTransform } from '@editablejs/plugins/serializer/text' import { withMarkdownSerializerTransform, withMarkdownSerializerPlugin } from '@editablejs/plugins/serializer/markdown' useLayoutEffect(() => { withMarkdownSerializerPlugin(editor) withTextSerializerTransform(editor) withHTMLSerializerTransform(editor) withMarkdownSerializerTransform(editor) }, [editor]) ``` Deserialization @editablejs/serializer provides a deserializer that can deserialize data in html , text , and markdown formats into editor data. The deserialization transformers for the plugins provided have already been implemented, so you can use them directly. The usage is similar to serialization, except that the package path for importing needs to be changed from @editablejs/serializer to @editablejs/deserializer . Contributors ✨ Welcome 🌟 Stars and 📥 PRs! Let's work together to build a better rich text editor! The contributing guide is here; please feel free to read it. If you have a good plugin, please share it with us. Special thanks to Sparticle for their support and contribution to the open source community. Finally, thank you to everyone who has contributed to this project! ( emoji key ): Kevin Lin 💻 kailunyao 💻 ren.chen 📖 han 📖 This project follows the all-contributors specification. Contributions of any kind welcome! Thanks We would like to thank the following open-source projects for their contributions: Slate - provides support for data modeling.
Yjs - provides basic support for CRDTs, used for collaborative editing support. React - provides support for the view layer. Zustand - a minimal front-end state management tool. Other dependencies We use the following open-source projects to help us build a better development experience: Turborepo -- pnpm + turbo is a great monorepo manager and build system. License See LICENSE for details.;🌱 A collaborative rich-text editor framework that focuses on stability, controllability, extensibility, and performance. 一款强到离谱的富文本编辑器框架,专注于稳定性、可控性、扩展性和性能。;editable,react-editor,rich-editor,slate-editor,text-editor
editablejs/editable
Matthew-J-Spencer/Ultimate-2D-Controller;Ultimate 2D Controller An updated, smoother version using standard Unity physics is on my Patreon . A great starting point for your 2D controller. Making use of all the hidden tricks like coyote, buffered actions, speedy apex, anti grav apex, etc Watch the video: https://www.youtube.com/watch?v=3sWTzMsmdx8 Play the game: https://tarodev.itch.io/ultimate-2d-controller Leave a ⭐ if you found it helpful! User guide: Set the player layer in the Player Controller asset located at Tarodev 2D Controller/Stat Presets/Player Controller That's it! Check the demo scene if you're stuck :) Feel free to use the code in your production games. Attribution welcomed :) About the 'Extended' controller (Patreon) Converted to use standard unity physics, making it much easier to use and incorporate into your game. It's even smoother than the current version. Moving platforms & one-way platforms. External forces (explosions, sword hits, bouncy... things). Dash, double jump, crouch/slide. Slopes. Ledge sliding, grabbing & climbing. Tilemap Support. New input system support. Fixed a bunch of bugs. And of course better support. Click here;A great starting point for your 2D controller. Making use of all the hidden tricks like coyote, buffered actions, speedy apex, anti grav apex, etc;[]
Matthew-J-Spencer/Ultimate-2D-Controller
iaddis/metalnes;MetalNES Transistor level NES-001 simulation. Builds on OSX only for now. No MMU support. Only possible due to the tremendous efforts of Visual2C02 and Visual2A03 . Added support chips for main board. Support voltage ladders for composite output and audio. Needs lots of optimization.;Transistor level NES simulation ;[]
iaddis/metalnes
dominikbraun/graph;中文版 | English Version A library for creating generic graph data structures and modifying, analyzing, and visualizing them. Are you using graph? Check out the graph user survey. Features Generic vertices of any type, such as int or City . Graph traits with corresponding validations, such as cycle checks in acyclic graphs. Algorithms for finding paths or components, such as shortest paths or strongly connected components. Algorithms for transformations and representations, such as transitive reduction or topological order. Algorithms for non-recursive graph traversal, such as DFS or BFS. Vertices and edges with optional metadata, such as weights or custom attributes. Visualization of graphs using the DOT language and Graphviz. Integrate any storage backend by using your own Store implementation. Extensive tests with ~90% coverage, and zero dependencies. Status: Because graph is in version 0, the public API shouldn't be considered stable. This README may contain unreleased changes. Check out the latest documentation . Getting started go get github.com/dominikbraun/graph Quick examples Create a graph of integers ```go g := graph.New(graph.IntHash) _ = g.AddVertex(1) _ = g.AddVertex(2) _ = g.AddVertex(3) _ = g.AddVertex(4) _ = g.AddVertex(5) _ = g.AddEdge(1, 2) _ = g.AddEdge(1, 4) _ = g.AddEdge(2, 3) _ = g.AddEdge(2, 4) _ = g.AddEdge(2, 5) _ = g.AddEdge(3, 5) ``` Create a directed acyclic graph of integers ```go g := graph.New(graph.IntHash, graph.Directed(), graph.Acyclic()) _ = g.AddVertex(1) _ = g.AddVertex(2) _ = g.AddVertex(3) _ = g.AddVertex(4) _ = g.AddEdge(1, 2) _ = g.AddEdge(1, 3) _ = g.AddEdge(2, 3) _ = g.AddEdge(2, 4) _ = g.AddEdge(3, 4) ``` Create a graph of a custom type To understand this example in detail, see the concept of hashes . ```go type City struct { Name string } cityHash := func(c City) string { return c.Name } g := graph.New(cityHash) _ = g.AddVertex(london) ``` Create a weighted graph ```go g := graph.New(cityHash, graph.Weighted()) _ = g.AddVertex(london) _ = g.AddVertex(munich) _ = g.AddVertex(paris) _ = g.AddVertex(madrid) _ = g.AddEdge("london", "munich", graph.EdgeWeight(3)) _ = g.AddEdge("london", "paris", graph.EdgeWeight(2)) _ = g.AddEdge("london", "madrid", graph.EdgeWeight(5)) _ = g.AddEdge("munich", "madrid", graph.EdgeWeight(6)) _ = g.AddEdge("munich", "paris", graph.EdgeWeight(2)) _ = g.AddEdge("paris", "madrid", graph.EdgeWeight(4)) ``` Perform a Depth-First Search This example traverses and prints all vertices in the graph in DFS order. ```go g := graph.New(graph.IntHash, graph.Directed()) _ = g.AddVertex(1) _ = g.AddVertex(2) _ = g.AddVertex(3) _ = g.AddVertex(4) _ = g.AddEdge(1, 2) _ = g.AddEdge(1, 3) _ = g.AddEdge(3, 4) _ = graph.DFS(g, 1, func(value int) bool { fmt.Println(value) return false }) ``` 1 3 4 2 Find strongly connected components ```go g := graph.New(graph.IntHash) // Add vertices and edges ... scc, _ := graph.StronglyConnectedComponents(g) fmt.Println(scc) ``` [[1 2 5] [3 4 8] [6 7]] Find the shortest path ```go g := graph.New(graph.StringHash, graph.Weighted()) // Add vertices and weighted edges ... path, _ := graph.ShortestPath(g, "A", "B") fmt.Println(path) ``` [A C E B] Find spanning trees ```go g := graph.New(graph.StringHash, graph.Weighted()) // Add vertices and edges ... mst, _ := graph.MinimumSpanningTree(g) ``` Perform a topological sort ```go g := graph.New(graph.IntHash, graph.Directed(), graph.PreventCycles()) // Add vertices and edges ... 
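// For illustration only (a hypothetical graph, not part of the original example):
// one possible set of vertices and edges that yields the order printed below.
_ = g.AddVertex(1)
_ = g.AddVertex(2)
_ = g.AddVertex(3)
_ = g.AddVertex(4)
_ = g.AddVertex(5)
_ = g.AddEdge(1, 2)
_ = g.AddEdge(2, 3)
_ = g.AddEdge(3, 4)
_ = g.AddEdge(4, 5)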
// For a deterministic topological ordering, use StableTopologicalSort. order, _ := graph.TopologicalSort(g) fmt.Println(order) ``` [1 2 3 4 5] Perform a transitive reduction ```go g := graph.New(graph.StringHash, graph.Directed(), graph.PreventCycles()) // Add vertices and edges ... transitiveReduction, _ := graph.TransitiveReduction(g) ``` Prevent the creation of cycles ```go g := graph.New(graph.IntHash, graph.PreventCycles()) _ = g.AddVertex(1) _ = g.AddVertex(2) _ = g.AddVertex(3) _ = g.AddEdge(1, 2) _ = g.AddEdge(1, 3) if err := g.AddEdge(2, 3); err != nil { panic(err) } ``` panic: an edge between 2 and 3 would introduce a cycle Visualize a graph using Graphviz The following example will generate a DOT description for g and write it into the given file. ```go g := graph.New(graph.IntHash, graph.Directed()) _ = g.AddVertex(1) _ = g.AddVertex(2) _ = g.AddVertex(3) _ = g.AddEdge(1, 2) _ = g.AddEdge(1, 3) file, _ := os.Create("./mygraph.gv") _ = draw.DOT(g, file) ``` To generate an SVG from the created file using Graphviz, use a command such as the following: dot -Tsvg -O mygraph.gv The DOT function also supports rendering graph attributes: go _ = draw.DOT(g, file, draw.GraphAttribute("label", "my-graph")) Draw a graph as in this documentation This graph has been rendered using the following program: ```go package main import ( "os" "github.com/dominikbraun/graph" "github.com/dominikbraun/graph/draw" ) func main() { g := graph.New(graph.IntHash) _ = g.AddVertex(1, graph.VertexAttribute("colorscheme", "blues3"), graph.VertexAttribute("style", "filled"), graph.VertexAttribute("color", "2"), graph.VertexAttribute("fillcolor", "1")) _ = g.AddVertex(2, graph.VertexAttribute("colorscheme", "greens3"), graph.VertexAttribute("style", "filled"), graph.VertexAttribute("color", "2"), graph.VertexAttribute("fillcolor", "1")) _ = g.AddVertex(3, graph.VertexAttribute("colorscheme", "purples3"), graph.VertexAttribute("style", "filled"), graph.VertexAttribute("color", "2"), graph.VertexAttribute("fillcolor", "1")) _ = g.AddVertex(4, graph.VertexAttribute("colorscheme", "ylorbr3"), graph.VertexAttribute("style", "filled"), graph.VertexAttribute("color", "2"), graph.VertexAttribute("fillcolor", "1")) _ = g.AddVertex(5, graph.VertexAttribute("colorscheme", "reds3"), graph.VertexAttribute("style", "filled"), graph.VertexAttribute("color", "2"), graph.VertexAttribute("fillcolor", "1")) _ = g.AddEdge(1, 2) _ = g.AddEdge(1, 4) _ = g.AddEdge(2, 3) _ = g.AddEdge(2, 4) _ = g.AddEdge(2, 5) _ = g.AddEdge(3, 5) file, _ := os.Create("./simple.gv") _ = draw.DOT(g, file) } ``` It has been rendered using the neato engine: dot -Tsvg -Kneato -O simple.gv The example uses the Brewer color scheme supported by Graphviz. Storing edge attributes Edges may have one or more attributes which can be used to store metadata. Attributes will be taken into account when visualizing a graph . For example, this edge will be rendered in red color: go _ = g.AddEdge(1, 2, graph.EdgeAttribute("color", "red")) To get an overview of all supported attributes, take a look at the DOT documentation . The stored attributes can be retrieved by getting the edge and accessing the Properties.Attributes field. go edge, _ := g.Edge(1, 2) color := edge.Properties.Attributes["color"] Storing edge data It is also possible to store arbitrary data inside edges, not just key-value string pairs. This data is of type any . go _ = g.AddEdge(1, 2, graph.EdgeData(myData)) The stored data can be retrieved by getting the edge and accessing the Properties.Data field. 
go edge, _ := g.Edge(1, 2) myData := edge.Properties.Data Updating edge data Edge properties can be updated using Graph.UpdateEdge . The following example adds a new color attribute to the edge (A,B) and sets the edge weight to 10. go _ = g.UpdateEdge("A", "B", graph.EdgeAttribute("color", "red"), graph.EdgeWeight(10)) The method signature and the accepted functional options are exactly the same as for Graph.AddEdge . Storing vertex attributes Vertices may have one or more attributes which can be used to store metadata. Attributes will be taken into account when visualizing a graph . For example, this vertex will be rendered in red color: go _ = g.AddVertex(1, graph.VertexAttribute("style", "filled")) The stored data can be retrieved by getting the vertex using VertexWithProperties and accessing the Attributes field. go vertex, properties, _ := g.VertexWithProperties(1) style := properties.Attributes["style"] To get an overview of all supported attributes, take a look at the DOT documentation . Store the graph in a custom storage You can integrate any storage backend by implementing the Store interface and initializing a new graph with it: go g := graph.NewWithStore(graph.IntHash, myStore) To implement the Store interface appropriately, take a look at the documentation . graph-sql is a ready-to-use SQL store implementation. Documentation The full documentation is available at pkg.go.dev . Are you using graph? Check out the graph user survey.;A library for creating generic graph data structures and modifying, analyzing, and visualizing them.;graph,graph-algorithms,graph-theory,graph-traversal,graph-visualization,algorithm,graphs,graphviz,visualization
dominikbraun/graph
pablouser1/ProxiTok;ProxiTok Use TikTok with an alternative frontend, inspired by Nitter. Features Privacy: All requests made to TikTok are server-side, so you will never connect to their servers See user's feed See trending and discovery tab See tags See video by id Themes RSS Feed for user, trending and tag (just add /rss to the url) Self-hosting Please check this wiki article for info on how to self-host your own instance Public instances This wiki article contains a list of all known public instances. Extensions If you want to automatically redirect TikTok links to ProxiTok you can use: * Libredirect * Redirector You can use the following config if you want to use Redirector (you can replace https://proxitok.pabloferreiro.es with whatever instance you want to use): Description: TikTok to ProxiTok Example URL: https://www.tiktok.com/@tiktok Include pattern: (.*//.*)(tiktok.com)(.*) Redirect to: https://proxitok.pabloferreiro.es$3 Example result: https://proxitok.pabloferreiro.es/@tiktok Pattern type: Regular Expression Apply to: Main window (address bar) TODO / Known issues Replace placeholder favicon Make video on /video fit screen and not overflow Fix embed styling Fix crash when invalid vm.tiktok.com/CODE or www.tiktok.com/t/CODE is provided Add support for a custom number of videos per page Credits TheFrenchGhosty ( Github ): Initial Dockerfile and fixes to a usable state. Jennifer Wjertzoch : Carousel CSS Implementation External libraries TikScraperPHP Latte bramus/router PHP dotenv Bulma and Bulmaswatch;Open source alternative frontend for TikTok made using PHP;tiktok,alternative-frontends,php,proxitok,tiktok-scraper
pablouser1/ProxiTok
microsoft/DeepSpeed-MII;Latest News [2024/01] DeepSpeed-FastGen: Introducing Mixtral, Phi-2, and Falcon support with major performance and feature enhancements. [2023/11] DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference [2022/11] Stable Diffusion Image Generation under 1 second w. DeepSpeed MII [2022/10] Announcing DeepSpeed Model Implementations for Inference (MII) Contents DeepSpeed-MII Key Technologies How does MII work? Supported Models Getting Started DeepSpeed Model Implementations for Inference (MII) Introducing MII, an open-source Python library designed by DeepSpeed to democratize powerful model inference with a focus on high-throughput, low latency, and cost-effectiveness. MII features include blocked KV-caching, continuous batching, Dynamic SplitFuse, tensor parallelism, and high-performance CUDA kernels to support fast high throughput text-generation for LLMs such as Llama-2-70B, Mixtral (MoE) 8x7B, and Phi-2. The latest updates in v0.2 add new model families, performance optimizations, and feature enhancements. MII now delivers up to 2.5 times higher effective throughput compared to leading systems such as vLLM. For detailed performance results, please see our latest DeepSpeed-FastGen blog and DeepSpeed-FastGen release blog . We first announced MII in 2022, which covers all prior releases up to v0.0.9. In addition to language models, we also support accelerating text2image models like Stable Diffusion . For more details on our previous releases, please see our legacy APIs . Key Technologies MII for High-Throughput Text Generation MII provides accelerated text-generation inference through the use of four key technologies: Blocked KV Caching Continuous Batching Dynamic SplitFuse High Performance CUDA Kernels For a deeper dive into these features, please refer to our blog , which also includes a detailed performance analysis. MII Legacy In the past, MII introduced several key performance optimizations for low-latency serving scenarios: DeepFusion for Transformers Multi-GPU Inference with Tensor-Slicing ZeRO-Inference for Resource Constrained Systems Compiler Optimizations How does MII work? Figure 1: MII architecture, showing how MII automatically optimizes OSS models using DS-Inference before deploying them. DeepSpeed-FastGen optimizations in the figure have been published in our blog post . Under the hood, MII is powered by DeepSpeed-Inference . Based on the model architecture, model size, batch size, and available hardware resources, MII automatically applies the appropriate set of system optimizations to minimize latency and maximize throughput. Supported Models MII currently supports over 20,000 models across eight popular model architectures. We plan to add additional models in the near term; if there are specific model architectures you would like supported, please file an issue and let us know. All current models leverage Hugging Face in our backend to provide both the model weights and the model's corresponding tokenizer. For our current release we support the following model architectures: model family | size range | ~model count ------ | ------ | ------ falcon | 7B - 180B | 300 llama | 7B - 65B | 19,000 llama-2 | 7B - 70B | 900 mistral | 7B | 6,000 mixtral (MoE) | 8x7B | 1,100 opt | 0.1B - 66B | 1,300 phi-2 | 2.7B | 200 qwen | 7B - 72B | 200 MII Legacy Model Support MII Legacy APIs support over 50,000 different models including BERT, RoBERTa, Stable Diffusion, and other text-generation models like Bloom, GPT-J, etc.
For a full list please see our legacy supported models table . Getting Started with MII DeepSpeed-MII allows users to create non-persistent and persistent deployments for supported models in just a few lines of code. Installation Non-Persistent Pipeline Persistent Deployment Installation The fastest way to get started is with our PyPI release of DeepSpeed-MII, which means you can get started within minutes via: bash pip install deepspeed-mii For ease of use and a significant reduction in the lengthy compile times that many projects in this space require, we distribute a pre-compiled Python wheel covering the majority of our custom kernels through a new library called DeepSpeed-Kernels . We have found this library to be very portable across environments with NVIDIA GPUs with compute capabilities 8.0+ (Ampere+), CUDA 11.6+, and Ubuntu 20+. In most cases you shouldn't even need to know this library exists, as it is a dependency of DeepSpeed-MII and will be installed with it. However, if for whatever reason you need to compile our kernels manually, please see our advanced installation docs . Non-Persistent Pipeline A non-persistent pipeline is a great way to try DeepSpeed-MII. Non-persistent pipelines are only around for the duration of the Python script you are running. The full example for running a non-persistent pipeline deployment is only 4 lines. Give it a try! python import mii pipe = mii.pipeline("mistralai/Mistral-7B-v0.1") response = pipe(["DeepSpeed is", "Seattle is"], max_new_tokens=128) print(response) The returned response is a list of Response objects. We can access several details about the generation (e.g., response[0].prompt_length ): generated_text: str Text generated by the model. prompt_length: int Number of tokens in the original prompt. generated_length: int Number of tokens generated. finish_reason: str Reason for stopping generation. stop indicates the EOS token was generated and length indicates the generation reached max_new_tokens or max_length . If you want to free device memory and destroy the pipeline, use the destroy method: python pipe.destroy() Tensor parallelism Taking advantage of multi-GPU systems for greater performance is easy with MII. When run with the deepspeed launcher, tensor parallelism is automatically controlled by the --num_gpus flag: ```bash Run on a single GPU deepspeed --num_gpus 1 mii-example.py Run on multiple GPUs deepspeed --num_gpus 2 mii-example.py ``` Pipeline Options While only the model name or path is required to stand up a non-persistent pipeline deployment, we offer customization options to our users: mii.pipeline() Options : - model_name_or_path: str Name or local path to a HuggingFace model. - max_length: int Sets the default maximum token length for the prompt + response. - all_rank_output: bool When enabled, all ranks return the generated text. By default, only rank 0 will return text. Users can also control the generation characteristics for individual prompts (i.e., when calling pipe() ) with the following options: max_length: int Sets the per-prompt maximum token length for prompt + response. min_new_tokens: int Sets the minimum number of tokens generated in the response. max_length will take precedence over this setting. max_new_tokens: int Sets the maximum number of tokens generated in the response. ignore_eos: bool (Defaults to False ) Setting to True prevents generation from ending when the EOS token is encountered.
top_p: float (Defaults to 0.9 ) When set below 1.0 , filter tokens and keep only the most probable, where token probabilities sum to ≥ top_p . top_k: int (Defaults to None ) When None , top-k filtering is disabled. When set, the number of highest probability tokens to keep. temperature: float (Defaults to None ) When None , temperature is disabled. When set, modulates token probabilities. do_sample: bool (Defaults to True ) When True , sample output logits. When False , use greedy sampling. return_full_text: bool (Defaults to False ) When True , prepends the input prompt to the returned text Persistent Deployment A persistent deployment is ideal for use with long-running and production applications. The persistent model uses a lightweight GRPC server that can be queried by multiple clients at once. The full example for running a persistent model is only 5 lines. Give it a try! python import mii client = mii.serve("mistralai/Mistral-7B-v0.1") response = client.generate(["Deepspeed is", "Seattle is"], max_new_tokens=128) print(response) The returned response is a list of Response objects. We can access several details about the generation (e.g., response[0].prompt_length ): generated_text: str Text generated by the model. prompt_length: int Number of tokens in the original prompt. generated_length: int Number of tokens generated. finish_reason: str Reason for stopping generation. stop indicates the EOS token was generated and length indicates the generation reached max_new_tokens or max_length . If we want to generate text from other processes, we can do that too: python client = mii.client("mistralai/Mistral-7B-v0.1") response = client.generate("Deepspeed is", max_new_tokens=128) When we no longer need a persistent deployment, we can shutdown the server from any client: python client.terminate_server() Model Parallelism Taking advantage of multi-GPU systems for better latency and throughput is also easy with the persistent deployments. Model parallelism is controlled by the tensor_parallel input to mii.serve : python client = mii.serve("mistralai/Mistral-7B-v0.1", tensor_parallel=2) The resulting deployment will split the model across 2 GPUs to deliver faster inference and higher throughput than a single GPU. Model Replicas We can also take advantage of multi-GPU (and multi-node) systems by setting up multiple model replicas and taking advantage of the load-balancing that DeepSpeed-MII provides: python client = mii.serve("mistralai/Mistral-7B-v0.1", replica_num=2) The resulting deployment will load 2 model replicas (one per GPU) and load-balance incoming requests between the 2 model instances. Model parallelism and replicas can also be combined to take advantage of systems with many more GPUs. In the example below, we run 2 model replicas, each split across 2 GPUs on a system with 4 GPUs: python client = mii.serve("mistralai/Mistral-7B-v0.1", tensor_parallel=2, replica_num=2) The choice between model parallelism and model replicas for maximum performance will depend on the nature of the hardware, model, and workload. For example, with small models users may find that model replicas provide the lowest average latency for requests. Meanwhile, large models may achieve greater overall throughput when using only model parallelism. RESTful API MII makes it easy to setup and run model inference via RESTful APIs by setting enable_restful_api=True when creating a persistent MII deployment. The RESTful API can receive requests at http://{HOST}:{RESTFUL_API_PORT}/mii/{DEPLOYMENT_NAME} . 
A full example is provided below: python client = mii.serve( "mistralai/Mistral-7B-v0.1", deployment_name="mistral-deployment", enable_restful_api=True, restful_api_port=28080, ) 📌 Note: While providing a deployment_name is not necessary (MII will autogenerate one for you), it is good practice to provide a deployment_name so that you can ensure you are interfacing with the correct RESTful API. You can then send prompts to the RESTful gateway with any HTTP client, such as curl : bash curl --header "Content-Type: application/json" --request POST -d '{"prompts": ["DeepSpeed is", "Seattle is"], "max_length": 128}' http://localhost:28080/mii/mistral-deployment or python : python import json import requests url = f"http://localhost:28080/mii/mistral-deployment" params = {"prompts": ["DeepSpeed is", "Seattle is"], "max_length": 128} json_params = json.dumps(params) output = requests.post( url, data=json_params, headers={"Content-Type": "application/json"} ) Persistent Deployment Options While only the model name or path is required to stand up a persistent deployment, we offer customization options to our users. mii.serve() Options : - model_name_or_path: str (Required) Name or local path to a HuggingFace model. - max_length: int (Defaults to maximum sequence length in model config) Sets the default maximum token length for the prompt + response. - deployment_name: str (Defaults to f"{model_name_or_path}-mii-deployment" ) A unique identifying string for the persistent model. If provided, client objects should be retrieved with client = mii.client(deployment_name) . - tensor_parallel: int (Defaults to 1 ) Number of GPUs to split the model across. - replica_num: int (Defaults to 1 ) The number of model replicas to stand up. - enable_restful_api: bool (Defaults to False ) When enabled, a RESTful API gateway process is launched that can be queried at http://{host}:{restful_api_port}/mii/{deployment_name} . See the section on RESTful APIs for more details. - restful_api_port: int (Defaults to 28080 ) The port number used to interface with the RESTful API when enable_restful_api is set to True . mii.client() Options : - model_or_deployment_name: str Name of the model or deployment_name passed to mii.serve() Users can also control the generation characteristics for individual prompts (i.e., when calling client.generate() ) with the following options: max_length: int Sets the per-prompt maximum token length for prompt + response. min_new_tokens: int Sets the minimum number of tokens generated in the response. max_length will take precedence over this setting. max_new_tokens: int Sets the maximum number of tokens generated in the response. ignore_eos: bool (Defaults to False ) Setting to True prevents generation from ending when the EOS token is encountered. top_p: float (Defaults to 0.9 ) When set below 1.0 , filter tokens and keep only the most probable, where token probabilities sum to ≥ top_p . top_k: int (Defaults to None ) When None , top-k filtering is disabled. When set, the number of highest probability tokens to keep. temperature: float (Defaults to None ) When None , temperature is disabled. When set, modulates token probabilities. do_sample: bool (Defaults to True ) When True , sample output logits. When False , use greedy sampling. return_full_text: bool (Defaults to False ) When True , prepends the input prompt to the returned text Contributing This project welcomes contributions and suggestions. 
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. This project has adopted the Microsoft Open Source Code of Conduct . For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments. Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines . Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.;MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.;deep-learning,inference,pytorch
microsoft/DeepSpeed-MII
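To tie together the persistent-deployment pieces described in the DeepSpeed-MII entry above, here is a minimal sketch using only calls and options that the README itself documents (`mii.serve`, `mii.client`, `client.generate`, the `Response` fields, and `client.terminate_server`); the deployment name and sampling values are illustrative choices, not defaults:

```python
import mii

# Server process: stand up a persistent deployment with an explicit name.
client = mii.serve(
    "mistralai/Mistral-7B-v0.1",
    deployment_name="mistral-deployment",
    tensor_parallel=1,
)

# Any other process: reconnect by deployment name and generate with
# the per-prompt options documented above.
client = mii.client("mistral-deployment")
responses = client.generate(
    ["DeepSpeed is", "Seattle is"],
    max_new_tokens=64,  # cap the response length
    top_p=0.9,          # nucleus-sampling threshold
    do_sample=True,     # sample instead of greedy decoding
)

# The return value is a list of Response objects with the fields listed above.
for r in responses:
    print(r.generated_text, r.prompt_length, r.generated_length, r.finish_reason)

# Shut the server down from any client when finished.
client.terminate_server()
```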
nusu/avvvatars;Avvvatars Beautifully crafted unique avatar placeholder for your next React project Lightweight and customizable ❤️ https://user-images.githubusercontent.com/1702215/158075475-c23004ab-827a-45ad-bdba-aee29ac5b582.mp4 Live Demo 🧩 | Website 🧘‍♀️ Built by Nusu Alabuga and Oguz Yagiz Kara 🙏 Special thanks to Monika Michalczyk for awesome shapes 🙏 Features 🌈 40 Colors - Colors are so on point that most projects can use them without changing anything 💠 60 Shapes - Beautifully crafted shapes that, combined with the colors, are unique to your user 🆎 Text or Shapes 🔸 - Use letters (e.g. JD for John Doe) or unique shapes 🤠 Unique to user - Generated avatars are unique to the string that you provide; if you pass janedoe@gmail.com you will always get the same avatar 🕊 Lightweight - less than 20kb compressed + gzipped ✍️ Customizable - use shadows, change size, provide alternative text to display Installation With yarn jsx yarn add avvvatars-react With npm jsx npm install avvvatars-react Getting Started Import Avvvatars to your app, then use it anywhere you want. ```jsx import Avvvatars from 'avvvatars-react' export default function MyAvatar() { return ( <Avvvatars value="best_user@gmail.com" /> ) } ``` Customization value: string This is required for the plugin to work; each value generates a random avatar that is unique to this value, so each time the plugin renders, you will get the same result. jsx <Avvvatars value="best_user@gmail.com" /> displayValue?: string Override the default text by providing displayValue. For example, if you provide value="best_user@gmail.com", the character output will be the first 2 letters of value, which is "BE"; if you pass displayValue="BU" you can override it to "BU". jsx <Avvvatars value="best_user@gmail.com" displayValue="BE" /> style?: character | shape (default character) Use shape or character as the avatar. jsx <Avvvatars value="best_user@gmail.com" style="character" /> size?: number (default 32) Override the default size (32px) by providing a number. jsx <Avvvatars value="best_user@gmail.com" size={32} /> shadow?: boolean (default false) Enable a shadow around the avatar. jsx <Avvvatars value="best_user@gmail.com" shadow={false} /> radius?: number (default size ) Override the radius of the avatar; it defaults to size so the avatar is always a circle. jsx <Avvvatars value="best_user@gmail.com" radius={10} /> border?: boolean (default false) Toggle the border. jsx <Avvvatars value="best_user@gmail.com" border={false} /> borderSize?: number (default 2) Override the border width. jsx <Avvvatars value="best_user@gmail.com" borderSize={2} /> borderColor?: string (default #fff) Override the border color. jsx <Avvvatars value="best_user@gmail.com" borderColor="#fff" /> Figma If you want to access the design files to change something or customize them on your own, use our Figma File License MIT;Beautifully crafted unique avatar placeholder for your next react project;react,placeholder-avatars,avatar-generator,avatar-placeholder,avatar,profile-picture
nusu/avvvatars
alibaba/SREWorks;SREWorks Cloud Native DataOps & AIOps Platform Documentation Website English | 中文 Introduction SREWorks, the cloud-native operation and maintenance (O&M) platform of the Alibaba Cloud Big Data SRE team, grew out of nearly a decade of operational experience. It applies "Big Data and AI" thinking to O&M work (we call this DataOps and AIOps) to help more practitioners do efficient O&M with DataOps and AIOps. Google introduced the role of SRE (Site Reliability Engineer) in 2003: a team of software engineers and system administrators that places a premium on O&M engineers' development skills, requiring them to devote less than half of their time to daily tasks and the rest to building automation that reduces manual labor. SREWorks is the engineering practice of the Alibaba Cloud Big Data SRE team's SRE concept: an application-centric, one-stop "cloud native" and "DataOps and AIOps" O&M SaaS management suite. It enables companies to deliver and maintain cloud-native applications and resources through two primary capabilities: enterprise application and resource management, and O&M development. The Alibaba Cloud Big Data SRE team has long practiced the "DataOps and AIOps" concept; it was the first in the industry to propose DataOps (data operation and maintenance), is deeply familiar with big data and AI technology, and has on-demand access to big data and AI computing resources. Standard O&M warehouses, data O&M platforms, and operation centers are among the end-to-end, closed-loop DataOps engineering practices included in SREWorks. While the traditional IT O&M field has many excellent open-source platforms, there are currently no systematic O&M solutions that reflect cloud-native scenarios. With the rise of the cloud-native era, the Alibaba Cloud Big Data SRE team is open-sourcing its O&M platform, SREWorks, in the hope of providing O&M engineers with an out-of-the-box experience. Getting Started Quick Install Installation from source code Document Online Demo Roadmap ROADMAP Contributing We'd love to accept your patches and contributions to SREWorks. Please refer to CONTRIBUTING for a few small guidelines you need to follow. Community Wechat Chat Group (Chinese): add the broker on WeChat to be added to the user group. Dingtalk Chat Group (Chinese): 35853026 Code of Conduct Contributions to SREWorks are expected to adhere to our Code of Conduct .;Cloud Native DataOps & AIOps Platform | 云原生数智运维平台 ;kubernetes,sre,application,saas,cloudnative,dataops,aiops,oam,engineering,maintenance
alibaba/SREWorks
life-itself/web3;Awesome sensemaking for crypto/web3 👉 April 2022 Website for the web3 sensemaking project 👈 🎉 Nov 2022 Full guide to web3 & crypto including evaluation of claims pro and con 🎉 Awesome rigorous evaluation of crypto/web3, etc. Contributions are welcome. Critique General The problem with NFTs - 2022-01-21 - by Dan Olson (Documentary) 📺 [👉 Highly recommended 👈] Three things Web3 should fix in 2022 , a response to The Problem with NFTs - 28 Jan 2022 Stephen Diehl series - https://www.stephendiehl.com/blog.html The Case Against Crypto - December 31, 2021 Blockchainism - December 11, 2021 Web3 is Bullshit - December 4, 2021 The Internet's Casino Boats - December 1, 2021 The Token Disconnect - November 27, 2021 The Handwavy Technobabble Nothingburger - November 24, 2021 Ice-Nine for Markets - November 23, 2021 The Tinkerbell Griftopia - November 19, 2021 Decentralized Woo Hoo - November 16, 2021 The Intellectual Incoherence of Cryptoassets - November 7, 2021 On Unintentional Scams - July 23, 2021 How to Destroy Bitcoin - July 13, 2021 The Non-Innovation of Cryptocurrency - July 7, 2021 The Oncoming Ransomware Storm - May 11, 2021 Et tu, Signal? - April 7, 2021 The Political Case for a Blanket Cryptocurrency Ban - March 30, 2021 Bitcoin: The Postmodern Ponzi - February 27, 2021 The Crypto Chernobyl - February 10, 2021 Gamestop, Bitcoin and the Commoditization of Populist Rage - February 3, 2021 Today on Sick Sad World: How The Cryptobros Have Fallen - 2022-01-04 by Jamie Zawinski (legendary coder, co-founder of Mozilla etc.) Web3 First Impressions - 2022-01-07 - by Moxie Marlinspike, co-founder of Signal etc. Bitcoin, Currencies, and Fragility by Nassim Taleb - 27 Jun 2021 - a highly critical paper by the author of The Black Swan https://watershed.co.uk/studio/news/2021/12/03/case-against-crypto The European Money and Finance Forum - The encrypted threat: Bitcoin's social cost and regulatory responses - Jan 2022. A comprehensive study by SUERF - The European Money and Finance Forum that details the net negative effects of bitcoin on society. The Third Web - 2021-12-17 - a long critical essay, including a detailed history, by Tante Tante's Web3/NFT FAQ https://rufuspollock.com/2016/07/02/reflections-on-the-blockchain/ - 2016-07-02 - by Rufus Pollock (mainly a critique of early DAOs and techno-solutionism) Web3 takes trust, too - 2022-01-10 by Matt Levine on Bloomberg.com Revolution Now! With Peter Joseph | Bitcoin and Financialization - May 21, 2021 The Web3 Fraud - 2021-12-16 by Nicholas Weaver on usenix.com Molly White series - https://blog.mollywhite.net/blockchain/ Blockchain-based systems are not what they say they are It's not still the early days Abuse and harassment on the blockchain Anonymous cryptocurrency wallets are not so simple Cryptocurrency off-ramps, and the pressure towards centralization Cryptocurrency's Robinhood effect Abuse on the blockchain – Guest lecture at Stanford University Against Web3 and Faux-Decentralization - 2021-10-19 by Soatok The technological case against Bitcoin and blockchain - 2022-03-05 by Luke Plant The Case Against Crypto - 2021-12-03 by Martin O'Leary The Case Against Bitcoin - 2021-05-14 by Michael W. Green. A portfolio manager discusses the case against bitcoin from a financial and geopolitical perspective.
The Register: The dark equation of harm versus good means blockchain's had its day - 2021-12-06 Blockchains and Cryptocurrencies: Burn It With Fire - 2018-04-20 by Nicholas Weaver 📺 Nicholas Weaver is a staff researcher with the International Computer Science Institute (ICSI) and lecturer in EECS, where he teaches machine structures and computer security. He earned his Ph.D. in computer science from Berkeley in 2003 and joined ICSI to study network security and measurement. "The entire cryptocurrency and blockchain ecology is rife with frauds, criminalities, and tulip-mania style hype and needs to be properly disposed of into the ashes of history. A "blockchain" is just a horribly inefficient append-only file which costs a literal fortune to secure without actually providing meaningful distributed trust, while cryptocurrencies are provably inferior than actual currencies for legal real world transactions. Beyond the sheer uselessness have emerged a whole host of bad ideas, ranging from the "put a bird^H^H^H^H blockchain on it" hype to unregistered (and mostly fraudulent) securities with "Initial Coin Offerings" to an invitation for massive theft in the form of "smart" contracts." Ross Anderson et al: Bitcoin Redux: crypto crime, and how to tackle it ( full paper ) - 2018-06-01 - Anderson is a Professor of Security Engineering at the University of Cambridge. Bitcoin Redux explains what's going wrong in the world of cryptocurrencies. The bitcoin exchanges are developing into a shadow banking system: they do not give their customers actual bitcoin but rather display a "balance" and allow them to transact with others. However, if Alice sends Bob a bitcoin, and they're both customers of the same exchange, it just adjusts their balances rather than doing anything on the blockchain. This is an e-money service, according to European law, but is the law enforced? Not where it matters. We've been looking at the details. Ross Anderson: Why Bitcoin is Not Cash - 2018-04-10 - 📺 walks through why bitcoin is not cash and the complex legal questions it would need to deal with if it wanted to be. Ross Anderson: Tracing Stolen Bitcoin - 2018-03-23 - 📺 Simon Wardley: A Spoiler for the Future of Bitcoin - 2013-11-27 - "As you can guess, I'm not a fan of bitcoin. If left unchecked then I find it has the potential to undermine the importance of Government which is actually not good for competition and not good for the market. I hope none of the above happens and would rather see bitcoin disappear in a puff of history." (NB: he predicts massive appreciation in bitcoin and is concerned about how it can undermine government and tax revenue.) Kai Stinchcombe series that discusses whether blockchain can solve various real-world use cases better than traditional technologies Kai Stinchcombe: Ten years in, nobody has come up with a use for blockchain - 2017-12-23 - "Each purported use case — from payments to legal documents, from escrow to voting systems—amounts to a set of contortions to add a distributed, encrypted, anonymous ledger where none was needed. What if there isn't actually any use for a distributed ledger at all? What if, ten years after it was invented, the reason nobody has adopted a distributed ledger at scale is because nobody wants it?" Kai Stinchcombe: Blockchain is not only crappy technology but a bad vision for the future - 2018-05-04 - "Blockchain is not only crappy technology but a bad vision for the future.
Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That's permanent: no matter how much blockchain improves it is still headed in the wrong direction." Cory Doctorow: When crypto-exchanges go broke, you'll lose it all - 2022-02-03. Why state backed money is a good thing (a feature not a bug). If you've spent much time around cryptocurrency people, you've probably heard a rant or two about "sound money" and the need to "depoliticize money." This is a foundation of blockchainism: the belief that money is born separate from states, and states invade on the private realm when they "meddle" in the money system. There are at least two serious problems with this ideology. First, it's plain wrong on the historical facts. Money did not emerge from barter systems among people. Money was and is a product of state. But even if you stipulate that money didn't originate among private markets, there's another serious historical problem with "sound money." ... It's this: central banks didn't emerge to usurp the private sector's control over money. Central banks were created because without them, finance was subject to wild, terrifying, ruinous boom/bust cycles. What's more, without a central bank, money was subject to naked political meddling, which central banks (sometimes) moderated. Internet pioneer/Silicon Valley legend Tim O'Reilly on Web3: Why it's too early to get excited about Web3 - 2021-12-13 "Get ready for the crash" - CBS Money Watch - 2022-02-09 Crypto and NFTs are "Pretty Serious Speculative Bubble" - 2022-02-10 David Rosenthal: Can We Mitigate Cryptocurrencies' Externalities? - 2022-02-09. Having built a decentralized consensus system using Proof-of-Work (http://dx.doi.org/10.1145/945445.945451), the author has the technical knowledge to explain the design faults and limitations of permissionless blockchain systems, as well as highlighting the economic and environmental issues. Summary of critique: That the externalities I describe don't exist. You'll have a hard time proving that the waste of electricity and hardware, and the crime wave, are imaginary. That although the externalities do exist, the benefits of decentralization outweigh them. The problem here is that since the systems are not actually decentralized, we get the externalities but don't get the benefits. That although the externalities do exist, and the systems aren't decentralized, they're making so much money that we shouldn't worry. The problem here is that the amount of actual money you can get out of a cryptocurrency equals the amount of actual money that has been put in, minus the actual costs of mining. So the big picture is that although there may be winners, in aggregate the system loses money. Economies of Scale in Peer-to-Peer Networks - 2014-10-07. Network effects lead to centralization in p2p (e.g. Bitcoin) and there is no good way to mitigate this. Charlie Stross: Why I want Bitcoin to Die in Fire - 2013-12 The Maltese Falcon - a critique of bitcoin and of the financial properties of crypto assets from the CIO of JP Morgan bank,
2021-02-10. Vivaldi CEO: Why Vivaldi will never create ThinkCoin - 2022-01-13 - Jon von Tetzchner: "if you look beyond the hype, you'll find nothing more than a pyramid scheme posing as currency." Centralizing Control: Why Bitcoin is Dangerous - 2022-04-02 - Sal Bayat: "Democratic governance is fundamentally incompatible with existing cryptocurrency systems as they can only represent the interests of those in control of the system." Economists Stephanie Kelton Cryptocurrency and Fiat Money - 2017-12-23 Richard Thaler Economics Nobel prize winner, Richard Thaler: "The market that looks most like a bubble to me is Bitcoin and its brethren" - 2018-01-22 Various 'Only good for drug dealers': More Nobel prize winners snub bitcoin - 2018-04-27 Robert Shiller The Old Allure of New Money - 2018-05-21 Abhijit Banerjee Nobel Prize Winning Economist Abhijit Banerjee: Is Blockchain the Key to Financial Inclusion? - 2020-01-20 Steve Keen Cryptocurrencies, Debt, and the Economy: Steve Keen interviewed by Layne Hartsell - 2021-02-17 Amartya Sen Prannoy Roy's Townhall With Amartya Sen On Economy, Farm Laws: Full Transcript - 2021-03-06 Jeffrey Sachs Famed economist Jeffrey Sachs rails against Bitcoin: Highly polluting and 'almost like counterfeiting' - 2021-03-16 Paul Krugman Technobabble, Libertarian Derp and Bitcoin - 2021-05-20 Tyler Cowen What the Crypto Crowd Doesn't Understand About Economics - 2021-06-20 Yanis Varoufakis What is money, really? And why Bitcoin is not the answer (even if blockchain is brilliant & potentially helpful in democratising money) - 2021-08-02 Daron Acemoğlu The Bitcoin Fountainhead - 2021-10-05 Joseph Stiglitz Nobel Prize Economist Joseph Stiglitz Calls Regulators to Ban Cryptocurrencies - 2021-10-28 Yanis Varoufakis Yanis Varoufakis on Crypto & the Left, and Techno-Feudalism - 2022-01-26 Tyler Cowen The Crypto Crash Strengthens the Case for Crypto - 2022-01-27 Jesse Frederik Blockchain, the amazing solution for almost nothing - 2020-08-21 - "Blockchain technology is going to change everything: the shipping industry, the financial system, government … in fact, what won't it change? But enthusiasm for it mainly stems from a lack of knowledge and understanding. The blockchain is a solution in search of a problem." Vice: 'Crypto Ruined My Life': The Mental Health Crisis Hitting Bitcoin Investors - 2022-02-16 - The stress and anxiety that goes with funneling your life savings into a volatile market is no joke. Ed Zitron: Solutions That Create Problems - 2022-02-22 - The thing about Web3 is that it is uniquely useless. I have actively searched for an explanation as to why it's the future, what products it would allow us to build, what sort of good it would provide, and I cannot even at my most optimistic find a real use case. Ponzi aspect Financial Times: Why bitcoin is worse than a Madoff-style Ponzi scheme - 2021-12-22. A Ponzi scheme is a zero-sum enterprise. But bitcoin is a negative-sum phenomenon that you can't even pursue a claim against, argues Robert McCauley.
Original Seattle Times: Bitcoin is basically a Ponzi scheme - 2018-01-30 Bitcoin is a Ponzi - 2020-12-13 by Prof Jorge Stolfi Financial Times: Albanian lessons for regulators nervously eyeing the crypto world - 2021-07-05 - Albania's 1990s pyramid scheme debacle highlights risks of regulatory paralysis on the cryptocurrency explosion Once upon a time in Albania, a scrappy, alternative finance industry emerged to take on and eventually supplant a sclerotic, technologically-backward banking system. The lessons from its dramatic collapse remain relevant today. Essentially, what was initially touted as a post-communist entrepreneurial success story proved to be pyramid schemes of breathtaking proportions. Slick marketing and lofty promises turned an informal, decentralised, crime-facilitating ecosystem into a mainstream mania that sucked in multitudes of people, unchecked by feeble and fitful regulatory warnings. Jacobin: Cryptocurrency Is a Giant Ponzi Scheme - 2022-01-21 Crypto and energy consumption Bitcoin Energy Consumption Index Why Bitcoin Is Bad For The Environment - 2021-04-22 Energy power usage CryptoArt, ETH, Blockchain spreadsheet How Do We Solve Bitcoin's Energy Problem? - 2022-01-30 Scams/frauds People Building 'Blockchain City' in Wyoming Scammed by Hackers - Vice - 2022-01-12 - On Monday, CityDAO—the group that bought 40 acres of Wyoming in hopes of "building a city on the Ethereum blockchain"—announced that its Discord server was hacked and members' funds were successfully stolen as a result. Web3 is going just great - A timeline of scams related to cryptocurrencies, NFTs, and web3 projects since the beginning of 2021 by Molly White DAOs Is The DAO going to be DOA? - 2016-05-16 - by Dan Larimer (founder of BitShares and much else). Larimer sets out most of the basic critiques of DAOs as governance innovation extremely well: Fancy technology can obscure our assessment of what is really going on. The DAO solves a single problem: the corrupt trustee or administrator. It replaces voluntary compliance with a corporation's charter under threat of lawsuit, with automated compliance with software defined rules. This subtle change may be enough to bypass regulatory hurdles facing traditional trustees and administrators, but it doesn't solve most of the problems the regulations were attempting to address. What The DAO doesn't solve is all of the other problems inherent with any joint venture. These are people problems, economic problems, and political problems. In some sense, The DAO creates many new problems caused by its rigid rules and expensive machine-enforced process for change. The DAO doesn't solve the "group trap" whereby losers subsidize winners. It disempowers the individual actor and forces him to submit to group decision making. It doesn't make raising money cheaper for companies, it just adds blockchain-enforced bureaucratic and political processes. DAOs and the nature of human collaboration - 2021-08-12 by Marin Petrov. A critique of DAOs and technosolutionism. NFTs Non-fungible tokens.
OpenSea, Web3, and Aggregation Theory - 2022-01-05 - Ben Thompson of Stratechery Brian Eno on NFTs & Automaticism Detailed twitter thread by @NFTEthics alleging fraudulent or close to fraudulent behaviour by a major NFT influencer named BeanieMaxi - 2022-01-17 ( cached ) Jacobin: NFTs Are, Quite Simply, Bullshit - 2022-01-26 Specific use cases Event ticketing: NFT tickets — a realistic look at a big trend – 2021-12-14 NFT games: "Play-to-earn" and Bullshit Jobs - December 28, 2021 by Paul Butler - An interesting reflection linking web3's "Play-to-earn" concept to David Graeber's Bullshit Jobs NFT games: Crypto Games: Report from hell - Good video reviewing and discussing crypto games Humour Crypto Curious - South Park on NFTs - 2021-12-21 N-FT: Non-Functioning Tower - NFT satire - 2022-03-07 "a normal person explains cryptocurrency" by Avalon Penrose - 2021-12-22 "my crypto friend calls me every day and this is what he sounds like" by Flula - 2021-02-22 The Billion-Dollar Bitcoin Scam - Ordinary Things - 2020-05-31 - "What is Bitcoin? Is Bitcoin a scam? And how did Bitcoin become what it is today? Who was the Dread Pirate Roberts and what happened to the Silk Road?" Cryptocurrencies: Last Week Tonight with John Oliver (HBO) - 2018-03-12 Don't Understand Bitcoin? This Man Will Mumble An Explanation At You by ClickHole - 2015-07-07 If Cryptocurrency was Honest If NFTs were Honest Brave New Web - an anti-utopian Web3 satire by Nikolay Vlasov - 2022-04-10 Twitter users Whilst these users may not solely discuss crypto or web3, they do discuss it regularly, and have consistently provided well-written critique. https://twitter.com/web3isgreat https://twitter.com/ncweaver https://twitter.com/molly0xFFF https://twitter.com/smdiehl Crypto Criticism Threads https://twitter.com/rufuspollock https://twitter.com/troll_lock https://twitter.com/CasPiancey - "Under promise, under deliver" co-host @cryptocriticpod opinions are mine, not my employer odds and ends @protos hold no crypto or crypto stonks https://twitter.com/BennettTomlin - I do data science and track down frauds | 74% backed | Co-host @CryptoCriticPod | Writing @fud_letter | Discord: https://discord.gg/YpAUqNkhSC https://twitter.com/SilvermanJacob (staff writer New Republic) & https://twitter.com/ben_mckenzie - "apparently I now write about crypto" https://twitter.com/doctorow Tether, and other stablecoins Bennett Tomlin: Tether and Bitfinex Introduction - 2021-08-10 - Tether and Bitfinex are two of the most important companies in the cryptocurrency ecosystem. Tether is the largest stablecoin, and the primary driver of volume and liquidity. Bitfinex used to be the largest cryptocurrency exchange, and still is a frequently used exchange. Tether and Bitfinex have an incredibly problematic past and are quite possibly the largest corporate fraud in history. Detailed overview of Tether and Bitfinex and their connection. Tether Papers: This is exactly who acquired 70% of all USDT ever issued - 2021-11-10 Bloomberg: Tether's Latest Black Eye Is CFTC Fine for Lying About Reserves - 2021-10-15 - Biggest stablecoin issuer hit with $41 million penalty. Affiliated crypto exchange Bitfinex also fined $1.5 million. Bloomberg: Anyone Seen Tether's Billions? - 2021-10-07 - A wild search for the U.S. dollars supposedly backing the stablecoin at the center of the global cryptocurrency trade—and in the crosshairs of U.S. regulators and prosecutors.
[paywalled] ( cached ) Bloomberg: Tether Fails to Dispel Mystery on Stablecoin's Crucial Reserves - 2021-12-03 - Holdings include $30.6 billion in commercial paper and CDs. About $1 billion moved from reverse repo notes to money funds Central Bank Digital Currencies Money and Payments: The U.S. Dollar in the Age of Digital Transformation - provides a high-level overview of the current state of central bank and private sector currencies in the US, and identifies risks and challenges with the implementation of a central bank digital currency. From the paper summary: "The paper summarizes the current state of the domestic payments system and discusses the different types of digital payment methods and assets that have emerged in recent years, including stablecoins and other cryptocurrencies. It concludes by examining the potential benefits and risks of a CBDC, and identifies specific policy considerations." Trading/Market Microstructure/Security Risks Quantifying Blockchain Extractable Value: How dark is the forest? - Qin et al., 2021. Technical paper characterizing and quantifying miner extracted value on Ethereum's DeFi smart contracts. High-Frequency Trading on Decentralized On-Chain Exchanges - Zhou et al., 2020. Technical paper detailing the "front-running" that occurs on Ethereum. An Anatomy of Bitcoin Price Manipulation - Matt Ranger, 2022. Speculative analysis of centralized cryptocurrency exchange market data to support a price manipulation hypothesis. Former bitcoin enthusiasts turned skeptics Money corrupts; bitcoin corrupts absolutely. by Angelino Desmet - 12-03-2021 I wish I never bought bitcoin. by Angelino Desmet - 01-06-2020 Religious skeptical angles Buddhist Sujato Bhikkhu on Crypto by Sujato Bhikkhu. A monk explains why crypto is incompatible with the teachings of the Buddha from both moral and spiritual dimensions. Christian The Christian case against Bitcoin and blockchain by Luke Plant. A reading of bitcoin philosophy and its cult-like phenomenon from a biblical perspective, 2021-03-2022. What you should know about Bitcoin by Joe Carter. A well-researched, accurate introduction to Bitcoin from a Christian perspective, 2017-12-27. Ask the Economist: Should a Christian Invest in Bitcoin? by Greg Phelan, 2021-10-27. What is blockchain, web3, etc. Best intros/overviews of blockchain, crypto, web3, etc. On Blockchain and Trust - February 12, 2019 by Bruce Schneier. The article also appeared on wired.com as There's No Good Reason to Trust Blockchain Technology . The Myth of Decentralization and Lies about Web 2.0 - 2022-01-07 by Emily Gorcenski http://kernel.community - A custom web3 educational community with free learning resources at https://kernel.community/en/learn/ Iron-manning the pro arguments Here we collect the best theses for why blockchain/crypto"currency"/web3 is supposedly important/interesting/world-changing. Bitcoin Bitcoin for the Open-Minded Skeptic - May 2020 - by [[people/Matt Huang]]. Note: more an argument for why Bitcoin will "make it" than any argument why that is socially valuable (or not). 7 Things To Read About Bitcoin (For Institutional Investors) - May 2020 - by [[people/Matt Huang]] General JumpCrypto Crypto Reading List (on Github) Web3 Sean Bonner: Why Web3 - 2021-10-26 - by Sean Bonner. "Web3 upends the power structures we've grown accustomed to and puts artists and creators back into the drivers seat…Web3 offers a future where people are in charge of their own identities, not beholden to the whims of data hoarding corporations.
People control their own accounts, own their own futures…So if you are asking "Why Web3?" The answer is simple. Web3 is the future." Fat protocols From https://www.usv.com/writing/2016/08/fat-protocols/ The previous generation of shared protocols (TCP/IP, HTTP, SMTP, etc.) produced immeasurable amounts of value, but most of it got captured and re-aggregated on top at the applications layer, largely in the form of data (think Google, Facebook and so on). The Internet stack, in terms of how value is distributed, is composed of "thin" protocols and "fat" applications. This relationship between protocols and applications is reversed in the blockchain application stack. Value concentrates at the shared protocol layer and only a fraction of that value is distributed along at the applications layer. It's a stack with "fat" protocols and "thin" applications. Crypto Tokens and the Coming Age of Protocol Innovation - 2016-07-28 - by Albert Wenger at USV. More about incentivizing investment in the protocols Fat Protocols - Aug 2016 - Joel Monegro at USV - more about incentivizing adoption Fairer governance Can support more democratic, distributed governance, e.g. cooperatives (somehow). Can save Democracy. If I Only had a Heart: a DisCO Manifesto - Dec 2019 - A joint publication by DisCO.coop, the Transnational Institute and Guerrilla Media Collective. "Value Sovereignty, Care Work, Commons and Distributed Cooperative Organizations. The DisCO Manifesto is a deep dive into the world of Distributed Cooperative Organizations. Over its 80 colorful pages, you will read about how DisCOs are a P2P/Commons, cooperative and Feminist Economic alternative to Decentralized Autonomous Organizations (DAOs). The DisCO Manifesto also includes some background on topics like blockchain, AI, the commons, feminism, cooperatives, cyberpunk, and more." Wired: The Father of Web3 Wants You to Trust Less - 2021-11-29 - Gavin Wood, who coined the term Web3 in 2014, believes decentralized technologies are the only hope of preserving liberal democracy. Fairer Economy Li Jin on the future of the creator economy - Shared ownership and control of online platforms is the way forward (via crypto) Note: we probably all want that wonderful outcome; it's just that crypto is neither necessary nor likely to get us there. See https://rufuspollock.com/fixing-facebook/ Reference History of speculation, manias, etc. Devil Take the Hindmost: A History of Financial Speculation by Edward Chancellor (1998) Manias, Panics, and. Crashes. A History of Financial Crises by CP Kindleberger (1978) Inbox This is a section for links that haven't yet been reviewed and/or allocated to a particular section. https://the-crypto-syllabus.com/web3-a-map-in-search-of-territory/ - Jan 2022 - by Evgeny Morozov Proof of Work vs Proof of Stake, and the Stablecoin Centralization Problem - good overview of PoW vs PoS and the complexity/problems PoS adds. Second half of the article expounds on how "any smart contract blockchain that relies heavily on DeFi for its use case, can have the outcome of its hard forks significantly determined by centralized stablecoin custodians." Long article and could fit under multiple headings here. https://www.reddit.com/r/anticryptocurrency/ - reddit with a significant number of links https://www.profgalloway.com/web3/ - 2022-01-15 - Prof Scott Galloway @ NYU.
Unequal, focused on getting rich, facilitating crime, centralized Cryptoeconomics as a Limitation on Governance - 2021-11-11 - Nathan Schneider, University of Colorado Boulder Financial Times: Matt Damon’s crypto ad is more than just cringeworthy (paywall) Francesca Bria on Decentralisation, Sovereignty, and Web3 Booming NFT art market plagued by 'mind-blowing' fraud Pros BanklessDAO: State of the DAOs #7: Social Tokens and the Future of Work - 2022-01-13 Scanning the European Ecosystem of Distributed Ledger Technologies for Social and Public Good - Oct 2020 - by Samer Hassan and colleagues Twitter thread: https://twitter.com/samerP2P/status/1317123399295041541 Other suggestions New topic concerning the psychological harm, such as: gambling, greed, cultism, etc.;Making sense of web3 & crypto. Introduction to key concepts and ideas. Rigorous, constructive analysis of key claims pro and con. A look at the deeper hopes and aspirations.;awesome,awesome-list,blockchain,cryptocurrency,web3,technosolutionism,bitcoin,ethereum
life-itself/web3
alibaba/EasyCV;[![PyPI](https://img.shields.io/pypi/v/pai-easycv)](https://pypi.org/project/pai-easycv/) [![Documentation Status](https://readthedocs.org/projects/easy-cv/badge/?version=latest)](https://easy-cv.readthedocs.io/en/latest/) [![license](https://img.shields.io/github/license/alibaba/EasyCV.svg)](https://github.com/open-mmlab/mmdetection/blob/master/LICENSE) [![open issues](https://isitmaintained.com/badge/open/alibaba/EasyCV.svg)](https://github.com/alibaba/EasyCV/issues) [![GitHub pull-requests](https://img.shields.io/github/issues-pr/alibaba/EasyCV.svg)](https://GitHub.com/alibaba/EasyCV/pull/) [![GitHub latest commit](https://badgen.net/github/last-commit/alibaba/EasyCV)](https://GitHub.com/alibaba/EasyCV/commit/) EasyCV English | 简体中文 Introduction EasyCV is an all-in-one computer vision toolbox based on PyTorch, which mainly focuses on self-supervised learning, transformer-based models, and major CV tasks including image classification, metric learning, object detection, pose estimation, and so on. Major features SOTA SSL Algorithms EasyCV provides state-of-the-art algorithms in self-supervised learning based on contrastive learning such as SimCLR, MoCO V2, Swav, DINO, and also MAE based on masked image modeling. We also provide standard benchmarking tools for SSL model evaluation. Vision Transformers EasyCV aims to provide an easy way to use the off-the-shelf SOTA transformer models trained either using supervised learning or self-supervised learning, such as ViT, Swin Transformer, and the DETR series. More models will be added in the future. In addition, we support all the pretrained models from timm . Functionality & Extensibility In addition to SSL, EasyCV also supports image classification, object detection, metric learning, and more areas will be supported in the future. Although covering different areas, EasyCV decomposes the framework into different components such as dataset, model and running hook, making it easy to add new components and combine them with existing modules. EasyCV provides a simple and comprehensive interface for inference. Additionally, all models are supported on PAI-EAS , where they can be easily deployed as online services with automatic scaling and service monitoring. Efficiency EasyCV supports multi-GPU and multi-worker training. EasyCV uses DALI to accelerate the data I/O and preprocessing process, and uses TorchAccelerator and fp16 to accelerate the training process. For inference optimization, EasyCV exports models using JIT script, which can be optimized by PAI-Blade What's New [🔥 2023.05.09] 09/05/2023 EasyCV v0.11.0 was released. Support EasyCV as a plug-in for [modelscope](https://github.com/modelscope/modelscope). [🔥 2023.03.06] 06/03/2023 EasyCV v0.10.0 was released. Add segmentation model STDC Add skeleton-based video recognition model STGCN Support ReID and multi-lens MOT [🔥 2023.01.17] 17/01/2023 EasyCV v0.9.0 was released. Support single-lens MOT Support video recognition (X3D, SWIN-video) [🔥 2022.12.02] 02/12/2022 EasyCV v0.8.0 was released. bevformer-base NDS increased by 0.8 on nuscenes val, training speed increased by 10%, and inference speed increased by 40%. Support Objects365 pretraining, and add the DINO++ model, which achieves 63.4 mAP at a model scale of 200M (the best accuracy at that scale). [🔥 2022.08.31] We have released our YOLOX-PAI that achieves SOTA results within 40~50 mAP (less than 1ms). And we also provide a convenient and fast export/predictor API for end2end object detection.
To get a quick start of YOLOX-PAI, click here ! 31/08/2022 EasyCV v0.6.0 was released. Release YOLOX-PAI which achieves SOTA results within 40~50 mAP (less than 1ms) Add detection algo DINO which achieves 58.5 mAP on COCO Add mask2former algo Releases imagenet1k, imagenet22k, coco, lvis, voc2012 data with BaiduDisk to accelerate downloading Please refer to change_log.md for more details and history. Technical Articles We have a series of technical articles on the functionalities of EasyCV. * EasyCV开源|开箱即用的视觉自监督+Transformer算法库 * MAE自监督算法介绍和基于EasyCV的复现 * 基于EasyCV复现ViTDet:单层特征超越FPN * 基于EasyCV复现DETR和DAB-DETR,Object Query的正确打开方式 * YOLOX-PAI: 加速YOLOX, 比YOLOv6更快更强 * EasyCV带你复现更好更快的自监督算法-FastConvMAE * EasyCV DataHub 提供多领域视觉数据集下载,助力模型生产 * 使用EasyCV Mask2Former轻松实现图像分割 Installation Please refer to the installation section in quick_start.md for installation. Get Started Please refer to quick_start.md for a quick start. We also provide tutorials for more usage examples. self-supervised learning image classification metric learning object detection with yolox-pai model compression with yolox using torchacc file io for local and oss files using mmdetection model in EasyCV batch prediction tools notebook * self-supervised learning * image classification * object detection with yolox-pai * metric learning Model Zoo Architectures Self-Supervised Learning Image Classification Object Detection Segmentation Object Detection 3D BYOL (NeurIPS'2020) DINO (ICCV'2021) MiXCo (NeurIPS'2020) MoBY (ArXiv'2021) MoCov2 (ArXiv'2020) SimCLR (ICML'2020) SwAV (NeurIPS'2020) MAE (CVPR'2022) FastConvMAE (ArXiv'2022) ResNet (CVPR'2016) ResNeXt (CVPR'2017) HRNet (CVPR'2019) ViT (ICLR'2021) SwinT (ICCV'2021) EfficientFormer (ArXiv'2022) DeiT (ICML'2021) XCiT (ArXiv'2021) TNT (NeurIPS'2021) ConViT (ArXiv'2021) CaiT (ICCV'2021) LeViT (ICCV'2021) ConvNeXt (CVPR'2022) ResMLP (ArXiv'2021) CoaT (ICCV'2021) ConvMixer (ICLR'2022) MLP-Mixer (ArXiv'2021) NesT (AAAI'2022) PiT (ArXiv'2021) Twins (NeurIPS'2021) Shuffle Transformer (ArXiv'2021) DeiT III (ECCV'2022) Hydra Attention (2022) FCOS (ICCV'2019) YOLOX (ArXiv'2021) YOLOX-PAI (ArXiv'2022) DETR (ECCV'2020) DAB-DETR (ICLR'2022) DN-DETR (CVPR'2022) DINO (ArXiv'2022) Instance Segmentation Mask R-CNN (ICCV'2017) ViTDet (ArXiv'2022) Mask2Former (CVPR'2022) Semantic Segmentation FCN (CVPR'2015) UperNet (ECCV'2018) Panoptic Segmentation Mask2Former (CVPR'2022) BEVFormer (ECCV'2022) Please refer to the following model zoo for more details. self-supervised learning model zoo classification model zoo detection model zoo detection3d model zoo segmentation model zoo pose model zoo Data Hub EasyCV has collected dataset info for different scenarios, making it easy for users to finetune or evaluate models in the EasyCV model zoo. Please refer to data_hub.md . License This project is licensed under the Apache License (Version 2.0) . This toolkit also contains various third-party components and some code modified from other repos under other open source licenses. See the NOTICE file for more information. Contact This repo is currently maintained by the PAI-CV team; you can contact us via * Dingding group number: 41783266 * Email: easycv@list.alibaba-inc.com Enterprise Service If you need EasyCV enterprise service support, or want to purchase cloud product services, you can contact us via the DingDing Group.;An all-in-one toolkit for computer vision;self-supervised-learning,transformers,classification,computer-vision,object-detection,pytorch,vision-transformer
alibaba/EasyCV
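The "simple and comprehensive interface for inference" and the batch prediction tools mentioned in the EasyCV entry above are easiest to picture with a small sketch. Everything below the import line is an assumption: the class name `TorchYoloXPredictor`, its import path, the constructor argument, and the shape of `predict`'s return value are recalled from EasyCV's prediction tutorial, not verified here, and should be checked against quick_start.md before use:

```python
# Hypothetical sketch of EasyCV inference; verify the class name and
# signatures against the EasyCV docs before relying on them.
from PIL import Image

# Assumed import path and predictor class (unverified).
from easycv.predictors.detector import TorchYoloXPredictor

# Assumed: the predictor is constructed from an exported model file.
predictor = TorchYoloXPredictor("models/yolox_s_export.pt")

img = Image.open("test.jpg").convert("RGB")

# Assumed: predict() takes a list of images and returns one result per image,
# e.g. detection boxes, scores, and class labels.
outputs = predictor.predict([img])
print(outputs[0])
```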
widgetti/solara;A Pure Python, React-style Framework for Scaling Your Jupyter and Web Apps Come chat with us on Discord to ask questions or share your thoughts or creations! Introducing Solara While there are many Python web frameworks out there, most are designed for small data apps or use paradigms unproven for larger scale. Code organization, reusability, and state tend to suffer as apps grow in complexity, resulting in either spaghetti code or offloading to a React application. Solara addresses this gap. Using a React-like API, we don't need to worry about scalability. React has already proven its ability to support the world's largest web apps. Solara uses a pure Python implementation of React (Reacton), creating ipywidget-based applications. These apps work both inside the Jupyter Notebook and as standalone web apps with frameworks like FastAPI. This paradigm enables component-based code and incredibly simple state management. By building on top of ipywidgets, we automatically leverage an existing ecosystem of widgets and run on many platforms, including JupyterLab, Jupyter Notebook, Voilà, Google Colab, DataBricks, JetBrains Datalore, and more. We care about developer experience. Solara gives you hot code reloading and type hints for faster development. Installation Run: pip install solara Or follow the Installation instructions for more details. First script Put the following Python snippet in a file (we suggest sol.py ), or put it in a Jupyter notebook cell: ```python import solara Declare reactive variables at the top level. Components using these variables will be re-executed when their values change. sentence = solara.reactive("Solara makes our team more productive.") word_limit = solara.reactive(10) @solara.component def Page(): # Calculate word_count within the component to ensure re-execution when reactive variables change. word_count = len(sentence.value.split()) solara.SliderInt("Word limit", value=word_limit, min=2, max=20) solara.InputText(label="Your sentence", value=sentence, continuous_update=True) # Display messages based on the current word count and word limit. if word_count >= int(word_limit.value): solara.Error(f"With {word_count} words, you passed the word limit of {word_limit.value}.") elif word_count >= int(0.8 * word_limit.value): solara.Warning(f"With {word_count} words, you are close to the word limit of {word_limit.value}.") else: solara.Success("Great short writing!") The following line is required only when running the code in a Jupyter notebook: Page() ``` Run from the command line in the same directory where you put your file ( sol.py ): bash $ solara run sol.py Solara server is starting at http://localhost:8765 Or copy-paste this to a Jupyter notebook cell and execute it (the Page() expression at the end will cause it to automatically render the component in the notebook). See this snippet run live at https://solara.dev/documentation/getting_started Demo The following demo app can be used to explore a dataset (built-in or uploaded by you) using a scatter plot. The plot can be interacted with to filter the dataset, and the filtered dataset can be downloaded. Source code Running in solara-server The solara server is built on top of Starlette/FastAPI and runs standalone. Ideal for production use. Running in Jupyter By building on top of ipywidgets, we automatically leverage an existing ecosystem of widgets and run on many platforms, including JupyterLab, Jupyter Notebook, Voilà, Google Colab, DataBricks, JetBrains Datalore, and more.
This means our app can also run in Jupyter: Resources Visit our main website or jump directly to the introduction Note that the solara.dev website is created using Solara;A Pure Python, React-style Framework for Scaling Your Jupyter and Web Apps;dataapp,fastapi,flask,ipywidgets,jupyter,starlette,webapp
widgetti/solara
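To make the reactive-state model in the Solara entry above concrete, here is an even smaller sketch in the same style as the README's first script, a click counter. It uses only constructs that appear in Solara's documented API (`solara.reactive`, `@solara.component`, `solara.Button`, `solara.Markdown`); treat it as an illustrative sketch rather than official documentation:

```python
import solara

# A reactive value declared at module level; any component that reads it
# is re-executed when the value changes.
clicks = solara.reactive(0)

@solara.component
def Page():
    def increment():
        # Writing to clicks.value triggers a re-render of components
        # that read it.
        clicks.value += 1

    # Reading clicks.value subscribes this component to changes.
    solara.Markdown(f"**Clicked {clicks.value} times**")
    solara.Button("Click me", on_click=increment)
```

As with the README's example, this runs both under `solara run` and in a Jupyter notebook cell (by adding a trailing `Page()` expression).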
BrunoLevy/geogram;geogram Geogram is a programming library with geometric algorithms. It has geometry-processing functionalities: - surface reconstruction - remeshing - parameterization and texturing - intersections and Boolean operations - Constructive Solid Geometry It also has lower-level algorithms: - Exact numbers / exact predicates - Delaunay triangulations in 2D and highly efficient parallel Delaunay triangulations in 3D - Memory-efficient surfacic/volumetric/hybrid mesh data structure - Efficient geometric search data structures for intersection and raytracing (AABBs, KdTrees, ...) - Spectral mesh processing - Linear solver on CPU and GPU Geogram received the Symposium on Geometry Processing Software Award in 2023. Geogram contains the main results in Geometry Processing from the former ALICE Inria project, that is, more than 30 research articles published in ACM SIGGRAPH, ACM Transactions on Graphics, Symposium on Geometry Processing and Eurographics. It was supported by two grants from the European Research Council (ERC): GOODSHAPE and VORPALINE. Links Documentation, how to compile, tutorials.... Programmer's reference manuals... Releases Projects with geogram Graphite , an experimental 3D modeler built around geogram. Geogram in-browser demos (How is it possible? More on this here .) Data How does it compare to other geometry-processing libraries? See the FAQ;a programming library with geometric algorithms;graphics-libraries,graphics-programming,mesh-generation,mesh-processing,geometry-processing
BrunoLevy/geogram
copilot-emacs/copilot.el;Copilot.el Copilot.el is an Emacs plugin for GitHub Copilot. Warning: This plugin is unofficial and based on binaries provided by copilot.vim . Note: You need access to GitHub Copilot to use this plugin. Current maintainers: @emil-vdw , @jcs090218 , @rakotomandimby . Retired maintainer: @zerolfx . Installation Ensure your Emacs version is at least 27, and that the dependency packages editorconfig ( melpa ) and jsonrpc ( elpa , >= 1.0.14) are both installed. Install Node.js v18+. (You can specify the path to the node executable by setting copilot-node-executable .) Setup copilot.el as described in the next section. Install the copilot server by M-x copilot-install-server . Login to Copilot by M-x copilot-login . You can also check the status by M-x copilot-diagnose ( NotAuthorized means you don't have a valid subscription). Enjoy! Configurations Example for Doom Emacs Add package definition to `~/.doom.d/packages.el`: ```elisp (package! copilot :recipe (:host github :repo "copilot-emacs/copilot.el" :files ("*.el"))) ``` Configure copilot in `~/.doom.d/config.el`: ```elisp ;; accept completion from copilot and fallback to company (use-package! copilot :hook (prog-mode . copilot-mode) :bind (:map copilot-completion-map ("<tab>" . 'copilot-accept-completion) ("TAB" . 'copilot-accept-completion) ("C-TAB" . 'copilot-accept-completion-by-word) ("C-<tab>" . 'copilot-accept-completion-by-word))) ``` We strongly recommend enabling the `childframe` option in the `company` module (`(company +childframe)`) to prevent overlay conflicts. If pressing tab to complete sometimes doesn't work, you might want to bind completion to another key or try: ```elisp (after! (evil copilot) ;; Define the custom function that either accepts the completion or does the default behavior (defun my/copilot-tab-or-default () (interactive) (if (and (bound-and-true-p copilot-mode) ;; Add any other conditions to check for active copilot suggestions if necessary ) (copilot-accept-completion) (evil-insert 1))) ; Default action to insert a tab. Adjust as needed. ;; Bind the custom function to <tab> in Evil's insert state (evil-define-key 'insert 'global (kbd "<tab>") 'my/copilot-tab-or-default)) ``` Example for Spacemacs Edit your `~/.spacemacs`: ```elisp ;; =================== ;; dotspacemacs/layers ;; =================== ;; add or uncomment the auto-completion layer dotspacemacs-configuration-layers '( ... auto-completion ... ) ;; add copilot.el to additional packages dotspacemacs-additional-packages '((copilot :location (recipe :fetcher github :repo "copilot-emacs/copilot.el" :files ("*.el")))) ;; ======================== ;; dotspacemacs/user-config ;; ======================== ;; accept completion from copilot and fallback to company (with-eval-after-load 'company ;; disable inline previews (delq 'company-preview-if-just-one-frontend company-frontends)) (with-eval-after-load 'copilot (define-key copilot-completion-map (kbd "<tab>") 'copilot-accept-completion) (define-key copilot-completion-map (kbd "TAB") 'copilot-accept-completion) (define-key copilot-completion-map (kbd "C-TAB") 'copilot-accept-completion-by-word) (define-key copilot-completion-map (kbd "C-<tab>") 'copilot-accept-completion-by-word)) (add-hook 'prog-mode-hook 'copilot-mode) ``` General Configurations #### 1. 
Load `copilot.el` ##### Option 1: Load via `straight.el` or `quelpa` (recommended) ###### `straight.el`: ```elisp (use-package copilot :straight (:host github :repo "copilot-emacs/copilot.el" :files ("*.el")) :ensure t) ;; you can utilize :map :hook and :config to customize copilot ``` ###### `quelpa` + `quelpa-use-package`: ```elisp (use-package copilot :quelpa (copilot :fetcher github :repo "copilot-emacs/copilot.el" :branch "main" :files ("*.el"))) ;; you can utilize :map :hook and :config to customize copilot ``` ##### Option 2: Load manually Please make sure you have these dependencies installed (available in ELPA/MELPA): + `dash` + `s` + `editorconfig` + `f` After installing those, clone this repository, then insert the below snippet into your config file. ```elisp (add-to-list 'load-path "/path/to/copilot.el") (require 'copilot) ``` #### 2. Configure completion ##### Option 1: Use `copilot-mode` to automatically provide completions ```elisp (add-hook 'prog-mode-hook 'copilot-mode) ``` To customize the behavior of `copilot-mode`, please check `copilot-enable-predicates` and `copilot-disable-predicates`. ##### Option 2: Manually provide completions You need to bind `copilot-complete` to some key and call `copilot-clear-overlay` inside `post-command-hook`. #### 3. Configure completion acceptance Use tab to accept completions (you may also want to bind `copilot-accept-completion-by-word` to some key): ```elisp (define-key copilot-completion-map (kbd "<tab>") 'copilot-accept-completion) (define-key copilot-completion-map (kbd "TAB") 'copilot-accept-completion) ``` Programming language detection Copilot.el detects the programming language of a buffer based on the major-mode name, stripping the -mode part. The resulting languageId should match the table here . You can add unusual major-mode mappings to copilot-major-mode-alist . Without the proper language set, suggestions may be of poorer quality. elisp (add-to-list 'copilot-major-mode-alist '("enh-ruby" . "ruby")) Commands copilot-diagnose Check the current status of the plugin. You can also check logs in the *copilot events* buffer and stderr output in the *copilot stderr* buffer. copilot-login Login to GitHub, required for using the plugin. copilot-mode Enable/disable copilot mode. copilot-complete Try to complete at the current point. copilot-accept-completion Accept the current completion. copilot-clear-overlay Clear copilot overlay in the current buffer. copilot-accept-completion-by-line / copilot-accept-completion-by-word Similar to copilot-accept-completion , but accepts the completion by line or word. You can use a prefix argument to specify the number of lines or words to accept. copilot-next-completion / copilot-previous-completion Cycle through the completion list. copilot-logout Log out from GitHub. Customization copilot-node-executable The executable path of Node.js . copilot-idle-delay Time in seconds to wait before starting completion (defaults to 0). Note that Copilot itself has a ~100ms delay because of network communication. copilot-enable-predicates / copilot-disable-predicates A list of predicate functions with no arguments to enable/disable triggering Copilot in copilot-mode . copilot-enable-display-predicates / copilot-disable-display-predicates A list of predicate functions with no arguments to enable/disable showing Copilot's completions in copilot-mode . copilot-clear-overlay-ignore-commands A list of commands that won't cause the overlay to be cleared. 
copilot-network-proxy Format: '(:host "127.0.0.1" :port 7890 :username "user" :password "password") , where :username and :password are optional. For example: elisp (setq copilot-network-proxy '(:host "127.0.0.1" :port 7890)) Known Issues Wrong Position of Other Completion Popups This is an example of using it together with the default frontend of company-mode . Because both company-mode and copilot.el use overlays to show completions, the conflict is inevitable. To solve the problem, I recommend using company-box (only available on GUI), which is based on a child frame rather than an overlay. After using company-box , you have: In other editors (e.g. VS Code , PyCharm ), completions from copilot and other sources cannot be shown at the same time. But I decided to allow them to coexist, allowing you to choose the better one at any time. Cursor Jump to End of Line When Typing If you are using whitespace-mode , make sure to remove newline-mark from whitespace-style . Reporting Bugs Make sure you have restarted your Emacs (and rebuilt the plugin if necessary) after updating the plugin. Please enable event logging by customizing copilot-log-max (to e.g. 1000), then paste related logs in the *copilot events* and *copilot stderr* buffer. If an exception is thrown, please also paste the stack trace (use M-x toggle-debug-on-error to enable stack traces). Roadmap [x] Setup Copilot without Neovim [x] Cycle through suggestions [x] Add Copilot minor-mode [ ] ~~Add package to MELPA~~ Thanks These projects helped me a lot: https://github.com/TommyX12/company-tabnine/ https://github.com/cryptobadger/flight-attendant.el https://github.com/github/copilot.vim;An unofficial Copilot plugin for Emacs.;[]
copilot-emacs/copilot.el
audulus/rui;rui Experimental Rust UI library, inspired by SwiftUI. Early days, but some stuff already works. rui will be used for a future version of Audulus. rui is GPU rendered and updates reactively (when your state changes). The focus of rui is to have the best ergonomics, and use the simplest possible implementation. As such, there is no retained view tree (DOM) or view diffing. Everything is re-rendered when state changes, under the assumption that we can do that quickly. discord server macOS ✅ Windows ✅ Linux ✅ iOS ✅ (see https://github.com/audulus/rui-ios) wasm (WIP) Examples obligatory Counter ( cargo run --example counter ): ```rust use rui::*; fn main() { state( || 1, |count, cx| { vstack(( cx[count].padding(Auto), button("increment", move |cx| { cx[count] += 1; }) .padding(Auto), )) }, ) .run() } ``` some shapes ( cargo run --example shapes ): ```rust use rui::*; fn main() { vstack(( circle() .color(RED_HIGHLIGHT) .padding(Auto), rectangle() .corner_radius(5.0) .color(AZURE_HIGHLIGHT) .padding(Auto) )) .run() } ``` canvas for gpu drawing ( cargo run --example canvas ): ```rust use rui::*; fn main() { canvas(|_, rect, vger| { vger.translate(rect.center() - LocalPoint::zero()); let paint = vger.linear_gradient( [-100.0, -100.0], [100.0, 100.0], AZURE_HIGHLIGHT, RED_HIGHLIGHT, 0.0, ); let radius = 100.0; vger.fill_circle(LocalPoint::zero(), radius, paint); }) .run() } ``` slider with map ( cargo run --example slider ): ```rust use rui::*; #[derive(Default)] struct MyState { value: f32, } /// A slider with a value. fn my_slider(s: impl Binding<f32>) -> impl View { with_ref(s, move |v| { vstack(( v.to_string().font_size(10).padding(Auto), hslider(s).thumb_color(RED_HIGHLIGHT).padding(Auto), )) }) } fn main() { state(MyState::default, |state, cx| map( cx[state].value, move |v, cx| cx[state].value = v, |s, _| my_slider(s), ), ) .run() } ``` widget gallery ( cargo run --example gallery ): Goals Encode UI in types to ensure stable identity. Optimize to reduce redraw. Use vger-rs for rendering. Minimal boilerplate. Good looking. No unsafe . Accessibility for assistive technologies. Optional Features winit - ( enabled by default ) use winit for windowing. Use default-features = false if you are embedding rui (see https://github.com/audulus/rui-ios). Why and how? In the long term, I'd like to move Audulus over to Rust. After looking at other available UI options, it seemed best to implement something resembling the existing immediate mode UI system I already have working in Audulus, but better. I had been enjoying the ergonomics of SwiftUI, but SwiftUI simply can't handle big node graphs very well ( we have tried and had to fall back to manual layout and render with Canvas , so we couldn't put custom Views within each node). What you find with SwiftUI (particularly when profiling) is that there's a lot of machinery dealing with the caching aspects of things. It's opaque, scary (crashes on occasion, parts are implemented in C++ not Swift!), and can be rather slow. Often, it seems to be caching things that are trivial to recompute in the first place. Not so long ago, before programmable shaders, it was necessary to cache parts of a UI in textures (CoreAnimation for example does this) to get good performance. Now we have extremely fast GPUs and such caching is not necessary to achieve good performance. In fact, if enough is animating, lots of texture caching can hinder performance, since the caches need to be updated so often. 
Plus, the textures consume a fair amount of memory, and when you have an unbounded node-graph like Audulus, that memory usage would be unbounded. And what resolution do you pick for those textures? So rui starts from the assumption that 2D UI graphics (not general vector graphics!) are a trivial workload for a GPU. If you consider how advanced games are now, doing realtime global illumination and such, this seems intuitively correct, but Audulus more-or-less proves it. So that means we can do away with the texture caching, and we really might not even need damage regions either. I'm also skeptical of the need for parallel encoding or caching parts of the scene for 2D UI graphics, since, again, it's just a trivial GPU workload. Layout, on the other hand, can't easily be offloaded to GPU free-performance land. It's necessary to cache layout information and try not to recompute it all the time. So rui caches layout and only recomputes it when the state changes (unlike a typical immediate mode UI which computes layout on the fly and is constrained to very simple layouts). For Audulus, this isn't quite enough, since some view-local state will be changing all the time as things are animating (Audulus solves this by only recomputing layout when the central document state changes). Perhaps this is where proponents of DOM-ish things (some other OOP-ish tree of widgets) would jump in and make their case, but I'm skeptical that's really necessary. Think of what actually needs to be (re)computed: a layout box for each (ephemeral) View. Does this really require a separate tree of objects? Time will tell! Status ✅ basic shapes: circle, rounded rectangle ✅ basic gestures: tap, drag ✅ hstack/vstack ✅ text ✅ padding ✅ offsets ✅ state ✅ zstack ✅ canvas (GPU vector graphics with vger) ✅ bindings ✅ list ✅ sliders ✅ knobs ✅ editable text (still a bit rough) ✅ any_view (view type erasure) ✅ layout feedback ✅ animation ✅ UI unit testing References Towards principled reactive UI Towards a unified theory of reactive UI Flutter's Rendering Pipeline Static Types in SwiftUI How Layout Works in SwiftUI Xilem: an architecture for UI in Rust;Declarative Rust UI library;rust,gui,gpu,winit,wgpu,vger,graphics,ui,declarative-ui,user-interface
audulus/rui
vpavlenko/study-music;Awesome Music Theory Where to start Play 1. Pentatonic sequencer 2. Music Mouse 🐭 3. The Infinite Drum Machine 🥁 or Groove Pizza or Groove Pizzeria 4. Chord Player (check out "Melody" and "Explore" tabs) or aQWERTYon Interact 1. Go through Ableton's guide on music and Ableton's guide on synths 2. Bartosz Ciechanowski. Sound 3. Chrome Music Lab 4. 🤖 AI demos : Magenta , MusicLM , LakhNES , Muzic , Jazz Transformer Wander around 1. Explore Hooktheory's TheoryTab : search for your favorite songs and anime openings. 2. Ishkur's evolution of electronic music 3. Press Alt+"scan" at Every Noise 🌐 4. Piano rolls in 12 colors: Famicom/NES 👾 , popular music in MIDI 5. TuttiTempi: Chopin's Funeral March ⚰️ 6. Click "Show Timeline" for patterns similar to octatonic used in jazz solos: upward , downward 7. See how form can be visualized in MusicPlot and in BriFormer Watch 1. How a track emerges: - on the OP-1 🎛️ - in a studio with live instruments 🎻 - on a vocal looper 🎤 - in TidalCycles 💻 - Also, a Piano Phase jam in TidalCycles 2. Ravel's Bolero 3. The Art of Mixing 🎚️ 4. Nopia 🎹 - a chord-based synthesizer 5. 🍿 Two-chord changes typical for movie soundtracks: LP , H , T6 , S , F and N 6. Watch a gamelan multitrack and try to make sense of it , maybe with the help of a larger multitrack for another piece Read 1. 📚 Hooktheory 📚 - interactive books on pop harmony. A must-read for anyone 2. Music Theory for Musicians and Normal People 3. Dig into the structure of Beethoven's sonata #5 movement #1 , also see what we as a society know about it . 4. Visualizations: classical , jazz harmony , jazz solos , rock Sing 1. Arabic maqamat 2. Indonesian gamelan Lectures - 🎥 My video lectures are available (in Russian) Western music languages Music languages can be divided into a number of families. Historically, the most dominant and influential one is the Western family of languages. Its languages share some common traits: - 12-tone temperament - major/minor keys - homophony: melody over chords, chords give a separate narrative - chords as stacked thirds - any of the 12 notes can be a tonic - after two repetitions of any idea there should be a contrasting idea - mostly 4/4 and 3/4 - cadences are chord patterns The languages are (roughly speaking): - Rock - probably worth exploring first, as it's the simplest and pretty popular. It makes sense to start here and expand into other Western languages later on - as they share a lot of concepts. Rock here is an umbrella term for pop, soul/RnB, blues rock, folk rock, alternative, punk, prog, and heavy metal. Advanced - Classical - the biggest chapter here, as it's the main focus of research and teaching until recently (despite its unpopularity according to streaming stats and decolonization ideas). Subtopics: pre-classical , advanced , Bach chorales - Jazz . Subtopics: harmony , lego , solo - Groove/blues - funk, R&B - Barbershop - Movies (neo-Riemannian) - Video games - EDM - Other genres like country, gospel, contemporary worship music, rap - Western regional traditions (eg. Latin , flamenco?) Somewhat related to that are church chants: Gregorian, Byzantine, Armenian, Znamenny Non-Western music languages Non-Western music languages are different families. As they were developed all over the globe, they don't share many common features. 
The gradient of families is (roughly speaking): - Balkan languages - Maqam languages - Indian music - Gamelan , piphat and other gong chime languages - many other traditions: Chinese , Kyrgyz komuz , Georgian polyphonic singing , Japanese Broad overview on non-Western languages Topics Research MusoRepo: a Directory of Resources for Computational Musicology - curated by Mark Gotham corpus studies expressive performance interactive harmonic analysis Composition Visualizations and notation Maps of genres Listening guides - how to enjoy classical music without a deep commitment to learning theory Ear training Piano , guitar Rhythm Topics, tropes, meaning Pseudoscience Improvisation Sociology Psychology YouTube, podcasts and lists of resources Topics on electronic music Sound design Digital composition Neural networks , 🔥 tokenization 🔥 Transcription Mixing Microtonal music Notable instruments Institute of Sonology: One-Year Course Contacts I post updates and other rants on music theory on Twitter (in English) and on Telegram (in Russian) Do you know how to enroll in a music theory program (master's/PhD) after a computer science BSc and two years of jazz college ( linkedin )? Please, let me know: cxielamiko@gmail.com, t.me/vitalypavlenko (asking for myself) I'm always happy to chat about visualisation-aided music education and research popularisation. Also, I constantly feel severely deprived of communication with the real academic music theory community, so drop me a line ;) Also, if you're in the UK, and especially in London, drop me a line and let's grab coffee.;An "awesome music theory" kinda wiki with books, resources and courses for studying everything about music and sound;classical-music,ear-training,jazz,music,music-history,music-theory,musicology,sound-design,electronic-music,sound
vpavlenko/study-music
pingcap/ossinsight;Open Source Software Insight! Data Explorer • Repo Rankings • Developer Analytics • Repo Analytics • Collections • Workshop • Blog • API • Twitter ## Introduction OSS Insight is a powerful tool that provides comprehensive, valuable, and trending insights into the open source world by analyzing 5+ billion rows of GitHub events data. OSS Insight's Data Explorer provides a new way to explore GitHub data. Simply ask your question in natural language and Data Explorer will generate SQL, query the data, and present the results visually. OSS Insight also provides in-depth analysis of individual GitHub repositories and developers, as well as the ability to compare two repositories using the same metrics. [🎦 Video - OSS Insight: Easiest New Way to Analyze Open Source Software](https://www.youtube.com/watch?v=6ofDBgXh4So&t=1s) ### **Feature 0: Shareable Insight Widgets** Example widgets (shown as images on the site): Repository Activity Trends; Collaborative Productivity - Last 28 days; Repository Performance Stats - Last 28 days; Active Contributors - Last 28 days; Star Geographic Distribution; Star History. For more charming widgets, please [Check it out 👉](https://next.ossinsight.io/widgets?utm_source=github&utm_medium=referral) ### **Feature 1: GPT-Powered Data Exploration** Data Explorer provides a new way to discover trends and insights into 5+ billion rows of GitHub data. Simply ask your question in natural language and Data Explorer will generate SQL, query the data, and present the results visually. It's built with Chat2Query , a GPT-powered SQL generator in TiDB Cloud. Examples: * [Projects similar to @facebook/react](https://ossinsight.io/explore?id=ba186a53-b2ab-4cad-a46f-e2c36566cacd) * [The most interesting Web3 projects](https://ossinsight.io/explore?id=f829026d-491c-44e0-937a-287f97a3cba7) * [Where are @kubernetes/kubernetes contributors from?](https://ossinsight.io/explore?id=754a681e-913f-4333-b55d-dbd8598bd84d) * [More popular questions](https://ossinsight.io/explore/) [🎦 Video - Data Explorer: Discover insights in GitHub data with GPT-generated SQL](https://www.youtube.com/watch?v=rZZfgOJ-quI) ### **Feature 2: Technical Fields Analytics 👁️** * **GitHub Collections Analysis** Find insights about the monthly or historical rankings and trends in technical fields with curated repository lists. **Examples**: * [Collection: Web Framework](https://ossinsight.io/collections/web-framework) * [Collection: Artificial Intelligence](https://ossinsight.io/collections/artificial-intelligence) * [Collection: Web3](https://ossinsight.io/collections/web3) * [More](https://ossinsight.io/collections/open-source-database) ... **Welcome to add collections** 👏 We welcome your contributions here! You can add a collection on our website by submitting PRs. Please create a `.yml` file under [the collections file path]( https://github.com/pingcap/ossinsight/tree/main/etl/meta/collections). [Here](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md#add-a-collection) is a file template that describes what you need to include. We look forward to your PRs! * **Deep Insight into some popular fields of technology** We share many deep insights into popular fields of technology, such as open source databases, JavaScript frameworks, low-code development tools, and so on. 
**Examples**: * [Deep Insight Into Open Source Databases](https://ossinsight.io/blog/deep-insight-into-open-source-databases) * [JavaScript Framework Repos Landscape 2021](https://ossinsight.io/blog/deep-insight-into-js-framework-2021) * [Web Framework Repos Landscape 2021](https://ossinsight.io/blog/deep-insight-into-web-framework-2021) * [More](https://ossinsight.io/blog) ... We'll also share the SQL commands that generate all these analytical results above each chart, so you can use them on your own on TiDB Cloud following this [10-minute tutorial](https://ossinsight.io/blog/try-it-yourself/). ### **Feature 3: Developer Analytics** Insights about **developer productivity**, **work cadence**, and **collaboration** from developers' contribution behavior. * **Basic**: * Stars, behavior, most used languages, and contribution trends * Code (commits, pull requests, pull request size and code line changes), code reviews, and issues * **Advanced**: * Contribution time distribution for all kinds of contribution activities * Monthly stats about contribution activities in all public repositories ### **Feature 4: Repository Analytics** Insights about the **code update frequency & degree of popularity** from repositories' status. * **Basic**: * Stars, forks, issues, commits, pull requests, contributors, programming languages, and lines of code modified * Historical trends of these metrics * Time cost of issues, pull requests * **Advanced**: * Geographical distribution of stargazers, issue creators, and pull request creators * Company distribution of stargazers, issue creators, and pull request creators **Examples**: * [React](https://ossinsight.io/analyze/facebook/react) * [TiDB](https://ossinsight.io/analyze/pingcap/tidb) * [web3.js](https://ossinsight.io/analyze/web3/web3.js) * [Ant Design](https://ossinsight.io/analyze/ant-design/ant-design) * [Chaos Mesh](https://ossinsight.io/analyze/chaos-mesh/chaos-mesh) ### **Feature 5: Compare Projects 🔨** Compare two projects using the repo metrics mentioned in **Repository Analytics**. **Examples**: * [Compare Vue and React](https://ossinsight.io/analyze/vuejs/vue?vs=facebook/react) * [Compare CockroachDB and TiDB](https://ossinsight.io/analyze/pingcap/tidb?vs=cockroachdb/cockroach) * [Compare PyTorch and TensorFlow](https://ossinsight.io/analyze/pytorch/pytorch?vs=tensorflow/tensorflow) ## Contribution We've released OSS Insight because it can surface more insights about GitHub. We hope that others can benefit from the project. You are more than welcome to participate. We are thankful for any [contributions](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md) from the community. * [GitHub Discussion](https://github.com/pingcap/ossinsight/discussions). Best for: help with building, discussion about OSS Insight best practices. * [GitHub Issues](https://github.com/pingcap/ossinsight/issues). Best for: bugs and errors you encounter using OSS Insight. * [GitHub PR](https://github.com/pingcap/ossinsight/pulls). Best for: submitting pull requests for features you wish to see in OSS Insight. * [collection](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md#add-a-collection) You can add a collection on our website. * [blog](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md#add-a-blog) You are welcome to share blogs about using OSS Insight. * [fix](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md#add-a-collection) You can make fixes to current issues. 
* [feat](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md#feature-requests) You are welcome to contribute if you have new feature ideas. ## Contact We have a few channels for contact: * [GitHub Discussions](https://github.com/pingcap/ossinsight/discussions): You can ask a question or discuss here. * [@OSS Insight](https://twitter.com/OSSInsight) on Twitter * [mail](mailto:ossinsight@pingcap.com): If you want to analyze more, please [contact us](mailto:ossinsight@pingcap.com) ✉️ ## Development * [⚙️ Setup](https://github.com/pingcap/ossinsight/blob/main/CONTRIBUTING.md#development) * [💡 How to build your own insight tool](https://ossinsight.io/docs/workshop/mini-ossinsight/introduction) ## Sponsors;Analysis, Comparison, Trends, Rankings of Open Source Software, you can also get insight from more than 7 billion with natural language (powered by OpenAI). Follow us on Twitter: https://twitter.com/ossinsight;insight,oss,github,realtime,analytics,htap,demo,openai,aisql,text2sql
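The README links to an HTTP API; a minimal hedged sketch of calling it from Python follows. The exact route below is an assumption for illustration only, so check the linked API docs for the real endpoints before relying on it:

```python
import requests

# ASSUMPTION: this route is illustrative, not confirmed by the README;
# consult https://ossinsight.io/docs/api for the actual endpoints.
resp = requests.get(
    "https://api.ossinsight.io/v1/repos/pingcap/ossinsight",  # hypothetical route
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```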
pingcap/ossinsight
DataDog/stratus-red-team;Stratus Red Team Stratus Red Team is " Atomic Red Team ™" for the cloud, allowing you to emulate offensive attack techniques in a granular and self-contained manner. Read the announcement blog posts: - https://www.datadoghq.com/blog/cyber-attack-simulation-with-stratus-red-team/ - https://blog.christophetd.fr/introducing-stratus-red-team-an-adversary-emulation-tool-for-the-cloud/ Getting Started Stratus Red Team is a self-contained Go binary. See the documentation at stratus-red-team.cloud : - Stratus Red Team Concepts Installing Stratus Red Team - Homebrew formula, Docker image and pre-built binaries available Available Attack Techniques , mapped to MITRE ATT&CK Installation Direct install Requires Go 1.18+ go install -v github.com/datadog/stratus-red-team/v2/cmd/stratus@latest Homebrew brew tap datadog/stratus-red-team https://github.com/DataDog/stratus-red-team brew install datadog/stratus-red-team/stratus-red-team Pre-built binaries For Linux / Windows / Mac OS: download one of the pre-built binaries . Docker bash IMAGE="ghcr.io/datadog/stratus-red-team" alias stratus="docker run --rm -v $HOME/.stratus-red-team/:/root/.stratus-red-team/ -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_DEFAULT_REGION $IMAGE" asdf You can install specific versions (or latest) of stratus-red-team using asdf and this stratus-red-team plugin : bash asdf plugin add stratus-red-team https://github.com/asdf-community/asdf-stratus-red-team.git asdf install stratus-red-team latest Community The following section lists posts and projects from the community leveraging Stratus Red Team. Open-source projects: - Threatest - AWS Threat Detection with Stratus Red Team Videos: - Reproducing common attacks in the cloud with Stratus Red Team - Stratus Red Team: AWS EC2 Instance Credential Theft | Threat SnapShot Blog posts: - AWS threat emulation and detection validation with Stratus Red Team and Datadog Cloud SIEM - Adversary emulation on AWS with Stratus Red Team and Wazuh - Sky's the Limit: Stratus Red Team for Azure - Detecting realistic AWS cloud-attacks using Azure Sentinel - A Data Driven Comparison of Open Source Adversary Emulation Tools - Making Security Relevant in the Cloud - Detonating attacks with Datadog Stratus Red Team - AWS CloudTrail cheatsheet - Adversary emulation on GCP with Stratus Red Team and Wazuh - Automated First-Response in AWS using Sigma and Athena - AWS Cloud Detection Lab: Cloud Pen-testing with Stratus Red Team Talks: - Purple Teaming & Adversary Emulation in the Cloud with Stratus Red Team, DEF CON Cloud Village 2022 (recorded after the event as the talks were not recorded) - Threat-Driven Development with Stratus Red Team by Ryan Marcotte Cobb - Cloudy With a Chance of Purple Rain: Leveraging Stratus Red Team - BSides Portland 2022 Papers: - A Purple Team Approach to Attack Automation in the Cloud Native Environment Using Stratus Red Team as a Go Library See Examples and Programmatic Usage . 
Development Building Locally bash make ./bin/stratus --help Running Locally bash go run cmd/stratus/*.go list Running the Tests bash make test Building the Documentation For local usage: ``` pip install mkdocs-material mkdocs-awesome-pages-plugin make docs mkdocs serve ``` Acknowledgments Maintainer: @christophetd Similar projects (see how Stratus Red Team compares ): - Atomic Red Team by Red Canary - Leonidas by F-Secure - pacu by Rhino Security Labs - Amazon GuardDuty Tester - CloudGoat by Rhino Security Labs Inspiration and relevant resources: - https://expel.io/blog/mind-map-for-aws-investigations/ - https://rhinosecuritylabs.com/aws/aws-privilege-escalation-methods-mitigation/ - https://github.com/elastic/detection-rules/tree/main/rules/integrations/aws;:cloud: :zap: Granular, Actionable Adversary Emulation for the Cloud;aws,adversary-emulation,purple-team,mitre-attack,cloud-security,cloud-native-security,detection-engineering,threat-detection,security,aws-security
DataDog/stratus-red-team
lyfe00011/whatsapp-bot-md;WhatsApp MD User Bot A simple WhatsApp User bot. Setup Deploy on Heroku Click SCAN and scan the QR code through the "WhatsApp Linked Devices" option in your WhatsApp app. You will get a session ID in WhatsApp, copy the ID only. If you don't have an account on Heroku , create an account now . If you don't have a GitHub account, sign up now. FORK this repository. Now DEPLOY . Deploy on Koyeb Create an account on Koyeb . Sign up now . Get DATABASE_URL . You'll need this while deploying. Get SESSION_ID . Open Linked Devices in WhatsApp and SCAN now. Get the Koyeb API key. Let's Go . DEPLOY now. Enter Environment Variables . Read More . Enter a name and click "Create Service." Deploy on VPS or PC (Example here as in Ubuntu) Install with script wget -N -O levanter.sh http://bit.ly/43JqREw && chmod +x levanter.sh && ./levanter.sh Install without a script Install git, ffmpeg, and curl: sudo apt -y update && sudo apt -y upgrade sudo apt -y install git ffmpeg curl Install nodejs: sudo apt -y remove nodejs curl -fsSl https://deb.nodesource.com/setup_lts.x | sudo bash - && sudo apt -y install nodejs Install yarn: npm install -g yarn Install pm2: sudo yarn global add pm2 Clone the repository and install packages: git clone https://github.com/lyfe00011/whatsapp-bot-md botName cd botName yarn install --network-concurrency 1 Enter Environment Variables: Copy-paste the lines below (remove SESSION_ID if not needed): echo "SESSION_ID = Session_Id_you_Got_After_Scan_Dont_Add_This_Line_If_You_Can_Scan_From_Terminal_Itself PREFIX = . STICKER_PACKNAME = LyFE ALWAYS_ONLINE = false RMBG_KEY = null LANGUAG = en WARN_LIMIT = 3 FORCE_LOGOUT = false BRAINSHOP = 159501,6pq8dPiYt7PdqHz3 MAX_UPLOAD = 200 REJECT_CALL = false SUDO = 989876543210 TZ = Asia/Kolkata VPS = true AUTO_STATUS_VIEW = true SEND_READ = true AJOIN = true DISABLE_START_MESSAGE = false PERSONAL_MESSAGE = null" > config.env Read More Edit the config.env using nano if needed. To save, press Ctrl + O , then press Enter , and to exit, press Ctrl + X . Start and stop the bot: To start the bot: pm2 start . --name botName --attach --time To stop the bot: pm2 stop botName Thanks To Yusuf Usta for WhatsAsena @adiwajshing for Baileys;A whatsapp Multi Device bot based on baileys;[]
lyfe00011/whatsapp-bot-md
github/advisory-database;GitHub Advisory Database A database of CVEs and GitHub-originated security advisories affecting the open source world. The database is free and open source and is a tool for and by the community. Submit pull requests to help improve our database of software vulnerability information for all. Goals To provide a free and open-source repository of security advisories. To enable our community to crowd-source their knowledge about these advisories. To surface vulnerabilities in an industry-accepted formatting standard for machine interoperability. Features All advisories acknowledged by GitHub are stored as individual files in this repository. They are formatted in the Open Source Vulnerability (OSV) format . You can submit a pull request to this database (see Contributions ) to change or update the information in each advisory. Pull requests will be reviewed and either merged or closed by our internal security advisory curation team. If the advisory originated from a GitHub repository, we will also @mention the original publisher for optional commentary. Sources We add advisories to the GitHub Advisory Database from the following sources: Security advisories reported on GitHub The National Vulnerability Database The npm Security Advisories Database The FriendsOfPHP Database The Go Vulnerability Database The Python Packaging Advisory Database The Ruby Advisory Database The RustSec Advisory Database Community contributions to this repository If you know of another database we should be importing advisories from, tell us about it by opening an issue in this repository . Contributions There are two ways to contribute to the information provided in this repository. From any individual advisory on github.com/advisories , click Suggest improvements for this vulnerability (shown below) to open an "Improve security advisory" form. Edit the information in the form and click Submit improvements to open a pull request with your proposed changes. Alternatively, you can submit a pull request directly against a file in this repository. To do so, follow the contribution guidelines . Supported ecosystems Unfortunately, we cannot accept community contributions to advisories outside of our supported ecosystems. Our curation team reviews each community contribution thoroughly and needs to be able to assess each change. Generally speaking, our ecosystems are the namespace used by a package registry. As such, they're focused on packages within the registry, which tend to be dependencies used in software development. Our supported ecosystems are: Composer (registry: https://packagist.org) Erlang (registry: https://hex.pm/) GitHub Actions (registry: https://github.com/marketplace?type=actions) Go (registry: https://pkg.go.dev/) Maven (registry: https://repo.maven.apache.org/maven2) npm (registry: https://www.npmjs.com/) NuGet (registry: https://www.nuget.org/) pip (registry: https://pypi.org/) Pub (registry: https://pub.dev/) RubyGems (registry: https://rubygems.org/) Rust (registry: https://crates.io/) Swift (registry: namespaced by dns ) If you have a suggestion for a new ecosystem we should support, please open an issue for discussion. License This project is licensed under the terms of the CC-BY 4.0 open source license. Please see our documentation for the full terms. GHSA IDs Each security advisory, regardless of its type, has a unique identifier referred to as a GHSA ID . 
A GHSA-ID qualifier is assigned when a new advisory is created on GitHub or added to the GitHub Advisory Database from any of the supported sources. The syntax of GHSA IDs follows this format: GHSA-xxxx-xxxx-xxxx where x is a letter or a number from the following set: 23456789cfghjmpqrvwx . Outside the GHSA portion of the name: The numbers and letters are randomly assigned. All letters are lowercase. You can validate a GHSA ID using a regular expression: /GHSA(-[23456789cfghjmpqrvwx]{4}){3}/ database_specific Values The OSV Schema supports several database_specific JSON object fields that are used to add context to various other parts of the OSV schema, namely an affected package , a package's affected ranges , and the vulnerability as a whole. Per the spec, these fields are used for holding additional information about the package, range, or vulnerability "as defined by the database from which the record was obtained." It additionally stipulates that the meaning and format of these custom values "is entirely defined by the database [of record]" and outside of the scope of the OSV Schema itself. For its purposes, GitHub uses a number of database_specific values in its OSV files. They are used primarily in support of Community Contributions and are intended for internal use only unless otherwise specified. These values and their format are subject to change without notice. Consuming systems should not rely on them for processing vulnerability information. | Scope | Field | Purpose | |---|---|---| | vulnerability | severity | The OSV schema supports quantitative severity scores such as CVSS. GitHub additionally assigns each vulnerability a non-quantitative human-readable severity value. | | vulnerability | cwe_ids | GitHub assigns each vulnerability at least one Common Weakness Enumeration (CWE) as part of its vulnerability curation process. These IDs map directly to CWE IDs tracked in the CWE Database . | | vulnerability | github_reviewed | Whether a vulnerability has been reviewed by one of GitHub's Security Curators. | | vulnerability | github_reviewed_at | The timestamp of the last review by a GitHub Security Curator. | | range | last_known_affected_version_range | The OSV schema does not have native support for all of the potential ways GitHub represents vulnerable version ranges internally. It is used to track version range information that is not representable in OSV format, or that GitHub needs to be able to track separately from the OSV ranges. This field may appear in addition to or in place of OSV affected range events. See this comment for a technical explanation. | FAQ Who reviews the pull requests? Our internal Security Advisory Curation team reviews the pull requests. They make the ultimate decision to merge or close. If the advisory originated from a GitHub repository, we will also @mention the original publisher for optional commentary. Why is the base branch changed on a PR? This repository is a mirror of our advisory database. All contributions to this repository are merged into the main branch via our primary data source to preserve data integrity. We automatically create a staging branch for each PR to preserve the GitHub workflow you're used to. When a contribution is accepted from a PR in this repository, the changes are merged into the staging branch and then pushed to the primary data source to be merged into main by a separate process, at which point the staging branch is deleted. Will the structure of the database change? Here at GitHub, we ship to learn! 
As usage patterns emerge, we may iterate on how we organize this database and potentially make backwards-incompatible changes to it. Where can I get more information about GitHub advisories? Information about creating a repository security advisory can be found here , and information about browsing security advisories in the GitHub Advisory Database can be found here .;Security vulnerability database inclusive of CVEs and GitHub originated security advisories from the world of open source software.;[]
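Because the README specifies both the GHSA ID grammar and a validating regular expression, the check is easy to reproduce. A minimal sketch in Python, using the exact pattern quoted above with fullmatch so that surrounding text cannot slip through:

```python
import re

# Pattern taken verbatim from the README; fullmatch anchors it so that
# partial matches inside a longer string do not pass.
GHSA_RE = re.compile(r"GHSA(-[23456789cfghjmpqrvwx]{4}){3}")

def is_valid_ghsa_id(candidate: str) -> bool:
    return GHSA_RE.fullmatch(candidate) is not None

assert is_valid_ghsa_id("GHSA-2345-cfgh-jmpq")      # only characters from the allowed set
assert not is_valid_ghsa_id("GHSA-abcd-1234-5678")  # 'a', 'b', 'd', '1' are not allowed
```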
github/advisory-database
vue-macros/vue-macros;Vue Macros Explore more macros and syntax sugar to Vue. 📜 Documentation Features ✨ Explore more macros and syntax sugar to Vue. 💚 Supports both Vue 2.7 and Vue 3 out-of-the-box. 🦾 Full TypeScript / Volar support. ⚡️ Supports Vite, Nuxt, Webpack, Vue CLI, Rollup 3, esbuild and more, powered by unplugin . Installation bash npm i -D unplugin-vue-macros Sponsors Contributors 💕 Thank you to all the contributors! Related Libraries vue-functional-ref - Functional-style refs for Vue. License MIT License © 2022-PRESENT 三咲智子;Explore and extend more macros and syntax sugar to Vue.;vue,unplugin,rollup,vite,esbuild,sfc,script-setup,options,webpack,macros
vue-macros/vue-macros
teamssix/awesome-cloud-security;Awesome Cloud Security A curated collection of cloud security resources 💫 The Awesome Cloud Security project is from the T Wiki cloud security knowledge base. The T Wiki cloud security knowledge base contains my learning notes on cloud security and cloud security resources contributed by everyone. T Wiki cloud security knowledge base site: wiki.teamssix.com Tip: hold the command key on Mac, or the ctrl key on Windows or Linux, then click a link to open it in a new tab 0x01 Resources :books: 1 General T Wiki cloud security knowledge base :fire: Link Hacking The Cloud (English) Link Cloud Security Wiki by NotSoSecure (English) Link Cloud Security Wiki by WithSecure (English) Link (contributed by Kagantua, thanks for the support) Cloud service vulnerability library (English) Link 2021 cloud security incidents in review (English) Link Cloud penetration tricks HackTricks Cloud (English) Link Cloud risk encyclopedia (English) Link Huoxian cloud security knowledge base Link Cloud security library (English) Link Sysdig 2023 global cloud threat report (English) Link Cloud penetration notes CloudPentestCheatsheets (English) Link (contributed by Kfzz1, thanks for the support) AWS attack knowledge base WeirdAAL (English) Link T Wiki cloud security knowledge base project Link (the T Wiki library is now open source and can be deployed locally for convenient intranet reading) Cloud security beginner materials Link Cloud security guide Link 2 Blogs and news 0xd4y blog (English) Link Aqua blog (English) Link AWS security bulletins (English) Link Bridgecrew blog (English) Link Christophe Tafani-Dereeper blog (English) Link Chris Farris's personal blog (English) Link CIS Benchmarks download page (English) Link CNCF blog (English) Link Deepfence blog (English) Link DevOps security blog (English) Link DevOps news (English) Link Ermetic blog (English) Link Gafnit Amiga's personal blog (English) Link HashiCorp blog (English) Link Humanitec blog (English) Link Lacework blog (English) Link Lightspin blog (English) Link Mystic0x1 blog (English) Link Nick Frichette's personal blog (English) Link Orca blog (English) Link PeoplActive blog (English) Link Praetorian blog (English) Link Rhino Security Labs blog (English) Link Sysdig cloud security reports and news (English) Link Sysdig blog (English) Link TeamsSix's personal blog Link Trend Micro blog (English) Link WIZ blog (English) Link Security Boulevard news (English) Link Forbes Cloud 100 (English) Link Huoxian Security daily cloud security news Link NSFOCUS technical blog Link Container Journal news (English) Link Tencent YunDing Lab daily cloud security news Link Cloud security news (updated weekly) (English) Link Cloud computing market news (English) Link Cloud Native Lab blog Link (contributed by DVKunion, thanks for the support) 3 WeChat official accounts TeamsSix 火线 Zone 云鼎实验室 绿盟科技研究通讯 默安逐日实验室 Linux 云计算网络 (contributed by zxynull, thanks for the support) 云原生技术社区 (contributed by zxynull, thanks for the support) 进击云原生 (contributed by zxynull, thanks for the support) CNCF 容器魔方 云计算D1net 云原生社区动态 大可不加冰 小佑科技 (contributed by 宅独青年, thanks for the support) 4 Twitter 0xd4y Andy Robbins Beau Bullock Chris Farris Christophe Tafani-Dereeper Dirk-jan Dr. 
Nestori Syynimaa Emilien Socchi Fabian Bader Fawaz gafnit inversecosᵘʷᵘ Jason Ostrom Joosua Santasalo Karl Kfzz1 Liv Matan Marco Lancini Melvin langvik Merill mx7krshell Nathan McNulty Nick Frichette Nikhil Mittal Nir Ohfeld Raunak Parmar Rhino Security Labs Roberto Rodriguez rootsecdev rvrsh3ll Ryan Hausknecht Sami Lamppu Sean Metcalf Seth Art Shir Tamari Simon Décosse Skyworship Thomas Naunheim 5 Books 《云原生安全-攻防实践与体系构建》 (Cloud Native Security: Attack and Defense Practice and System Construction) 《Hacking Kubernetes》 《Hands-On AWS Penetration Testing with Kali Linux》 6 Videos 0xd4y channel (English) Link CNCF channel (English) Link WIZ channel (English) Link Huoxian cloud security salon videos Link 7 Certifications AWS Certified Security - Specialty Link AWS Certified Solutions Architect – Associate Link Azure Fundamentals Link Azure Security Engineer Associate Link CompTIA Cloud+ Link GCP Professional Cloud Security Engineer Link GCP Associate Cloud Engineer Link Certified Kubernetes Security Specialist (CKS) Link Certified Cloud Security Professional (CCSP) Link Alibaba Cloud Certified Professional (ACP) Link Alibaba Cloud Certified Expert - Cloud Computing (ACE) Link Alibaba Cloud Certified Associate (ACA) Link 8 Cloud service articles General A brief look at cloud attack and defense: the cloud server attack/defense matrix Link A brief look at cloud attack and defense: research on object storage access policy evaluation mechanisms Link Public cloud base component security from a red team perspective Link Public cloud base component security from a red team perspective (part 2) Link Analysis of threats and defenses around public cloud IP reuse, paper (English) Link 5 questions enterprises should ask before migrating to the public cloud Link Cloud attack and defense: RED TEAMING FOR CLOUD Link A few more notes on cloud attack and defense (continued) Link Cloud isolation problems: a PostgreSQL vulnerability affecting multiple cloud vendors (English) Link Research on attack and defense of regular cloud service business sides Link Cloud security learning advice and directions (English) Link 60 methods of cloud attacks (English) Link (contributed by 程皮糖别皮, thanks for the support) A summary of cloud service security vulnerabilities Link Lightspin's top 7 cloud attack paths of 2022 (English) Link AWS AWS S3 object storage attack and defense Link AWS EC2 elastic compute service attack and defense Link Runtime attacks against AWS Lambda Link Reading instance credentials via AWS RDS (English) Link Reading instance credentials via AWS RDS (Chinese translation) Link The 10 riskiest AWS misconfigurations Link Checking your own permissions in AWS Link AWS enumeration (part 1) (English) Link When a 0day and access keys are combined in the cloud: responding to the SugarCRM 0day vulnerability (English) Link Container escape via AWS's official log4j hot patch (English) Link Several ways to create backdoors in AWS (English) Link AWS privilege escalation (English) Link Azure Microsoft cloud object storage attack and defense Link Microsoft cloud VM attack and defense Link Azure Cloud Shell command injection to steal users' access tokens (English) Link Azure resource collection project Awesome-Azure-Pentest Link (contributed by 橘子怪, thanks for the support) GCP Google Cloud object storage attack and defense Link Google Cloud Compute Engine attack and defense Link Google Cloud Shell command injection (English) Link GCP penetration testing notes (English) Link Alibaba Cloud Alibaba Cloud OSS object storage attack and defense Link Alibaba Cloud ECS attack and defense Link From a cloud server SSRF vulnerability to taking over your Alibaba Cloud console Link I used CF to punch through his cloud intranet Link A record of an unremarkable cloud attack and defense engagement Link A simple record of field work in the "cloud" Link A hands-on engagement breaking through a cloud intranet Link Tencent Cloud Tencent Cloud COS object storage attack and defense Link Tencent cloud server attack and defense (CVM + Lighthouse) Link Huawei Cloud Huawei Cloud OBS object storage attack and defense Link Huawei Cloud ECS elastic cloud server attack and defense Link Huawei Cloud CTF cloud unintended solution: hands-on k8s penetration Link 9 Cloud native articles General Cloud native vulnerability discovery and exploitation in red/blue team confrontation Link CIS benchmark testing manual (English) Link (contributed by zhengjim, thanks for the support) A brief look at the Linux cgroup mechanism Link (contributed by zxynull, thanks for the support) Ten considerations for securing cloud and containers (English) Link CNCF cloud native security whitepaper v2 Link awesome-cloud-native-security from Metarget Link Docker A summary of Docker escape techniques under privileged mode Link A guide to detecting container escape methods (with detection scripts) Link Docker core technology and implementation principles Link (contributed by zxynull, thanks for the support) Container security checklist container-security-checklist Link (contributed by zxynull, thanks for the support) Kubernetes Using gateway-api, I took control of kubernetes Link A brief analysis of various k8s unauthorized-access attack methods Link Cloud native: Kubernetes security Link From RCE into the intranet, taking over K8s and escaping into the xx network Link Kubernetes attack and defense from scratch Link eBPF Analysis and practice of container escape techniques using eBPF Link (contributed by zxynull, thanks for the support) Kernel-mode eBPF programs for container escape and hidden-account rootkits Link (contributed by zxynull, thanks for the support) Container runtime security based on eBPF Link (contributed by zxynull, thanks for the support) A first look at eBPF Link Terraform Terraform tutorial in Chinese Link Getting started with Terraform and its role in cloud attack and defense Link APISIX APISIX CVE-2022-29266 vulnerability analysis and reproduction Link CI/CD CI/CD attack scenarios - KCon 2023 talk Link (contributed by 宅独青年, thanks for the support) 0x02 Tools :hammer_and_wrench: 1 Cloud service tools Auxiliary tools General Online search of a target site's cloud assets recon.cloud Link Online multi-cloud management platform 行云管家 Link (contributed by 半人间丶, thanks for the support) Tool for finding AKs and other sensitive information trufflehog Link Multi-cloud baseline scanning tool ScoutSuite Link Cloud security posture management tool CloudSploit Link (contributed by da Vinci【达文西】, thanks for the support) Infrastructure relationship mapping tool Cartography Link Multi-cloud object storage management tool qiniuClient Link (contributed by 半人间丶, thanks for the support) Cloud penetration information gathering tool cloudfox Link Cloud service resource enumeration tool cloud_enum Link Open source multi-cloud security compliance scanning platform RiskScanner Link (contributed by 想走安全的小白, thanks for the support) Multi-cloud object storage leak scanning tool Cloud-Bucket-Leak-Detection-Tools Link Scanning tool for AWS and Azure 
SkyArk Link Public cloud asset enumeration CloudBrute Link Multi-cloud asset collection tool cloudlist Link (contributed by Kfzz1, thanks for the support) Privilege escalation path analysis tool PurplePanda Link Cloud attack simulation tool Leonidas Link Open source lightweight cloud management platform CloudExplorer Lite Link Red team cloud operating system RedCloudOS Link Cloud asset management tool cloudTools Link (contributed by 弱鸡, thanks for the support) Cloud service enumeration tool cloud service enum Link AWS Online search for public buckets buckets.grayhatwarfare.com Link AWS documentation GPT tool Link AWS S3 browser S3 Browser Link (contributed by Poker, thanks for the support) Local AWS environment deployment tool LocalStack Link (contributed by Esonhugh, thanks for the support) Official AWS CLI tool Link AWS environment analysis tool CloudMapper Link S3 policy scanning tool S3Scanner Link AWS IAM permission enumeration tool Principal Mapper Link AWS IAM permission enumeration tool enumerate-iam Link Public S3 bucket secret scanning tool S3cret Scanner Link AWS common misconfiguration auditing tool YATAS Link Tool for detecting dangling DNS records in multi-cloud environments findmytakeover Link Route53/CloudFront vulnerability assessment tool Link Tool for analyzing IAM permissions from CloudTrail logs Cloudtrail2IAM Link Azure Official Azure CLI tool Link Azure MFA detection tool Link PowerShell management module for Azure AD and Office 365 AADInternals Link BloodHound Azure data collection tool AzureHound Link (contributed by Kfzz1, thanks for the support) Azure AD information gathering tool AzureGraph Link (contributed by Kfzz1, thanks for the support) GCP Official GCP CLI tool Link GCP resource enumeration tool Link GCP attack surface resource enumeration tool Link GCP resource analysis tool Hayat Link GCP IAM permission collection tool gcp-iam-collector Link Google Workspace directory dump tool Google Workspace Directory Dump Tool Link Alibaba Cloud Official Alibaba Cloud OSS management tool Link (contributed by 半人间丶, thanks for the support) Official Alibaba Cloud CLI tool Link Tencent Cloud Tencent Cloud Lighthouse server management tool Link (contributed by tanger, thanks for the support) Official Tencent Cloud COS helper tool Link (contributed by Esonhugh, thanks for the support) Official Tencent Cloud CLI tool Link Huawei Cloud Official Huawei Cloud OBS management tool OBS Browser+ Link CTYun (China Telecom cloud) CTYun object storage connection tool Link QingCloud Official QingCloud CLI tool Link (contributed by 苏打养乐多, thanks for the support) Exploitation tools Multi-cloud Alibaba Cloud / Tencent Cloud AK resource management tool Link (contributed by Esonhugh, thanks for the support) GUI-based AWS and GCP exploitation tool Vajra Link (contributed by Kfzz1, thanks for the support) AWS AWS comprehensive exploitation tool pacu Link AWS penetration toolset aws-pentest-tools Link AWS Lambda password spraying tool CredKing Link AWS AccessKey leak exploitation tool awsKeyTools Link (jointly contributed by 1derian and ShangRui-hash, thanks for the support) AWS penetration testing tool Endgame Link AWS console takeover tool aws_consoler Link AWS red team scripts Redboto Link AWS domain controller shadow copy tool CloudCopy Link Azure Azure security assessment PowerShell toolkit MicroBurst Link Azure red team tool Stormspotter Link (contributed by da Vinci【达文西】, thanks for the support) Azure AD exploitation toolset ROADtools Link Tool for enumerating, spraying, and infiltrating O365 AAD accounts TeamFiltration Link Azure JWT token manipulation toolset TokenTactics Link Microsoft 365 security toolbox DCToolbox Link Proof-of-concept script abusing the Microsoft 365 OAuth authorization flow for phishing Microsoft365_devicePhish Link Azure AD Identity Protection cookie replay testing tool Link PowerShell tool for attacking Azure Function apps FuncoPop Link GCP GCP exploitation toolset Link GCP bucket enumeration tool GCPBucketBrute Link GCP IAM privilege escalation methods GCP-IAM-Privilege-Escalation Link (contributed by da Vinci【达文西】, thanks for the support) GCP token reuse tool Link Google Workspace Simple Workspace ATT&CK Tool - SWAT Link Alibaba Cloud Alibaba Cloud AccessKey exploitation tool aliyun-accesskey-Tools Link (contributed by 半人间丶, thanks for the support) Alibaba Cloud ECS and security group helper tool alicloud-tools Link (contributed by 半人间丶, thanks for the support) Alibaba Cloud AccessKey leak exploitation tool AliyunAccessKeyTools Link (contributed by 半人间丶, thanks for the support) Tencent Cloud Tencent Cloud AccessKey exploitation tool Tencent_Yun_tools Link 2 Cloud native tools Auxiliary tools General Open source cloud native security platform HummerRisk Link (contributed by Ma1tobiose, thanks for the support) Open source cloud native security protection platform neuvector Link (contributed by Idle Life, thanks for the support) Docker A website supporting online analysis of container images contains Link (contributed by zxynull, thanks for the support) Container image analysis tool DIVE Link (contributed by zxynull, thanks for the support) Image scanning tool trivy Link (contributed by zxynull, thanks for the support) Container image static vulnerability scanning tool Clair Link (contributed by zxynull, thanks for the support) Best-practice checks for containers deployed in production Docker_Bench_Security Link (contributed by zxynull, thanks for the support) Container-native system visibility tool sysdig Link (contributed by zxynull, thanks for the support) Docker image scanning tool Anchore Link (contributed by zxynull, thanks for the support) Docker static analysis tool Dagda Link (contributed by zxynull, thanks for the support) Container escape detection tool container-escape-check Link Kubernetes Terminal-UI-based k8s cluster management tool k9s Link k8s anomalous activity detection tool Falco Link (contributed by zxynull, thanks for the support) CIS benchmark testing tool kube bench Link (contributed by zhengjim, thanks for the support) k8s cluster security vulnerability discovery tool kube hunter Link (contributed by zhengjim, thanks for the support) k8s cluster risky-permission scanning tool KubiScan Link (contributed by UzJu, thanks for the support) k8s security risk detection tool StackRox Link tool introduction (contributed by m4d3bug, thanks for the support) k8s security auditing tool kubestriker Link (contributed by zhengjim, thanks for the support) kubectl-based red team k8s security assessment tool red kube Link (contributed by zhengjim, thanks for the support) k8s debugging helper tool validkube Link Terraform Terraform visualization Link Exploitation tools Container penetration toolset CDK Link Container security toolset veinmind-tools Link k8s penetration testing tool Peirates Link (contributed by Idle Life, thanks for the support) Container penetration testing tool BOtB Link (contributed by Idle 
Life, thanks for the support) Container exploitation tool CCAT Link (contributed by zhengjim, thanks for the support) 0x03 Practice Ranges :dart: Cloud service ranges Paid online range including cloud security labs Attack Defense Link Free online AWS penetration testing labs Free AWS Security Labs Link (contributed by cr, thanks for the support) Online multi-cloud penetration range pwnedlabs Link (contributed by RBPi, thanks for the support) AWS range deployment tool cloudgoat Link AWS range AWSGoat Link Azure range AzureGoat Link (contributed by Kfzz1, thanks for the support) Multi-cloud range building tool TerraformGoat Link AWS IAM range IAM Vulnerable Link GCP range deployment tool GCPGoat Link (contributed by Kfzz1, thanks for the support) Cloud native ranges WIZ k8s range WIZ K8S LAN Party Link (contributed by feng, thanks for the support) k8s range deployment tool Kubernetes Goat Link (contributed by UzJu, thanks for the support) CI/CD range deployment tool Link (contributed by Kfzz1, thanks for the support) Cloud native range deployment tool metarget Link Contributors :confetti_ball: Thank you all for your support ~ TeamsSix 1derian ShangRui-hash 半人间丶 UzJu Idle Life zhengjim zxynull m4d3bug da Vinci【达文西】 tanger 想走安全的小白 Esonhugh Kfzz1 cr Ma1tobiose DVKunion 苏打养乐多 橘子怪 宅独青年 弱鸡 RBPi 程皮糖别皮 Kagantua feng Poker Want to contribute as well? Open a PR against this project or use the method described at the link: contribution guide Link Changelog :calendar: The changelog of the T Wiki cloud security library records updates to both the Awesome Cloud Security project and the library; you can view it at wiki.teamssix.com/Changelog . Also, my personal WeChat official account is TeamsSix; you are welcome to follow it. Since you have read this far, won't you leave a Star :star2: before you go ~;awesome cloud security: a collection of good cloud security resources from China and abroad, mainly aimed at security practitioners in China;awesome,cloud-native,cloud-security,awesome-cloud-security,cloudnative,cloudsecurity,cybersecurity,docker,kubernetes,tools
teamssix/awesome-cloud-security
therealgliz/blooket-hacks;blooket-hack Hello, I'm actually gliz, who created the blooket hacks. I got the repo from the guy who was impersonating me. This repo will not be updated at all. If you have any questions, join the discord server below; I'll be answering them there. discord server: https://discord.gg/Nj9Zs5VtFp Proof that it's me: Contact If you want to contact me, just DM me on twitter https://twitter.com/glizuwu;Multiple game hacks to use so the game becomes easier to play!;blooket,blooket-hack,blooketapi,blookethack,blookettokens,blooket-game,blooket-hacks,blooket-mods,blooket-utilities,blooketjs
therealgliz/blooket-hacks
cleanlock/VideoAdBlockForTwitch;Twitch Adblock Twitch Adblock blocks ads on Twitch by switching to an ad-free version of the stream at 480p during the ad-time and automatically switching back to the original video quality after the ad-time is over. This is 100% done locally; no proxies/VPNs or 3rd-party scripts/websites are being used. This extension does not collect/share any of your personal information and the code is public. It is recommended to use this extension along with uBlock Origin. Source code: https://github.com/cleanlock/VideoAdBlockForTwitch The original author of this extension is "saucettv". This extension will always stay donation- and referral-link free. Available Browsers Firefox Chrome Edge Manual Installation Steps for Chrome NOTE: This is NOT RECOMMENDED, you WILL NOT get auto-updates - Download the latest .ZIP Archive - Extract the ZIP Archive - Open up Chrome and in your Web Browser URL, enter: chrome://extensions - Enable the Developer Mode toggle, typically found in the top right of the extensions page in your browser. - Click Load unpacked Extension - Navigate into the extracted folder from the ZIP Archive and select the folder chrome . Discord https://discord.gg/PSgWqf3v8V Changelog v5.3.5 removed the URL grabber, Amazon referral link and donation stuff from the original coder v5.3.6 updated manifest.json v5.3.7 updated to Manifest v3 v5.3.8 updated extension menu added GitHub & Discord link to the extension menu v5.3.9 fixed the "Show/Hide 'Blocking Ads'-message" logic v5.4.0 (Chrome) / v5.4.1 (Firefox) applied fix for the 360p quality issue (thanks @pixeltris ) v5.5.0 Added proxies/embeds in order to fight the purple screen "Commercial break" (thanks to @pixeltris ) v5.5.0 Updated logos etc. v5.7.0 Added Adblock-Timer (thanks to @GODrums ) Credits @saucettv (original Author) @mikirobles (removed Donation/Amazon stuff) @pwltr (added the GPL-License & helped with updating to Manifest v3) @HatterTheMadd (helped with updating to Manifest v3) @kdjmonaghan (added clearer install instructions for less advanced users);Blocks Ads on Twitch.tv.;[]
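The switching technique described above (drop to an ad-free 480p rendition while an ad plays, then restore the viewer's quality) can be sketched as pseudocode. Everything below is hypothetical: the real extension is browser JavaScript hooking Twitch's player, and the `player` object and its methods do not correspond to any real API:

```python
# Hypothetical sketch of the ad-time switching logic described above.
# The `player` object and all of its methods are invented for illustration.

def on_ad_state_changed(player, ad_playing: bool, saved: dict) -> None:
    if ad_playing:
        # Remember the viewer's chosen quality, then switch to the
        # ad-free 480p rendition for the duration of the ad break.
        saved["quality"] = player.get_quality()
        player.set_source(player.get_adfree_480p_url())
    else:
        # Ad break over: restore the original stream and quality.
        player.set_source(player.get_original_url())
        player.set_quality(saved.get("quality", "auto"))
```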
cleanlock/VideoAdBlockForTwitch
mastodon/mastodon-android;Mastodon for Android This is the repository for the official Android app for Mastodon. Or get the APK from the Releases Section . Contributing Our goal is delivering a polished, professionally designed and user-friendly app. We proceed according to wireframes provided by a professional UX designer who works with Mastodon gGmbH. This means that any outside contributions that change the app visually must first be coordinated with the UX designer. This can take time. Furthermore, we work off of an internal roadmap and aim for feature-parity and consistency with our iOS app. The iOS app is designated as the "primary" of the two; therefore, if you want to request features, please do so in the Mastodon for iOS repository, as you are requesting a feature for both iOS and Android (exceptions being system integrations specific to Android). On the other hand, any contributions that improve existing functionality, performance, or accessibility should not have any roadblocks to being merged. If you would like to help translate the app into your language, please go to Crowdin . If your language is not listed in the Crowdin project, please create an issue and we will add it. Please do not create pull requests that modify strings.xml files for languages other than English. Building As this app uses Java 17 features, you need JDK 17 or newer to build it. Other than that, everything is pretty standard. You can either import the project into Android Studio and build it from there, or run the following command in the project directory: ./gradlew assembleRelease License This project is released under the GPL-3 License . The Mastodon name and logo are trademarks of Mastodon gGmbH. If you intend to redistribute a modified version of this app, use a unique name and icon for your app that does not mistakenly imply any official connection with or endorsement by Mastodon gGmbH.;Official Android app for Mastodon;android,mastodon
mastodon/mastodon-android
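For sideloading a local build onto a device, something like the following usually works after the Gradle build above; note that the module name and APK output path here are assumptions about the Gradle setup (look under */build/outputs/apk/ after building to find the real file), not paths confirmed by this README.

```bash
# Build the release APK, then install it on a connected device via adb.
# The module name ("mastodon") and exact output path are assumptions;
# check */build/outputs/apk/ after the build for the actual artifact.
./gradlew assembleRelease
adb install -r mastodon/build/outputs/apk/release/mastodon-release.apk
```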
joseadanof/awesome-cloudnative-trainings;Awesome Cloud Native Trainings This project started as a Medium post collecting all the free trainings, with and without certificates, released by different companies supporting Cloud Native Computing Foundation projects and Kubernetes-related OSS. Whether you are studying for a Kubernetes certification or powering your career as a DevOps Engineer, Cloud Engineer, Platform Engineer, Cloud Developer, Developer Advocate, or SRE, this set of trainings can prepare you well to face many Cloud Native transformation challenges. More than 50 certificates or badges can be earned from this awesome repository of trainings. Free Trainings with Certifications Akuity Introduction to Continuous Delivery and GitOps using Argo CD - Course + Badge AWS Architecting - Course + Badge Serverless - Course + Badge Object Storage - Course + Badge Block Storage - Course + Badge File Storage - Course + Badge Storage Data Migration - Course + Badge Data Protection & Disaster Recovery - Course + Badge Isovalent Getting Started with Cilium - Labs + Badge Cilium Cluster Mesh - Labs + Badge Cilium Service Mesh - Labs + Badge Isovalent Cilium Enterprise: Network Policies - Labs + Badge Cilium LoadBalancer IPAM and BGP Service Advertisement - Labs + Badge Arrikto Introduction to Kubeflow - Training + Certification ScyllaDB ScyllaDB Courses with Certificates - Courses + Completion Certificates ArangoDB ArangoDB Certified Professional - Certificate Redis (Redis University) Introduction to Redis Data Structures - Certificate Redis for Java Developers - Certificate Redis for JavaScript Developers - Certificate Redis for Python Developers - Certificate Redis for .NET Developers - Certificate Redis: Querying, Indexing and Full-Text Search - Certificate Redis: Storing, Indexing and Querying JSON at Speed - Certificate Running Redis at Scale - Certificate Redis Security - Certificate Redis Certified Developer - Certificate O11y Academy Practical Observability - Course + Certificate Cockroach Labs Introduction to Distributed SQL and CockroachDB - Course + Certificate Practical First Steps with CockroachDB - Course + Certificate IBM Cognitive Class Docker Essentials - Course + Badge Building Cloud Native and Multi Cloud Applications - Course + Certification Harness Harness Chaos Engineering Practitioner - Course + Certificate Ambassador Labs Summer of Kubernetes - Labs + Challenges + Prizes Nirmata Kyverno Fundamentals - Introduction Course + Certification IBM Build Smart on Kubernetes - Labs + Credly Badge Tetrate Academy Envoy Fundamentals - Course + Certification Istio Fundamentals - Course + Certification Certified Istio Administrator by Tetrate - Certification Codefresh GitOps Fundamentals - Course + Certification Kasten Learning Kubernetes Badges (Apprentice, Defender, Helmsman, Contender, Protector, Surveyor, Architect, Rookie and Explorer) - Labs + Badges solo.io Academy On-demand Workshops: Istio * Get Started with Istio (with Fundamentals for Istio Certification) - Hands-on workshop lab + Quiz for Credly badge * Deploy Istio for Production (with Intermediate for Istio Certification) - Hands-on workshop lab + Quiz for Credly badge * Get Started with Istio Ambient Mesh (with Istio Ambient Mesh Foundation Certification) - Hands-on workshop lab + Quiz for Credly badge Cilium * Introduction to Cilium (with Fundamentals for Cilium Certification) - Hands-on workshop lab + Quiz for Credly badge Envoy * Get Started with Envoy Proxy (with Fundamentals for Envoy
Certification) - Hands-on workshop lab + Quiz for Credly badge eBPF * Get Started with eBPF (with Fundamentals for eBPF Certification) - Hands-on workshop lab + Quiz for Credly badge Upcoming Events: Conferences - Live Streams - Webinars - Instructor-led workshops Tigera - Calico Certification Certified Calico Operator: Level 1 (Kubernetes Networking and Security) - Training + Certification Certified Calico Operator: AWS Expert - Training + Certification Certified Calico Operator: eBPF - Training + Certification Progress Chef Chef Principles Certification - Training | Certification Gremlin Chaos Engineering Practitioner Certificate Program - Training | Certification Gitlab Level Up Gitlab Level Up Training Platform - Free Trainings and Certifications MongoDB MongoDB Basics - Training + Certification New Relic Full Stack Observability Certificate - Certification Trainings ControlPlane Kubernetes Labs - Online Training RedHat Developer Cloud Native Tutorials - Trainings FreeCodeCamp Kubernetes Cloud Native Associate Certification Course by ExamPro - Video Course Grafana University Observability Trainings - Trainings Styra Academy Microservices Authorization with Styra - Training OPA Policy Authoring - Training Traefik Academy Master Traefik with K3S - Training The Linux Foundation Introduction to Cilium - Training ( https://www.edx.org/es/course/introduction-to-cilium ) Introduction to Istio - Training Introduction to GitOps - Training Introduction to Service Mesh with Linkerd - Training Introduction to Kubernetes on the Edge with k3s - Training Introduction to Cloud Foundry and Cloud-Native Software Architecture - Training Elastic Observability Fundamentals - Training Kibana Fundamentals - Training AWS AWS Cloud Practitioner Essentials - Training DevOps Engineer Learning Plan - Learning Plan Containers Learning Plan (Multiple languages) - Learning Plan Developer Learning Plan (Multiple languages) - Learning Plan Alibaba Cloud Native Technology Foundations - Training Promlabs Introduction to Prometheus - Training Sysdig Training Falco 101 - Training Introduction to Prometheus and PromQL - Training Cloudbees Certified Jenkins Engineer 2021 - Training Datastax Academy Apache Cassandra Developer - Training Apache Cassandra Administrator - Training Apache Cassandra Operations in K8S - Training VMWare Kube Academy Kubernetes-related courses - VMWare Kube Academy Aqua Cloud Native Academy Shift Left DevOps - Training Contributions Suggestions, enhancements, and collaborations are welcome: open a PR and make your contribution. License Awesome Cloud Native Trainings by @joseadanof is licensed under CC BY 4.0;Awesome Trainings from Cloud Native Computing Foundation Projects and Kubernetes related software;cloud-native,cncf,containers,continuous-learning,devops,devsecops,istio,k8s,kubernetes,linux-foundation
joseadanof/awesome-cloudnative-trainings
rustic-rs/rustic;fast, encrypted, and deduplicated backups ## About `rustic` is a backup tool that provides fast, encrypted, deduplicated backups. It reads and writes the [restic][1] repo format described in the [design document][2] and can be used as a *restic* replacement in most cases. It is implemented in [Rust](https://www.rust-lang.org/), a performant, memory-efficient, and reliable cross-platform systems programming language. Hence `rustic` supports all major operating systems (Linux, macOS, *BSD), with Windows support still being experimental. ### Stability `rustic` is currently in **beta** state and lacks regression tests. It is not yet recommended for production backups. ## `rustic` Libraries The `rustic` project is split into multiple crates: - [rustic](https://crates.io/crates/rustic-rs) - the main binary - [rustic-core](https://crates.io/crates/rustic_core) - the core library - [rustic-backend](https://crates.io/crates/rustic_backend) - the library for supporting various backends ## Features - Backup data is **deduplicated** and **encrypted**. - Backup storage can be local or cloud storages, including cold storages. - Allows multiple clients to **concurrently** access a backup repository using lock-free operations. - Backups by default are append-only on the repository. - The operations are robustly designed and can be **safely aborted** and **efficiently resumed**. - Snapshot organization is possible by hostname, backup paths, labels and tags. A rich set of metadata is also saved with each snapshot. - Retention policies and cleaning of old backups can be **highly customized**. - Follow-up backups only process changed files, but still create a complete backup snapshot. - In-place restore only modifies files which have changed. - Uses config files for easy configuration of all every-day commands, see [example config files](/config/). ## Contact You can ask questions in the [Discussions][3] or have a look at the [FAQ](https://rustic.cli.rs/docs/FAQ.html). | Contact | Where? | | ------------- | --------------------------------------------------------------------------------------------------------------- | | Issue Tracker | [GitHub Issues](https://github.com/rustic-rs/rustic/issues) | | Discord | [![Discord](https://dcbadge.vercel.app/api/server/WRUWENZnzQ?style=flat-square)](https://discord.gg/WRUWENZnzQ) | | Discussions | [GitHub Discussions](https://github.com/rustic-rs/rustic/discussions) | ## Getting started Please check our [documentation](https://rustic.cli.rs/docs/getting_started.html) for more information on how to get started. ## Installation ### From binaries #### [cargo-binstall](https://crates.io/crates/cargo-binstall) ```bash cargo binstall rustic-rs ``` #### Windows ##### [Scoop](https://scoop.sh/) ```bash scoop install rustic ``` Or you can check out the [releases](https://github.com/rustic-rs/rustic/releases). Nightly binaries are available [here](https://rustic.cli.rs/docs/nightly_builds.html). ### From source **Beware**: This installs the latest development version, which might be unstable. ```bash cargo install --git https://github.com/rustic-rs/rustic.git rustic-rs ``` ### crates.io ```bash cargo install rustic-rs ``` ## Differences to `restic`? We have collected some improvements of `rustic` over `restic` [here](https://rustic.cli.rs/docs/comparison-restic.html). ## Contributing Tried rustic and not satisfied? Don't just walk away!
You can help: - You can report issues or suggest new features on our [Discord server](https://discord.gg/WRUWENZnzQ) or using [Github Issues](https://github.com/rustic-rs/rustic/issues/new/choose)! Do you know how to code or got an idea for an improvement? Don't keep it to yourself! - [Contribute fixes](https://github.com/rustic-rs/rustic/contribute) or new features via a pull request! Please make sure that you read the [contribution guide](https://rustic.cli.rs/docs/contributing-to-rustic.html). ## Minimum Rust version policy This crate's minimum supported `rustc` version is `1.70.0`. The current policy is that the minimum Rust version required to use this crate can be increased in minor version updates. For example, if `crate 1.0` requires Rust 1.20.0, then `crate 1.0.z` for all values of `z` will also require Rust 1.20.0 or newer. However, `crate 1.y` for `y > 0` may require a newer minimum version of Rust. In general, this crate will be conservative with respect to the minimum supported version of Rust. ## License Licensed under either of: - [Apache License, Version 2.0](./LICENSE-APACHE) - [MIT license](./LICENSE-MIT) at your option. [//]: # (badges) [//]: # (general links) [1]: https://github.com/restic/restic [2]: https://github.com/restic/restic/blob/master/doc/design.rst [3]: https://github.com/rustic-rs/rustic/discussions;rustic - fast, encrypted, and deduplicated backups powered by Rust;backup,deduplication,encryption,rust,restic
rustic-rs/rustic
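The "config files for easy configuration of all every-day commands" mentioned under Features are TOML profiles. A minimal sketch of what such a profile might look like is below; the exact key names are assumptions based on typical rustic profiles, so check the example config files under /config/ in the repository for the authoritative schema.

```toml
# Hypothetical rustic profile (e.g. rustic.toml) - key names are assumptions;
# consult the example files under /config/ in the repository.
[repository]
repository = "/srv/backups/rustic-repo"      # restic-format repo location
password-file = "/etc/rustic/repo-password"  # avoids prompting for a password

[forget]
keep-daily = 7       # retention policy applied by `rustic forget`
keep-weekly = 5
```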
XZB-1248/Spark;[English] [中文] [API Document] [API文档] Spark Spark is a free, safe, open-source, web-based, cross-platform and full-featured RAT (Remote Administration Tool) that allows you to control all your devices via a browser, anywhere. We won't collect any data, and the server will never self-upgrade. Your clients will only communicate with your server forever. |![GitHub repo size](https://img.shields.io/github/repo-size/DGP-Studio/Snap.Genshin?style=flat-square)|![GitHub issues](https://img.shields.io/github/issues-raw/XZB-1248/Spark?style=flat-square)|![GitHub closed issues](https://img.shields.io/github/issues-closed/XZB-1248/Spark?style=flat-square)| |-|-|-| |[![GitHub downloads](https://img.shields.io/github/downloads/XZB-1248/Spark/total?style=flat-square)](https://github.com/XZB-1248/Spark/releases)|[![GitHub release (latest by date)](https://img.shields.io/github/downloads/XZB-1248/Spark/latest/total?style=flat-square)](https://github.com/XZB-1248/Spark/releases/latest)| |-|-| Notice Due to my busy schedule with personal matters and the abuse of this project for cyberattacks, it's going to reach its end of life and will be archived very soon. I will no longer provide any support for this project, as it is officially abandoned. Disclaimer THIS PROJECT, ITS SOURCE CODE, AND ITS RELEASES SHOULD ONLY BE USED FOR EDUCATIONAL PURPOSES. ALL ILLEGAL USAGE IS PROHIBITED! YOU SHALL USE THIS PROJECT AT YOUR OWN RISK. THE AUTHORS AND DEVELOPERS ARE NOT RESPONSIBLE FOR ANY DAMAGE CAUSED BY YOUR MISUSE OF THIS PROJECT. YOUR DATA IS PRICELESS. THINK TWICE BEFORE YOU CLICK ANY BUTTON OR ENTER ANY COMMAND. If you find any security vulnerability, please DO NOT open an issue; contact me immediately via email . Quick start binary Download the executable from releases . Follow this to complete configuration. Run the executable and browse to http://IP:Port to access the web interface. Generate a client and run it on your target device. Enjoy! Configuration The configuration file config.json should be placed in the same directory as the executable file. Example: json { "listen": ":8000", "salt": "123456abcdef", "auth": { "username": "password" }, "log": { "level": "info", "path": "./logs", "days": 7 } } listen required , format: IP:Port salt required , length <= 24; after modification, you need to re-generate all clients auth optional , format: username:password a hashed password is highly recommended; format: $algorithm$hashed-password , example: $sha256$11223344556677AABBCCDDEEFF supported algorithms: sha256 , sha512 , bcrypt if you don't follow the format, the password will be treated as plain text log optional level optional , possible values: disable , fatal , error , warn , info , debug path optional , default: ./logs days optional , default: 7 Features | Feature/OS | Windows | Linux | MacOS | |-----------------|---------|-------|-------| | Process manager | ✔ | ✔ | ✔ | | Kill process | ✔ | ✔ | ✔ | | Network traffic | ✔ | ✔ | ✔ | | File explorer | ✔ | ✔ | ✔ | | File transfer | ✔ | ✔ | ✔ | | File editor | ✔ | ✔ | ✔ | | Delete file | ✔ | ✔ | ✔ | | Code highlight | ✔ | ✔ | ✔ | | Desktop monitor | ✔ | ✔ | ✔ | | Screenshot | ✔ | ✔ | ✔ | | OS info | ✔ | ✔ | ✔ | | Terminal | ✔ | ✔ | ✔ | | * Shutdown | ✔ | ✔ | ✔ | | * Reboot | ✔ | ✔ | ✔ | | * Log off | ✔ | ❌ | ✔ | | * Sleep | ✔ | ❌ | ✔ | | * Hibernate | ✔ | ❌ | ❌ | | * Lock screen | ✔ | ❌ | ❌ | A blank cell means that case has not been tested yet. The star symbol means the function may need administrator or root privileges. 
Screenshots Development note There are three components in this project, so you have to build them all. Go to Quick start if you don't want to bore yourself. Client Server Front-end If you want the client to support operating systems other than Linux and Windows, you need to install an additional C compiler. For example, to support Android, you have to install the Android NDK . tutorial ```bash Clone this repository. $ git clone https://github.com/XZB-1248/Spark $ cd ./Spark Here we're going to build the front-end pages. $ cd ./web Install all dependencies and build. $ npm install $ npm run build-prod Embed all static resources into one single file by using statik. $ cd .. $ go install github.com/rakyll/statik $ statik -m -src="./web/dist" -f -dest="./server/embed" -p web -ns web Now we build the client. If you're using a Unix-like OS, you can use this. $ mkdir ./built $ go mod tidy $ go mod download $ ./scripts/build.client.sh Finally, we compile the server side. $ mkdir ./releases $ ./scripts/build.server.sh ``` Then create a new directory with a name you like. Copy the executable file inside releases to that directory. Copy the whole built directory to that new directory. Copy the configuration file mentioned above to that new directory. Finally, run the executable file in that directory. Dependencies Spark contains many third-party open-source projects. Lists of dependencies can be found at go.mod and package.json . Some major dependencies are listed below. Back-end Go ( License ) gin-gonic/gin (MIT License) imroc/req (MIT License) kbinani/screenshot (MIT License) shirou/gopsutil ( License ) gorilla/websocket (BSD-2-Clause License) orcaman/concurrent-map (MIT License) Front-end React (MIT License) Ant-Design (MIT License) axios (MIT License) xterm.js (MIT License) crypto-js (MIT License) Acknowledgements natpass (MIT License) Image difference algorithm inspired by natpass. Stargazers over time License BSD-2 License
XZB-1248/Spark
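For the hashed-password option in config.json above, a small helper can produce the `$algorithm$hashed-password` string. The sketch below assumes the hash is the plain, unsalted hex digest of the password; that encoding detail is an assumption, so verify it against Spark's source before relying on it.

```python
import hashlib

def spark_password_hash(password: str, algorithm: str = "sha256") -> str:
    """Build a '$algorithm$hex-digest' string for Spark's auth config.

    Assumption: the digest is an unsalted hex digest of the raw password
    (uppercase, matching the README's example); check Spark's source if
    logins fail with this format.
    """
    if algorithm not in ("sha256", "sha512"):
        raise ValueError("use sha256 or sha512 (bcrypt needs the bcrypt library)")
    digest = hashlib.new(algorithm, password.encode("utf-8")).hexdigest().upper()
    return f"${algorithm}${digest}"

print(spark_password_hash("password"))  # e.g. $sha256$5E884898DA...
```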
murphysecurity/murphysec;中文 | EN MurphySec CLI is used for detecting vulnerable dependencies from the command line, and can also be integrated into your CI/CD pipeline. Features Analyze dependencies being used by your project, including direct and indirect dependencies Detect known vulnerabilities in project dependencies Screenshots CLI scan result scan result page Table of Contents Supported languages How it works Working Scenarios Getting Started Command Introduction Communication License Supported languages Currently supports Java, JavaScript and Golang. Other development languages will be supported gradually. Want to learn more about language support? Check out our documentation How it works MurphySec CLI obtains the dependency information of your project mainly by building the project or parsing the package manifest files. The dependency information of the project will be uploaded to the server, and the dependencies with security issues in the project will be identified through the vulnerability knowledge base maintained by MurphySec. Note: MurphySec CLI will only send the dependencies and basic information of your project to the server to identify the dependencies with security issues; it will not upload any code snippets. Working Scenarios To detect security issues in your code locally To detect security issues in your CI/CD pipeline Learn how to integrate MurphySec CLI in Jenkins Getting Started 1. Install MurphySec CLI Visit the GitHub Releases page to download the latest version of MurphySec CLI, or install it by running: Linux wget -q https://s.murphysec.com/release/install.sh -O - | /bin/bash OSX curl -fsSL https://s.murphysec.com/release/install.sh | /bin/bash WINDOWS powershell -Command "iwr -useb https://s.murphysec.com/release/install.ps1 | iex" 2. Get access token MurphySec CLI requires an access token from your MurphySec account for authentication to work properly. What is an access token? Go to MurphySec platform - Access Token , click the copy button after the token, and the access token is copied to the clipboard. 3. Authentication There are two authentication methods available: interactive authentication and parameter authentication Interactive authentication Execute the murphysec auth login command and paste the access token. If you need to change the access token, you can repeat this command to overwrite the old one. Parameter Authentication Specify the access token for authentication by adding the --token parameter 4. Detection To perform detection using the murphysec scan command, execute the following. bash murphysec scan [your-project-path] Available parameters --token : Specify the access token --log-level : Specify the log level to be printed on the command-line output stream; no log is printed by default. Optional values are silent , error , warn , info , debug --json : Output the result in JSON format (result details are not shown by default) 5. View results MurphySec CLI does not show the result details by default; you can view the results on the MurphySec platform . 
Command Introduction murphysec auth Mainly used for managing authentication ``` Usage: murphysec auth [command] Available Commands: login logout ``` murphysec scan Mainly used to run detections ``` Usage: murphysec scan DIR [flags] Flags: -h, --help help for scan --json json output Global Flags: --log-level string specify log level, must be silent|error|warn|info|debug --no-log-file do not write log file --server string specify server address --token string specify API token -v, --version show version and exit --write-log-to string specify log file path ``` Communication Contact our official WeChat account, and we'll add you to the group for communication. License Apache 2.0;An open source tool focused on software supply chain security. MurphySec focuses on software supply chain security, providing professional software composition analysis (SCA), vulnerability detection, and a professional vulnerability database.;security,scanner,dependency,vulnerability-detection,software-supply-chain,sca,software-composition-analysis,codescan
murphysecurity/murphysec
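Since --json switches the scan output to machine-readable JSON, a CI step can invoke the CLI and post-process the result. A minimal sketch follows, assuming the report is printed to stdout; the field name used at the end is a hypothetical placeholder, not the documented schema.

```python
import json
import subprocess

def run_murphysec_scan(project_dir: str) -> dict:
    """Run `murphysec scan --json` and parse its stdout as JSON.

    The exact JSON schema is not documented here; inspect the real output
    to find the actual key names.
    """
    proc = subprocess.run(
        ["murphysec", "scan", project_dir, "--json"],
        capture_output=True, text=True, check=False,
    )
    return json.loads(proc.stdout)

report = run_murphysec_scan(".")
# Hypothetical field name - replace with a key from the real report.
print("issues found:", report.get("issues_count", "unknown"))
```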
swirlai/swirl-search;[![Swirl](docs/images/dark_header.png)](https://www.swirlaiconnect.com) SWIRL AI Connect #### Bring AI to the Data, not the Data to the AI. ### SWIRL AI Connect is an open-source AI platform designed to simplify the setup of AI infrastructure. It supports powerful tools like Retrieval-Augmented Generation (RAG), Analytics, and Co-Pilot, enhancing decision-making capabilities with AI for businesses. [Start Searching](#-try-swirl-now-in-docker) · [Slack](https://join.slack.com/t/swirlmetasearch/shared_invite/zt-1qk7q02eo-kpqFAbiZJGOdqgYVvR1sfw) · [Key Features](#-key-features) · [Contribute](#-contributing-to-swirl) · [Documentation](#-documentation) · [Connectors](#-list-of-connectors) [![License: Apache 2.0](https://img.shields.io/badge/License-Apache_2.0-blue.svg?color=088395&logoColor=blue&style=flat-square)](https://opensource.org/license/apache-2-0/) [![GitHub Release](https://img.shields.io/github/v/release/swirlai/swirl-search?style=flat-square&color=8DDFCB&label=Release)](https://github.com/swirlai/swirl-search/releases) [![Website](https://img.shields.io/badge/Website-swirlaiconnect.com-00215E?style=flat-square)](https://www.swirlaiconnect.com) [![SWIRL Slack](https://img.shields.io/badge/Slack-SWIRL%20Community-0E21A0?logo=slack&style=flat-square)](https://join.slack.com/t/swirlmetasearch/shared_invite/zt-1qk7q02eo-kpqFAbiZJGOdqgYVvR1sfw) [![Test and Build Pipeline](https://github.com/swirlai/swirl-search/actions/workflows/test-build-pipeline.yml/badge.svg?style=flat-square&branch=main)](https://github.com/swirlai/swirl-search/actions/workflows/test-build-pipeline.yml) Get your AI up and running in minutes, not months. SWIRL AI Connect is an open-source AI Connect platform that streamlines the integration of advanced AI technologies into business operations. It supports powerful features like Retrieval-Augmented Generation (RAG), Analytics, and Co-Pilot, enabling enhanced decision-making with AI and boosting enterprise AI transformation. SWIRL operates without needing to move data into a vector database or undergo ETL processes. This approach not only enhances security but also speeds up deployment. As a private cloud AI provider, SWIRL operates entirely within your private cloud infrastructure, running locally inside the firewall to ensure maximum data security and compliance. Why SWIRL AI Connect? Instant AI Deployment: Swiftly deploy AI-driven enterprise software within your private cloud environment. SWIRL AI Connect integrates seamlessly, offering built-in security measures like data compliance and firewall protections, ensuring secure AI connectivity and granular access control. Easy and Fast Retrieval-Augmented Generation (RAG): SWIRL AI Connect simplifies the use of Retrieval-Augmented Generation (RAG). Our platform eliminates the need for external vector databases, LangChain, or LlamaIndex, making it easier to implement RAG tools directly on your data. No Data Movement: Operate directly on local data without the hassles of ETL processes, re-indexing, or data movement. SWIRL AI Connect enhances data security by allowing the data to remain in place and run securely inside your firewall. Boost Productivity with AI: Enhance team efficiency and streamline workflows with advanced analytics and Co-Pilot features. SWIRL AI Connect helps you find information faster and make smarter decisions, accelerating enterprise AI transformation and boosting productivity. SWIRL AI Connect enables you to perform Unified Search and bring in a secure AI Co-Pilot. 
SWIRL Unified Search : SWIRL Unified Search offers a secure and powerful integrated search solution that enables users to query across all enterprise data sources seamlessly. This scalable unified search platform is designed for large enterprises, startups, and small teams, allowing for comprehensive searches across cloud services, on-premise systems, and data silos without compromising security. By implementing SWIRL Unified Search, businesses can enhance productivity, improve data accessibility, and make more informed decisions by harnessing the full potential of their data landscape. SWIRL AI Co-Pilot : SWIRL AI Co-Pilot acts as an intelligent assistant, leveraging advanced AI to provide context-aware insights and support to business users. Securely integrated within your enterprise systems, SWIRL AI Co-Pilot helps streamline workflows, automate tasks, and deliver personalized recommendations, significantly boosting operational efficiency. Users benefit from real-time decision support, reduced manual workload, and a more intuitive interaction with their data, enabling them to focus on strategic activities that drive business growth. SWIRL's Ranking in Action SWIRL leverages the specific context of your enterprise data to deliver highly relevant search results tailored to business needs. While general search engines like Google offer broad capabilities, SWIRL excels in the precise and secure handling of enterprise-specific queries, providing actionable insights that enhance decision-making and business efficiency. SWIRL AI Connect Features Based on SWIRL AI Connect 🔎 How Swirl Works SWIRL AI Connect offers a straightforward no-code setup to easily integrate AI capabilities into your enterprise. It connects directly to various enterprise and data applications—like Teams, Snowflake, Databricks, and Google Drive—enabling you to search, fetch, and build an AI-based knowledge base. Utilize SWIRL’s Co-Pilot and Retrieval-Augmented Generation (RAG) to enhance productivity without the need for extracting or indexing any data. Connect: Easily link SWIRL AI Connect to your data sources—be it databases, document stores, or cloud services. Simply add your authentication details to start. Query: Interact with SWIRL AI Connect using natural language. Ask questions or input commands to immediately harness the power of AI in your workflows. Get Results: Benefit from SWIRL AI Connect’s advanced search capabilities combined with generative AI. It quickly delivers accurate and contextually augmented responses by distributing queries across connected platforms that have a search API—ranging from search engines and databases to noSQL engines and SaaS services. 🔌 List of Connectors Full list of connectors is available here . For Enterprise Support on Connectors Contact the Swirl Team at: support@swirl.today 🔥 Try Swirl Now In Docker Prerequisites To run Swirl in Docker, you must have the latest Docker app for MacOS, Linux, or Windows installed and running locally. You can also watch the video tutorial to get started. Windows users must also install and configure either the WSL 2 or the Hyper-V backend, as outlined in the System Requirements for installing Docker Desktop on Windows . Start Swirl in Docker Warning Make sure the Docker app is running before proceeding! 
Download the YAML file: https://raw.githubusercontent.com/swirlai/swirl-search/main/docker-compose.yaml bash curl https://raw.githubusercontent.com/swirlai/swirl-search/main/docker-compose.yaml -o docker-compose.yaml Optional : To enable Swirl's Real-Time Retrieval Augmented Generation (RAG) in Docker, run the following commands from the Console using a valid OpenAI API key: shell export MSAL_CB_PORT=8000 export MSAL_HOST=localhost export OPENAI_API_KEY='<your-OpenAI-API-key>' :key: Check out OpenAI's YouTube video if you don't have an OpenAI API Key. On macOS or Linux, run the following command from the Console: bash docker-compose pull && docker-compose up On Windows, run the following command from PowerShell: bash docker compose up After a few minutes, the following or similar should appear: Open this URL with a browser: http://localhost:8000 (or http://localhost:8000/galaxy ) If the search page appears, click Log Out at the top right. The Swirl login page will appear. Enter the username admin and password password , then click Login . Enter a search in the search box and press the Search button. Ranked results appear in just a few seconds: To view the raw JSON, open http://localhost:8000/swirl/search/ The most recent Search object will be displayed at the top. Click on the result_url link to view the full JSON Response. Notes 📝 Warning The Docker version of Swirl does not retain any data or configuration when shut down! :key: Swirl includes five (5) Google Programmable Search Engines (PSEs) to get you up and running right away. The credentials for these are shared with the Swirl Community. :key: Using Swirl with Microsoft 365 requires installation and approval by an authorized company Administrator. For more information, please review the M365 Guide or contact us . Next Steps 👇 Check out the details of our latest release ! Head over to the Quick Start Guide and install Swirl locally! Video Tutorial Guide to Run SWIRL in Docker in 60 seconds. 🌟 Key Features | ✦ | Feature | |:-----:|:--------| | 📌 | Microsoft 365 integration and OAUTH2 support | | 🔍 | SearchProvider configurations for all included Connectors. They can be organized with the active, default and tags properties . | | ✏️ | Adaptation of the query for each provider such as rewriting NOT term to -term , removing NOTted terms from providers that don't support NOT, and passing down the AND, + and OR operators.
| | ⏳ | Synchronous or asynchronous search federation via APIs | | 🛎️ | Optional subscribe feature to continuously monitor any search for new results | | 🛠️ | Pipelining of Processor stages for real-time adaptation and transformation of queries, responses and results | | 🗄️ | Results stored in SQLite3 or PostgreSQL for post-processing, consumption and/or analytics | | ➡️ | Built-in Query Transformation support, including re-writing and replacement | | 📖 | Matching on word stems and handling of stopwords via NLTK | | 🚫 | Duplicate detection on field or by configurable Cosine Similarity threshold | | 🔄 | Re-ranking of unified results using Cosine Vector Similarity based on spaCy 's large language model and NLTK | | 🎚️ | Result mixers order results by relevancy, date or round-robin (stack) format, with optional filtering of just new items in subscribe mode | | 📄 | Page through all results requested, re-run, re-score and update searches using URLs provided with each result set | | 📁 | Sample data sets for use with SQLite3 and PostgreSQL | | ✒️ | Optional spell correction using TextBlob | | ⌛ | Optional search/result expiration service to limit storage use | | 🔌 | Easily extensible Connector and Mixer objects | 👩‍💻 Contributing to Swirl Do you have a brilliant idea or improvement for SWIRL? We're all ears, and thrilled you're here to help! 🔗 Get Started in 3 Easy Steps : Connect with Fellow Enthusiasts - Jump into the Swirl Slack Community and share your ideas. You'll find a welcoming group of Swirl enthusiasts and team members eager to assist and collaborate. Branch It Out - Always branch off from the develop branch with a descriptive name that encapsulates your idea or fix. Start Your Contribution - Ready to get your hands dirty? Make sure all contributions come through a GitHub pull request. We roughly follow the Gitflow branching model , so all changes destined for the next release should be made to the develop branch. 📚 First time contributing on GitHub? No worries, the GitHub documentation has you covered with a great guide on contributing to projects. 💡 Every contribution, big or small, makes a difference. Join us in shaping the future of Swirl! ☁ Use the Swirl Cloud For information about Swirl as a managed service, please contact us ! 📖 Documentation Overview | Quick Start | User Guide | Admin Guide | M365 Guide | Developer Guide | Developer Reference | AI Guide 👷‍♂️ Need Help? We're Here for You! At Swirl, every user matters to us. Whether you're a beginner finding your way or an expert with feedback, we're here to support, listen, and help. Don't hesitate to reach out to us. Join the SWIRL Community Slack: Dive into our SWIRL Community on Slack - to discuss anything related to SWIRL. Direct Support: For any questions, suggestions, or even a simple hello, drop us an email at support@swirl.today . We cherish every message and promise to get back to you promptly! Request A Connector (Enterprise Support) Want to see a new connector quickly and fast. Contact the Swirl Team at: support@swirl.today;SWIRL AI Connect: AI infrastructure software that powers your Search & Retrieval Augmented Generation (RAG) applications. Simplify and enhance your AI pipelines with seamless integration of large language models (LLMs) and data sources.;search,search-engine,federated-query,federated-search,ai-search,bigquery,large-language-models,relevancy,metasearch,django
swirlai/swirl-search
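The raw JSON endpoint shown above (http://localhost:8000/swirl/search/) can also be queried programmatically. The sketch below assumes that a q= query parameter triggers a synchronous search and that HTTP basic auth with the demo credentials is accepted; both details are assumptions to verify against the Developer Guide and API docs before relying on them.

```python
import requests

# Assumptions: ?q=<terms> runs a synchronous federated search on this
# endpoint, and basic auth with the demo admin credentials is accepted.
# Verify both against the Swirl Developer Guide before production use.
resp = requests.get(
    "http://localhost:8000/swirl/search/",
    params={"q": "knowledge management"},
    auth=("admin", "password"),
)
resp.raise_for_status()
print(resp.json())  # the most recent Search objects, as shown in the browser
```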
williamyang1991/DualStyleGAN;DualStyleGAN - Official PyTorch Implementation This repository provides the official PyTorch implementation for the following paper: Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer Shuai Yang , Liming Jiang , Ziwei Liu and Chen Change Loy In CVPR 2022. Project Page | Paper | Supplementary Video Abstract: Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data. In this paper, we explore more challenging exemplar-based high-resolution portrait style transfer by introducing a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain. Different from StyleGAN, DualStyleGAN provides a natural way of style transfer by characterizing the content and style of a portrait with an intrinsic style path and a new extrinsic style path , respectively. The delicately designed extrinsic style path enables our model to modulate both the color and complex structural styles hierarchically to precisely pastiche the style example. Furthermore, a novel progressive fine-tuning scheme is introduced to smoothly transform the generative space of the model to the target domain, even with the above modifications on the network architecture. Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control. Features : High-Resolution (1024) | Training Data-Efficient (~200 Images) | Exemplar-Based Color and Structure Transfer Updates [02/2023] Add --wplus in style_transfer.py to use original w+ pSp encoder rather than z+. [09/2022] Pre-trained models in three new styles (feat. StableDiffusion) are released. [07/2022] Source code license is updated. [03/2022] Paper and supplementary video are released. [03/2022] Web demo is created. [03/2022] Code is released. [03/2022] This website is created. Web Demo Integrated into Huggingface Spaces 🤗 using Gradio . Try out the Web Demo: or Installation Clone this repo: bash git clone https://github.com/williamyang1991/DualStyleGAN.git cd DualStyleGAN Dependencies: All dependencies for defining the environment are provided in environment/dualstylegan_env.yaml . We recommend running this repository using Anaconda : bash conda env create -f ./environment/dualstylegan_env.yaml We use CUDA 10.1 so it will install PyTorch 1.7.1 (corresponding to Line 22 , Line 25 , Line 26 of dualstylegan_env.yaml ). Please install PyTorch that matches your own CUDA version following https://pytorch.org/ . ☞ Install on Windows: here and here (1) Dataset Preparation Cartoon, Caricature and Anime datasets can be downloaded from their official pages. We also provide the script to build new datasets. | Dataset | Description | | :--- | :--- | | Cartoon | 317 cartoon face images from Toonify . | | Caricature | 199 images from WebCaricature . Please refer to dataset preparation for more details. | | Anime | 174 images from Danbooru Portraits . Please refer to dataset preparation for more details. | | Fantasy | 137 fantasy face images generated by StableDiffusion . | | Illustration | 156 illustration face images generated by StableDiffusion . | | Impasto | 120 impasto face images generated by StableDiffusion . | | Other styles | Please refer to dataset preparation for the way of building new datasets. 
| (2) Inference for Style Transfer and Artistic Portrait Generation Inference Notebook To help users get started, we provide a Jupyter notebook found in ./notebooks/inference_playground.ipynb that allows one to visualize the performance of DualStyleGAN. The notebook will download the necessary pretrained models and run inference on the images found in ./data/ . If no GPU is available, you may refer to Inference on CPU , and set device = 'cpu' in the notebook. Pretrained Models Pretrained models can be downloaded from Google Drive or Baidu Cloud (access code: cvpr): | Model | Description | | :--- | :--- | | encoder | Pixel2style2pixel encoder that embeds FFHQ images into StyleGAN2 Z+ latent code | | encoder_wplus | Original Pixel2style2pixel encoder that embeds FFHQ images into StyleGAN2 W+ latent code | | cartoon | DualStyleGAN and sampling models trained on Cartoon dataset, 317 (refined) extrinsic style codes | | caricature | DualStyleGAN and sampling models trained on Caricature dataset, 199 (refined) extrinsic style codes | | anime | DualStyleGAN and sampling models trained on Anime dataset, 174 (refined) extrinsic style codes | | arcane | DualStyleGAN and sampling models trained on Arcane dataset, 100 extrinsic style codes | | comic | DualStyleGAN and sampling models trained on Comic dataset, 101 extrinsic style codes | | pixar | DualStyleGAN and sampling models trained on Pixar dataset, 122 extrinsic style codes | | slamdunk | DualStyleGAN and sampling models trained on Slamdunk dataset, 120 extrinsic style codes | | fantasy | DualStyleGAN models trained on Fantasy dataset, 137 extrinsic style codes | | illustration | DualStyleGAN models trained on Illustration dataset, 156 extrinsic style codes | | impasto | DualStyleGAN models trained on Impasto dataset, 120 extrinsic style codes | The saved checkpoints are under the following folder structure: checkpoint |--encoder.pt % Pixel2style2pixel model |--encoder_wplus.pt % Pixel2style2pixel model (optional) |--cartoon |--generator.pt % DualStyleGAN model |--sampler.pt % The extrinsic style code sampling model |--exstyle_code.npy % extrinsic style codes of Cartoon dataset |--refined_exstyle_code.npy % refined extrinsic style codes of Cartoon dataset |--caricature % the same files as in Cartoon ... Exemplar-Based Style Transfer Transfer the style of a default Cartoon image onto a default face: python python style_transfer.py The result cartoon_transfer_53_081680.jpg is saved in the folder .\output\ , where 53 is the id of the style image in the Cartoon dataset and 081680 is the name of the content face image. A corresponding overview image cartoon_transfer_53_081680_overview.jpg is additionally saved to illustrate the input content image, the encoded content image, the style image (* the style image will be shown only if it is in your folder), and the result: Specify the style image with --style and --style_id (find the mapping between id and filename here , find the visual mapping between id and the style image here ). Specify the filename of the saved images with --name . Specify the weight to adjust the degree of style with --weight . The following script generates the style transfer results in the teaser of the paper. 
python python style_transfer.py python style_transfer.py --style cartoon --style_id 10 python style_transfer.py --style caricature --name caricature_transfer --style_id 0 --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 python style_transfer.py --style caricature --name caricature_transfer --style_id 187 --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 python style_transfer.py --style anime --name anime_transfer --style_id 17 --weight 0 0 0 0 0.75 0.75 0.75 1 1 1 1 1 1 1 1 1 1 1 python style_transfer.py --style anime --name anime_transfer --style_id 48 --weight 0 0 0 0 0.75 0.75 0.75 1 1 1 1 1 1 1 1 1 1 1 Specify the content image with --content . If the content image is not well aligned with FFHQ, use --align_face . For preserving the color style of the content image, use --preserve_color or set the last 11 elements of --weight to all zeros. python python style_transfer.py --content ./data/content/unsplash-rDEOVtE7vOs.jpg --align_face --preserve_color \ --style arcane --name arcane_transfer --style_id 13 \ --weight 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 0.6 1 1 1 1 1 1 1 → Specify --wplus to use the original pSp encoder to extract the W+ intrinsic style code, which may better preserve the face features of the content image. Remarks : Our trained pSp encoder on Z+/W+ space cannot perfectly encode the content image. If a style transfer result more consistent with the content image is desired, one may use latent optimization to better fit the content image or use other StyleGAN encoders (as discussed in https://github.com/williamyang1991/DualStyleGAN/issues/11 and https://github.com/williamyang1991/DualStyleGAN/issues/29). More options can be found via python style_transfer.py -h . Artistic Portrait Generation Generate random Cartoon face images (results are saved in the ./output/ folder): python python generate.py Specify the style type with --style and the filename of the saved images with --name : python python generate.py --style arcane --name arcane_generate Specify the weight to adjust the degree of style with --weight . Keep the intrinsic style code, extrinsic color code or extrinsic structure code fixed using --fix_content , --fix_color and --fix_structure , respectively. python python generate.py --style caricature --name caricature_generate --weight 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 --fix_content More options can be found via python generate.py -h . (3) Training DualStyleGAN Download the supporting models to the ./checkpoint/ folder: | Model | Description | | :--- | :--- | | stylegan2-ffhq-config-f.pt | StyleGAN model trained on FFHQ taken from rosinality . | | model_ir_se50.pth | Pretrained IR-SE50 model taken from TreB1eN for ID loss. | Facial Destylization Step 1: Prepare data. Prepare the dataset in ./data/DATASET_NAME/images/train/ . First create lmdb datasets: python python ./model/stylegan/prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH For example, download 317 Cartoon images into ./data/cartoon/images/train/ and run python ./model/stylegan/prepare_data.py --out ./data/cartoon/lmdb/ --n_worker 4 --size 1024 ./data/cartoon/images/ Step 2: Fine-tune StyleGAN. 
Fine-tune StyleGAN in distributed settings: python python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT finetune_stylegan.py --batch BATCH_SIZE \ --ckpt FFHQ_MODEL_PATH --iter ITERATIONS --style DATASET_NAME --augment LMDB_PATH Take the cartoon dataset as an example and run (a batch size of 8*4=32 is recommended): python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 finetune_stylegan.py --iter 600 --batch 4 --ckpt ./checkpoint/stylegan2-ffhq-config-f.pt --style cartoon --augment ./data/cartoon/lmdb/ The fine-tuned model can be found in ./checkpoint/cartoon/finetune-000600.pt . Intermediate results are saved in ./log/cartoon/ . Step 3: Destylize artistic portraits. python python destylize.py --model_name FINETUNED_MODEL_NAME --batch BATCH_SIZE --iter ITERATIONS DATASET_NAME Take the cartoon dataset as an example and run: python destylize.py --model_name finetune-000600.pt --batch 1 --iter 300 cartoon The intrinsic and extrinsic style codes are saved in ./checkpoint/cartoon/instyle_code.npy and ./checkpoint/cartoon/exstyle_code.npy , respectively. Intermediate results are saved in ./log/cartoon/destylization/ . To speed up destylization, set --batch to a large value like 16. For styles severely different from real faces, set --truncation to a small value like 0.5 to make the results more photo-realistic (it enables DualStyleGAN to learn larger structure deformations). Progressive Fine-Tuning Stage 1 & 2: Pretrain DualStyleGAN on FFHQ. We provide our pretrained model generator-pretrain.pt at Google Drive or Baidu Cloud (access code: cvpr). This model is obtained by: python -m torch.distributed.launch --nproc_per_node=1 --master_port=8765 pretrain_dualstylegan.py --iter 3000 --batch 4 ./data/ffhq/lmdb/ where ./data/ffhq/lmdb/ contains the lmdb data created from the FFHQ dataset via ./model/stylegan/prepare_data.py . Stage 3: Fine-Tune DualStyleGAN on Target Domain. Fine-tune DualStyleGAN in distributed settings: python python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT finetune_dualstylegan.py --iter ITERATIONS \ --batch BATCH_SIZE --ckpt PRETRAINED_MODEL_PATH --augment DATASET_NAME The loss term weights can be specified by --style_loss (λ FM ), --CX_loss (λ CX ), --perc_loss (λ perc ), --id_loss (λ ID ) and --L2_reg_loss (λ reg ). We suggest tuning λ ID and λ reg for each style dataset to achieve ideal performance. More options can be found via python finetune_dualstylegan.py -h . Take the Cartoon dataset as an example, run (multi-GPU enables a large batch size of 8*4=32 for better performance): python -m torch.distributed.launch --nproc_per_node=8 --master_port=8765 finetune_dualstylegan.py --iter 1500 --batch 4 --ckpt ./checkpoint/generator-pretrain.pt --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 --L2_reg_loss 0.015 --augment cartoon The fine-tuned models can be found in ./checkpoint/cartoon/generator-ITER.pt where ITER = 001000, 001100, ..., 001500. Intermediate results are saved in ./log/cartoon/ . A large ITER gives strong cartoon styles but at the cost of artifacts; users may select the most balanced one from 1000-1500. We use 1400 for our paper experiments. (optional) Latent Optimization and Sampling Refine extrinsic style code. Refine the color and structure styles to better fit the example style images. 
python python refine_exstyle.py --lr_color COLOR_LEARNING_RATE --lr_structure STRUCTURE_LEARNING_RATE DATASET_NAME By default, the code will load instyle_code.npy , exstyle_code.npy , and generator.pt in ./checkpoint/DATASET_NAME/ . Use --instyle_path , --exstyle_path , --ckpt to specify other saved style codes or models. Take the Cartoon dataset as an example, run: python refine_exstyle.py --lr_color 0.1 --lr_structure 0.005 --ckpt ./checkpoint/cartoon/generator-001400.pt cartoon The refined extrinsic style codes are saved in ./checkpoint/DATASET_NAME/refined_exstyle_code.npy . We suggest tuning lr_color and lr_structure to better fit the example styles. Training sampling network. Train a sampling network to map unit Gaussian noise to the distribution of extrinsic style codes: python python train_sampler.py DATASET_NAME By default, the code will load refined_exstyle_code.npy or exstyle_code.npy in ./checkpoint/DATASET_NAME/ . Use --exstyle_path to specify other saved extrinsic style codes. The saved model can be found in ./checkpoint/DATASET_NAME/sampler.pt . (4) Results Exemplar-based cartoon style transfer https://user-images.githubusercontent.com/18130694/158047991-77c31137-c077-415e-bae2-865ed3ec021f.mp4 Exemplar-based caricature style transfer https://user-images.githubusercontent.com/18130694/158048107-7b0aa439-5e3a-45a9-be0e-91ded50e9136.mp4 Exemplar-based anime style transfer https://user-images.githubusercontent.com/18130694/158048114-237b8b81-eff3-4033-89f4-6e8a7bbf67f7.mp4 Other styles Combine DualStyleGAN with state-of-the-art diffusion models We use StableDiffusion to generate face images in the specified styles of famous artists. Trained with these images, DualStyleGAN is able to pastiche these famous artists and generate appealing results. Citation If you find this work useful for your research, please consider citing our paper: bibtex @inproceedings{yang2022Pastiche, title={Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer}, author={Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change}, booktitle={CVPR}, year={2022} } Acknowledgments The code is mainly developed based on stylegan2-pytorch and pixel2style2pixel .;[CVPR 2022] Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer;style-transfer,face,stylegan2,stylegan-image-manipulation,toonify,caricatures,cvpr2022
williamyang1991/DualStyleGAN
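The 18-element --weight vector in the commands above splits into a structure part and a color part: per the README's --preserve_color note, the last 11 entries control color/fine style layers, which leaves the first 7 for structural style layers. A small helper for composing such vectors, written against that observed convention (the 7/11 split is inferred from the examples, not stated as an API contract):

```python
def dualstylegan_weights(structure: float = 0.75, color: float = 1.0) -> list:
    """Build an 18-element --weight vector for style_transfer.py.

    Inferred from the examples above: the first 7 entries modulate
    structural style layers, the last 11 modulate color/fine style layers;
    color=0.0 should have the same effect as --preserve_color.
    """
    return [structure] * 7 + [color] * 11

w = dualstylegan_weights(structure=0.6, color=1.0)
print(" ".join(str(v) for v in w))  # paste this after --weight
```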
Privoce/vocechat-web;Web Client of VoceChat [![contributions welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/privoce/vocechat-web/issues) ![GitHub issues](https://img.shields.io/github/issues-raw/Privoce/vocechat-web) ![GitHub](https://img.shields.io/github/license/privoce/vocechat-web) ![GitHub top language](https://img.shields.io/github/languages/top/privoce/vocechat-web) ![Docker Pulls](https://img.shields.io/docker/pulls/privoce/vocechat-server) - 🎉 Powered by React & Redux Toolkit - ✅ TypeScript - 📦 PWA - 📢 Notifications via Firebase ## Host your own server! Or use our test server - Host your own Voce server ([docker image](https://hub.docker.com/r/privoce/vocechat-server/tags)): Run on x86_64 platform: ```bash docker run -d --restart=always \ -p 3000:3000 \ --name vocechat-server \ privoce/vocechat-server:latest ``` For more server hosting instructions, see our documentation: https://doc.voce.chat/ ## Preview - official site: https://voce.chat - live demo: https://privoce.voce.chat/ - demo API Docs (Swagger): https://dev.voce.chat/api/swagger - design: https://www.figma.com/file/EHnNr53kNmDWgUT86It6CH/UI - text editor: https://plate.udecode.io/docs/installation - markdown editor: https://nhn.github.io/tui.editor/latest/ - redux: [@reduxjs/toolkit](https://redux-toolkit.js.org/introduction/getting-started) - IndexedDB wrapper: https://github.com/localForage/localForage ## Local Development - `git clone https://github.com/Privoce/vocechat-web vocechat-web` - `cd vocechat-web && yarn install` - `yarn start` - Open `localhost:3009` ### Tools Recommended - [VS Code](https://code.visualstudio.com/) editor recommended - VS Code plugins: - [dbaeumer.vscode-eslint](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint): ESLint - [esbenp.prettier-vscode](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode): Prettier - [dsznajder.es7-react-js-snippets](https://marketplace.visualstudio.com/items?itemName=dsznajder.es7-react-js-snippets): Extensions for React, React-Native and Redux in JS/TS with ES7+ syntax ## License [GPL v3](https://github.com/Privoce/vocechat-web/blob/main/LICENSE) ## Thanks to all the contributors Discuss collaboration: han@privoce.com or https://bridger.chat/han Telegram group: https://t.me/opencfdchannel VoceChat: https://voce.chat Telegram channel: https://t.me/vocechat_group VoceChat Channel: https://privoce.voce.chat;VoceChat Web App;chat,indexdb,pwa,react,redux-toolkit,typescript,bot
Privoce/vocechat-web
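The docker run command above stores the server's data inside the container, so it disappears if the container is removed. Mounting a host volume keeps it across re-creation; the in-container data path below is an assumption taken from typical VoceChat setups, so confirm it against the hosting docs at https://doc.voce.chat/ before using it.

```bash
# Persist server data across container re-creation with a host volume.
# The container-side path is an assumption - verify it in the VoceChat docs.
docker run -d --restart=always \
  -p 3000:3000 \
  -v ~/.vocechat-server/data:/home/vocechat-server/data \
  --name vocechat-server \
  privoce/vocechat-server:latest
```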