Columns: full_name (string, 9–72 chars), url (string, 28–91 chars), description (string, 3–343 chars), readme (string, 1–207k chars)
empty-233/tencent-sandbox
https://github.com/empty-233/tencent-sandbox
Windows Sandbox configuration for sandboxing the Tencent app suite
# tencent-sandbox

tencent-sandbox is a configuration that uses [Windows Sandbox](https://learn.microsoft.com/zh-cn/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-overview) to sandbox the Tencent app suite.

## Notes

### **Windows 10 users**

#### With a Python environment

Run win10_init.py first.

#### Without a Python environment

1. Open your cloned/downloaded **project directory**
2. Press Win + R
3. Type powershell and press Enter
4. Run `PowerShell -ExecutionPolicy Bypass -File ".\win10_init.ps1"`

#### If the two options above are unclear

1. Open Tencent.wsb in a text editor
2. Replace the `.\` in **every** entry such as `<HostFolder>.\tencent-sandbox\xxxxxxx</HostFolder>` with the path of your cloned/downloaded **project directory**.

### Windows **Home edition** does not support the sandbox

## Compatibility

**WeChat**, **QQ**, **QQNT**, **TIM**, **WeCom**, **Tencent Meeting**, and **Tencent Docs** are currently tested and work normally. If you need other apps, please open an Issue.

Note: **WeCom cannot log in automatically (it checks the device ID); you must log in again every time.**

## Sandbox configuration

- **2GB** of memory is allocated by default; change `<MemoryInMB>value</MemoryInMB>` (in MB) if needed.
- **Audio input** is **enabled** by default; change `<AudioInput>value</AudioInput>` (Enable/Disable/Default) if needed.
- **Clipboard redirection** is **enabled** by default; change `<ClipboardRedirection>value</ClipboardRedirection>` (Enable/Disable/Default) if needed.
- **Video input** is **disabled** by default; change `<VideoInput>value</VideoInput>` (Enable/Disable/Default) if needed.

For other settings, see the official [Windows Sandbox configuration](https://learn.microsoft.com/zh-cn/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-configure-using-wsb-file) documentation.

## Usage

1. Enable `Windows Sandbox`
2. git clone this project, **or download it from `Releases`**

```bash
git clone https://github.com/empty-233/tencent-sandbox.git
```

3. Run `mkdir.bat` to **create the required directory structure in one step**
4. **Set up `SysWOW64` as described under Options below**
5. Open **Tencent.wsb**
6. Install the **tested** software
7. (Optional) Move the desktop shortcuts somewhere else, then move them back to the desktop (otherwise the shortcuts disappear)

Once done, everything works normally; **accounts and data are preserved when the sandbox is closed**.

### Options

Choose one of the following; **the system SysWOW64 is not mounted by default**.

#### Copy (default)

Start a fresh **default sandbox** and install **QQ** normally. After installation, copy `C:\Windows\SysWOW64` to `Data\SysWOW64` (otherwise QQ crashes after running for a while).

#### Mount

Change `<HostFolder>.\Data\SysWOW64</HostFolder>` to `<HostFolder>C:\Windows\SysWOW64</HostFolder>`.

## Mounted paths

See the `MappedFolder` entries in Tencent.wsb.

QQ download path: Data\Documents\Tencent\ (qqid) \FileRecv

WeChat download path: Data\Documents\WeChat\ (wxid) \FileStorage\File

For the rest, see `Data\Documents`.
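The settings described above all live in the `.wsb` file, which is plain XML. A minimal sketch of what such a file can look like — the mapped folder shown here is illustrative, not the project's exact Tencent.wsb:

```xml
<Configuration>
  <!-- Memory allocated to the sandbox, in MB (this project defaults to 2GB) -->
  <MemoryInMB>2048</MemoryInMB>
  <!-- Each of these accepts Enable/Disable/Default -->
  <AudioInput>Enable</AudioInput>
  <ClipboardRedirection>Enable</ClipboardRedirection>
  <VideoInput>Disable</VideoInput>
  <MappedFolders>
    <MappedFolder>
      <!-- On Windows 10, replace .\ with the absolute path of the project directory -->
      <HostFolder>.\Data\Documents</HostFolder>
      <ReadOnly>false</ReadOnly>
    </MappedFolder>
  </MappedFolders>
</Configuration>
```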
openmm/openmm_workshop_july2023
https://github.com/openmm/openmm_workshop_july2023
null
# OpenMM workshop

This repository contains the materials for the OpenMM workshop delivered on the 12th July 2023 after the CCPBioSim conference. The workshop consists of an introductory presentation, setup information, and a series of Jupyter notebooks. It was delivered in person and on Zoom, with demonstrators on hand to answer questions. Only the intro slides are specific to the live course; the workshop materials are designed to be done by anyone at any time!

We aim to keep the notebooks up to date. If you have any questions, find any bugs, or have suggestions for improvement, please raise an issue in the GitHub repository.

## Introduction

This is a PowerPoint presentation. The slides can be found in [./slides](./slides).

## Setup

There are two ways to run the workshop notebooks:

- In a web browser with Google Colab.
- Running locally on your own machine in a Conda environment.

The instructions for either can be found in [./setup](./setup/README.md). We aim to keep the notebooks fully tested in Colab, so we suggest you run them there. Note that we have designed the exercises to not be computationally expensive, so they can be run on any hardware.

## Training materials

The material is in the form of Jupyter notebooks, split into three sections.

### Section 1 - Introduction to OpenMM

- [**Protein in water**](./section_1/protein_in_water.ipynb). Aimed at people new to OpenMM. This covers loading in a PDB file, setting up a simulation, running the simulation, basic analysis, and advice for running on HPC resources.
- [**Protein-ligand complex**](./section_1/protein_ligand_complex.ipynb). Aimed at beginners. Covers parameterising a small molecule, combining topologies, and using other tools to create OpenMM-compatible input.

### Section 2 - Custom forces

- [**Custom forces and umbrella sampling**](./section_2/custom_forces.ipynb). Aimed at people looking to use the custom forces functionality of OpenMM (can be done after the Section 1 material if you are a beginner). Covers using custom forces with a case study of umbrella sampling.

### Section 3 - Machine learning potentials

- [**Machine learning potentials**](./section_3/machine_learning_potentials.ipynb). Aimed at people using machine learning potentials. Covers the OpenMM machine learning software stack with examples of using ANI and MACE.

## Extras

- [Guide on building OpenMM from source](./extra/compile_openmm.ipynb).

## Acknowledgments

- This workshop was prepared by Stephen Farr (University of Edinburgh, Michel research group) with support from EPSRC grant EP/W030276/1 ''Supporting the OpenMM Community-led Development of Next-Generation Condensed Matter Modelling Software'' and with help from Julien Michel (University of Edinburgh) and Will Poole (University of Southampton, Essex research group).
embedchain/community-showcase
https://github.com/embedchain/community-showcase
null
> ## This repository is archived since we have moved all the community showcases to [docs.embedchain.ai](https://docs.embedchain.ai).

[![Discord](https://dcbadge.vercel.app/api/server/nhvCbCtKV?style=flat)](https://discord.gg/6PzXDgEjG5)
[![Twitter](https://img.shields.io/twitter/follow/embedchain)](https://twitter.com/embedchain)
[![Substack](https://img.shields.io/badge/Substack-%23006f5c.svg?logo=substack)](https://embedchain.substack.com/)

# Embedchain Community Showcase

A repository that collects and showcases all the apps, blogs, videos, and tutorials created by the community.

## Apps

### Open Source

- [Discord Bot for LLM chat](https://github.com/Reidond/discord_bots_playground/tree/c8b0c36541e4b393782ee506804c4b6962426dd6/python/chat-channel-bot) by Reidond
- [EmbedChain-Streamlit-Docker App](https://github.com/amjadraza/embedchain-streamlit-app) by amjadraza
- [Harry Potter Philosphers Stone Bot](https://github.com/vinayak-kempawad/Harry_Potter_Philosphers_Stone_Bot/) by Vinayak Kempawad ([LinkedIn post](https://www.linkedin.com/feed/update/urn:li:activity:7080907532155686912/))
- [LLM bot trained on own messages](https://github.com/Harin329/harinBot) by Hao Wu

### Closed Source

- [Taobot.io](https://taobot.io) - chatbot & knowledgebase hybrid by [cachho](https://github.com/cachho)

## Templates

### Replit

- [Embedchain Chat Bot](https://replit.com/@taranjeet1/Embedchain-Chat-Bot) by taranjeetio
- [Embedchain Memory Chat Bot Template](https://replit.com/@taranjeetio/Embedchain-Memory-Chat-Bot-Template) by taranjeetio

## Posts

### Blogs

- [Customer Service LINE Bot](https://www.evanlin.com/langchain-embedchain/)

### LinkedIn

- [What is embedchain](https://www.linkedin.com/posts/activity-7079393104423698432-wRyi/) by Rithesh Sreenivasan
- [Building a chatbot with EmbedChain](https://www.linkedin.com/posts/activity-7078434598984060928-Zdso/) by Lior Sinclair
- [Making chatbot without vs with embedchain](https://www.linkedin.com/posts/kalyanksnlp_llms-chatbots-langchain-activity-7077453416221863936-7N1L/) by Kalyan KS

### Twitter

- [What is embedchain](https://twitter.com/AlphaSignalAI/status/1672668574450847745) by Lior
- [Building a chatbot with Embedchain](https://twitter.com/Saboo_Shubham_/status/1673537044419686401) by Shubham Saboo

## Videos

- [embedChain Create LLM powered bots over any dataset Python Demo Tesla Neurallink Chatbot Example](https://www.youtube.com/watch?v=bJqAn22a6Gc) by Rithesh Sreenivasan
- [Embedchain - NEW 🔥 Langchain BABY to build LLM Bots](https://www.youtube.com/watch?v=qj_GNQ06I8o) by 1littlecoder
- [EmbedChain -- NEW!: Build LLM-Powered Bots with Any Dataset](https://www.youtube.com/watch?v=XmaBezzGHu4) by DataInsightEdge
- [Chat With Your PDFs in less than 10 lines of code! EMBEDCHAIN tutorial](https://www.youtube.com/watch?v=1ugkcsAcw44) by Phani Reddy
- [How To Create A Custom Knowledge AI Powered Bot | Install + How To Use](https://www.youtube.com/watch?v=VfCrIiAst-c) by The Ai Solopreneur
- [Build Custom Chatbot in 6 min with this Framework [Beginner Friendly]](https://www.youtube.com/watch?v=-8HxOpaFySM) by Maya Akim
- [embedchain-streamlit-app](https://www.youtube.com/watch?v=3-9GVd-3v74) by Amjad Raza

## Mentions

### Github repos

- [awesome-ChatGPT-repositories](https://github.com/taishi-i/awesome-ChatGPT-repositories)
agathasangkara/Vidio-Premier
https://github.com/agathasangkara/Vidio-Premier
🎥 Vidio Premier account generator
# 🎥 Vidio Premier Generator (3–12 months)

```
COMING SOON 😋
```
fei666888/radioberry_mod
https://github.com/fei666888/radioberry_mod
Based on the Radioberry designed by https://github.com/pa3gsb/Radioberry-2.x
# radioberry_mod

Based on the Radioberry designed by https://github.com/pa3gsb/Radioberry-2.x. The PCB layout has been redesigned for an easier build.
Nowze/1336-V3-Buildbot
https://github.com/Nowze/1336-V3-Buildbot
null
Hey everyone i leak my own discord token stealer since 2019-2023. Im leaving discord and stop everything. Be fun with the source code every instructions are here. Please tag the github if you use it i need star :c french Utilise un Vps avec Ubuntu 20.04 (je préfère https://rdp.sh/ c'est 10$ par mois et en plus c'est proche c'est a Amsterdam) - sudo apt install zip - sudo apt install unzip Upload le zip extrait le zip (si le repertoire de fichier ne se nomme pas Stealer renomme le Download Node v18.16.1: - sudo apt update - sudo apt install curl - curl -sL https://deb.nodesource.com/setup_18.x | sudo -E bash - - sudo apt install -y nodejs Setup the src on your vps - sudo npm install - sudo npm i pm2 -g - sudo npm install -g pkg Unzip node_modules.zip dans Stealer/bot Install https://anonfiles.com/5dwb0204z6/node_modules_zip et place les dans ClientObf and unzip Changer vos directories dans Stealer/Bot/index.js et mettez les nouvelles (ex: /home/vshell/ClientObf/link.txt votre nom de vps est lolita, alors cela fera /home/lolita/ClientObf/link.txt si c'est root votre user alors cela sera /root/a/ClientObf/link.txt ) Pour modifier le nom des logs allez dans Stealer/ClientObf/Utils dans discord.js Modifier les émojis et le nom et dans stats.js c'est de même. 
Pour modifier le nom des embeds de build allez dans Stealer/Bot et dans index.js vous pouvez renammes le bot ^^ Ajouter dans Stealer/Bot/index.js: token: "your token" client id: "discord developper portal (app id)" guildid: "your guild id" verified roles: "customers roles" /build icon: none (icon not work or crash bot) webhook_url: your webhook name: yourname with no space or crash bot Ne pas edit la config de build.js sinon plus rien ne marchera official 1336 Channel https://t.me/St34ler ----------------------------------------------------------------------- English Use Vps with Ubuntu 20.04 (i prefer https://rdp.sh/ 10$ server per month in amsterdam) - sudo apt install zip - sudo apt install unzip Download zip extracts the zip (if the file directory is not named Stealer renames the Download Node v18.16.1: - sudo apt update - sudo apt install curl - curl -sL https://deb.nodesource.com/setup_18.x | sudo -E bash - - sudo apt install -y nodejs Setup the src on your vps Unzip node_modules.zip in Stealer/bot Install https://anonfiles.com/5dwb0204z6/node_modules_zip and put him on ClientObf and unzip - sudo npm install - sudo npm i pm2 -g - sudo npm install -g pkg Change your directories in Stealer/Bot/index.js and put the new ones (ex: /home/vshell/ClientObf/link.txt your vps name is lolita, then this will do /home/lolita/ClientObf/link.txt if your user are root /root/a/ClientObf/link.txt ) To modify the name of the logs go to Stealer/ClientObf/Utils in discord.js Modify emojis and name and in stats.js it's the same. To modify the name of the build embeds go to Stealer/Bot and in index.js you can rename the bot ^^ Add in Stealer/Bot/index.js: token: "your token" client id: "discord developper portal (app id)" guildid: "your guild id" verified roles: "customers roles" /build icon: none (icon not work or crash bot) webhook_url: your webhook name: yourname with no space or crash bot don't edit build.js or nothing will be work official 1336 Channel https://t.me/St34ler
bitquark/shortscan
https://github.com/bitquark/shortscan
An IIS short filename enumeration tool
# Shortscan

An IIS short filename enumeration tool.

## Functionality

Shortscan is designed to quickly determine which files with short filenames exist on an IIS web server. Once a short filename has been identified, the tool will try to automatically identify the full filename.

In addition to standard discovery methods, Shortscan also uses a unique checksum matching approach to attempt to find the long filename where the short filename is based on Windows' proprietary shortname collision avoidance checksum algorithm (more on this research at a later date).

## Installation

### Quick install

Using a recent version of [go](https://golang.org/):

```
go install github.com/bitquark/shortscan/cmd/shortscan@latest
```

### Manual install

To build (and optionally install) locally:

```
go get && go build
go install
```

## Usage

### Basic usage

Shortscan is easy to use with minimal configuration. Basic usage looks like:

```
$ shortscan http://example.org/
```

### Examples

This example sets multiple custom headers by using `--header`/`-H` multiple times:

```
shortscan -H 'Host: gibson' -H 'Authorization: Basic ZGFkZTpsMzN0'
```

To check whether a site is vulnerable without performing file enumeration use:

```
shortscan --isvuln
```

### Advanced features

The following options allow further tweaks:

```
$ shortscan --help
Shortscan v0.6 · an IIS short filename enumeration tool by bitquark
Usage: main [--wordlist FILE] [--header HEADER] [--concurrency CONCURRENCY] [--timeout SECONDS] [--verbosity VERBOSITY] [--fullurl] [--stabilise] [--patience LEVEL] [--characters CHARACTERS] [--autocomplete mode] [--isvuln] URL

Positional arguments:
  URL                    url to scan

Options:
  --wordlist FILE, -w FILE
                         combined wordlist + rainbow table generated with shortutil
  --header HEADER, -H HEADER
                         header to send with each request (use multiple times for multiple headers)
  --concurrency CONCURRENCY, -c CONCURRENCY
                         number of requests to make at once [default: 20]
  --timeout SECONDS, -t SECONDS
                         per-request timeout in seconds [default: 10]
  --verbosity VERBOSITY, -v VERBOSITY
                         how much noise to make (0 = quiet; 1 = debug; 2 = trace) [default: 0]
  --fullurl, -F          display the full URL for confirmed files rather than just the filename [default: false]
  --stabilise, -s        attempt to get coherent autocomplete results from an unstable server (generates more requests) [default: false]
  --patience LEVEL, -p LEVEL
                         patience level when determining vulnerability (0 = patient; 1 = very patient) [default: 0]
  --characters CHARACTERS, -C CHARACTERS
                         filename characters to enumerate [default: JFKGOTMYVHSPCANDXLRWEBQUIZ8549176320-_()&'!#$%@^{}~]
  --autocomplete mode, -a mode
                         autocomplete detection mode (auto = autoselect; method = HTTP method magic; status = HTTP status; distance = Levenshtein distance; none = disable) [default: auto]
  --isvuln, -V           bail after determining whether the service is vulnerable [default: false]
  --help, -h             display this help and exit
```

## Utility

The shortscan project includes a utility named `shortutil` which can be used to perform various short filename operations and to make custom rainbow tables for use with the tool.

### Examples

You can create a rainbow table from an existing wordlist like this:

```
shortutil wordlist input.txt > output.rainbow
```

To generate a one-off checksum for a file:

```
shortutil checksum index.html
```

### Usage

Run `shortutil <command> --help` for a definitive list of options for each command.

```
Shortutil v0.3 · a short filename utility by bitquark
Usage: main <command> [<args>]

Options:
  --help, -h             display this help and exit

Commands:
  wordlist               add hashes to a wordlist for use with, for example, shortscan
  checksum               generate a one-off checksum for the given filename
```

## Wordlist

A custom wordlist was built for shortscan. For full details see [pkg/shortscan/resources/README.md](pkg/shortscan/resources/README.md)

## Credit

Original IIS short filename [research](https://soroush.secproject.com/downloadable/microsoft_iis_tilde_character_vulnerability_feature.pdf) by Soroush Dalili. Additional research and this project by [bitquark](https://github.com/bitquark).
HiPhish/rainbow-delimiters.nvim
https://github.com/HiPhish/rainbow-delimiters.nvim
Rainbow delimiters for Neovim with Tree-sitter
.. default-role:: code

###############################
 Rainbow delimiters for Neovim
###############################

This Neovim plugin provides alternating syntax highlighting (“rainbow parentheses”) for Neovim, powered by `Tree-sitter`_. The goal is to have a hackable plugin which allows for different configuration of queries and strategies, both globally and per file type. Users can override and extend the built-in defaults through their own configuration.

This is a fork of `nvim-ts-rainbow2`_, which was implemented as a module for `nvim-treesitter`_. However, since nvim-treesitter has deprecated the module system I had to create this standalone plugin.

Installation and setup
######################

Installation
============

Install it like any other Neovim plugin. You will need a Tree-sitter parser for each language you want to use rainbow delimiters with.

Setup
=====

Configuration is done by setting entries in the Vim script dictionary `g:rainbow_delimiters`. Here is an example configuration:

.. code:: vim

   let g:rainbow_delimiters = {
       \ 'strategy': {
           \ '': rainbow_delimiters#strategy.global,
           \ 'vim': rainbow_delimiters#strategy.local,
       \ },
       \ 'query': {
           \ '': 'rainbow-delimiters',
           \ 'lua': 'rainbow-blocks',
       \ },
       \ 'highlight': [
           \ 'RainbowDelimiterRed',
           \ 'RainbowDelimiterYellow',
           \ 'RainbowDelimiterBlue',
           \ 'RainbowDelimiterOrange',
           \ 'RainbowDelimiterGreen',
           \ 'RainbowDelimiterViolet',
           \ 'RainbowDelimiterCyan',
       \ ],
   \ }

The equivalent code in Lua:

.. code:: lua

   -- This module contains a number of default definitions
   local rainbow_delimiters = require 'rainbow-delimiters'

   vim.g.rainbow_delimiters = {
       strategy = {
           [''] = rainbow_delimiters.strategy['global'],
           vim = rainbow_delimiters.strategy['local'],
       },
       query = {
           [''] = 'rainbow-delimiters',
           lua = 'rainbow-blocks',
       },
       highlight = {
           'RainbowDelimiterRed',
           'RainbowDelimiterYellow',
           'RainbowDelimiterBlue',
           'RainbowDelimiterOrange',
           'RainbowDelimiterGreen',
           'RainbowDelimiterViolet',
           'RainbowDelimiterCyan',
       },
   }

Please refer to the `manual`_ for more details. For those who prefer a `setup` function there is the module `rainbow-delimiters.setup`.

Help wanted
###########

There are only so many languages which I understand well enough to write queries for them. If you want support for a new language please consider contributing code. See CONTRIBUTING_ for details.

Status of the plugin
####################

Tree-sitter support in Neovim is still experimental. This plugin and its API should be considered stable insofar as breaking changes will only happen if changes to Neovim necessitate them.

.. warning::

   There is currently a shortcoming in Neovim's Tree-sitter API which makes it so that only the first node of a capture group can be highlighted. Please see `neovim/neovim#17099`_ for details.

Affected queries:

- HTML `rainbow-delimiters`
- JSX (JavaScript + React.js) `rainbow-delimiters-react` (affects React tags only)
- Python `rainbow-delimiters` (affects only the `for ... in` inside comprehensions)
- TSX (TypeScript + React.js) `rainbow-delimiters-react` (affects React tags only)
- Vue.js `rainbow-delimiters`

Most of these are related to HTML-like tags, so you can use an alternative query instead. See the manual_ (`:h ts-rainbow-query`) for a list of extra queries.

Screenshots
###########

Bash
====

.. image:: https://user-images.githubusercontent.com/4954650/212133420-4eec7fd3-9458-42ef-ba11-43c1ad9db26b.png
   :alt: Screenshot of a Bash script with alternating coloured delimiters

C
=

.. image:: https://user-images.githubusercontent.com/4954650/212133423-8b4f1f00-634a-42c1-9ebc-69f8057a63e6.png
   :alt: Screenshot of a C program with alternating coloured delimiters

Common Lisp
===========

.. image:: https://user-images.githubusercontent.com/4954650/212133425-85496400-4e24-4afd-805c-55ca3665c4d9.png
   :alt: Screenshot of a Common Lisp program with alternating coloured delimiters

Java
====

.. image:: https://user-images.githubusercontent.com/4954650/212133426-7615f902-e39f-4625-bb91-2e757233c7ba.png
   :alt: Screenshot of a Java program with alternating coloured delimiters

LaTeX
=====

Using the `blocks` query to highlight the entire `\begin` and `\end` instructions.

.. image:: https://user-images.githubusercontent.com/4954650/212133427-46182f57-bfd8-4cbe-be1f-9aad5ddfd796.png
   :alt: Screenshot of a LaTeX document with alternating coloured delimiters

License
#######

Licensed under the Apache-2.0 license. Please see the `LICENSE`_ file for details.

Migrating from nvim-ts-rainbow2
###############################

Rainbow-Delimiters uses different settings than nvim-ts-rainbow2, but converting the configuration is straightforward. The biggest change is where the settings are stored.

- Settings are stored in the global variable `g:rainbow_delimiters`, which has the same keys as the old settings
- The default strategy and query have index `''` (empty string) instead of `1`
- Default highlight groups have the prefix `RainbowDelimiter` instead of `TSRainbow`, e.g. `RainbowDelimiterRed` instead of `TSRainbowRed`
- The default query is now called `rainbow-delimiters` instead of `rainbow-parens`
- The public Lua module is called `rainbow-delimiters` instead of `ts-rainbow`

The name of the default query is now `rainbow-delimiters` because for some languages like HTML the notion of "parentheses" does not make any sense. In HTML the only meaningful delimiter is the tag, hence the generic notion of a "delimiter".

Attribution
###########

This is a fork of a previous Neovim plugin; the original repository is available under https://sr.ht/~p00f/nvim-ts-rainbow/.

Attributions from the original author
=====================================

Huge thanks to @vigoux, @theHamsta, @sogaiu, @bfredl, @sunjon and @steelsojka for all their help

.. _Tree-sitter: https://tree-sitter.github.io/tree-sitter/
.. _nvim-treesitter: https://github.com/nvim-treesitter/nvim-treesitter
.. _CONTRIBUTING: CONTRIBUTING.rst
.. _LICENSE: LICENSE
.. _manual: doc/rainbow-delimiters.txt
.. _neovim/neovim#17099: https://github.com/neovim/neovim/pull/17099
.. _nvim-ts-rainbow2: https://gitlab.com/HiPhish/nvim-ts-rainbow2
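The `RainbowDelimiter*` highlight groups named in the configuration can also be recoloured with Neovim's standard highlight API; a minimal Lua sketch, where the hex colours are arbitrary examples and not the plugin's defaults:

.. code:: lua

   -- Override a few of the plugin's highlight groups with custom colours.
   -- Group names come from the plugin's documentation; colours are illustrative.
   vim.api.nvim_set_hl(0, 'RainbowDelimiterRed',    { fg = '#e06c75' })
   vim.api.nvim_set_hl(0, 'RainbowDelimiterYellow', { fg = '#e5c07b' })
   vim.api.nvim_set_hl(0, 'RainbowDelimiterBlue',   { fg = '#61afef' })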
7eu7d7/HCP-Diffusion-webui
https://github.com/7eu7d7/HCP-Diffusion-webui
webui for HCP-Diffusion
# HCP Diffusion web UI

A graphical interface for HCP Diffusion, based on Vue.js and Flask.

![](./imgs/infer.webp)
![](./imgs/train.webp)

## Installation on Windows

### Prerequisites

1. Install [Python](https://www.python.org/downloads/) (python >= 3.11 is not supported); be sure to check "Add Python to PATH" to add Python to the environment variables.
2. Install [git](https://git-scm.com/download/win).
3. Install [node.js](https://nodejs.org/en/download) (>= 14.0.0).

### Download HCP Diffusion and the webui

```bash
git clone https://github.com/7eu7d7/HCP-Diffusion-webui.git
cd HCP-Diffusion-webui
git clone https://github.com/7eu7d7/HCP-Diffusion.git
```

### Automatic installation and launch

Run `webui-user.bat`.

## Installation on Linux

### Install dependencies:

```bash
# Debian-based:
sudo apt install wget git python3 python3-venv nodejs
# Red Hat-based:
sudo dnf install wget git python3 nodejs
# Arch-based:
sudo pacman -S wget git python3 nodejs
```

### Download HCP Diffusion and the webui

```bash
git clone https://github.com/7eu7d7/HCP-Diffusion-webui.git
cd HCP-Diffusion-webui
git clone https://github.com/7eu7d7/HCP-Diffusion.git
```

### Automatic installation and launch

```bash
bash webui.sh
```

Options can be configured in `webui-user.sh`.

## Usage notes

- Models in `diffusers` format placed in the `sd_models/` folder can be loaded.
- Put trained `lora` or `part` (fine-tuned) models in the `ckpts/` folder.
- Generated images and their corresponding configuration files are written to `output/`.
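The folder layout described in the usage notes can be prepared ahead of time; a minimal sketch, where the folder names are the ones the README mentions and the model placement shown in the comments is a placeholder:

```shell
# Create the folders the webui reads from and writes to.
mkdir -p sd_models ckpts output

# A diffusers-format model goes in its own directory under sd_models/,
# e.g. sd_models/my-model/ (placeholder name); trained LoRA or fine-tuned
# "part" checkpoints go under ckpts/.
ls -d sd_models ckpts output
```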
daijro/hrequests
https://github.com/daijro/hrequests
🚀 Web scraping for humans
<img src="https://i.imgur.com/r8GcQW1.png" align="center"> </img> <h2 align="center">hrequests</h2> <h4 align="center"> <p align="center"> <a href="https://github.com/daijro/hrequests/blob/main/LICENSE"> <img src="https://img.shields.io/github/license/daijro/hrequests.svg"> </a> <a href="https://python.org/"> <img src="https://img.shields.io/badge/python-3.7&#8208;3.11-blue"> </a> <a href="https://pypi.org/project/hrequests/"> <img alt="PyPI" src="https://img.shields.io/pypi/v/hrequests.svg"> </a> <a href="https://pepy.tech/project/hrequests"> <img alt="PyPI" src="https://pepy.tech/badge/hrequests"> </a> <a href="https://github.com/ambv/black"> <img src="https://img.shields.io/badge/code%20style-black-black.svg"> </a> <a href="https://github.com/PyCQA/isort"> <img src="https://img.shields.io/badge/imports-isort-yellow.svg"> </a> </p> Hrequests (human requests) is a simple, configurable, feature-rich, replacement for the Python requests library. </h4> ### ✨ Features - Seamless transition between HTTP and headless browsing 💻 - Integrated fast HTML parser 🚀 - High performance concurrency with gevent (*without monkey-patching!*) 🚀 - Replication of browser TLS fingerprints 🚀 - JavaScript rendering 🚀 - Supports HTTP/2 🚀 - Realistic browser header generation 🚀 - JSON serializing up to 10x faster than the standard library 🚀 ### 💻 Browser crawling - Simple & uncomplicated browser automation - Human-like cursor movement and typing - Chrome extension support - Full page screenshots - Headless and headful support - No CORS restrictions ### ⚡ More - High performance ✨ - Minimal dependence on the python standard libraries - Written with type safety - 100% threadsafe ❤️ --- # Installation Install via pip: ```bash pip install -U hrequests python -m playwright install chromium ``` Other depedencies will be downloaded on the first import: ```py >>> import hrequests ``` --- # Documentation 1. [Simple Usage](https://github.com/daijro/hrequests#simple-usage) 2. 
[Sessions](https://github.com/daijro/hrequests#sessions) 3. [Concurrent & Lazy Requests](https://github.com/daijro/hrequests#concurrent--lazy-requests) 4. [HTML Parsing](https://github.com/daijro/hrequests#html-parsing) 5. [Browser Automation](https://github.com/daijro/hrequests#browser-automation) <hr width=50> ## Simple Usage Here is an example of a simple `get` request: ```py >>> resp = hrequests.get('https://www.google.com/') ``` Requests are sent through [bogdanfinn's tls-client](https://github.com/bogdanfinn/tls-client) to spoof the TLS client fingerprint. This is done automatically, and is completely transparent to the user. Other request methods include `post`, `put`, `delete`, `head`, `options`, and `patch`. The `Response` object is a near 1:1 replica of the `requests.Response` object, with some additional attributes. <details> <summary>Parameters</summary> ``` Parameters: url (str): URL to send request to data (Union[str, bytes, bytearray, dict], optional): Data to send to request. Defaults to None. files (Dict[str, Union[BufferedReader, tuple]], optional): Data to send to request. Defaults to None. headers (dict, optional): Dictionary of HTTP headers to send with the request. Defaults to None. params (dict, optional): Dictionary of URL parameters to append to the URL. Defaults to None. cookies (Union[RequestsCookieJar, dict, list], optional): Dict or CookieJar to send. Defaults to None. json (dict, optional): Json to send in the request body. Defaults to None. allow_redirects (bool, optional): Allow request to redirect. Defaults to True. history (bool, optional): Remember request history. Defaults to False. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. timeout (int, optional): Timeout in seconds. Defaults to 30. proxies (dict, optional): Dictionary of proxies. Defaults to None. no_pause (bool, optional): Run the request in the background. Defaults to False. 
<Additionally includes all parameters from `hrequests.Session` if a session was not specified> Returns: hrequests.response.Response: Response object ``` </details> ### Properties Get the response url: ```py >>> resp.url: str 'https://www.google.com/' ``` Check if the request was successful: ```py >>> resp.status_code: int 200 >>> resp.reason: str 'OK' >>> resp.ok: bool True >>> bool(resp) True ``` Getting the response body: ```py >>> resp.text: str '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta charset="UTF-8"><meta content="origin" name="referrer"><m...' >>> resp.content: Union[bytes, str] '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta charset="UTF-8"><meta content="origin" name="referrer"><m...' ``` Parse the response body as JSON: ```py >>> resp.json(): Union[dict, list] {'somedata': True} ``` Get the elapsed time of the request: ```py >>> resp.elapsed: datetime.timedelta datetime.timedelta(microseconds=77768) ``` Get the response cookies: ```py >>> resp.cookies: RequestsCookieJar <RequestsCookieJar[Cookie(version=0, name='1P_JAR', value='2023-07-05-20', port=None, port_specified=False, domain='.google.com', domain_specified=True... ``` Get the response headers: ```py >>> resp.headers: CaseInsensitiveDict {'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000', 'Cache-Control': 'private, max-age=0', 'Content-Encoding': 'br', 'Content-Length': '51288', 'Content-Security-Policy-Report-Only': "object-src 'none';base-uri 'se ``` <hr width=50> ## Sessions Creating a new Chrome Session object: ```py >>> session = hrequests.Session() # version randomized by default >>> session = hrequests.Session('chrome', version=112) ``` <details> <summary>Parameters</summary> ``` Parameters: browser (Literal['firefox', 'chrome', 'opera'], optional): Browser to use. Default is 'chrome'. version (int, optional): Version of the browser to use. Browser must be specified. 
Default is randomized. os (Literal['win', 'mac', 'lin'], optional): OS to use in header. Default is randomized. headers (dict, optional): Dictionary of HTTP headers to send with the request. Default is generated from `browser` and `os`. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. timeout (int, optional): Default timeout in seconds. Defaults to 30. ja3_string (str, optional): JA3 string. Defaults to None. h2_settings (dict, optional): HTTP/2 settings. Defaults to None. additional_decode (str, optional): Additional decode. Defaults to None. pseudo_header_order (list, optional): Pseudo header order. Defaults to None. priority_frames (list, optional): Priority frames. Defaults to None. header_order (list, optional): Header order. Defaults to None. force_http1 (bool, optional): Force HTTP/1. Defaults to False. catch_panics (bool, optional): Catch panics. Defaults to False. debug (bool, optional): Debug mode. Defaults to False. ``` </details> Browsers can also be created through the `firefox`, `chrome`, and `opera` shortcuts: ```py >>> session = hrequests.firefox.Session() >>> session = hrequests.chrome.Session() >>> session = hrequests.opera.Session() ``` <details> <summary>Parameters</summary> ``` Parameters: version (int, optional): Version of the browser to use. Browser must be specified. Default is randomized. os (Literal['win', 'mac', 'lin'], optional): OS to use in header. Default is randomized. headers (dict, optional): Dictionary of HTTP headers to send with the request. Default is generated from `browser` and `os`. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. ja3_string (str, optional): JA3 string. Defaults to None. h2_settings (dict, optional): HTTP/2 settings. Defaults to None. additional_decode (str, optional): Additional decode. Defaults to None. pseudo_header_order (list, optional): Pseudo header order. Defaults to None. priority_frames (list, optional): Priority frames. 
Defaults to None. header_order (list, optional): Header order. Defaults to None. force_http1 (bool, optional): Force HTTP/1. Defaults to False. catch_panics (bool, optional): Catch panics. Defaults to False. debug (bool, optional): Debug mode. Defaults to False. ``` </details> `os` can be `'win'`, `'mac'`, or `'lin'`. Default is randomized. ```py >>> session = hrequests.firefox.Session(os='mac') ``` This will automatically generate headers based on the browser name and OS: ```py >>> session.headers {'Accept': '*/*', 'Connection': 'keep-alive', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4; rv:60.2.2) Gecko/20100101 Firefox/60.2.2', 'Accept-Encoding': 'gzip, deflate, br', 'Pragma': 'no-cache'} ``` <details> <summary>Why is the browser version in the header different than the TLS browser version?</summary> Website bot detection systems typically do not correlate the TLS fingerprint browser version with the browser header. By adding more randomization to our headers, we can make our requests appear to come from a larger number of clients. This makes it harder for websites to identify and block our requests based on a consistent browser version. </details> ### Properties Here is a simple get request. This is a wrapper around `hrequests.get`. The only difference is that the session cookies are updated with each request. Creating a session is recommended for making multiple requests to the same domain. ```py >>> resp = session.get('https://www.google.com/') ``` Session cookies update with each request: ```py >>> session.cookies: RequestsCookieJar <RequestsCookieJar[Cookie(version=0, name='1P_JAR', value='2023-07-05-20', port=None, port_specified=False, domain='.google.com', domain_specified=True...
``` Regenerate headers for a different OS: ```py >>> session.os = 'win' >>> session.headers: CaseInsensitiveDict {'Accept': '*/*', 'Connection': 'keep-alive', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:66.0.3) Gecko/20100101 Firefox/66.0.3', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US;q=0.5,en;q=0.3', 'Cache-Control': 'max-age=0', 'DNT': '1', 'Upgrade-Insecure-Requests': '1', 'Pragma': 'no-cache'} ``` ### Closing Sessions Sessions can also be closed to free memory: ```py >>> session.close() ``` Alternatively, sessions can be used as context managers: ```py with hrequests.Session() as session: resp = session.get('https://www.google.com/') print(resp) ``` <hr width=50> ## Concurrent & Lazy Requests ### "Lazy" Requests Adding the `no_pause=True` keyword argument will return a `LazyTLSRequest` object. This sends the request immediately, but doesn't block for the response until an attribute of the response is accessed. ```py resp1 = hrequests.get('https://www.google.com/', no_pause=True) resp2 = hrequests.get('https://www.google.com/', no_pause=True) # resp1 and resp2 are sent concurrently print('Resp 1:', resp1.reason) # will pause for resp1 to finish, if it hasn't already print('Resp 2:', resp2.reason) # will pause for resp2 to finish, if it hasn't already ``` This is useful for sending multiple requests concurrently, but only waiting for the response when it is needed. Note that `no_pause` uses gevent as its backend. Use `no_pause_threadsafe` when running across multiple threads. ### Grequests-style Async Requests The method `async_get` will create an unsent request. <details> <summary>Parameters</summary> ``` Parameters: url (str): URL to send request to data (Union[str, bytes, bytearray, dict], optional): Data to send to request. Defaults to None. files (Dict[str, Union[BufferedReader, tuple]], optional): Data to send to request. Defaults to None.
headers (dict, optional): Dictionary of HTTP headers to send with the request. Defaults to None. params (dict, optional): Dictionary of URL parameters to append to the URL. Defaults to None. cookies (Union[RequestsCookieJar, dict, list], optional): Dict or CookieJar to send. Defaults to None. json (dict, optional): Json to send in the request body. Defaults to None. allow_redirects (bool, optional): Allow request to redirect. Defaults to True. history (bool, optional): Remember request history. Defaults to False. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. timeout (int, optional): Timeout in seconds. Defaults to 30. proxies (dict, optional): Dictionary of proxies. Defaults to None. no_pause (bool, optional): Run the request in the background. Defaults to False. <Additionally includes all parameters from `hrequests.Session` if a session was not specified> Returns: hrequests.response.Response: Response object ``` </details> Async requests are evaluated on `hrequests.map`, `hrequests.imap`, or `hrequests.imap_enum`. This functionality is similar to [grequests](https://github.com/spyoungtech/grequests). Unlike grequests, [monkey patching](https://www.gevent.org/api/gevent.monkey.html) is not required because this does not rely on the standard Python SSL library. Create a set of unsent Requests: ```py reqs = [ hrequests.async_get('https://www.google.com/'), hrequests.async_get('https://www.duckduckgo.com/'), hrequests.async_get('https://www.yahoo.com/') ] ``` #### map Send them all at the same time using map: ```py >>> hrequests.map(reqs, size=3) [<Response [200]>, <Response [200]>, <Response [200]>] ``` <details> <summary>Parameters</summary> ``` Concurrently converts a list of Requests to Responses. Parameters: requests - a collection of Request objects. size - Specifies the number of requests to make at a time. If None, no throttling occurs. exception_handler - Callback function, called when exception occurred.
Params: Request, Exception timeout - Gevent joinall timeout in seconds. (Note: unrelated to requests timeout) Returns: A list of Response objects. ``` </details> #### imap `imap` returns a generator that yields responses as they come in: ```py >>> for resp in hrequests.imap(reqs, size=3): ... print(resp) <Response [200]> <Response [200]> <Response [200]> ``` <details> <summary>Parameters</summary> ``` Concurrently converts a generator object of Requests to a generator of Responses. Parameters: requests - a generator or sequence of Request objects. size - Specifies the number of requests to make at a time. default is 2 exception_handler - Callback function, called when exception occurred. Params: Request, Exception Yields: Response objects. ``` </details> `imap_enum` returns a generator that yields a tuple of `(index, response)` as they come in. The `index` is the index of the request in the original list: ```py >>> for index, resp in hrequests.imap_enum(reqs, size=3): ... print(index, resp) (1, <Response [200]>) (0, <Response [200]>) (2, <Response [200]>) ``` <details> <summary>Parameters</summary> ``` Like imap, but yields tuple of original request index and response object Unlike imap, failed results and responses from exception handlers that return None are not ignored. Instead, a tuple of (index, None) is yielded. Responses are still in arbitrary order. Parameters: requests - a sequence of Request objects. size - Specifies the number of requests to make at a time. default is 2 exception_handler - Callback function, called when exception occurred. Params: Request, Exception Yields: (index, Response) tuples. ``` </details> #### Exception Handling To handle timeouts or any other exception during the connection of the request, you can add an optional exception handler that will be called with the request and exception inside the main thread. ```py >>> def exception_handler(request, exception): ... return f'Response failed: {exception}' >>> bad_reqs = [ ... 
hrequests.async_get('http://httpbin.org/delay/5', timeout=1), ... hrequests.async_get('http://fakedomain/'), ... hrequests.async_get('http://example.com/'), ... ] >>> hrequests.map(bad_reqs, size=3, exception_handler=exception_handler) ['Response failed: Connection error', 'Response failed: Connection error', <Response [200]>] ``` The value returned by the exception handler will be used in place of the response in the result list. <hr width=50> ## HTML Parsing HTML scraping uses PyQuery, which is ~7x faster than bs4. This functionality is based on [requests-html](https://github.com/psf/requests-html). | Library | Time (1e5 trials) | | --- | --- | | BeautifulSoup4 | 52.6 | | PyQuery | 7.5 | The HTML parser can be accessed through the `html` attribute of the response object: ```py >>> resp = session.get('https://python.org/') >>> resp.html <HTML url='https://www.python.org/'> ``` ### Parsing page Grab a list of all links on the page, as-is (anchors excluded): ```py >>> resp.html.links {'//docs.python.org/3/tutorial/', '/about/apps/', 'https://github.com/python/pythondotorg/issues', '/accounts/login/', '/dev/peps/', '/about/legal/',... ``` Grab a list of all links on the page, in absolute form (anchors excluded): ```py >>> resp.html.absolute_links {'https://github.com/python/pythondotorg/issues', 'https://docs.python.org/3/tutorial/', 'https://www.python.org/about/success/', 'http://feedproxy.g... ``` Search for text on the page: ```py >>> resp.html.search('Python is a {} language')[0] programming ``` ### Selecting elements Select an element using a CSS Selector: ```py >>> about = resp.html.find('#about') ``` <details> <summary>Parameters</summary> ``` Given a CSS Selector, returns a list of :class:`Element <Element>` objects or a single one. Parameters: selector: CSS Selector to use. clean: Whether or not to sanitize the found HTML of ``<script>`` and ``<style>`` tags. containing: If specified, only return elements that contain the provided text.
first: Whether or not to return just the first result. _encoding: The encoding format. Returns: A list of :class:`Element <Element>` objects or a single one. Example CSS Selectors: - ``a`` - ``a.someClass`` - ``a#someID`` - ``a[target=_blank]`` See W3School's `CSS Selectors Reference <https://www.w3schools.com/cssref/css_selectors.asp>`_ for more details. If ``first`` is ``True``, only returns the first :class:`Element <Element>` found. ``` </details> XPath is also supported: ```py >>> resp.html.xpath('/html/body/div[1]/a') [<Element 'a' class=('px-2', 'py-4', 'show-on-focus', 'js-skip-to-content') href='#start-of-content' tabindex='1'>] ``` <details> <summary>Parameters</summary> ``` Given an XPath selector, returns a list of Element objects or a single one. Parameters: selector (str): XPath Selector to use. clean (bool, optional): Whether or not to sanitize the found HTML of <script> and <style> tags. Defaults to False. first (bool, optional): Whether or not to return just the first result. Defaults to False. _encoding (str, optional): The encoding format. Defaults to None. Returns: _XPath: A list of Element objects or a single one. If a sub-selector is specified (e.g. //a/@href), a simple list of results is returned. See W3School's XPath Examples for more details. If first is True, only returns the first Element found.
``` </details> ### Introspecting elements Grab an Element's text contents: ```py >>> print(about.text) About Applications Quotes Getting Started Help Python Brochure ``` Getting an Element's attributes: ```py >>> about.attrs {'id': 'about', 'class': ('tier-1', 'element-1'), 'aria-haspopup': 'true'} ``` Get an Element's raw HTML: ```py >>> about.html '<li aria-haspopup="true" class="tier-1 element-1 " id="about">\n<a class="" href="/about/" title="">About</a>\n<ul aria-hidden="true" class="subnav menu" role="menu">\n<li class="tier-2 element-1" role="treeitem"><a href="/about/apps/" title="">Applications</a></li>\n<li class="tier-2 element-2" role="treeitem"><a href="/about/quotes/" title="">Quotes</a></li>\n<li class="tier-2 element-3" role="treeitem"><a href="/about/gettingstarted/" title="">Getting Started</a></li>\n<li class="tier-2 element-4" role="treeitem"><a href="/about/help/" title="">Help</a></li>\n<li class="tier-2 element-5" role="treeitem"><a href="http://brochure.getpython.info/" title="">Python Brochure</a></li>\n</ul>\n</li>' ``` Select Elements within Elements: ```py >>> about.find_all('a') [<Element 'a' href='/about/' title='' class=''>, <Element 'a' href='/about/apps/' title=''>, <Element 'a' href='/about/quotes/' title=''>, <Element 'a' href='/about/gettingstarted/' title=''>, <Element 'a' href='/about/help/' title=''>, <Element 'a' href='http://brochure.getpython.info/' title=''>] >>> about.find('a') <Element 'a' href='/about/' title='' class=''> ``` Search for links within an element: ```py >>> about.absolute_links {'http://brochure.getpython.info/', 'https://www.python.org/about/gettingstarted/', 'https://www.python.org/about/', 'https://www.python.org/about/quotes/', 'https://www.python.org/about/help/', 'https://www.python.org/about/apps/'} ``` <hr width=50> ## Browser Automation You can spawn a `BrowserSession` instance by calling it: ```py >>> page = hrequests.BrowserSession() # headless=True by default ``` <details> 
<summary>Parameters</summary> ``` Parameters: headless (bool, optional): Whether to run the browser in headless mode. Defaults to True. session (hrequests.session.TLSSession, optional): Session to use for headers, cookies, etc. resp (hrequests.response.Response, optional): Response to update with cookies, headers, etc. proxy_ip (str, optional): Proxy to use for the browser. Example: 123.123.123 mock_human (bool, optional): Whether to emulate human behavior. Defaults to False. browser (Literal['firefox', 'chrome', 'opera'], optional): Generate useragent headers for a specific browser os (Literal['win', 'mac', 'lin'], optional): Generate headers for a specific OS extensions (Union[str, Iterable[str]], optional): Path to a folder of unpacked extensions, or a list of paths to unpacked extensions ``` </details> `BrowserSession` is entirely safe to use across threads. ### Render an existing Response Responses have a `.render()` method. This will render the contents of the response in a browser page. Once the page is closed, the Response content and the Response's session cookies will be updated. **Example - submitting a login form:** ```py >>> resp = session.get('https://www.somewebsite.com/') >>> with resp.render(mock_human=True) as page: ... page.type('.input#username', 'myuser') ... page.type('.input#password', 'p4ssw0rd') ... page.click('#submit') # `session` & `resp` now have updated cookies, content, etc. ``` <details> <summary><strong>Or, without a context manager</strong></summary> ```py >>> resp = session.get('https://www.somewebsite.com/') >>> page = resp.render(mock_human=True) >>> page.type('.input#username', 'myuser') >>> page.type('.input#password', 'p4ssw0rd') >>> page.click('#submit') >>> page.close() # must close the page when done! ``` </details> The `mock_human` parameter will emulate human-like behavior. This includes easing and randomizing mouse movements, and randomizing typing speed. 
This functionality is based on [botright](https://github.com/Vinyzu/botright/). <details> <summary>Parameters</summary> ``` Parameters: headless (bool, optional): Whether to run the browser in headless mode. Defaults to False. mock_human (bool, optional): Whether to emulate human behavior. Defaults to False. extensions (Union[str, Iterable[str]], optional): Path to a folder of unpacked extensions, or a list of paths to unpacked extensions ``` </details> ### Properties Cookies are inherited from the session: ```py >>> page.cookies: RequestsCookieJar # cookies are inherited from the session <RequestsCookieJar[Cookie(version=0, name='1P_JAR', value='2023-07-05-20', port=None, port_specified=False, domain='.somewebsite.com', domain_specified=True... ``` ### Pulling page data Get current page url: ```py >>> page.url: str https://www.somewebsite.com/ ``` Get page content: ```py >>> page.text: str '<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpag' >>> page.content: bytes b'<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head><meta content="Search the world\'s information, including webpag' ``` Parsing HTML from the page content: ```py >>> page.html.find_all('a') [<Element 'a' href='/about/' title='' class=''>, <Element 'a' href='/about/apps/' title=''>, ...] 
>>> page.html.find('a') <Element 'a' href='/about/' title='' class=''> ``` Take a screenshot of the page: ```py page.screenshot('screenshot.png') ``` <details> <summary>Parameters</summary> ``` Parameters: path (str): Path to save screenshot to full_page (bool): Whether to take a screenshot of the full scrollable page ``` </details> ### Navigate the browser Navigate to a URL: ```py >>> page.url = 'https://bing.com' # or use goto >>> page.goto('https://bing.com') ``` Navigate through page history: ```py >>> page.back() >>> page.forward() ``` ### Controlling elements Click an element: ```py >>> page.click('#my-button') # or through the html parser >>> page.html.find('#my-button').click() ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): CSS selector to click. button (Literal['left', 'right', 'middle'], optional): Mouse button to click. Defaults to 'left'. count (int, optional): Number of clicks. Defaults to 1. timeout (float, optional): Timeout in seconds. Defaults to 30. wait_after (bool, optional): Wait for a page event before continuing. Defaults to True. ``` </details> Hover over an element: ```py >>> page.hover('.dropbtn') # or through the html parser >>> page.html.find('.dropbtn').hover() ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): CSS selector to hover over modifiers (List[Literal['Alt', 'Control', 'Meta', 'Shift']], optional): Modifier keys to press. Defaults to None. timeout (float, optional): Timeout in seconds. Defaults to 90. ``` </details> Type text into an element: ```py >>> page.type('#my-input', 'Hello world!') # or through the html parser >>> page.html.find('#my-input').type('Hello world!') ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): CSS selector to type in text (str): Text to type delay (int, optional): Delay between keypresses in ms. On mock_human, this is randomized by 50%. Defaults to 50.
timeout (float, optional): Timeout in seconds. Defaults to 30. ``` </details> Drag and drop an element: ```py >>> page.dragTo('#source-selector', '#target-selector') # or through the html parser >>> page.html.find('#source-selector').dragTo('#target-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: source (str): Source to drag from target (str): Target to drop to timeout (float, optional): Timeout in seconds. Defaults to 30. wait_after (bool, optional): Wait for a page event before continuing. Defaults to False. check (bool, optional): Check if an element is draggable before running. Defaults to False. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> ### Check page elements Check if a selector is visible and enabled: ```py >>> page.isVisible('#my-selector'): bool >>> page.isEnabled('#my-selector'): bool ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): Selector to check ``` </details> Evaluate and return a script: ```py >>> page.evaluate('selector => document.querySelector(selector).checked', '#my-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: script (str): Javascript to evaluate in the page arg (str, optional): Argument to pass into the javascript function ``` </details> ### Awaiting events ```py >>> page.awaitNavigation() ``` <details> <summary>Parameters</summary> ``` Parameters: timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for a script or function to return a truthy value: ```py >>> page.awaitScript('selector => document.querySelector(selector).value === 100', '#progress') ``` <details> <summary>Parameters</summary> ``` Parameters: script (str): Script to evaluate arg (str, optional): Argument to pass to script timeout (float, optional): Timeout in seconds. Defaults to 30. 
Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for the URL to match: ```py >>> page.awaitUrl(re.compile(r'https?://www\.google\.com/.*'), timeout=10) ``` <details> <summary>Parameters</summary> ``` Parameters: url (Union[str, Pattern[str], Callable[[str], bool]]) - URL to match for timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for an element to exist on the page: ```py >>> page.awaitSelector('#my-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): Selector to wait for timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> Wait for an element to be enabled: ```py >>> page.awaitEnabled('#my-selector') ``` <details> <summary>Parameters</summary> ``` Parameters: selector (str): Selector to wait for timeout (float, optional): Timeout in seconds. Defaults to 30. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached ``` </details> ### Adding Chrome extensions Chrome extensions can be easily imported into a browser session. Some potentially useful extensions include: - [hektCaptcha](https://github.com/Wikidepia/hektCaptcha-extension) - Hcaptcha solver - [uBlock Origin](https://github.com/gorhill/uBlock) - Ad & popup blocker - [FastForward](https://fastforward.team/) - Bypass & skip link redirects Extensions are added with the `extensions` parameter. - This can be a list of absolute paths to unpacked extensions: ```py with resp.render(extensions=['C:\\extensions\\hektcaptcha', 'C:\\extensions\\ublockorigin']): ``` - Or a folder containing the unpacked extensions: ```py with resp.render(extensions='C:\\extensions'): ``` Note that these need to be *unpacked* extensions.
You can unpack a `.crx` file by changing the file extension to `.zip` and extracting the contents. Here is a usage example with a captcha solver: ```py >>> with hrequests.render('https://accounts.hcaptcha.com/demo', extensions=['C:\\extensions\\hektcaptcha']) as page: ... page.awaitSelector('.hcaptcha-success') # wait for captcha to finish ... page.click('input[type=submit]') ``` ### Requests & Responses Requests can also be sent within browser sessions. These operate the same as the standard `hrequests.request`, and will use the browser's cookies and headers. The `BrowserSession` cookies will be updated with each request. This returns a normal `Response` object: ```py >>> resp = page.get('https://duckduckgo.com') ``` <details> <summary>Parameters</summary> ``` Parameters: url (str): URL to send request to params (dict, optional): Dictionary of URL parameters to append to the URL. Defaults to None. data (Union[str, dict], optional): Data to send to request. Defaults to None. headers (dict, optional): Dictionary of HTTP headers to send with the request. Defaults to None. form (dict, optional): Form data to send with the request. Defaults to None. multipart (dict, optional): Multipart data to send with the request. Defaults to None. timeout (float, optional): Timeout in seconds. Defaults to 30. verify (bool, optional): Verify the server's TLS certificate. Defaults to True. max_redirects (int, optional): Maximum number of redirects to follow. Defaults to None. Throws: hrequests.exceptions.BrowserTimeoutException: If timeout is reached Returns: hrequests.response.Response: Response object ``` </details> Other methods include `post`, `put`, `delete`, `head`, and `patch`. ### Closing the page The `BrowserSession` object must be closed when finished. This will close the browser, update the response data, and merge new cookies with the session cookies. ```py >>> page.close() ``` Note that this is automatically done when using a context manager.
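The automatic cleanup relies on Python's context-manager protocol: `__exit__` runs whether the block finishes normally or raises, so `close()` is always called. A minimal sketch of the pattern with a hypothetical stand-in class (not hrequests' actual implementation):

```python
class Page:
    """Toy stand-in for a browser page, showing why `with` closes it automatically."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # runs on normal exit *and* on exceptions, so close() is guaranteed
        self.close()
        return False  # don't suppress exceptions

with Page() as page:
    assert not page.closed  # still open inside the block
assert page.closed  # closed automatically on exit
```

Using `with resp.render(...)` as shown earlier follows this same pattern, which is why no explicit `page.close()` is needed there.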
Session cookies are updated: ```py >>> session.cookies: RequestsCookieJar <RequestsCookieJar[Cookie(version=0, name='MUID', value='123456789', port=None, port_specified=False, domain='.bing.com', domain_specified=True, domain_initial_dot=True... ``` Response data is updated: ```py >>> resp.url: str 'https://www.bing.com/?toWww=1&redig=823778234657823652376438' >>> resp.content: Union[bytes, str] '<!DOCTYPE html><html lang="en" dir="ltr"><head><meta name="theme-color" content="#4F4F4F"><meta name="description" content="Bing helps you turn inform... ``` #### Other ways to create a Browser Session You can use `.render` to spawn a `BrowserSession` object directly from a url: ```py # Using a Session: >>> page = session.render('https://google.com') # Or without a session at all: >>> page = hrequests.render('https://google.com') ``` Make sure to close all `BrowserSession` objects when done! ```py >>> page.close() ``` ---
pikho/ppromptor
https://github.com/pikho/ppromptor
Prompt-Promptor is a python library for automatically generating prompts using LLMs
# Prompt-Promptor: An Autonomous Agent Framework for Prompt Engineering Prompt-Promptor (or ppromptor for short) is a Python library designed to automatically generate and improve prompts for LLMs. It draws inspiration from autonomous agents like AutoGPT and consists of three agents: Proposer, Evaluator, and Analyzer. These agents work together with human experts to continuously improve the generated prompts. ## 🚀 Features: - 🤖 The use of LLMs to prompt themselves, given a few samples. - 💪 Guidance for OSS LLMs (e.g., LLaMA) by more powerful LLMs (e.g., GPT-4) - 📈 Continuous improvement. - 👨‍👨‍👧‍👦 Collaboration with human experts. - 💼 Experiment management for prompt engineering. - 🖼 Web GUI interface. - 🏳️‍🌈 Open Source. ## Warning - This project is currently in its early stage, and it is anticipated that there will be major design changes in the future. - The main function utilizes an infinite loop to enhance the generation of prompts. If you opt for OpenAI's ChatGPT as the Target/Analysis LLM, kindly ensure that you set a usage limit. ## Concept ![Concept](https://github.com/pikho/ppromptor/blob/main/doc/images/concept.png?raw=true) A more detailed class diagram can be found in the [doc](https://github.com/pikho/ppromptor/tree/main/doc) ## Installation ### From GitHub 1. Install Package ``` pip install ppromptor --upgrade ``` 2. Clone Repository from GitHub ``` git clone https://github.com/pikho/ppromptor.git ``` 3. Start Web UI ``` cd ppromptor streamlit run ui/app.py ``` ### Running a Local Model (WizardLM) 1. Install Required Packages ``` pip install -r requirements_local_model.txt ``` 2. Test that WizardLM can run correctly ``` cd <path_to_ppromptor>/ppromptor/llms python wizardlm.py ``` ## Usage 1. Start the Web App ``` cd <path_to_ppromptor> streamlit run ui/app.py ``` 2. Load the Demo Project Load `examples/antonyms.db` (default) for demo purposes. This demonstrates how to use ChatGPT to guide WizardLM to generate antonyms for given inputs. 3.
Configuration In the Configuration tab, set `Target LLM` as `wizardlm` if you can run inference with this model locally. Alternatively, set both `Target LLM` and `Analysis LLM` to `chatgpt`. If chatgpt is used, please provide your OpenAI API key. 4. Load the Dataset The demo project comes with 5 records already loaded. You can optionally add your own dataset. 5. Start the Workflow Press the `Start` button to activate the workflow. 6. Prompt Candidates Generated prompts can be found in the `Prompt Candidates` tab. Users can modify a generated prompt by selecting a single candidate, editing the prompt, and then clicking `Create Prompt`. This new prompt will be evaluated by the Evaluator agent and then continuously improved by the Analyzer agent. By selecting 2 prompts, we can compare them side by side. ![Compare Prompts](https://github.com/pikho/ppromptor/blob/main/doc/images/cmp_candidates-1.png?raw=true) ![Compare Prompts](https://github.com/pikho/ppromptor/blob/main/doc/images/cmp_candidates-2.png?raw=true) ## Contribution We welcome all kinds of contributions, including new feature requests, bug fixes, new feature implementations, examples, and documentation updates. If you have a specific request, please use the "Issues" section. For other contributions, simply create a pull request (PR). Your participation is highly valued in improving our project. Thank you!
zhenruyan/WSL-libre-linux-kernel
https://github.com/zhenruyan/WSL-libre-linux-kernel
Installing a 100% libre (free) Linux kernel for WSL. It is possible to celebrate freedom within a cell. Replace the WSL kernel with a libre kernel!!!
![Untitled](https://github.com/zhenruyan/WSL-libre-linux-kernel/assets/9253251/64554eeb-1075-43b6-aba4-f1eb412d1143) # Replace the WSL kernel with a libre Linux kernel, 100% libre!! [ENGLISH](https://github.com/zhenruyan/WSL-libre-linux-kernel/blob/master/README.md)/[简体中文](https://github.com/zhenruyan/WSL-libre-linux-kernel/blob/master/README_CN.md) [![General build kernel](https://github.com/zhenruyan/WSL-libre-linux-kernel/actions/workflows/blank.yml/badge.svg)](https://github.com/zhenruyan/WSL-libre-linux-kernel/actions/workflows/blank.yml) [![Coverage Status](https://coveralls.io/repos/github/zhenruyan/WSL-libre-linux-kernel/badge.svg?branch=master)](https://coveralls.io/github/zhenruyan/WSL-libre-linux-kernel?branch=master) [![TODOs](https://badgen.net/https/api.tickgit.com/badgen/github.com/zhenruyan/WSL-libre-linux-kernel)](https://www.tickgit.com/browse?repo=github.com/zhenruyan/WSL-libre-linux-kernel) IRC: `#WSL-libre-linux-kernel` on [Libera Chat](https://libera.chat), https://web.libera.chat/#WSL-libre-linux-kernel ## You can also enjoy a completely free, 100% libre kernel on Windows * Optimized async I/O * Optimized scheduler * Optimized memory footprint * Enhancements for real-time applications * NTFS module included. * exFAT module included. * F2FS module included. * Btrfs module included. * KVM enabled by default. * Better support for SSDs and other devices. ## todo - [x] Automatic synchronization of source code - [ ] Automatic compilation based on tags - [ ] Installation script - [ ] Publish to Scoop, winget, Chocolatey, and other Windows package managers.
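Once a replacement kernel image is built or downloaded, WSL2 is pointed at it through the `[wsl2]` section of `.wslconfig` in your Windows user profile. A sketch (the kernel path below is a placeholder; note the escaped backslashes):

```ini
# %UserProfile%\.wslconfig
[wsl2]
kernel=C:\\path\\to\\bzImage
```

Run `wsl --shutdown` afterwards so the next WSL start picks up the new kernel.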
![logo](https://www.fsfla.org/ikiwiki/selibre/linux-libre/100gnu+freedo.png) [install kernel](https://github.com/zhenruyan/WSL-libre-linux-kernel/wiki/install-kernel) [build kernel](https://github.com/zhenruyan/WSL-libre-linux-kernel/wiki/config-and-build-kernel) [Microsoft WSL2 documentation](https://learn.microsoft.com/zh-cn/windows/wsl/wsl-config) Exciting and delightful GNU Linux-libre, Free as in Freedo ![librelinux](https://www.fsfla.org/ikiwiki/selibre/linux-libre/stux.jpg) > The freedom to run the program as you wish, for any purpose (freedom 0). > The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this. > The freedom to redistribute copies so you can help others (freedom 2). > The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this. ![image](https://github.com/zhenruyan/WSL-libre-linux-kernel/assets/9253251/f7f8de26-7761-453f-90de-f6f44b9d7c63) ## All binaries were compiled on an E5-2689 engraved with a logo celebrating the 40th anniversary of GNU. Full of libre faith! ![image](https://github.com/zhenruyan/WSL-libre-linux-kernel/assets/9253251/7de8fa88-8e5a-4f26-9a14-1df6126552d2) ## Celebrate GNU's 40th anniversary with us; we hope to have 40 release announcements, discussions, or presentations in one day! https://www.gnu.org/gnu40/ ![https://www.gnu.org/gnu40/](https://www.gnu.org/gnu40/GNU40_HM_banner.png) # Thanks * [https://gnu.org](https://gnu.org) * [https://github.com/microsoft/WSL2-Linux-Kernel](https://github.com/microsoft/WSL2-Linux-Kernel) * [https://www.fsfla.org/ikiwiki/selibre/linux-libre/](https://www.fsfla.org/ikiwiki/selibre/linux-libre/)
terror/edmv
https://github.com/terror/edmv
Bulk rename files with your favourite editor
## edmv 📦

[![CI](https://github.com/terror/edmv/actions/workflows/ci.yml/badge.svg)](https://github.com/terror/edmv/actions/workflows/ci.yml) [![crates.io](https://shields.io/crates/v/edmv.svg)](https://crates.io/crates/edmv)

**edmv** is a tool that lets you bulk rename files fast using your preferred text editor.

### Demo

Below is a short demo showcasing the main functionality of the program:

[![asciicast](https://asciinema.org/a/33OVZX9m1PZcyqYvdqmtvBRRv.svg)](https://asciinema.org/a/33OVZX9m1PZcyqYvdqmtvBRRv)

### Installation

You can install the **edmv** command-line utility via the Rust package manager [cargo](https://doc.rust-lang.org/cargo/):

```bash
cargo install edmv
```

...or you can build it from source:

```bash
git clone https://github.com/terror/edmv
cd edmv
cargo install --path .
```

...or you can download one of the pre-built binaries from the [releases](https://github.com/terror/edmv/releases) page.

### Usage

Below is the output of `edmv --help`:

```
Bulk rename files using your favorite editor

Usage: edmv [OPTIONS] [sources]...

Arguments:
  [sources]...  Paths to edit

Options:
      --editor <EDITOR>  Editor command to use
      --force            Overwrite existing files
      --resolve          Resolve conflicting renames
      --dry-run          Run without making any changes
  -h, --help             Print help
  -V, --version          Print version
```

An option of note is `--resolve`: it applies an intermediate rename to sources via a temporary directory or file, automatically handling conflicts such as overlapping or circular renames.

### Prior Art

**edmv** is a tested and extended re-implementation of the version [Casey](https://github.com/casey) wrote in [Python](https://github.com/casey/edmv); do check it out!
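The general edit-then-rename idea behind tools like this can be sketched in a few lines: list the paths in a temp file, let an editor change the listing, then apply the renames after basic conflict checks. A hedged Python sketch under stated assumptions — this is not edmv's actual code, and `edit_fn` stands in for spawning `$EDITOR` on a temp file:

```python
import os


def bulk_rename(sources, edit_fn, force=False):
    """Rename `sources` to whatever `edit_fn` turns the listing into.

    `edit_fn` is a stand-in for opening the listing in an editor: it takes
    the listing text (one path per line) and returns the edited text.
    """
    text = "\n".join(sources) + "\n"
    edited = edit_fn(text).splitlines()
    if len(edited) != len(sources):
        # a dropped or added line would misalign old/new pairs
        raise ValueError("line count changed; aborting")
    for src, dst in zip(sources, edited):
        if src == dst:
            continue  # unchanged lines are skipped
        if os.path.exists(dst) and not force:
            raise FileExistsError(dst)
        os.rename(src, dst)
```

Note this naive version would fail on circular renames (`a -> b`, `b -> a`); that is exactly the gap the `--resolve` intermediate-rename step described above is meant to fill.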
nmmmnu/geohash
https://github.com/nmmmnu/geohash
geohash implementation in C++
GEOHASH - implementation in C++
===============================

---

### What is geohash?

Geohash is an algorithm for encoding two-dimensional coordinates into a hash.

- [wikipedia]
- [Movable Type Scripts]

### This release

- Modern C++17
- Fast, no allocations
- Neighbour cell optimization

[wikipedia]: https://en.wikipedia.org/wiki/Geohash
[Movable Type Scripts]: https://www.movable-type.co.uk/scripts/geohash.html
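The encoding itself is a binary search over the coordinate ranges: each step halves either the longitude or the latitude interval (alternating, longitude first) and emits one bit, and every 5 bits become one base-32 character. A minimal Python sketch of the algorithm — not this C++ library's API:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)


def geohash_encode(lat, lon, precision=11):
    """Encode (lat, lon) into a geohash string of `precision` characters."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    even = True  # even bit positions refine longitude, odd refine latitude
    while len(bits) < precision * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1); lon_lo = mid
            else:
                bits.append(0); lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1); lat_lo = mid
            else:
                bits.append(0); lat_hi = mid
        even = not even
    # pack every 5 bits into one base-32 character
    chars = []
    for i in range(0, len(bits), 5):
        n = 0
        for b in bits[i:i + 5]:
            n = (n << 1) | b
        chars.append(BASE32[n])
    return "".join(chars)
```

For example, the well-known test point (57.64911, 10.40744) encodes to `u4pruydqqvj`, and neighbouring points share a common prefix — the property the library's neighbour-cell optimization exploits.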
ubergeek77/lemmy-docker-multiarch
https://github.com/ubergeek77/lemmy-docker-multiarch
A build repository for multiarch Docker images for Lemmy
# lemmy-docker-multiarch [![Build Multiarch Images](https://github.com/ubergeek77/lemmy-docker-multiarch/actions/workflows/build-multiarch.yml/badge.svg?branch=main)](https://github.com/ubergeek77/lemmy-docker-multiarch/actions/workflows/build-multiarch.yml) A build repository for multiarch Docker images for Lemmy. Builds [`LemmyNet/lemmy`](https://github.com/LemmyNet/lemmy/) and [`LemmyNet/lemmy-ui`](https://github.com/LemmyNet/lemmy-ui/) for: - x64 (`amd64`) - ARM (`arm/v7`) - ARM64 (`arm64`) I made these because the Lemmy project does not currently support ARM, [and has been deliberating how to create ARM builds since `0.17.4`](https://github.com/LemmyNet/lemmy/issues/3102). The Dockerfiles I use, and the workflow I use to compile these images, are all open source here. [You can see the logs of previous runs on the Actions tab](https://github.com/ubergeek77/lemmy-docker-multiarch/actions/workflows/build-multiarch.yml). When [`LemmyNet/lemmy`](https://github.com/LemmyNet/lemmy/) or [`LemmyNet/lemmy-ui`](https://github.com/LemmyNet/lemmy-ui/) have new tags, my workflow will automatically be launched and those new tags will be built. These images are primarily here so they can be used in my [Lemmy-Easy-Deploy](https://github.com/ubergeek77/Lemmy-Easy-Deploy) project. However, you may use these images manually if you like. They are drop-in replacements for the official Lemmy images. I don't tag `latest`, so you will need to specify a tag to pull. For example, to use `0.18.0`: ``` ghcr.io/ubergeek77/lemmy:0.18.0 ghcr.io/ubergeek77/lemmy-ui:0.18.0 ``` I also build `rc` tags. In general, I will have images for any stable or `rc` tag of the official Lemmy repositories. To see the full list of tags I've built, check the Packages section on this repo, or go to the images directly: - https://github.com/ubergeek77/lemmy-docker-multiarch/pkgs/container/lemmy - https://github.com/ubergeek77/lemmy-docker-multiarch/pkgs/container/lemmy-ui
wibus-wee/icalingua-theme-telegram
https://github.com/wibus-wee/icalingua-theme-telegram
A Telegram-like theme based on icalingua++
<div>
  <a href="https://github.com/wibus-wee/icalingua-theme-telegram">
    <img align="right" width="200" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/563396b0-9211-409a-9136-74a6f3cad037#gh-light-mode-only" />
  </a>
  <a href="https://github.com/wibus-wee/icalingua-theme-telegram">
    <img align="right" width="200" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/115cdb16-88fa-4ba6-9a14-32b9ab669b1b#gh-dark-mode-only" />
  </a>
</div>

# Telegram Theme For Icalingua++

A Telegram-style theme for [Icalingua++](https://github.com/Icalingua-plus-plus/Icalingua-plus-plus).

## Motivation | 动机

I really like the Telegram Desktop UI, but much of the time I cannot access Telegram, and out of regional habit I mostly use QQ instead. The UI of Tencent's QQ NT, however, cannot be freely customized at all; even a successful customization is a hack, and an illegitimate one as far as that software is concerned. So I decided to implement this theme on top of Icalingua++: it can fully reproduce Telegram's UI, and since it is open source, more people can use it.

> In short: **QQ NT is a pile of crap; Icalingua++ is great!**

In fact this is less a theme than an enhanced Icalingua++, because beyond styling it also improves Icalingua++'s features and experience.

## Attentions | 注意事项

- Due to limitations of Icalingua++, I changed the DOM structure of the message list, so click-to-zoom for images is unavailable for now. I may later rewrite the lightbox to restore it. [Issue #16](https://github.com/wibus-wee/icalingua-theme-telegram/issues/16)
- It **forcibly changes** much of the original **DOM structure**, which may break some features. If you find such problems, please file an [Issue](https://github.com/wibus-wee/icalingua-theme-telegram/issues).
- Because we want to enhance the chat features, we may need to launch an extra child process to handle some information. If you **do not trust me / the repository code**, simply do not use this theme.
- In [#32](https://github.com/wibus-wee/icalingua-theme-telegram/pull/32) I implemented manual control over which features start. You can now choose the features to enable in `config.js`, or set `manual` to `false` to enable everything. For more configuration details, see [Config | 配置](#config--配置).

## Features | 特性

- **Base styles.** Port most of Telegram's styling to Icalingua++.
- **Deep modification.** Merge consecutive messages from the same contact into one to reduce screen clutter.
- **Better image messages.** Display image messages in a nicer way.
- **New icons.** Replace the icons with Telegram-style icons.
- **Beautiful modals.** Restyle the modal dialogs to look better.
- **Distinct username colors.** Give each username its own color so different contacts are easier to tell apart.
- **Good motion design.** Remove animations that clash with the theme and add more sensible ones.
- **More styles.** More styles will be added to make Icalingua++ look even more like Telegram.
- **Better developer experience.** Auto-reload styles and pages so developers can iterate on the theme faster.

## Installation | 安装

### Automatic | 自动安装

1. Download the latest release, or the latest build from [CI Release](https://github.com/wibus-wee/icalingua-theme-telegram/releases).
2. Make the script executable: `chmod +x install.sh`
3. Run `./install.sh` in the extracted directory.
4. Restart Icalingua++.

### Manual | 手动安装

#### From CI

1. Download the latest release, or the latest build from [CI Release](https://github.com/wibus-wee/icalingua-theme-telegram/releases).
2. Copy `addon.js`, `style.css`, `main.js`, and `config.js` into Icalingua++'s [data directory](https://github.com/Icalingua-plus-plus/Icalingua-plus-plus#%E9%BB%98%E8%AE%A4%E6%95%B0%E6%8D%AE%E7%9B%AE%E5%BD%95).
3. Restart Icalingua++.

#### From source

1. Clone this repository.
2. Install dependencies: `pnpm install`.
3. Make the script executable: `chmod +x dist/install.sh`.
4. Run `cd dist && ./install.sh`.
5. Restart Icalingua++.

## Enhancements & Feat. | 增强 & 新功能

This part handles elements whose target style cannot be achieved through CSS alone because of the DOM structure. Implemented so far:

- [x] Read the ChatGroup width to turn the ChatGroup aside into a Telegram-style header menu bar.
- [x] Merge consecutive messages from the same contact into one.
- [x] Better image message display (single-image messages only).
- [x] Remove the reply icon; click a message to reply instead.
- [x] Give each username its own color.
- [x] Auto-reload CSS and JS files.
- [x] Manual control over which features start.
- [ ] A brand-new image viewer.
- [ ] A theme auto-updater.
- [ ] Replace the icons with Telegram-style icons.
- [ ] Restyle the modal dialogs.
- [ ] Swipe to reply.

## Config | 配置

In [#32](https://github.com/wibus-wee/icalingua-theme-telegram/pull/32) we introduced a new configuration file, `config.js`, in which you configure the features you want to start. For the definitions of its options, see [types.d.ts](./types.d.ts). A brief overview:

> **Note**
>
> **If you don't know how to write it, learn some JavaScript first, or set `manual` to `false` so that every feature starts.**

### core -- 启动的核心功能

You can find all the core features in [core](./src/core//index.ts); what you fill in is the key of the core feature. For example, the file contains this line:

```ts
"modify-chat-box-interval": modifyChatBoxInterval,
```

If I want to enable this feature, then what goes into `core` in config.js is [`modify-chat-box-interval`], and so on.

### chatbox --- 启动的聊天框修改功能

Similar to [core](#core----启动的核心功能): you can find all the chat-box modification features in [chatbox](./src/functions/index.ts), and what you fill in is the key of the feature.

### Other notes

- You can set `manual` to `false` in `config.js` to start all features.
- When `dev` is `true` and you have enabled `fileChangesListener`, Icalingua++ automatically reloads the window when you modify a JS file, and reloads the CSS when you modify a CSS file.
- When `dev` is `true`, every feature starts, and you cannot turn features off via `manual` or any other means (other than deleting code 🙂).
- After modifying `config.js`, you must restart / reload Icalingua++ for the configuration to take effect.
- If you "accidentally" enter a wrong feature key, rest assured that the feature simply will not start, and an error warning will appear in the console.
- If you despise this kind of **console littering** 🤬, set `log` to `false` to silence the console output.

## Preview

|Light|Dark|
|---|---|
|<img alt="light" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/841d7e5e-5e82-4373-9983-f61903879c86">|<img alt="dark" src="https://github.com/wibus-wee/icalingua-theme-telegram/assets/62133302/e07826bd-99a8-49fb-96b6-c7dad19cf16e">|

## Author

Telegram Theme For Icalingua++ © Wibus, Released under AGPLv3. Created on Jul 1, 2023

> [Personal Website](http://wibus.ren/) · [Blog](https://blog.wibus.ren/) · GitHub [@wibus-wee](https://github.com/wibus-wee/) · Telegram [@wibus✪](https://t.me/wibus_wee)
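The gating rules above (everything starts when `manual` is `false` or `dev` is `true`; unknown keys are skipped with a console warning) can be modeled in a few lines. A hypothetical Python sketch of that logic only — the theme itself is TypeScript and its real code differs:

```python
def resolve_features(config, available):
    """Return the set of feature keys that should start.

    `config` mimics config.js: {"manual": bool, "dev": bool, "core": [keys]}.
    `available` mimics the keys defined in src/core/index.ts.
    """
    # dev mode, or manual turned off: every available feature starts
    if config.get("dev") or not config.get("manual", True):
        return set(available)
    # manual mode: only listed keys start; unknown keys are skipped
    # (the real theme would log a console warning for them)
    return {k for k in config.get("core", []) if k in available}
```

For example, a mistyped key like `"modify-chat-box-intreval"` would simply be filtered out rather than crashing anything, matching the behaviour described in the notes above.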
matfrei/CLIPMasterPrints
https://github.com/matfrei/CLIPMasterPrints
Code for CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution
# CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution

[![Paper](https://img.shields.io/badge/paper-arxiv.2307.03798-B31B1B.svg)](https://arxiv.org/abs/2307.03798)

![alt text](static/demo.gif)

Installation
-------

clipmasterprints builds upon the stable diffusion conda environment and decoder model. To run the code in the repository, you need to download and set up both:

```
mkdir external
cd external
# clone repository
git clone https://github.com/CompVis/stable-diffusion.git
cd stable-diffusion
# check out the correct commit
git checkout 69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc
# create and activate conda env with SD dependencies
conda env create -f environment.yaml
conda activate ldm
# install SD from source into conda env
pip install -e .
# move previously downloaded SD sd-v1-4.ckpt into correct folder
# (Refer to https://github.com/CompVis/ for where to download the checkpoint)
ln -s <path/to/sd-v1-4.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
# return to base dir
cd ../..
```

After all Stable Diffusion dependencies are installed, install the package from source using

```
git clone https://github.com/matfrei/CLIPMasterPrints.git
cd CLIPMasterPrints
pip install -e .
```

Mining and evaluating CLIPMasterPrints
-------

To mine fooling master images, use

```
python train/mine.py --config-path config/<config-name>.yaml
```

where ```<config-name>``` is a placeholder for the desired config file. Use ```cmp_artworks.yaml``` to target artwork captions or ```cmp_imagenet_classes_*.yaml``` to reproduce our experiments on imagenet class captions.
To display some plots for mined images, execute

```
python eval/eval_results.py
```

Authors
-------

Matthias Freiberger <matfr@itu.dk>

Peter Kun <peku@itu.dk>

Anders Sundnes Løvlie <asun@itu.dk>

Sebastian Risi <sebr@itu.dk>

Citation
------

If you use the code for academic or commercial use, please cite the associated paper:

```
@misc{https://doi.org/10.48550/arXiv.2307.03798,
  doi = {10.48550/ARXIV.2307.03798},
  url = {https://arxiv.org/abs/2307.03798},
  author = {Freiberger, Matthias and Kun, Peter and Løvlie, Anders Sundnes and Risi, Sebastian},
  title = {CLIPMasterPrints: Fooling Contrastive Language-Image Pre-training Using Latent Variable Evolution},
  publisher = {arXiv},
  year = {2023},
}
```
sxyazi/yazi
https://github.com/sxyazi/yazi
⚡️ Blazing fast terminal file manager written in Rust, based on async I/O.
## Yazi - ⚡️ Blazing Fast Terminal File Manager Yazi ("duck" in Chinese) is a terminal file manager written in Rust, based on non-blocking async I/O. It aims to provide an efficient, user-friendly, and configurable file management experience. https://github.com/sxyazi/yazi/assets/17523360/740a41f4-3d24-4287-952c-3aec51520a32 ⚠️ Note: Yazi is currently in active development and may be unstable. The API is subject to change without prior notice. ## Installation Before getting started, ensure that the following dependencies are installed on your system: - nerd-fonts (required, for icons) - jq (optional, for JSON preview) - unar (optional, for archive preview) - ffmpegthumbnailer (optional, for video thumbnails) - fd (optional, for file searching) - rg (optional, for file content searching) - fzf (optional, for directory jumping) - zoxide (optional, for directory jumping) ### Arch Linux Install with paru or your favorite AUR helper: ```bash paru -S yazi jq unarchiver ffmpegthumbnailer fd ripgrep fzf zoxide ``` ### macOS Install the dependencies with Homebrew: ```bash brew install jq unar ffmpegthumbnailer fd ripgrep fzf zoxide brew tap homebrew/cask-fonts && brew install --cask font-symbols-only-nerd-font ``` And download the latest release [from here](https://github.com/sxyazi/yazi/releases). 
Or you can install Yazi via cargo:

```bash
cargo install --git https://github.com/sxyazi/yazi.git
```

### Nix

Nix users can install Yazi from [the NUR](https://github.com/nix-community/nur-combined/blob/master/repos/xyenon/pkgs/yazi/default.nix):

```bash
nix-env -iA nur.repos.xyenon.yazi
```

Or add the following to your configuration:

```nix
# configuration.nix
environment.systemPackages = with pkgs; [
  nur.repos.xyenon.yazi
];
```

### Build from source

Execute the following commands to clone the project and build Yazi:

```bash
git clone https://github.com/sxyazi/yazi.git
cd yazi
cargo build --release
```

Then, you can run:

```bash
./target/release/yazi
```

## Usage

```bash
yazi
```

If you want to use your own config, copy the [config folder](https://github.com/sxyazi/yazi/tree/main/config) to `~/.config/yazi`, and modify it as you like.

## Image Preview

| Platform | Protocol | Support |
| ------------- | -------------------------------------------------------------------------------- | --------------------- |
| Kitty | [Terminal graphics protocol](https://sw.kovidgoyal.net/kitty/graphics-protocol/) | ✅ Built-in |
| WezTerm | [Terminal graphics protocol](https://sw.kovidgoyal.net/kitty/graphics-protocol/) | ✅ Built-in |
| Konsole | [Terminal graphics protocol](https://sw.kovidgoyal.net/kitty/graphics-protocol/) | ✅ Built-in |
| iTerm2 | [Inline Images Protocol](https://iterm2.com/documentation-images.html) | ✅ Built-in |
| Hyper | Sixel | ☑️ Überzug++ required |
| foot | Sixel | ☑️ Überzug++ required |
| X11 / Wayland | Window system protocol | ☑️ Überzug++ required |
| Fallback | [Chafa](https://hpjansson.org/chafa/) | ☑️ Überzug++ required |

Yazi automatically selects the appropriate preview method for you, based on the priority from top to bottom. It relies on the `$TERM`, `$TERM_PROGRAM`, and `$XDG_SESSION_TYPE` variables; make sure you don't overwrite them by mistake!
For instance, if your terminal is Alacritty, which doesn't support displaying images itself, but you are running in an X11/Wayland environment, Yazi will automatically use the "Window system protocol" to display images -- this requires you to have [Überzug++](https://github.com/jstkdng/ueberzugpp) installed.

## TODO

- [x] Add an example config for general usage; for now, please see [another repo](https://github.com/sxyazi/dotfiles/tree/main/yazi) of mine instead
- [x] Integration with fzf, zoxide for fast directory navigation
- [x] Integration with fd, rg for fuzzy file searching
- [x] Documentation of commands and options
- [x] Support for Überzug++ for image previews in X11/Wayland environments
- [ ] Batch renaming support

## License

Yazi is MIT licensed.
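The top-to-bottom fallback described above amounts to checking a few environment variables in order. A rough Python illustration of that selection logic only — the real implementation is in Rust, and the exact variable values checked here are assumptions:

```python
def pick_preview(term="", term_program="", session_type=""):
    """Pick an image preview method, highest priority first."""
    # built-in protocols: kitty-family terminals, WezTerm, iTerm2
    if term_program in ("WezTerm", "iTerm.app") or "kitty" in term:
        return "built-in"
    # Sixel-capable terminals fall back to Überzug++
    if term in ("foot", "hyper"):
        return "ueberzug-sixel"
    # otherwise use the window-system protocol if a display server exists
    if session_type in ("x11", "wayland"):
        return "ueberzug-window"
    # last resort: character-cell rendering via Chafa
    return "chafa"
```

The Alacritty example above would hit the third branch: no built-in protocol, no Sixel, but `$XDG_SESSION_TYPE` reports a display server.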
mani-sh-reddy/Lunar-Lemmy-iOS
https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS
Lunar is an iOS app that serves as a client for Lemmy, the open-source federated alternative to Reddit
# Lunar - An iOS Client for Lemmy [![GitHub release](https://img.shields.io/github/v/release/mani-sh-reddy/Lunar-Lemmy-iOS)](https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS/releases) ![Cocoapods platforms](https://img.shields.io/cocoapods/p/ios) [![Static Badge](https://img.shields.io/badge/Swift-5.9-orange?logo=swift&logoColor=orange)](https://www.swift.org/about/) [![Static Badge](https://img.shields.io/badge/SwiftUI-3.0-blue?logo=swift&logoColor=blue) ](https://developer.apple.com/xcode/swiftui/) [![GitHub last commit (branch)](https://img.shields.io/github/last-commit/mani-sh-reddy/Lunar-Lemmy-iOS/dev)](https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS/commits/main) [![license: GPL v3](https://img.shields.io/badge/license-GPLv3-maroon.svg)](https://www.gnu.org/licenses/gpl-3.0) Lunar is an iOS app that serves as a client for [Lemmy, the open-source federated alternative to Reddit](https://join-lemmy.org/instances) ![Lunar App Screenshots](Screenshots/LunarIconScreenshots.png) ## Getting Started Lunar is currently in its alpha testing phase and, as a result, it has not been released on the app store or TestFlight yet. However, you can still install Lunar on your iOS device by following these steps: 1. Ensure you have an Apple developer account. If you don't have one, you can create a free developer account by visiting this link: https://developer.apple.com 2. Clone the Lunar repository and open Lunar.xcodeproj. 3. In Xcode, select your project, go to the "General" tab, and choose "Automatically manage signing" and your personal team. 4. Connect your iPhone to your computer and select it as the run destination. 5. Run your project. After a successful build, you might encounter an error in Xcode stating that your app is not from a trustworthy source. 6. To resolve this issue, navigate to your device's Settings, search for "Device Management," select your profile name, and then click "Trust." 7. 
Now, you can run your app again, and it should work without any issues. ## Components [Alamofire](https://github.com/Alamofire/Alamofire) - Elegant HTTP Networking in Swift [Kingfisher](https://github.com/onevcat/Kingfisher) - A lightweight, pure-Swift library for downloading and caching images from the web. ## Contributing Contributions are welcome! If you would like to contribute, please create a pull request with your changes. ## License Lunar is released under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). See the `LICENSE` file for more information. ## Contact If you would like to give feedback or any suggestions, please open a [discussion](https://github.com/mani-sh-reddy/Lunar-Lemmy-iOS/discussions).
codingburgas/2223-otj-10-project-python-web-scraper-GSTabanov20
https://github.com/codingburgas/2223-otj-10-project-python-web-scraper-GSTabanov20
2223-otj-10-project-python-web-scraper-GSTabanov20 created by GitHub Classroom
## Cryptocurrency Data Display

This Python script fetches data from the CoinMarketCap website and displays information about popular cryptocurrencies, trending cryptocurrencies, top gainers, and top losers. The data is scraped using the `requests` library and parsed using the `BeautifulSoup` library.

## Installation

To run this script, you need to have Python installed on your system.

### Dependencies

The following libraries are required to run the script:

- `requests`: Used to send HTTP requests and fetch the webpage content.
- `beautifulsoup4`: Used to parse the HTML content and extract data from it.

You can install these libraries using the following command:

```bash
pip install requests beautifulsoup4
```

Make sure you have pip installed and it is up to date. If you don't have pip installed, you can refer to the official Python documentation for instructions on how to install it.

## Usage

Once you have installed the required libraries, you can run the script by executing the following command:

```bash
python main.py
```

The script will fetch the data from the CoinMarketCap website and display a menu with the following options:

1. Popular Cryptocurrencies
2. Trending Cryptocurrencies (currently under development)
3. Top Gainers (currently under development)
4. Top Losers (currently under development)

To select an option, enter the corresponding number and press Enter.

Option 1 will display information about popular cryptocurrencies, including their names, prices, and 24-hour trading volumes. After displaying the information, you will be prompted with two options: to go back to the menu or exit the script.

Options 2, 3, and 4 are currently under development and will display a message indicating that. You will be presented with the same options to go back to the menu or exit the script.

If you enter an invalid choice, an error message will be displayed, and you will be prompted to enter a valid choice.
## Disclaimer

This script fetches data from the CoinMarketCap website, which is a third-party platform. The availability and accuracy of the data depend on the CoinMarketCap website's stability and functionality. OpenAI and the author of this script do not guarantee the availability or accuracy of the data displayed by the script.

Please use this script responsibly and adhere to any applicable terms of service or usage guidelines provided by CoinMarketCap.

## Contributions

Contributions to this script are welcome. If you have any suggestions or improvements, feel free to fork the script's repository and submit a pull request with your changes.
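The parsing step described above boils down to pulling names and prices out of table rows. A self-contained sketch of that idea using only the standard library's `html.parser` — the actual script uses `requests` plus BeautifulSoup, and CoinMarketCap's real markup differs from this toy HTML:

```python
from html.parser import HTMLParser


class RowParser(HTMLParser):
    """Collect the text of each <td> cell, grouped by <tr> row."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())


# toy markup standing in for a fetched page
html = ("<table><tr><td>Bitcoin</td><td>$30,000</td></tr>"
        "<tr><td>Ethereum</td><td>$1,900</td></tr></table>")
parser = RowParser()
parser.feed(html)
coins = {name: price for name, price in parser.rows}
```

With BeautifulSoup the same extraction is usually a `find_all("tr")` loop; the row-grouping logic stays the same.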
profclems/compozify
https://github.com/profclems/compozify
Convert "docker run" commands to docker compose files
# Compozify

Compozify is a simple (yet complicated) tool to generate a `docker-compose.yml` file from a `docker run` command.

# Usage

## Screenshot

![image](https://github.com/profclems/compozify/assets/41906128/bcd27512-8692-44f3-9113-63bfb112e38e)

## Installation

Download a binary suitable for your OS at the [releases page](https://github.com/profclems/compozify/releases/latest).

### Quick install

#### Linux and macOS

```sh
curl -sfL https://raw.githubusercontent.com/profclems/compozify/main/install.sh | sh
```

#### Windows (PowerShell)

Open a PowerShell terminal (version 5.1 or later) and run:

```powershell
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser # Optional: Needed to run a remote script the first time
irm https://raw.githubusercontent.com/profclems/compozify/main/install.ps1 | iex
```

### From source

#### Prerequisites for building from source

- `make`
- Go 1.18+

1. Verify that you have Go 1.18+ installed

   ```sh
   go version
   ```

   If `go` is not installed, follow instructions on [the Go website](https://golang.org/doc/install).

2. Clone this repository

   ```sh
   git clone https://github.com/profclems/compozify.git
   cd compozify
   ```

   If you have `$GOPATH/bin` or `$GOBIN` in your `$PATH`, you can just install with `make install` (installs compozify into `$GOPATH/bin`) and **skip steps 3 and 4**.

3. Build the project

   ```sh
   make build
   ```

4. Change PATH to find the newly compiled `compozify`

   ```sh
   export PATH=$PWD/bin:$PATH
   ```

5. Run `compozify --version` to confirm that it worked

## License

Copyright © [Clement Sam](https://twitter.com/clems_dev)

`compozify` is open-sourced software licensed under the [MIT](LICENSE) license.
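At its core, the conversion is argument parsing: walk the `docker run` flags and map each one onto the corresponding Compose key. A toy sketch of that idea in Python — compozify itself is written in Go and handles far more flags than this:

```python
import shlex


def run_to_compose(command, service="app"):
    """Map a small subset of `docker run` flags onto a compose-style dict."""
    tokens = shlex.split(command)
    assert tokens[:2] == ["docker", "run"], "expected a `docker run` command"
    svc, i = {}, 2
    while i < len(tokens):
        tok = tokens[i]
        if tok == "-p":                       # port mapping -> services.<name>.ports
            svc.setdefault("ports", []).append(tokens[i + 1]); i += 2
        elif tok == "-v":                     # bind mount -> volumes
            svc.setdefault("volumes", []).append(tokens[i + 1]); i += 2
        elif tok == "-e":                     # env var -> environment
            svc.setdefault("environment", []).append(tokens[i + 1]); i += 2
        elif tok == "--name":                 # container name -> container_name
            svc["container_name"] = tokens[i + 1]; i += 2
        else:                                 # first bare token is the image
            svc["image"] = tok; i += 1
    return {"services": {service: svc}}
```

Serializing the returned dict with a YAML library yields a minimal `docker-compose.yml`; a real converter also has to handle combined flags, restart policies, networks, and the command/entrypoint tail.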
avikumart/LLM-GenAI-Transformers-Notebooks
https://github.com/avikumart/LLM-GenAI-Transformers-Notebooks
A repository containing all the LLM notebooks with tutorials and projects
# LLM-GenAI-Transformers-Notebooks A repository containing all the LLM notebooks with tutorials and projects ### The focus area of this repository: 1. Transformers tutorial and notebooks 2. LLM notebooks and their applications 3. Tools and technologies of GenAI 4. Courses List in GenAI 5. Generative AI Blogs/Articles 🤖Contributions are welcome...
taishan1994/langchain-learning
https://github.com/taishan1994/langchain-learning
Notes on learning langchain, including langchain source-code walkthroughs, using Chinese models with langchain, langchain examples, and more.
# langchain-learning

Study notes on langchain. Dependencies:

```python
openai==0.27.8
langchain==0.0.225
```

Some tools similar to langchain:

- [danswer-ai/danswer: Ask Questions in natural language and get Answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.](https://github.com/danswer-ai/danswer)

## **Articles**

**Note: updates to langchain or langchain-ChatGLM may cause the source code to differ somewhat from these write-ups.**

Some of the entries below are links collected and organized from around the web.

- langchain组件-数据连接(data connection)
- langchain组件-模型IO(model IO)
- langchain组件-链(chains)
- langchain组件-代理(agents)
- langchain组件-内存(memory)
- langchain组件-回调(callbacks)
- langchain中ChatOpenAI背后做了什么.md
- langchain.load.serializable.py.md
- langchain中的一些schema.md
- langchain中是怎么调用chatgpt的接口的.md
- langchain结构化输出背后的原理.md
- langchain中memory的工作原理.md
- langchain怎么确保输出符合道德期望.md
- langchain中路由链LLMRouterChain的原理.md
- langchain中的EmbeddingRouterChain原理.md
- langchain集成GPTCache.md
- langchain集成Mivus向量数据库.md
- langchain中的StreamingStdOutCallbackHandler原理.md
- pydantic中config的一些配置.md
- pydantic中的Serializable和root_validator.md
- python中常用的一些魔术方法.md
- python的typing常用的类型.md
- python中functools的partial的用法.md
- python中inspect的signature用法.md
- python中args和kwargs.md
- [我为什么放弃了 LangChain? - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/645358531)

There are currently two Chinese projects based on langchain:

- https://github.com/yanqiangmiffy/Chinese-LangChain
- https://github.com/imClumsyPanda/langchain-ChatGLM

We can learn a lot from both.

#### langchain-ChatGLM

- 使用api部署langchain-chatglm的基本原理.md
- 上传文档时发生了什么.md
- 关于HuggingFaceEmbeddings.md
- 关于InMemoryDocstore.md
- 关于CharacterTextSplitter.md
- 关于TextLoader.md
- 关于怎么调用bing的搜索接口.md
- 根据query得到相关的doc的原理.md
- 根据查询出的docs和query生成prompt.md
- 根据prompt用模型生成结果.md
- [ChatGPT小型平替之ChatGLM-6B本地化部署、接入本地知识库体验 | 京东云技术团队](https://juejin.cn/post/7246408135015415868)

## **Chinese examples**

- 定制中文LLM模型
- 定制中文聊天模型
- 使用中文splitter.md
- 根据query查询docs.md
- mini-langchain-ChatGLM.md
- 打造简易版类小爱同学助手.md
- chatglm实现agent控制.md
- [向量检索增强chatglm生成-结合ES](https://zhuanlan.zhihu.com/p/644619003)
- [知识图谱抽取LLM - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/645509983)

## **English examples**

- langchain使用openai例子.md(文本翻译)
- openai调用chatgpt例子.md
- langchain解析结果并格式化输出.md
- langchain带有记忆的对话.md
- langchain中使用不同链.md
- langchain基于文档的问答.md
- [使用GGML和LangChain在CPU上运行量化的llama2](https://zhuanlan.zhihu.com/p/644701608)
- [本地部署开源大模型的完整教程:LangChain + Streamlit+ Llama - 知乎 (zhihu.com)](https://zhuanlan.zhihu.com/p/639565332)

## prompt工程.md

A well-crafted prompt is crucial to the result; have a look at this note if you are interested.

[yzfly/LangGPT: LangGPT: Empowering everyone to become a prompt expert!🚀 Structured Prompt,结构化提示词。 (github.com)](https://github.com/yzfly/LangGPT): building structured, high-quality prompts

## Some problems langchain may have

Although langchain offers some conveniences, it also has problems:

- **It cannot fix fundamental LLM issues, chiefly prompt reuse.** Many problems in LLM applications are defects of the underlying model that LangChain cannot solve. The core of LLM application development is prompt engineering, and prompts have very low reusability. These features all **require very custom, hand-written prompts**: every step of a chain needs one, and the input data must be formatted in a very specific way to get good output from that step. Setting up DAG orchestration to run the chain is maybe 5% of the work; the other 95% is really just prompt tuning and data serialization formats, none of which is **reusable**.
- **Poor abstractions and hidden, low-quality prompts make development hard.** Simply put, LangChain's abstractions are not good enough, so many steps must be built by yourself. Moreover, many of its built-in prompts are worse than what you would write yourself, yet these default prompts are hidden from you.
- **The LangChain framework is hard to debug.** **Although many LangChain methods accept verbose-output parameters, they rarely print anything of value.** For example, seeing the actual prompt or the actual LLM query is very difficult, for the same reason as above: LangChain mostly hides its internal prompts. So if your LangChain app performs poorly and you want to inspect which prompts are at fault, debugging is hard.
- **LangChain encourages tool lock-in.** LangChain encourages you to develop and operate on its platform, but if you need a workflow its documentation does not cover, it is hard to modify even with custom agents. Once you start with LangChain, you may find yourself confined to its particular tools and features, unable to easily switch to another tool or platform that may suit your needs better.

The above comes from:

- [Langchain Is Pointless | Hacker News (ycombinator.com)](https://news.ycombinator.com/item?id=36645575)
- [使用LangChain做大模型开发的一些问题:来自Hacker News的激烈讨论~](https://zhuanlan.zhihu.com/p/642498874)

For some simple tasks, we can perfectly well implement the whole pipeline ourselves; then **every part is under our own control** and easier to modify.

# Using langchain to solve complex tasks

## Approach 1: domain-finetune an LLM

Fine-tune an LLM on domain data. This is limited by compute resources and model size, and the model may still spout nonsense. A series of questions is involved:

- How to obtain the data, and how to clean it.
- Which tokenization approach to use.
- Which architecture to adopt, and how to train and evaluate the model.
- How to run inference efficiently, and how to deploy.
- Domain pre-training, domain instruction tuning, reward models, and result alignment.

## Approach 2: langchain + LLM + tools

The basic idea:

1. The user asks: please compare the Estée Lauder repair essence with the SK-II facial treatment essence.
2. A RouterChain routes the question, i.e. decides which way to answer it (one LLM call):
   - The RouterChain can be an LLM, or an embedding, matching the question to a suitable solution; if nothing matches, answer from the model's internal knowledge.
   - Here it matches the **product comparison** case, yielding the solution: (1) call a product-search tool to fetch each product's description; (2) compare the products based on the search results.
3. A Planner generates steps (one LLM call):
   - Generate suitable steps from the solution, e.g.: (1) search for the Estée Lauder essence; (2) search for the SK-II essence; (3) compare the products above.
4. An Executor executes the steps (one LLM call per step, with a hyperparameter n capping the number of calls):
   - Tools must be provided; for each step's question, the LLM generates the call parameters for each tool.
   - Call the tools to fetch the results.
5. Summarize all the results (one LLM call).

## Approach 3: langchain + LLM + retrieval

Compared with the previous approach, no tools are used: retrieve from the database directly based on the question, then answer from what was retrieved.

Retrieval can be keyword-based: given the question's keywords, use an ES tool to retrieve from a massive database the top-k passages that may contain the answer, then send those top-k passages together with the question to the LLM to answer.

Retrieval can also be vector-based: split all known material into small passages of about 300 characters, encode the passages into vectors, and when a user asks a question, encode the question into a vector as well, retrieve the top-k most relevant passages, and send them together with the question to the LLM to answer.

![图片](README.assets/640.png)

**Pros and cons of the above approaches:**

**Domain-finetuned LLM**: requires a lot of manpower to collect domain data and Q&A pairs, and a lot of compute to fine-tune.

**langchain + LLM + tools**: treats the LLM as a sub-service, with LangChain as the planner's and executor's brain that calls the LLM at the right moments. The advantage is that it can tackle complex problems; the drawback is unreliability: the LLM is unreliable when generating tool calls from the question and the tools, may fail to use tools well or call the right tool per the instructions, and may plan poorly and be hard to control.

**langchain + LLM + retrieval**: the advantage is that this is today's mainstream in-domain Q&A architecture; the drawbacks are that retrieving the answer-bearing passages for a question may be inaccurate, and that it is unsuitable for complex Q&A.

**Summary: the biggest problem is still the LLM itself:**

- The LLM's output is uncontrollable, which can throw later steps off course.
- The LLM's input context length: by now this has reportedly been extended past one billion tokens.
- The cost of training an LLM: for data, besides manual collection and cleaning, a large model can also generate it; for training, there are by now plenty of parameter-efficient fine-tuning examples.
- LLM deployment: there are already many ways to speed up inference, such as quantization, compression, distributed deployment, and C++ deployment.

The LLM is the foundation of the whole system, and there is still plenty of room for choice: many Chinese large language models are open-sourced, but most are 6B/7B/13B; for a truly smart brain, the parameter count still needs to be sufficient.

Reference for the above: [https://mp.weixin.qq.com/s/FvRchiT0c0xHYscO_D-sdA](https://python.langchain.com.cn/docs/modules/agents/how_to/custom_llm_chat_agent)

# Further topics

Some questions are left open for thought; they may relate to langchain, or to large models in general:

- **How to choose a Chinese LLM for vertical-domain data?** 1. Whether commercial use is allowed. 2. Public leaderboard rankings. 3. Evaluation on your own domain data. 4. Borrowing from the choices of existing vertical-domain models, e.g. finance, law, and medical LLMs.
- **An answer consists of a run of connected sentences; how should the text be split so that complete answers survive?** For example:

```
How can insomnia be treated?
1. Keep a good mood;
2. Exercise appropriately.
3. Medication may be used when appropriate.
```

1. Make the split chunks as long as feasible. 2. To avoid cutting an answer apart, let adjacent chunks repeat a certain amount of text. 3. Return the top_k documents at retrieval time. 4. Merge the multiple retrieved texts and let the LLM summarize them.

- **How to build vertical-domain embeddings?**
- **How to store the resulting embeddings?**
- **How to guide the LLM to reason better?**

For the last one, you can use **chain of thoughts, self ask, or ReAct**; see this article for details: https://zhuanlan.zhihu.com/p/622617292. In fact, langchain itself uses the ReAct strategy.

# References

> [Introduction | 🦜️🔗 Langchain](https://python.langchain.com/docs/get_started/introduction.html)
>
> [API Reference — 🦜🔗 LangChain 0.0.229](https://api.python.langchain.com/en/latest/api_reference.html)
>
> [https://mp.weixin.qq.com/s/FvRchiT0c0xHYscO_D-sdA](https://python.langchain.com.cn/docs/modules/agents/how_to/custom_llm_chat_agent)
>
> https://python.langchain.com.cn/docs/modules/agents/how_to/custom_llm_chat_agent
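The splitting strategy discussed above (fixed-size chunks of roughly 300 characters with some repeated text between neighbours, so a multi-sentence answer is not cut in half) takes only a few lines. A minimal sketch, independent of LangChain's own `CharacterTextSplitter`:

```python
def split_text(text, chunk_size=300, overlap=50):
    """Split `text` into chunks of at most `chunk_size` characters,
    with `overlap` characters shared between adjacent chunks."""
    assert 0 <= overlap < chunk_size
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # this chunk already reaches the end of the text
    return chunks
```

Each chunk then gets embedded and stored; at query time the top-k chunks are retrieved and fed to the LLM together with the question, as described in Approach 3.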
johnlui/DIYSearchEngine
https://github.com/johnlui/DIYSearchEngine
🔍 An open-source internet search engine written in Go, with the companion tutorial "Build Your Own Internet Search Engine"
# Go 开发的开源互联网搜索引擎 DIYSearchEngine 是一个能够高速采集海量互联网数据的开源搜索引擎,采用 Go 语言开发。 > #### 想运行本项目请拉到项目底部,有[教程](#本项目运行方法)。 <br> <h2 align="center">《两万字教你自己动手开发互联网搜索引擎》</h2> ## 写在前面 本文是一篇教你做“合法搜索引擎”的文章,一切都符合《网络安全法》和 robots 业界规范的要求,如果你被公司要求爬一些上了反扒措施的网站,我个人建议你马上离职,我已经知道了好几起全公司数百人被一锅端的事件。 ## 《爬乙己》 搜索引擎圈的格局,是和别处不同的:只需稍作一番查考,便能获取一篇又一篇八股文,篇篇都是爬虫、索引、排序三板斧。可是这三板斧到底该怎么用代码写出来,却被作者们故意保持沉默,大抵可能确实是抄来的罢。 从我年方二十,便开始在新浪云计算店担任一名伙计,老板告诉我,我长相过于天真,无法应对那些难缠的云计算客户。这些客户时刻都要求我们的服务在线,每当出现故障,不到十秒钟电话就会纷至沓来,比我们的监控系统还要迅捷。所以过了几天,掌柜又说我干不了这事。幸亏云商店那边要人,无须辞退,便改为专管云商店运营的一种无聊职务了。 我从此便整天的坐在电话后面,专管我的职务。虽然只需要挨骂道歉,损失一些尊严,但总觉得有些无聊。掌柜是一副凶脸孔,主顾也没有好声气,教人活泼不得;只有在午饭后,众人一起散步时闲谈起搜索引擎,才能感受到几许欢笑,因此至今仍深刻铭记在心。 由于谷歌被戏称为“哥”,本镇居民就为当地的搜索引擎取了一个绰号,叫作度娘。 度娘一出现,所有人都笑了起来,有的叫到,“度娘,你昨天又加法律法规词了!”他不回答,对后台说,“温两个热搜,要一碟文库豆”,说着便排出九枚广告。我们又故意的高声嚷道,“你一定又骗了人家的钱了!”度娘睁大眼睛说,“你怎么这样凭空污人清白……”“什么清白?我前天亲眼见你卖了莆田系广告,第一屏全是。”度娘便涨红了脸,额上的青筋条条绽出,争辩道,“广告不能算偷……流量!……互联网广告的事,能算偷么?”接连便是难懂的话,什么“免费使用”,什么“CPM”之类,引得众人都哄笑起来:店内外充满了快活的空气。 ## 本文目标 三板斧文章遍地都是,但是真的自己开发出来搜索引擎的人却少之又少,其实,开发一个搜索引擎没那么难,数据量也没有你想象的那么大,倒排索引也没有字面上看着那么炫酷,BM25 算法也没有它的表达式看起来那么夸张,只给几个人用的话也没多少计算压力。 突破自己心灵的枷锁,只靠自己就可以开发一个私有的互联网搜索引擎! 本文是一篇“跟我做”文章,只要你一步一步跟着我做,最后就可以得到一个可以运行的互联网搜索引擎。本文的后端语言采用 Golang,内存数据库采用 Redis,字典存储采用 MySQL,不用费尽心思地研究进程间通信,也不用绞尽脑汁地解决多线程和线程安全问题,也不用自己在磁盘上手搓 B+ 树致密排列,站着就把钱挣了。 ## 目录 把大象装进冰箱,只需要三步: 1. 编写高性能爬虫,从互联网上爬取网页 2. 使用倒排索引技术,将网页拆分成字典 3. 使用 BM25 算法,返回搜索结果 ## 第一步,编写高性能爬虫,从互联网上爬取网页 Golang 的协程使得它特别适合拿来开发高性能爬虫,只要利用外部 Redis 做好“协程间通信”,你有多少 CPU 核心 go 都可以吃完,而且代码写起来还特别简单,进程和线程都不需要自己管理。当然,协程功能强大,代码简略,这就导致它的 debug 成本很高:我在写协程代码的时候感觉自己像在炼丹,修改一个字符就可以让程序从龟速提升到十万倍,简直比操控 ChatGPT 还神奇。 在编写爬虫之前,我们需要知道从互联网上爬取内容需要遵纪守法,并遵守`robots.txt`,否则,可能就要进去和前辈们切磋爬虫技术了。robots.txt 的具体规范大家可以自行搜索,下面跟着我开搞。 新建 go 项目我就不演示了,不会的可以问一下 ChatGPT~ ### 爬虫工作流程 我们先设计一个可以落地的爬虫工作流程。 #### 1. 
设计一个 UA 首先我们要给自己的爬虫设定一个 UA,尽量采用较新的 PC 浏览器的 UA 加以改造,加入我们自己的 spider 名称,我的项目叫“Enterprise Search Engine” 简称 ESE,所以我设定的 UA 是 `Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4280.67 Safari/537.36 ESESpider/1.0`,你们可以自己设定。 需要注意的是,部分网站会屏蔽非头部搜索引擎的爬虫,这个需要你们转动聪明的小脑袋瓜自己解决哦。 #### 2. 选择一个爬虫工具库 我选择的是 [PuerkitoBio/goquery](https://github.com/PuerkitoBio/goquery),它支持自定义 UA 爬取,并可以对爬到的 HTML 页面进行解析,进而得到对我们的搜索引擎十分重要的页面标题、超链接等。 #### 3. 设计数据库 爬虫的数据库倒是特别简单,一个表即可。这个表里面存着页面的 URL 和爬来的标题以及网页文字内容。 ```sql CREATE TABLE `pages` ( `id` int unsigned NOT NULL AUTO_INCREMENT, `url` varchar(768) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '网页链接', `host` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '域名', `dic_done` tinyint DEFAULT '0' COMMENT '已拆分进词典', `craw_done` tinyint NOT NULL DEFAULT '0' COMMENT '已爬', `craw_time` timestamp NOT NULL DEFAULT '2001-01-01 00:00:00' COMMENT '爬取时刻', `origin_title` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '上级页面超链接文字', `referrer_id` int NOT NULL DEFAULT '0' COMMENT '上级页面ID', `scheme` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'http/https', `domain1` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '一级域名后缀', `domain2` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '二级域名后缀', `path` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'URL 路径', `query` varchar(2000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT 'URL 查询参数', `title` varchar(1000) CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci DEFAULT NULL COMMENT '页面标题', `text` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci COMMENT '页面文字', `created_at` timestamp NOT NULL DEFAULT '2001-01-01 08:00:00' COMMENT '插入时间', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci; 
```

#### 4. 给爷爬!

爬虫有一个极好的特性:自我增殖。每一个网页里,基本都带有其他网页的链接,这样我们就可以道生一,一生二,二生三,三生万物了。

此时,我们只需要找一个导航网站,手动把该网站的链接插入到数据库里,爬虫就可以开始运作了。各位可以自行挑选可口的页面链接服用。

我们正式进入实操阶段,以下都是可以运行的代码片段,代码逻辑在注释里面讲解。

我采用`joho/godotenv`来提供`.env`配置文件读取的能力,你需要提前准备好一个`.env`文件,并在里面填写好可以使用的 MySQL 数据库信息,具体可以参考项目中的`.env.example`文件。

```go
func main() {
	fmt.Println("My name is enterprise-search-engine!")

	// 加载 .env
	initENV() // 该函数的具体实现可以参考项目代码

	// 开始爬
	nextStep(time.Now())

	// 阻塞,不跑爬虫时用于阻塞主线程
	select {}
}

// 循环爬
func nextStep(startTime time.Time) {
	// 初始化 gorm 数据库
	dsn0 := os.Getenv("DB_USERNAME0") + ":" + os.Getenv("DB_PASSWORD0") +
		"@(" + os.Getenv("DB_HOST0") + ":" + os.Getenv("DB_PORT0") + ")/" +
		os.Getenv("DB_DATABASE0") + "?charset=utf8mb4&parseTime=True&loc=Local"
	gormConfig := gorm.Config{}
	db0, _ := gorm.Open(mysql.Open(dsn0), &gormConfig)

	// 从数据库里取出本轮需要爬的 100 条 URL
	var pagesArray []models.Page
	db0.Table("pages").
		Where("craw_done", 0).
		Order("id").Limit(100).Find(&pagesArray)

	tools.DD(pagesArray) // 打印结果

	// 限于篇幅,下面用文字描述:
	// 1. 循环展开 pagesArray
	// 2. 针对每一个 page,使用 curl 工具类获取网页文本
	// 3. 解析网页文本,提取出标题和页面中含有的超链接
	// 4. 将标题、一级域名后缀、URL 路径、插入时间等信息补充完全,更新到这一行数据上
	// 5. 将页面上的超链接插入 pages 表,我们的网页库第一次扩充了!

	fmt.Println("跑完一轮", time.Now().Unix()-startTime.Unix(), "秒")
	nextStep(time.Now()) // 紧接着跑下一轮
}
```

![](https://qn.lvwenhan.com/2023-06-28-16878837968415.jpg)

我已经事先将 hao123 的链接插入了 pages 表,所以我运行`go build -o ese *.go && ./ese`命令之后,得到了如下信息:

```ruby
My name is enterprise-search-engine!
加载.env : /root/enterprise-search-engine/.env
APP_ENV: local
[[{1 0 https://www.hao123.com 0 0 2001-01-01 00:00:00 +0800 CST 2001-01-01 08:00:00 +0800 CST 0001-01-01 00:00:00 +0000 UTC}]]
```

![](https://qn.lvwenhan.com/2023-06-27-16877958190576.jpg)

<center>《递龟》</center>

上面的代码中,我们第一次用到了~~递龟~~递归:自己调用自己。

#### 5.
合法合规:遵守 robots.txt 规范 我选择用`temoto/robotstxt`这个库来探查我们的爬虫是否被允许爬取某个 URL,使用一张单独的表来存储每个域名的 robots 规则,并在 Redis 中建立缓存,每次爬取 URL 之前,先进行一次匹配,匹配成功后再爬,保证合法合规。 ### 制造真正的生产级爬虫 ![怎样画马](https://qn.lvwenhan.com/2023-06-27-怎样画马.jpg) <center>《怎样画马》</center> 有了前面这个理论上可以运行的简单爬虫,下面我们就要给这匹马补充亿点细节了:生产环境中,爬虫性能优化是最重要的工作。 从某种程度上来说,搜索引擎的优劣并不取决于搜索算法的优劣,因为算法作为一种“特定问题的简便算法”,一家商业公司比别家强的程度很有限,搜索引擎的真正优劣在于哪家能够以最快的速度索引到互联网上层出不穷的新页面和已经更新过内容的旧页面,在于哪家能够识别哪个网页是价值最高的网页。 识别网页价值方面,李彦宏起家的搜索专利,以及谷歌大名鼎鼎的 PageRank 都拥有异曲同工之妙。但本文的重点不在这个领域,而在于技术实现。让我们回到爬虫性能优化,为什么性能优化如此重要呢?我们构建的是互联网搜索引擎,需要爬海量的数据,因此我们的爬虫需要足够高效:中文互联网有 400 万个网站,3500 亿个网页,哪怕只爬千分之一,3.5 亿个网页也不是开玩笑的,如果只是单线程阻塞地爬,消耗的时间恐怕要以年为单位了。 爬虫性能优化,我们首先需要规划一下硬件。 #### 硬件要求 首先计算磁盘空间,假设一个页面 20KB,在不进行压缩的情况下,一亿个页面就需要 `20 * 100000000 / 1024 / 1024 / 1024 = 1.86TB` 的磁盘空间,而我们打算使用 MySQL 来存储页面文本,需要的空间会更大一点。 我的爬虫花费了 2 个月的时间,爬到了大约 1 亿个 URL,其中 3600 万个爬到了页面的 HTML 文本存在了数据库里,共消耗了超过 600GB 的磁盘空间。 除了硬性的磁盘空间,剩下的就是尽量多的 CPU 核心数和内存了:CPU 拿来并发爬网页,内存拿来支撑海量协程的消耗,外加用 Redis 为爬虫提速。爬虫阶段对内存的要求还不大,但在后面第二步拆分字典的时候,大内存容量的 Redis 将成为提速利器。 所以,我们对硬件的需求是这样的:一台核心数尽量多的物理机拿来跑我们的 ese 二进制程序,外加高性能数据库(例如16核64GB内存,NVME磁盘),你能搞到多少台数据库就准备多少台,就算你搞到了 65536 台数据库,也能跑满,理论上我们可以无限分库分表。能这么搞是因为网页数据具有离散性,相互之间的关系在爬虫和字典阶段还不存在,在查询阶段才比较重要。 顺着这个思路,有人可能就会想,我用 KV 数据库例如 MongoDB 来存怎么样呢?当然是很好的,但是MongoDB 不适合干的事情实在是太多啦,所以你依然需要 Redis 和 MySQL 的支持,如果你需要爬取更大规模的网页,可以把 MongoDB 用起来,利用进一步推高系统复杂度的方式获得一个显著的性能提升。 下面我们开始进行软件优化,我只讲述关键步骤,各位有什么不明白的地方可以参考项目代码。 #### 重用 HTTP 客户端以防止内存泄露 这个点看起来很小,但当你瞬间并发数十万协程的时候,每个协程 1MB 的内存浪费累积起来都是巨大的,很容易造成 OOM。 我们在 tools 文件夹下创建`curl.go`工具类,专门用来存放[全局 client](https://req.cool/zh/docs/tutorial/best-practices/#%e9%87%8d%e7%94%a8-client) 和 curl 工具函数: ```go package tools import ... //省略,具体可以参考项目代码 // 全局重用 client 对象,4 秒超时,不跟随 301 302 跳转 var client = req.C().SetTimeout(time.Second * 4).SetRedirectPolicy(req.NoRedirectPolicy()) // 返回 document 对象和状态码 func Curl(page models.Page, ch chan int) (*goquery.Document, int) { ... 
//省略,具体可以参考项目代码 } ``` #### 基础知识储备:goroutine 协程 我默认你已经了解 go 协程是什么了,它就是一个看起来像魔法的东西。在这里我提供一个理解协程的小诀窍:每个协程在进入磁盘、网络等“只需要后台等待”的任务之后,会把当前 CPU 核心(可以理解成一个图灵机)的指令指针 goto 到下一个协程的起始。 需要注意的是,协程是一种特殊的并发形式,你在并发函数内调用的函数必须都支持并发调用,类似于传统的“线程安全”,如果你一不小心写了不安全的代码,轻则卡顿,重则 crash。 #### 一次取出一批需要爬的 URL,使用协程并发爬 协程代码实操来啦! ```go // tools.DD(pagesArray) // 打印结果 // 创建 channel 数组 chs := make([]chan int, len(pagesArray)) // 展开 pagesArray 数组 for k, v := range pagesArray { // 存储 channel 指针 chs[k] = make(chan int) // 阿瓦达啃大瓜!! go craw(v, chs[k], k) } // 注意,下面的代码不可省略,否则你上面 go 出来的那些协程会瞬间退出 var results = make(map[int]int) for _, ch := range chs { // 神之一手,收集来自协程的返回数据,并 hold 主线程不瞬间退出 r := <-ch _, prs := results[r] if prs { results[r] += 1 } else { results[r] = 1 } } // 当代码执行到这里的时候,说明所有的协程都已经返回数据了 fmt.Println("跑完一轮", time.Now().Unix()-startTime.Unix(), "秒") ``` `craw`函数协程化: ```go // 真的爬,存储标题,内容,以及子链接 func craw(status models.Page, ch chan int, index int) { // 调用 CURL 工具类爬到网页 doc, chVal := tools.Curl(status, ch) // 对 doc 的处理在这里省略 // 最重要的一步,向 chennel 发送 int 值,该动作是协程结束的标志 ch <- chVal return } ``` 协程优化做完了,CPU 被吃满了,接下来数据库要成为瓶颈了。 ### MySQL 性能优化 做到这里,在做普通业务逻辑的时候非常快的 MySQL 已经是整个系统中最慢的一环了:pages 表一天就要增加几百万行,MySQL 会以肉眼可见的速度慢下来。我们要对 MySQL 做性能优化。 #### 何以解忧,唯有索引 首先,收益最大的肯定是加索引,这句话适用于 99% 的场景。 在你磁盘容量够用的情况下,加索引通常可以获得数百倍到数万倍的性能提升。我们先给 url 加个索引,因为我们每爬到一个 URL 都要查一下它是否已经在表里面存在了,这个动作的频率是非常高的,如果我们最终爬到了一亿个页面,那这个对比动作至少会做百亿次。 #### 部分场景下很好用的分库分表 非常幸运,爬虫场景和分库分表非常契合:只要我们能根据 URL 将数据均匀地分散开,不同的 URL 之间是没有多少关系的。那我们该怎么将数据分散开呢?使用散列值! 每一个 URL 在 MD5 之后,都会得到一个形如`698d51a19d8a121ce581499d7b701668`的 32 位长度的 16 进制数。而这些数字在概率上是均等的,所以理论上我们可以将数亿个 URL 均匀分布在多个库的多个表里。下面问题来了,该怎么分呢? #### 只有一台数据库,应该分表吗? 
如果你看过我的[《高并发的哲学原理(八)-- 将 InnoDB 剥的一丝不挂:B+ 树与 Buffer Pool 》](https://lvwenhan.com/tech-epic/506.html)的话,就会明白,只要你能接受分表的逻辑代价,那在任何大数据量场景下分表都是有明显收益的,因为随着表容量的增加,那棵 16KB 页块组成的 B+ 树的复杂度增加是超线性的,用牛逼的话说就是:二阶导数持续大于 0。此外,缓存也会失效,你的 MySQL 运行速度会降低到一个令人发指的水平。 所以,即便你只有一台数据库,那也应该分表。如果你的磁盘是 NVME,我觉得单机拿出 MD5 的前两位数字,分出来 16 x 16 = 256 个表是比较不错的。 当然,如果你能搞到 16 台数据库服务器,那拿出第一位 16 进制数字选定物理服务器,再用二三位数字给每台机器分 256 个表也是极好的。 #### 我的真实硬件和分表逻辑 由于我司比较节俭~~贫穷~~,机房的服务器都是二手的,实在是拿不出高性能的 NVME 服务器,于是我找 IT 借了两台 ThinkBook 14 寸笔记本装上了 CentOS Stream 9: 1. 把内存扩充到最大,形成了 8GB 板载 + 32GB 内存条一共 40GB 的奇葩配置 2. CPU 是 AMD Ryzen 5 5600U,虽然是低压版的 CPU,只有六核十二线程,但是也比 Intel 的渣渣 CPU 快多了(Intel:牙膏真的挤完了,一滴都没有了) 3. 磁盘就用自带的 500GB NVME,实测读写速度能跑到 3GB/2GB,十分够用 由于单台机器只有 6 核,我就各给他们分了 128 个表,在每次要执行 SQL 之前,我会先用 URL 作为参数获取一下它对应的数据库服务器和表名。表名获取逻辑如下: 1. 计算此 URL 的 MD5 散列值 2. 取前两位十六进制数字 3. 拼接成类似`pages_0f`样子的表名 ```go tableName := table + "_" + tools.GetMD5Hash(url)[0:2] ``` ### 爬虫数据流和架构优化 上面我们已经使用协程把 CPU 全部利用起来了,又使用分库分表技术把数据库硬件全部利用起来了,但是如果你这个时候直接用上面的代码开始跑,会发现速度还是不够快:因为某些工作 MySQL 还是不擅长做。 此时,我们就需要对数据流和架构做出优化了。 #### 拆分仓库表和状态表 原始的 pages 表有 16 个字段,在我们爬的过程中,只用得到五个:`id` `url` `host` `craw_done` `craw_time`。而看过我上面的 InnoDB 文章的小伙伴还知道,在页面 HTML 被填充进`text`字段之后,pages 表的 16KB 页块会出现频繁的调整和指针的乱飞,对 InnoDB 的“局部性”性能涡轮的施展非常不利,会造成 buffer pool 的频繁失效。 所以,为了爬的更快,为 pages 表打造一个性能更强的“影子”就十分重要。于是,我为`pages_0f`表打造了只包含上面五个字段的`status_0f`兄弟表,数据从 pages 表里面复制而来,承担一些频繁读写任务: 1. 检查 URL 是否已经在库,即如果以前别的页面上已经出现了这个 URL 了,本次就不需要再入库了 2. 找出下一批需要爬的页面,即`craw_done=0`的 URL 3. 
craw_time 承担日志的作用,用于统计过去一段时间的爬虫效率 除了这些高频操作,存储页面 HTML 和标题等信息的低频操作是可以直接入`paqes_0f`仓库表的。 #### 实时读取 URL 改为后台定时读取 随着单表数据量的逐渐提升,每一轮开始时从数据库里面批量读出需要爬的 URL 成了一个相对耗时的操作,即便每张表只需要 500ms,那轮询 256 张表总耗时也达到了 128 秒之多,这是无法接受的,所以这个流程也需要异步化。你问为什么不异步同时读取 256 张表?因为 MySQL 最宝贵的就是连接数,这样会让连接数直接爆掉,大家都别玩了,关于连接数我们下面还会有提及。 我们把流程调整一下:每 20 秒从 status 表中搜罗一批需要爬的 URL 放进 Redis 中积累起来,爬的时候直接从 Redis 中读一批。这么做是为了把每一秒的时间都利用起来,尽力填满协程爬虫的胃口。 ```go // 在 main() 中注册定时任务 c := cron.New(cron.WithSeconds()) // 每 20 秒执行一次 prepareStatusesBackground 函数 c.AddFunc("*/20 * * * * *", prepareStatusesBackground) go c.Start() // prepareStatusesBackground 函数中,使用 LPush 向有序列表的头部插入 URL for _, v := range _statusArray { taskBytes, _ := json.Marshal(v) db.Rdb.LPush(db.Ctx, "need_craw_list", taskBytes) } // 每一轮都使用 RPop 从有序列表的尾部读取需要爬的 URL var statusArr []models.Status maxNumber := 1 // 放大倍数,控制每一批的 URL 数量 for i := 0; i < 256*maxNumber; i++ { jsonString := db.Rdb.RPop(db.Ctx, "need_craw_list").Val() var _status models.Status err := json.Unmarshal([]byte(jsonString), &_status) if err != nil { continue } statusArr = append(statusArr, _status) } ``` #### 十分重要的爬虫压力管控 过去十年,中国互联网每次有搜索引擎新秀崛起,我都要被新爬虫 DDOS 一遍,想想就气。这帮大厂的菜鸟程序员,以为随便一个网站都能承受住 2000 QPS,实际上互联网上 99.9% 网站的极限 QPS 到不了 100,超过 10 都够呛。对了,如果有 YisouSpider 的人看到本文,请回去推动一下你们的爬虫优化,虽然你们的爬虫不会持续高速爬取,但是你们在每分钟的第一秒并发 10 个请求的方法更像是 DDOS,对系统的危害更大... 
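真正要做到"压力均匀",核心就是给每个 host 维护一个按分钟滚动的计数器,超限就先跳过这个 URL。下面是一个不依赖 Redis 的内存版 Python 草图(`HostRateLimiter`、`limit_per_minute` 等名称纯属示意,本项目实际用的是 Redis 计数器):

```python
import time
from collections import defaultdict

class HostRateLimiter:
    """按「host + 当前分钟」计数的简易限速器:
    每次真的要爬某个 URL 前先调用 allow(host),超过每分钟上限就返回 False。"""

    def __init__(self, limit_per_minute):
        self.limit = limit_per_minute
        self.counters = defaultdict(int)

    def allow(self, host, now=None):
        now = time.time() if now is None else now
        key = (host, int(now) // 60)  # 按分钟取整,和正文里的 Redis key 是同一思路
        if self.counters[key] >= self.limit:
            return False
        self.counters[key] += 1
        return True

limiter = HostRateLimiter(limit_per_minute=2)
limiter.allow("example.com", now=0)   # True
limiter.allow("example.com", now=1)   # True
limiter.allow("example.com", now=2)   # False,本分钟配额已用完
limiter.allow("example.com", now=61)  # True,进入下一分钟
```

换成多协程的生产环境,把字典替换为 Redis 的 INCR + EXPIRE 即可,顺便获得"线程安全数据读写"的能力。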
我们要像谷歌那样,做一个压力均匀的文明爬虫,这就需要我们把每一个域名的爬虫频率都记录下来,并实时进行调整。我基于 Redis 和每个 URL 的 host 做了一个计数器,在每次真的要爬某个 URL 之前,调用一次检测函数,看是否对单个域名的爬虫压力过大。 此外,由于我们的 craw 函数是协程调用的,此时 Redis 就显得更为重要了:它能提供宝贵的“线程安全数据读写”功能,如果你也是`sync.Map`的受害者,我相信你一定懂我😭 > #### 我认为,单线程的 Redis 是 go 协程最佳的伙伴,就像 PHP 和 MySQL 那样。 具体代码我就不放了,有需要的同学可以自己去看项目代码哦。 #### 疯狂使用 Redis 加速频繁重复的数据库调用 我们使用协程高速爬到数据了,下一步就是存储这些数据。这个操作看起来很简单,更新一下原来那一行,再插入 N 行新数据不就行了吗,其实不行,还有一个关键步骤需要使用 Redis 来加速:新爬到的 URL 是否已经在数据库里存在了。这个操作看起来简单,但在我们解决了上面这些性能问题以后,庞大的数量就成了这一步最大的问题,每一次查询会越来越慢,查询字数还特别多,这谁顶得住。 如果我们拿 Redis 来存 URL,岂不是需要把所有 URL 都存入 Redis 吗,这内存需求也太大了。这个时候,我们的老朋友,`局部性`又出现了:由于我们的爬虫是按照顺序爬的,那“朋友的朋友也是朋友”的概率是很大的,所以我们只要在 Redis 里记录一下某条 URL 是否存在,那之后一段时间,这个信息被查到的概率也很大: ```go // 我们使用一个 Hash 来存储 URL 是否存在的状态 statusHashMapKey := "ese_spider_status_exist" statusExist := db.Rdb.HExists(db.Ctx, statusHashMapKey, _url).Val() // 若 HashMap 中不存在,则查询或插入数据库 if !statusExist { ··· 代码省略,不存在则创建这行 page,存在则更新信息 ··· // 无论是否新插入了数据,都将 _url 入 HashMap db.Rdb.HSet(db.Ctx, statusHashMapKey, _url, 1).Err() } ``` 这段代码看似简单,实测非常好用,唯一的问题就是不能运行太长时间,隔一段时间得清空一次,因为随着时间的流逝,局部性会越来越差。 细心的小伙伴可能已经发现了,既然爬取状态已经用 Redis 来承载了,那还需要区分 pages 和 status 表吗?需要,因为 Redis 也不是全能的,它的基础数据依然是来自 MySQL 的。目前这个架构类似于复杂的三级火箭,看起来提升没那么大,但这小小的提速可能就能让你爬三亿个网页的时间从 3 个月缩减到 1 个月,是非常值的。 另外,如果通过扫描 256 张表中 craw_time 字段的方式来统计“过去 N 分钟爬了多少个 URL、有效页面多少个、因为爬虫压力而略过的页面多少个、网络错误的多少个、多次网络错误后不再重复爬取的多少个”的数据,还是太慢了,也太消耗资源了,这些统计信息也需要使用 Redis 来记录: ```go // 过去一分钟爬到了多少个页面的 HTML allStatusKey := "ese_spider_all_status_in_minute_" + strconv.Itoa(int(time.Now().Unix())/60) // 计数器加 1 db.Rdb.IncrBy(db.Ctx, allStatusKey, 1).Err() // 续命 1 小时 db.Rdb.Expire(db.Ctx, allStatusKey, time.Hour).Err() // 过去一分钟从新爬到的 HTML 里面提取出了多少个新的待爬 URL newStatusKey := "ese_spider_new_status_in_minute_" + strconv.Itoa(int(time.Now().Unix())/60) // 计数器加 1 db.Rdb.IncrBy(db.Ctx, newStatusKey, 1).Err() // 续命 1 小时 db.Rdb.Expire(db.Ctx, newStatusKey, time.Hour).Err() ``` ### 生产爬虫遇到的其他问题 在我们不断提高爬虫速度的过程中,爬虫的复杂度也在持续上升,我们会遇到玩具爬虫遇不到的很多问题,接下来我分享一下我的处理经验。 #### 抑制暴增的数据库连接数 
在协程这个大杀器的协助之下,我们可以轻易写出超高并行的代码,把 CPU 全部吃完,但是,并行的协程多了以后,数据库的连接数压力也开始暴增。MySQL 默认的最大连接数只有 151,根据我的实际体验,哪怕是一个协程一个连接,我们这个爬虫也可以轻易把连接数干到数万,这个数字太大了,即便是最新的 CPU 加上 DDR5 内存,受制于 MySQL 算法的限制,在连接数达到这个级别以后,处理海量连接数所需要的时间也越来越多。这个情况和[《高并发的哲学原理(二)-- Apache 的性能瓶颈与 Nginx 的性能优势》](https://lvwenhan.com/tech-epic/500.html)一文中描述的 Apache 的 prefork 模式比较像。好消息是,最近版本的 MySQL 8 针对连接数匹配算法做了优化,大幅提升了大量连接数下的性能。 除了协程之外,分库分表对连接数的的暴增也负有不可推卸的责任。为了提升单条 SQL 的性能,我们给单台数据库服务器分了 256 张表,这种情况下,以前的一个连接+一条 SQL 的状态会突然增加到 256 个连接和 256 条 SQL,如果我们不加以限制的话,可以说协程+分表一启动,你就一定会收到海量的`Too many connections`报错。我的解决方法是,在 gorm 初始化的时候,给他设定一个“单线程最大连接数”: ```go dbdb0, _ := _db0.DB() dbdb0.SetMaxIdleConns(1) dbdb0.SetMaxOpenConns(100) dbdb0.SetConnMaxLifetime(time.Hour) ``` 根据我的经验,100 个够用了,再大的话,你的 TCP 端口就要不够用了。 #### 域名黑名单 我们都知道,内容农场是一种专门钻搜索引擎空子的垃圾内容生产者,爬虫很难判断哪些网站是内容农场,但是人一点进去就能判断出来。而这些域名的内部链接做的又特别好,这就导致我们需要手动给一些恶心的内容农场域名加黑名单。我们把爬到的每个域名下的 URL 数量统计一下,搞一个动态的排名,就能很容易发现头部的内容农场域名了。 #### 复杂的失败处理策略 > 生产代码和教学代码最大的区别就是成吨的错误处理!—— John·Lui(作者自己) 如果你真的要搞一个涵盖数亿页面的可以用的搜索引擎,你会碰到各种各样的奇葩失败,这些失败都需要拿出特别的处理策略,下面我分享一下我遇到过的问题和我的处理策略。 1. 单页面超时非常重要:如果你想尽可能地在一段时间内爬到尽量多的页面的话,缩短你 curl 的超时时间非常重要,经过摸索,我把这个时间设定到了 4 秒,既能爬到绝大多数网页,也不会浪费时间在一些根本就无法响应的 URL 上。 2. 单个 URL 错误达到一定数量以后,需要直接拉黑,不然一段时间后,你的爬虫整天就只爬那些被无数次爬取失败的 URL 上啦,一个新页面也爬不到。这个次数我设定的是 3 次。 3. 如果某个 URL 返回的 HTML 无法被解析,果断放弃,没必要花费额外资源重新爬。 4. 由于我们的数据流已经是三级火箭形态,所以在各种地方加上“动态锁”就很必要,因为很多时候我们需要手动让其他级火箭发动机暂停运行,手动检修某一级发动机。我一般拿 MySQL 来做这件事,创建一个名为`kvstores`的表,只有 key value 两个字段,需要的时候我会手动修改特定 key 对应的 value 值,让某一级发动机暂停一下。 5. 由于 curl 的结果具有不确定性,务必需要保证任何情况下,都要给 channel 返回信号量,不然你的整个应用会直接卡死。 6. 一个页面内经常会有同一个超链接重复出现,在内存里保存已经见过的 URL 并跳过重复值可以显著节约时间。 7. 我建了一个 MySQL 表来存储我手动插入的黑名单域名,这个非常好用,可以在爬虫持续运行的时候随时“止损”,停止对黑名单域名的新增爬取。 至此,我们的爬虫终于构建完成了。 ### 爬虫运行架构图 现在我们的爬虫运行架构图应该是下面这样的: ![whiteboard_exported_image](https://qn.lvwenhan.com/2023-07-06-whiteboard_exported_image.png) 爬虫搞完了,让我们进入第二大部分。 ## 第二步,使用倒排索引生成字典 那个~~男人~~一听就很牛逼的词出现了:倒排索引。 对于没搞过倒排索引的人来说,这个词听起来和“生态化反”一样牛逼,其实它非常简单,简单程度堪比 HTTP 协议。 ### 倒排索引到底是什么 下面这个例子可以解释倒排索引是个什么东西: 1. 
我们有一个表 titles,含有两个字段,ID 和 text,假设这个表有 100 行数据,其中第一行 text 为“爬虫工作流程”,第二行为“制造真正的生产级爬虫” 2. 我们对这两行文本进行分词,第一行可以得到“爬虫”、“工作”、“流程”三个词,第二行可以得到“制造”、“真正的”、“生产级”、“爬虫”四个词 3. 我们把顺序颠倒过来,以词为 key,以①`titles.id` ②`,` ③`这个词在 text 中的位置` 这三个元素拼接在一起为一个`值`,不同 text 生成的`值`之间以 - 作为间隔,对数据进行“反向索引”,可以得到: 1. 爬虫: 1,0-2,8 2. 工作:1,2 3. 流程:1,4 4. 制造:2,0 5. 真正的:2,2 6. 生产级:2,5 倒排索引完成了!就是这么简单。说白了,就是把所有内容都分词出来,再反向给每个词标记出“他出现在哪个文本的哪个位置”,没了,就是这么简单。下面是我生成的字典中,“辰玺”这个词的字典值: ```text 110,85,1,195653,7101-66,111,1,195653,7101- ``` 你问为什么我不找个常见的词?因为随便一个常见的词,它的字典长度都是以 MB 为单位的,根本没法放出来... #### 还有一个牛逼的词,最小完美哈希,可以用来排布字典数据,加快搜索速度,感兴趣的同学可以自行学习 ### 生成倒排索引数据 理解了倒排索引是什么以后,我们就可以着手把我们爬到的 HTML 处理成倒排索引了。 我使用`yanyiwu/gojieba`这个库来调用结巴分词,按照以下步骤对我爬到的每一个 HTML 文本进行分词并归类: 1. 分词,然后循环处理这些词: 2. 统计词频:这个词在该 HTML 中出现的次数 3. 记录下这个词在该 HTML 中每一次出现的位置,从 0 开始算 4. 计算该 HTML 的总长度,搜索算法需要 5. 按照一定格式,组装成倒排索引值,形式如下: ```go // 分表的顺序,例如 0f 转为十进制为 15 strconv.Itoa(i) + "," + // pages.id 该 URL 的主键 ID strconv.Itoa(int(pages.ID)) + "," + // 词频:这个词在该 HTML 中出现的次数 strconv.Itoa(v.count) + "," + // 该 HTML 的总长度,BM25 算法需要 strconv.Itoa(textLength) + "," + // 这个词出现的每一个位置,用逗号隔开,可能有多个 strings.Join(v.positions, ",") + // 不同 page 之间的间隔符 "-" ``` 我们按照这个规则,把所有的 HTML 进行倒排索引,并且把生成的索引值拼接在一起,存入 MySQL 即可。 ### 使用协程 + Redis 大幅提升词典生成速度 不知道大家感受到了没有,词典的生成是一个比爬虫高几个数量级的 CPU 消耗大户,一个 HTML 动辄几千个词,如果你要对数亿个 HTML 进行倒排索引,需要的计算量是非常惊人的。我爬到了 3600 万个页面,但是只处理了不到 800 万个页面的倒排索引,因为我的计算资源也有限... 并且,把词典的内容存到 MySQL 里难度也很大,因为一些常见词的倒排索引会巨长,例如“没有”这个词,真的是到处都有它。那该怎么做性能优化呢?还是我们的老朋友,协程和 Redis。 #### 协程分词 两个 HTML 的分词工作之间完全没有交集,非常适合拿协程来跑。 但是,MySQL 举手了:我顶不住。所以协程的好朋友 Redis 也来了。 #### 使用 Redis 做为词典数据的中转站 我们在 Redis 中针对每一个词生成一个 List,把倒排出来的索引插入到尾部: ```go db.Rdb10.RPush(db.Ctx, word, appendSrting) ``` #### 使用协程从 Redis 搬运数据到 MySQL 中 你没看错,这个地方也需要使用协程,因为数据量实在是太大了,一个线程循环跑会非常慢。经过我的不断尝试,我发现每次转移 2000 个词,对 Redis 的负载比较能够接受,E5-V4 的 CPU 单核能够跑满,带宽大概 400Mbps。 从 Redis 到 MySQL 的高性能搬运方法如下: 1. 随机获取一个 key 2. 判断该 key 的长度,只有大于等于 2 的进入下一步 3. 把最后一个索引值留下,前面的元素一个一个`LPop`(弹出头部)出来,拼接在一起 4. 
汇集一批 2000 个随机词的结果,append 到数据库该词现有索引值的后面 有了协程和 Redis 的协助,分词加倒排索引的速度快了起来,但是如果你选择一个词一个词地 append 值,你会发现 MySQL 又双叒叕变的超慢,又要优化 MySQL 了!🙊 ### 事务的妙用:MySQL 高速批量插入 由于需要往磁盘里写东西,所以只要是一个一个 update,怎么优化都会很慢,那有没有一次性 update 多行数据的方法呢?有!那就是事务: ```go tx.Exec(`START TRANSACTION`) // 需要批量执行的 update 语句 for w, s := range needUpdate { tx.Exec(`UPDATE word_dics SET positions = concat(ifnull(positions,''), ?) where name = ?`, s, w) } tx.Exec(`COMMIT`) ``` 这么操作,字典写入速度一下子起来了。但是,每次执行 2000 条 update 语句对磁盘的要求非常高,我进行这个操作的时候,可以把磁盘写入速度瞬间提升到 1.5GB/S,如果你的数据库存储不够快,可以减少语句数量。 ### 世界的参差:无意义的词 这个世界上的东西并不都是有用的,一个 HTML 中的字符也是如此。 首先,一般不建议索引整个 HTML,而是把他用 DOM 库处理一下,提取出文本内容,再进行索引。 其次,即便是你已经过滤掉了所有的 html 标签、css、js 代码等,还是有些词频繁地出现:它们出现的频率如此的高,以至于反而失去了作为搜索词的价值。这个时候,我们就需要把他们狠狠地拉黑,不处理他们的倒排索引。我使用的黑名单词如下,表名为`word_black_list`,只有两个字段 id、word,需要的自取: ```ruby INSERT INTO `word_black_list` (`id`, `word`) VALUES (1, 'px'), (2, '20'), (3, '('), (4, ')'), (5, ','), (6, '.'), (7, '-'), (8, '/'), (9, ':'), (10, 'var'), (11, '的'), (12, 'com'), (13, ';'), (14, '['), (15, ']'), (16, '{'), (17, '}'), (18, '\''), (19, '\"'), (20, '_'), (21, '?'), (22, 'function'), (23, 'document'), (24, '|'), (25, '='), (26, 'html'), (27, '内容'), (28, '0'), (29, '1'), (30, '3'), (31, 'https'), (32, 'http'), (33, '2'), (34, '!'), (35, 'window'), (36, 'if'), (37, '“'), (38, '”'), (39, '。'), (40, 'src'), (41, '中'), (42, '了'), (43, '6'), (44, '。'), (45, '<'), (46, '>'), (47, '联系'), (48, '号'), (49, 'getElementsByTagName'), (50, '5'), (51, '、'), (52, 'script'), (53, 'js'); ``` 至此,字典的处理告一段落,下面让我们一起 Just 搜 it! ## 第三步,使用 BM25 算法给出搜索结果 网上关于 BM25 算法的文章是不是看起来都有点懵?别担心,看完下面这段文字,我保证你能自己写出来这个算法的具体实现,这种有具体文档的工作是最好做的了,比前面的性能优化简单多了。 ### 简单介绍一下 BM25 算法 BM25 算法是现代搜索引擎的基础,它可以很好地反映一个词和一堆文本的相关性。它拥有不少独特的设计思想,我们下面会详细解释。 这个算法第一次被生产系统使用是在 1980 年代的伦敦城市大学,在一个名为 Okapi 的信息检索系统中被实现出来,而原型算法来自 1970 年代 Stephen E. 
Robertson、Karen Spärck Jones 和他们的同伴开发的概率检索框架。所以这个算法也叫 Okapi BM25,这里的 BM 代表的是`best matching`(最佳匹配),非常实在,和比亚迪的“美梦成真”有的一拼(Build Your Dreams)😂 ### 详细讲解 BM25 算法数学表达式的含义 ![](https://qn.lvwenhan.com/2023-06-30-16880554899557.jpg) 我简单描述一下这个算法的含义。 首先,假设我们有 100 个页面,并且已经对他们分词,并全部生成了倒排索引。此时,我们需要搜索这句话“BM25 算法的数学描述”,我们就需要按照以下步骤来计算: 1. 对“BM25 算法的数学描述”进行分词,得到“BM25”、“算法”、“的”、“数学”、“描述”五个词 2. 拿出这五个词的全部字典信息,假设包含这五个词的页面一共有 50 个 3. 逐个计算这五个词和这 50 个页面的`相关性权重`和`相关性得分`的乘积(当然,不是每个词都出现在了这 50 个网页中,有多少算多少) 4. 把这 50 页面的分数分别求和,再倒序排列,即可以获得“BM25 算法的数学描述”这句话在这 100 个页面中的搜索结果 `相关性权重`和`相关性得分`名字相似,别搞混了,它们的具体定义如下: #### 某个词和包含它的某个页面的“相关性权重” ![](https://qn.lvwenhan.com/2023-06-30-16880562026300.jpg) 上图中的`Wi`指代的就是相关性权重,最常用的是`TF-IDF`算法中的`IDF`权重计算法: ![](https://qn.lvwenhan.com/2023-06-30-16880562733047.jpg) 这里的 N 指的是页面总数,就是你已经加入字典的页面数量,需要动态扫描 MySQL 字典,对我来说就是 784 万。而`n(Qi)`就是这个词的字典长度,就是含有这个词的页面有多少个,就是我们字典值中`-`出现的次数。 这个参数的现实意义是:如果一个词在很多页面里面都出现了,那说明这个词不重要,例如百分百空手接白刃的“的”字,哪个页面都有,说明这个词不准确,进而它就不重要。 词以稀为贵。 我的代码实现如下: ```go // 页面总数 db.DbInstance0.Raw("select count(*) from pages_0f where dic_done = 1").Scan(&N) N *= 256 // 字典的值中`-`出现的次数 NQi := len(partsArr) // 得出相关性权重 IDF := math.Log10((float64(N-NQi) + 0.5) / (float64(NQi) + 0.5)) ``` #### 某个词和包含它的某个页面的“相关性得分” ![](https://qn.lvwenhan.com/2023-06-30-16880570104402.jpg) 这个表达式看起来是不是很复杂,但是它的复杂度是为了处理查询语句里面某一个关键词出现了多次的情况,例如“八百标兵奔北坡,炮兵并排北边跑。炮兵怕把标兵碰,标兵怕碰炮兵炮。”,“炮兵”这个词出现了 3 次。为了能快速实现一个能用的搜索引擎,我们放弃支持这种情况,然后这个看起来就刺激的表达式就可以简化成下面这种形式: ![](https://qn.lvwenhan.com/2023-06-30-16880571028529.jpg) 需要注意的是,这里面的大写的 K 依然是上面那个略微复杂的样式。我们取 k1 为 2,b 为 0.75,页面(文档)平均长度我自己跑了一个,13214,你们可以用我这个数,也可以自己跑一个用。 我的代码实现如下: ```go // 使用 - 切分后的值,为此页面的字典值,形式为: // 110,85,1,195653,7101 ints := strings.Split(p, ",") // 这个词在这个页面中出现总次数 Fi, err := strconv.Atoi(ints[2]) // 这个页面的长度 Dj, _ := strconv.Atoi(ints[3]) k1 := 2.0 b := 0.75 // 页面平均长度 avgDocLength := 13214.0 // 得到相关性得分 RQiDj := (float64(Fi) * (k1 + 1)) / (float64(Fi) + k1*(1-b+b*(float64(Dj)/avgDocLength))) ``` #### 怎么样,是不是比你想象的简单? 
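确实不难。把上面的"相关性权重"和"相关性得分"合起来,简化版 BM25 对"单个词 × 单个页面"的打分可以用几行 Python 复现(k1=2、b=0.75、平均页面长度 13214 均取自正文;`bm25_score` 这个函数名是示意,并非本项目代码):

```python
import math

def bm25_score(N, nqi, fi, dj, k1=2.0, b=0.75, avgdl=13214.0):
    """简化版 BM25(忽略查询词在查询串里重复出现的情况)。
    N: 已索引页面总数;nqi: 含该词的页面数;
    fi: 该词在该页面出现的次数;dj: 该页面长度。"""
    # 相关性权重 IDF:词以稀为贵
    idf = math.log10((N - nqi + 0.5) / (nqi + 0.5))
    # 相关性得分 R(Qi, Dj):词频饱和 + 页面长度归一化
    r = (fi * (k1 + 1)) / (fi + k1 * (1 - b + b * (dj / avgdl)))
    return idf * r

# 同一个词:只出现在 100 个页面里,远比出现在 100 万个页面里"值钱"
rare = bm25_score(N=7_840_000, nqi=100, fi=3, dj=5000)
common = bm25_score(N=7_840_000, nqi=1_000_000, fi=3, dj=5000)
```

对查询语句分词后,把每个词在每个候选页面上的这个乘积求和、按总分倒序,就是最终的搜索结果排序。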
### 检验搜索结果 我在我搞的“翰哥搜索”页面上搜了一下“BM25 算法的数学描述”,结果如下: ![](https://qn.lvwenhan.com/2023-07-04-16884473624549.jpg) 我搜索“住范儿”,结果如下: ![](https://qn.lvwenhan.com/2023-06-30-16880596117906.jpg) 第一个就是我们官网,可以说相当精准了。 看起来效果还不错,要知道这只是在 784 万的网页中搜索的结果哦,如果你有足够的服务器资源,能搞定三亿个页面的爬取、索引和查询的话,效果肯定更加的好。 ### 如何继续提升搜索准确性? 目前我们的简化版 BM25 算法的搜索结果已经达到能用的水平了,还能继续提升搜索准确性吗?还可以: 1. 本文全部是基于分词做的字典,你可以再做一份基于单字的,然后把单字的搜索结果排序和分词的搜索结果进行结合,搜索结果可以更准。 2. 相似的原理,打造更加合理、更加丰富的分词方式,构造不同倾向的词典,可以提升特定领域的搜索结果表现,例如医学领域、代码领域等。 3. 打造你自己的 PageRank 技术,从 URL 之间关系的角度,给单个 URL 的价值进行打分,并将这个价值分数放进搜索结果的排序参数之中。 4. 引入 proximity 相似性计算,不仅考虑精确匹配的关键词,还要考虑到含义相近的关键词的搜索结果。 5. 特殊查询的处理:修正用户可能的输入错误,处理中文独特的“拼音匹配”需求等。 ### 参考资料 1. 【NLP】非监督文本匹配算法——BM25 https://zhuanlan.zhihu.com/p/499906089 2. 《自制搜索引擎》—— [日]山田浩之、末永匡 文章结束了,你学废了吗?欢迎到下列位置留下你的评论: 1. Github:https://github.com/johnlui/DIY-Search-Engine 2. 博客:https://lvwenhan.com/tech-epic/509.html 【全文完】 <hr> # 本项目运行方法 首先,给自己准备一杯咖啡。 1. 把本项目下载到本地 2. 编译:`go build -o ese *.go` 3. 修改配置文件:`cp .env.example .env`,然后把里面的数据库和 Redis 配置改成你的 4. 执行`./ese art init`创建数据库 5. 手动插入一个真实的 URL 到 pages_00 表中,只需要填充 url 和 host 两个字段 6. 执行`./ese`,静待好事发生 ☕️ 过一段时间,等字典数据表`word_dics`里面填充了数据之后,打开[http://127.0.0.1:10086](http://127.0.0.1:10086),尝试搜一下吧!🔍 #### 更多项目运行信息,请见 [wiki](https://github.com/johnlui/DIY-Search-Engine/wiki) #### 网页直接阅读:https://lvwenhan.com/tech-epic/509.html ### 作者信息: 1. 姓名:吕文翰 2. GitHub:[johnlui](https://github.com/johnlui) 3. 职位:住范儿 CTO ![公众号](https://lvwenhan.com/content/uploadfile/202301/79c41673579170.jpg) ### 文章版权声明 本文权归属于[吕文翰](https://github.com/johnlui),采用 [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.zh-Hans) 协议开源,供 GitHub 平台用户免费阅读。 <a rel="license" href="https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode.zh-Hans"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-nd/4.0/88x31.png" /></a> ### 代码版权 本项目代码采用 MIT 协议开源。
aigenprotocol/aigen
https://github.com/aigenprotocol/aigen
Aigen's open-source implementation of various tools to create AINFTs effortlessly
<div align="center">
  <img src="https://aigenprotocol.com/static/media/aigen-logo-light.fad5403b0fa280336867e8ea8400db40.svg" />
  <h3> Aigen's open-source tools to create AINFTs effortlessly </h3>
</div>

### create environment variables

#### create a .env file and put these variables

```
ACCOUNT_ADDRESS=0x0000000000000000000000000000000000000000
PRIVATE_KEY=000000000000000000000000000000000000000000000000000000000000000
AINFT_CONTRACT_ADDRESS=0x000000000000000000000000000000000000
PROVIDER_URL=http://0.0.0.0:8545
NFTSTORAGE_TOKEN=<NFTStorage Token>
MODELS_DIR=/Users/apple/aigen
```

### compile & deploy AINFTToken.sol smart contract

The smart contract can be found at contracts->AINFTToken.sol

#### compile

```
npm run compileAINFTTokenContract
```

#### deploy

```
npm run deployAINFTTokenContract
```

this will automatically deploy the smart contract to `PROVIDER_URL`

Note:

* Using Remix IDE, deploy the smart contract to the local Ganache or Goerli testnet.
* It is recommended that you test the smart contract before deploying it to the mainnet.

### commands

#### install python dependencies

```
pip install -r requirements.txt
```

#### extract model weights and create shards in a single command

```
python main.py --action "create_shards" -n "test" -m "<path-to-model.h5>" -no 20
```

provide the model name, model path, and number of AINFTs to create

* For the time being, we only support Keras models

### code

#### extract and save model weights

```
from ai import save_model

save_model(model_name=model_name, model_dir=MODELS_DIR, model_path=model_path)
```

provide the model name and local model path to start extracting weights

#### create shards of model weights

```
from ai import create_shards

create_shards(model_name=model_name, model_dir=MODELS_DIR, no_of_ainfts=no_of_ainfts)
```

provide the model name and number of AINFTs to create. This function will automatically create shards from the model weights

### install node dependencies

```
npm install
# or
yarn
```

### mint ainfts

```
npm run ainft --action="createAINFT" --model_name="Test" --model_dir="/Users/apple/aigen/test"
```

this step will deploy files to NFTStorage and mint AINFTs

### download AINFTs

```
npm run ainft --action=downloadAINFT --model_name=test --model_dir="/Users/apple/aigen/test"
```

this will automatically download and decrypt the content of AINFTs

### merge model shards

```
python main.py --action "merge_shards" --name test
```

this will merge the shards back to recover the original weight files

### load model (keras)

```
python main.py --action "load_model" --name test
```

this will load the model from the merged shards

## License

<a href="LICENSE.rst"><img src="https://img.shields.io/github/license/aigenprotocol/aigen"></a>

This project is licensed under the MIT License - see the [LICENSE](LICENSE.rst) file for details
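The create_shards / merge_shards round trip above boils down to "split a weight array into N contiguous pieces, then concatenate them back in order". A minimal Python sketch of that idea (pure illustration — the real project shards Keras weight files, and these standalone functions are not aigen's actual API):

```python
def create_shards(weights, no_of_shards):
    """Split a flat list of weights into no_of_shards contiguous pieces."""
    size, rem = divmod(len(weights), no_of_shards)
    shards, start = [], 0
    for i in range(no_of_shards):
        end = start + size + (1 if i < rem else 0)  # spread the remainder over the first shards
        shards.append(weights[start:end])
        start = end
    return shards

def merge_shards(shards):
    """Concatenate the shards in order to recover the original weights."""
    return [w for shard in shards for w in shard]

weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
shards = create_shards(weights, 3)      # [[0.1, 0.2, 0.3], [0.4, 0.5], [0.6, 0.7]]
assert merge_shards(shards) == weights  # lossless round trip
```

In the project, each shard becomes the content of one AINFT, and merging the downloaded shards recovers the original weight files.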
efenatuyo/xolo-sniper-recode
https://github.com/efenatuyo/xolo-sniper-recode
decided to fully rewrite the code to make the ugc sniper faster
# xolo-sniper-recode

Decided to fully rewrite the code to make the UGC sniper faster.
grugnoymeme/flipperzero-GUI-wifi-cracker
https://github.com/grugnoymeme/flipperzero-GUI-wifi-cracker
GUI - Analyze WPA/WPA2 handshakes from FlipperZero's captured .pcaps to find out the WiFi Passwords.
# (flipperzero)-GUI-wifi-cracker

Would you like to be able to extract WPA/WPA2 handshakes from FlipperZero's captured .pcap files, analyze them with hashcat, and find out the passwords JUST IN ONE CLICK?

This is the GUI (Graphical User Interface) version of my other script and repo [`flipperzero-CLI-wifi-cracker`](https://github.com/grugnoymeme/flipperzero-CLI-wifi-cracker). I just wanted to make the process as easy as possible, and this is the result.

---

## Extraction of the .pcap file

You can automate the extraction of .pcap files from the Flipper Zero using [@0xchocolate](https://github.com/0xchocolate)'s companion app for [@JustCallMeKoKo](https://github.com/justcallmekoko)'s ESP32 Marauder. Once you've connected the devboard and opened the app, follow these instructions:

```
Menu
Apps
WIFI / GPIO / GPIO EXTRA
[ESP32] WiFi Marauder
Scripts
[+]ADD SCRIPT
< Enter a name for your script >
Save
< Select your script >
[+]EDIT STAGES
[+]ADD STAGE
[+]Deauth
< Select Deauth >
Timeout 1
Save
Back
[+]ADD STAGE
[+]Sniff RAW
< Select Sniff RAW >
Timeout 15 (or 10, maybe also 5 is ok)
Save
Back
Back
[*]SAVE
```

---

# Disclaimer

This tool is not developed by the Flipper Zero staff. Please note that the code you find in this repo is proposed for educational purposes only and should NEVER be used for illegal activities.
kirillkuzin/cyberpunk2077ai
https://github.com/kirillkuzin/cyberpunk2077ai
🤖CyberAI is designed to bridge the world of Cyberpunk 2077 and the power of OpenAI's AI technology.
# CyberAI :robot:

Welcome to the CyberAI project! This plugin for Cyberpunk 2077 enables integration between the video game and the OpenAI API, opening a world of possibilities for enhancing the gameplay experience. With this plugin, you can call OpenAI API methods directly from the scripting level. Plugin development is still in progress. :construction:

![bg](bg.png)

## Installation :wrench:

1. Download the zip from the latest release
2. Move CyberAI.dll and Settings.json to red4ext\plugins\CyberAI
3. Open Settings.json and paste your OpenAI API key and organization id
4. If you want to access the plugin's functions from the CET console, you also need to move CyberAI.reds to r6\scripts\CyberAI

## Usage :computer:

### Redscript

Ask GPT to generate an answer for you:

```
native func ScheduleChatCompletionRequest(chat_id: String, messages: array<array<String>>);

ScheduleChatCompletionRequest("your_custom_id", {{"System", "Speak as ..."}, {"User", "How are you?"}});
ScheduleChatCompletionRequest("your_custom_id", {{"User", "My name is V"}, {"User", "How are you and what's my name?"}});
```

You can collect your request and an answer when it is done:

```
native func GetAnswer(chat_id: String) -> String;
native func GetRequest(chat_id: String) -> String;

LogChannel(n"DEBUG", GetAnswer("your_custom_id"));
LogChannel(n"DEBUG", GetRequest("your_custom_id"));
```

You can iterate through the chat history, or you can get it as a string:

```
native func GetHistory(chat_id: String) -> array<array<String>>;
native func GetHistoryAsString(chat_id: String) -> String;

LogChannel(n"DEBUG", GetHistoryAsString("your_custom_id"));

let history = GetHistory(chat_id);
for completion in history {
  LogChannel(n"DEBUG", "Role: " + ToString(completion[0]) + "\nMessage: " + ToString(completion[1]));
}
```

Flushing a chat:

```
native func FlushChat(chat_id: String);

FlushChat("your_custom_id");
```

You need to put your generated custom string ID in almost all functions.
CyberAI will provide a new chat for every unique ID and keep the chat history. :warning: <b>Whenever you request to chat, all the chat history also sends to OpenAI API.</b> ## Future Plans :rocket: - **Eleven Labs Integration:** We plan to integrate with Eleven Labs to provide text-to-speech functionality, allowing the AI to generate audio responses in addition to text. - **InWorld AI Integration:** We are exploring the possibility of integrating with InWorld AI, a powerful AI engine for creating characters. This could allow for even more dynamic and responsive AI characters in the game. ## Inspiration :bulb: With this plugin, the possibilities are almost limitless. Here are just a few examples of what you can script with it: - **AI-NPC Dialogue:** Use OpenAI's GPT to generate unique dialogue for non-player characters (NPCs), increasing the diversity and richness of in-game interactions. - **Dynamic Plot Generation:** Use OpenAI to generate unique storylines or side quests based on in-game events or player actions. - **Procedural Mission Planning:** Generate procedural missions based on context, NPC data, and player preferences using AI. - **Interactive Environment**: Use AI to generate dynamic responses from the environment, making your exploration of Night City even more immersive. - **Intelligent Enemy Tactics:** AI could control enemy tactics based on the player's strategy and actions, making combat more challenging and unpredictable. Remember, these are just examples, and the only limit is your imagination! ## Credits :link: This project would not be possible without the following projects: - [red4ext-rs](https://github.com/jac3km4/red4ext-rs): A Rust binding for RED4ext, which provides an API to interact with the internals of Cyberpunk 2077. - [async-openai](https://github.com/64bit/async-openai): An asynchronous, modern Rust library for OpenAI API. 
- [OpenAI API](https://openai.com/blog/openai-api): OpenAI offers a general-purpose "text in, text out" interface, making it possible to use language models like GPT-3 in a wide range of applications. ## License :bookmark: This project is licensed under the terms of the MIT License. See the LICENSE file in the project's root directory for more details. Enjoy exploring the new world of Cyberpunk 2077 with the power of AI! :video_game: :joystick: ---
AQF0R/FUCK_AWD_TOOLS
https://github.com/AQF0R/FUCK_AWD_TOOLS
AWD
# 前言

## 欢迎关注微信公众号:朱厌安全团队 感谢支持!

### 一个面向AWD竞赛的工具,目前BUG和待优化点可能会较多,请谅解!!!

欢迎师傅们提出对本工具的宝贵建议,如有BUG欢迎师傅们向我们反馈,我们会第一时间关注!

### 工具也会持续维护,后面会更新新模块,敬请期待!

# 工具介绍

FUCK_AWD 工具 cmd 运行命令:`python fuck_awd_main.py` 即可

如果 CMD 颜色乱码,可以解压 ansi189-bin.zip,运行对应电脑兼容的版本,cmd 命令:`ansicon.exe -i`

工具攻击模块的原理是"以马上马",所以使用时要确认好场景!

利用前提条件:目标服务器没有过滤 system() 函数、有写入权限、目标为 PHP 网站

攻击模块:工具目前支持单一目标/批量目标攻击、批量执行命令、预设置三种类型后门木马提供选择(一句话/不死马/蠕虫马)、执行完毕批量保存执行结果、非自定义/自定义后门存活监测等。

防御模块:支持目录树生成、文件一键备份、文件监控、PHP文件数目检测、PHP危险函数检测、一键PHP文件上WAF等。

# 工具效果

![图片](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/45d8f3c6-fd49-4762-a3d9-f50d2acb72c1)
![图片](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/a31e5939-b471-423f-9283-7ba5e311fe12)
![图片](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/c3bd87db-e46d-44fb-b243-8296509d1768)
![图片](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/5b895f1f-1bb2-49a5-9a4c-db31917de88f)
![图片](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/0c175494-5e27-47e9-ae32-355dcf38f6b1)
![图片](https://github.com/AQF0R/FUCK_AWD_TOOLS/assets/120232326/b0f95d57-26ba-49af-a76a-b2d687f66761)

## 拓展知识

内存马查杀:

1. `ps auxww|grep shell.php` 找到 pid 后杀掉进程就可以,你删掉脚本是起不了作用的,因为 php 执行的时候已经把脚本读进去解释成 opcode 运行了
2. 重启 php 等 web 服务
3. 用一个 ignore_user_abort(true) 脚本,一直竞争写入(断断续续)。usleep 要低于对方不死马设置的值。
4. 创建一个和不死马生成的马一样名字的文件夹。

修改 curl:

- `alias curl='echo fuckoff'` 较低权限
- `chmod -x curl` 较高权限
- /usr/bin 为 curl 路径

apache 日志路径:

- /var/log/apache2/
- /usr/local/apache2/logs
xun082/online-cooperative-edit
https://github.com/xun082/online-cooperative-edit
An online collaborative editor built on WebRTC
### Compiles and hot-reloads for development

```
npm start
```

### Compiles and minifies for production

```
npm run build
```

This is the official base template for [Create Neat](https://github.com/xun082/react-cli). For more information, please refer to:

- [Getting Started](https://github.com/xun082/react-cli) – How to create a new app.
ytdl-org/ytdl-nightly
https://github.com/ytdl-org/ytdl-nightly
Nightly builds for youtube-dl.
[![Build Status](https://github.com/ytdl-org/youtube-dl/workflows/CI/badge.svg)](https://github.com/ytdl-org/youtube-dl/actions?query=workflow%3ACI)

**This repository is for youtube-dl nightly builds.**

youtube-dl - download videos from youtube.com or other video platforms

- [INSTALLATION](#installation)
- [DESCRIPTION](#description)
- [MORE INFORMATION](#more-information)
- [COPYRIGHT](#copyright)

# INSTALLATION

These instructions are specific to this nightly release repository. Refer to the [installation instructions](https://github.com/ytdl-org/youtube-dl#installation) for the main repository (the "main instructions" below) for guidance on

* the types of installation available
* installation methods and command lines to adapt for each installation type.

Find the appropriate build to install in the [Releases](https://github.com/ytdl-org/ytdl-nightly/releases) page here, rather than the URLs mentioned in the main instructions. Replace the download URL in the appropriate command line shown in the main instructions with the URL from the Releases page.

These nightly releases are not available through package managers like _brew_.

# DESCRIPTION

**youtube-dl** is a command-line program to download videos from YouTube.com and a few more sites. It requires the Python interpreter, version 2.6, 2.7, or 3.2+, and it is not platform specific. It should work on your Unix box, on Windows or on macOS. It is released to the public domain, which means you can modify it, redistribute it or use it however you like.

    youtube-dl [OPTIONS] URL [URL...]

# MORE INFORMATION

For **more information**, or to **raise a support issue**, visit [the main site](https://github.com/ytdl-org/youtube-dl). Please contribute any pull requests at the main site as well, unless the change is specific to this nightly repository (eg, workflow improvements).

# COPYRIGHT

youtube-dl is released into the public domain by the copyright holders.

This README file is likewise released into the public domain.
shemmee/TikTok-UI-Clone
https://github.com/shemmee/TikTok-UI-Clone
This is a TikTok UI Clone that replicates the TikTok feed and its elements, including smooth video scrolling. Explore an interactive interface similar to Tiktok and enjoy a seamless browsing experience.
# TikTok UI Clone

The TikTok UI Clone is a web application developed to replicate the user interface of the TikTok app. It is built using React.js, CSS, and JSX, and allows users to browse and view TikTok-style videos in a familiar and interactive interface.

# Features

- Browse and view TikTok-style videos.
- Smooth and responsive video playback.
- User-friendly interface with intuitive navigation.
- Infinite scrolling for seamless video discovery.
- Like, comment, and share videos.
- Follow users and explore personalized content.

# Technologies & Tools Used

- React.js
- CSS
- JSX
- VS Code

# Installation and Usage

To use this TikTok UI Clone, follow these steps:

- Clone the repository or download the source code.
- Open the project in your preferred code editor.
- Run `npm install` to install the necessary dependencies.
- Run `npm start` to start the development server.
- Open your browser and navigate to `http://localhost:3000` to access the app.

# Demo

A live demo of the TikTok UI Clone is available at [LINK TO LIVE DEMO](https://tik-tok-ui-clone-shemmee.vercel.app).

# Credits

The TikTok UI Clone was created by [s-shemmee](https://github.com/s-shemmee).

# License

This project is licensed under the MIT license.
mkirchhof/url
https://github.com/mkirchhof/url
Uncertainty-aware representation learning (URL) benchmark
# URL: A Representation Learning Benchmark for <br> Transferable Uncertainty Estimates

Michael Kirchhof, Bálint Mucsányi, Seong Joon Oh, Enkelejda Kasneci

![URL Benchmark](plots/benchmark.png)

_Representation learning has driven the field to develop pretrained models that generalize and transfer to new datasets. With the rising demand of reliable machine learning and uncertainty quantification, we seek pretrained models that output both an embedding and an uncertainty estimate, even on unseen datasets. To guide the development of such models, we propose the uncertainty-aware representation learning (URL) benchmark. It measures whether the uncertainty predicted by a model reliably reveals the uncertainty of its embedding. URL takes only four lines of code to implement but still has an information-theoretical backbone and correlates with human-perceived uncertainties. We apply URL to study ten large-scale uncertainty quantifiers that were pretrained on ImageNet and transferred to eight downstream datasets. We find that transferable uncertainty quantification is an unsolved open problem, but that it appears to be not at stakes with classical representation learning._

**Link**: [arxiv.org/abs/2307.03810](https://www.arxiv.org/abs/2307.03810)

---

## Installation

**TL;DR:** Create a conda environment with ```conda env create -f requirements.yml```, then [download the datasets](#datasets).

### Conda environment

Long answer: First, install Python 3.8.8 and PyTorch 1.13.0 with a CUDA backend that suits your GPU (in this case, CUDA 11.7)

```
conda install python=3.8.8
conda install pytorch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0 pytorch-cuda=11.7 -c pytorch -c nvidia
```

Then, install the dependencies:

```
pip install matplotlib pyyaml huggingface_hub 'safetensors>=0.2' scipy==1.7.1 argparse==1.4.0 tueplots==0.0.5 wandb==0.13.5 torchmetrics==0.11.3 scikit-learn==0.24.1 pandas==1.2.4
conda install -c pytorch faiss-gpu
```

### Datasets

Now, download all datasets.
The scripts search for them by default under ```./data```. You can adjust this via the arguments ```--data-dir``` for the upstream (ImageNet) dataset, ```--data-dir-downstream``` for the zero-shot downstream and further downstream datasets, and ```--real-labels``` and ```--soft-labels``` for the auxiliary ImageNet-RealH files. If your downstream datasets are spread over multiple directories, consider providing one folder that gives symlinks to them.

Upstream dataset: [ImageNet-1k](https://www.image-net.org/download.php)

ImageNet-RealH files: [raters.npz](https://github.com/google-research/reassessed-imagenet/blob/master/raters.npz) and [real.json](https://github.com/google-research/reassessed-imagenet/blob/master/real.json)

Downstream datasets: [CUB200-2011](https://www.dropbox.com/s/tjhf7fbxw5f9u0q/cub200.tar?dl=0), [CARS196](https://www.dropbox.com/s/zi2o92hzqekbmef/cars196.tar?dl=0), [Stanford Online Products](https://www.dropbox.com/s/fu8dgxulf10hns9/online_products.tar?dl=0)

Further downstream datasets: [CIFAR-10H, Treeversity#1, Turkey, Pig, Benthic](https://doi.org/10.5281/zenodo.7152309)

Please verify that your folder structure for the downstream datasets looks like this (note the folder names for each dataset):

```
cub200
└───images
|   └───001.Black_footed_Albatross
|   │   Black_Footed_Albatross_0001_796111.jpg
|   │   ...
|   ...
cars196
└───images
|   └───BMW 3 Series Wagon 2012
|   │   00039.jpg
|   │   ...
|   ...
online_products
└───images
|   └───bicycle_final
|   │   111085122871_0.jpg
|   ...
|   └───Info_Files
|   │   bicycle.txt
|   │   ...
```

The further downstream datasets should look like this directly after unzipping

```
CIFAR10H/Treeversity#1/Turkey/Pig/Benthic
└───fold1
|   │   182586_00.png
|   ...
└───annotations.json
```

---

## Training and Benchmarking your own Method

Training happens in ```train.py```.
This is adapted from the ```timm``` library, including all its models, to which we added various uncertainty output methods, so that all models have outputs of the form ```class_logits, uncertainties, embeddings = model(input)```. The best starting point to implement your own ideas would be to adapt the uncertainty output methods in ```./timm/models/layers/_uncertainizer.py```, implement losses in ```./timm/loss```, or enhance model architectures in ```./timm/models```.

The URL benchmark is evaluated in ```validate.py```, which is called during training, but can also be used stand-alone if you prefer to train with your own code.

An exemplary call would be

```
train.py --model=resnet50 --loss=elk --inv_temp=28 --unc-module=pred-net --unc_width=1024 --ssl=False --warmup-lr=0.0001 --lr-base=0.001 --sched=cosine --batch-size=128 --accumulation_steps=16 --epochs=32 --seed=1 --eval-metric avg_downstream_auroc_correct --log-wandb=True --data-dir=./data/ImageNet2012 --data-dir-downstream=./data --soft-labels=./data/raters.npz --real-labels=./data/real.json
```

The most important parameters are:

* ```--model``` Which backbone to use. In our paper, we use ```resnet50``` and ```vit_medium_patch16_gap_256```.
* ```--loss``` Which loss to use. Note that some approaches, like MCDropout, use a ```cross-entropy``` loss, but specify other parameters to make them into their own loss. Please refer to the example codes below.
* ```--inv_temp``` The hyperparameter constant the distances in the softmax exponentials are multiplied by. This can also be understood as the inverse of the temperature. Some approaches require further hyperparameters. Please refer to the example codes below.
* ```--unc-module``` How to calculate the uncertainty attached to each embedding. Popular choices are an explicit ```pred-net``` module attached to the model or the ```class-entropy``` of the predicted upstream class label distribution.
These uncertainty modules are implemented as wrappers around the models, so that any combination of model backbone and uncertainty method should work.
* ```--unc_width``` If ```--unc-module=pred-net```, this gives the width of the 3-layer MLP to estimate uncertainties.
* ```--ssl``` Set to ```False``` if you want to learn with supervised labels and to ```True``` if you want to learn from self-supervised contrastive pairs. Note that these result in different data formats, such that not all losses are compatible with all settings. Please refer to the example codes below.
* ```--warmup-lr``` The fixed learning rate to use in the first epoch. Usually lower than the learning rate in later epochs.
* ```--lr-base``` The learning rate in reference to a batchsize of 256. This will be increased/decreased automatically if you use a smaller or bigger total batchsize. The current learning rate will be printed in the log.
* ```--sched``` Which learning rate scheduler to use. In the paper we use ```cosine``` annealing, but you may want to try out ```step```.
* ```--batch-size``` How many samples to process at the same time.
* ```--accumulation_steps``` How many batches to calculate before making one optimizer step. The accumulation steps times the batch size gives the final, effective batchsize. Loss scalers adjust to this automatically.
* ```--epochs``` Number of epochs to train for on the upstream dataset. Since we automatically start from pretrained checkpoints, this is set to ```32``` by default.
* ```--seed``` For final results, we replicate each experiment on the seeds ```1, 2, 3```
* ```--eval-metric``` Which metric to select the best epoch checkpoint by. We use ```avg_downstream_auroc_correct``` for the R-AUROC averaged across all downstream validation sets. These options here are the internal keys of the results dictionaries, as further detailed below.
Keep in mind that an ```eval_``` is prepended to the metric name internally, as we only allow metrics on the validation splits to be used as ```eval-metric```.
* ```--log-wandb``` We recommend setting this to ```True``` to log your results in W&B. Don't forget to log in with your API key.
* ```--data-dir``` Folder where ImageNet, or in general your upstream dataset, is stored.
* ```--data-dir-downstream``` Folder where **all** of CUB200, CARS196, SOP, CIFAR10H, ..., are stored, or whichever downstream and further downstream datasets you use.
* ```--soft-labels``` and ```--real-labels``` Links to your ImageNet-RealH ```raters.npz``` and ```real.json``` files.

---

## Metrics

**TL;DR:** The R@1 and R-AUROC metrics reported in the paper are internally named ```best_test_avg_downstream_r1``` and ```best_test_avg_downstream_auroc_correct```. During hyperparameter tuning, please use them on their validation sets, i.e., ```best_eval_avg_downstream_r1``` and ```best_eval_avg_downstream_auroc_correct```.

**Long answer:** All metrics are named as follows:

```
<best/current epoch>_<dataset>_<metric>
```

, e.g., ```best_eval_avg_downstream_auroc_correct``` or ```eval_r1```.

* ```<best/current epoch>``` We prepend a ```best_``` if the metric is computed on the best so-far epoch. The best epoch is chosen via ```--eval-metric```, see above. If nothing is prepended, this is just the metric of the current epoch.
* ```<dataset>``` gives which dataset and eval/test split the metric is computed on. Options are
  * ```eval``` The validation set given in ```--dataset_eval```, which is usually just the same as the upstream loader, i.e., ```torch/imagenet``` or ```soft/imagenet```.
  * ```eval_avg_downstream``` The average across the validation sets of all downstream loaders. They are defined via ```--dataset-downstream```.
  * ```test_avg_downstream``` The average across the test sets of all downstream loaders. They are defined via ```--dataset-downstream```.
  * ```furthertest_avg_downstream``` The average across the test sets of all "further" downstream loaders. They are defined via ```--further-dataset-downstream```. This is essentially just a second set of datasets to test on.
  * ```eval_repr/cub```, ```test_repr/cub```, ```furthertest_soft/benthic```, ... The results on each individual dataset. This is done for all datasets in ```--dataset-downstream``` and ```--further-dataset-downstream```.
* ```<metric>``` Which metric we evaluate:
  * ```auroc_correct``` This is the R-AUROC from the paper main text. This is the main metric we focus on.
  * ```r1``` The R@1 from the paper main text. This is the second most important metric.
  * ```top1``` and ```top5``` The top-1 and top-5 accuracy of the classifier. This only makes sense on the upstream dataset (it is output as well for downstream datasets, just for modularity reasons).
  * ```croppedHasBiggerUnc``` How often a cropped version of an image has a higher uncertainty than the original version.
  * ```rcorr_crop_unc``` Rank correlation between how much we cropped an image and how high an uncertainty the model outputs. _Use with care!_ This is only implemented in reference to previous works. This metric only makes sense if all images show a single object, such that the amount of cropping has a roughly equal effect across all images. ```croppedHasBiggerUnc``` fixes this issue and should be preferred.
  * ```rcorr_entropy``` The rank correlation with the entropy of human soft label distributions. Only available for ```soft/...``` datasets.
  * ```min_unc```, ```avg_unc```, and ```max_unc``` The minimum, average, and maximum uncertainties across the dataset.

---

## Reproducing our Implemented Methods

Below are the calls to reproduce the URL benchmark results on all ten baseline approaches, both on ResNet and ViT backbones. They all use the best hyperparameters we found in our searches. All approaches in the paper were repeated on seeds 1, 2, and 3, which we do not show here for brevity.
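The central ```auroc_correct``` (R-AUROC) metric boils down to a standard AUROC of the uncertainty score against correctness. Below is our own minimal Python sketch of that definition, not the repository's code; the orientation (higher uncertainty should flag incorrect predictions) is our reading of the convention:

```python
# Minimal AUROC over uncertainties: the probability that an incorrectly
# handled sample receives a *higher* uncertainty than a correctly handled
# one (ties count half). A definitional sketch, not the repo's code.
def r_auroc(uncertainties, is_correct):
    wrong = [u for u, c in zip(uncertainties, is_correct) if not c]
    right = [u for u, c in zip(uncertainties, is_correct) if c]
    pairs = len(wrong) * len(right)
    score = sum((w > r) + 0.5 * (w == r) for w in wrong for r in right)
    return score / pairs

# Perfectly ranked: every wrong sample is more uncertain than every right one.
print(r_auroc([0.9, 0.8, 0.1, 0.2], [False, False, True, True]))  # -> 1.0
```

A value of 0.5 means the uncertainties carry no information about correctness, which is the failure mode the benchmark probes for.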
### Cross Entropy

```
train.py --inv_temp=31.353232263344143 --loss=cross-entropy --lr-base=0.0027583475549166764 --model=resnet50 --unc-module=class-entropy --unc_start_value=0
```

```
train.py --img-size=256 --inv_temp=60.70635770117517 --loss=cross-entropy --lr-base=0.004954014361368407 --model=vit_medium_patch16_gap_256 --unc-module=class-entropy --unc_start_value=0
```

### InfoNCE

InfoNCE requires ```--ssl=True``` and a lower batchsize, since we forward two self-supervised crops per image.

```
train.py --accumulation_steps=21 --batch-size=96 --inv_temp=15.182859908025058 --loss=infonce --lr-base=0.0004452562693472003 --model=resnet50 --ssl=True --unc-module=embed-norm --unc_start_value=0
```

```
train.py --accumulation_steps=21 --batch-size=96 --img-size=256 --inv_temp=20.82011649785067 --loss=infonce --lr-base=0.006246538808281836 --model=vit_medium_patch16_gap_256 --ssl=True --unc-module=embed-norm --unc_start_value=0
```

### MCInfoNCE

MCInfoNCE requires ```--ssl=True``` and a lower batchsize, since we forward two self-supervised crops per image. The MC sampling MCInfoNCE adds over InfoNCE did not significantly impact runtime or memory usage.
```
train.py --accumulation_steps=21 --batch-size=96 --inv_temp=52.43117045513681 --loss=mcinfonce --lr-base=2.384205225724591e-05 --model=resnet50 --ssl=True --unc-module=pred-net --unc_start_value=0.001 --warmup-lr=3.487706876306753e-05
```

```
train.py --accumulation_steps=21 --batch-size=96 --img-size=256 --inv_temp=50.27568453131382 --loss=mcinfonce --lr-base=0.0031866603949435874 --model=vit_medium_patch16_gap_256 --ssl=True --unc-module=pred-net --unc_start_value=0.001
```

### Expected Likelihood Kernel (ELK)

```
train.py --inv_temp=27.685357549319253 --loss=elk --lr-base=0.008324452068209802 --model=resnet50 --unc-module=pred-net --unc_start_value=0
```

```
train.py --img-size=256 --inv_temp=56.77356863558765 --loss=elk --lr-base=0.009041687325778511 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0
```

### Non-isotropic von Mises Fisher (nivMF)

```
train.py --inv_temp=10.896111351193488 --loss=nivmf --lr-base=0.00014942909398367403 --model=resnet50 --unc-module=pred-net --unc_start_value=0.001
```

```
train.py --img-size=256 --inv_temp=31.353232263344143 --loss=nivmf --lr-base=0.0027583475549166764 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0.001
```

### Hedged Instance Embeddings (HIB)

HIB has an additional hyperparameter ```--hib_add_const``` to shift its sigmoid. HIB requires lower batchsizes to prevent running out of VRAM.
```
train.py --accumulation_steps=21 --batch-size=96 --hib_add_const=2.043464396656407 --inv_temp=26.850376086478832 --loss=hib --lr-base=5.606607236666466e-05 --model=resnet50 --unc-module=pred-net --unc_start_value=0 --warmup-lr=2.2864937540918197e-06
```

```
train.py --accumulation_steps=43 --batch-size=48 --hib_add_const=-5.360730528719454 --img-size=256 --inv_temp=13.955844954616405 --loss=hib --lr-base=0.0005920448270870512 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0
```

### Heteroscedastic Classifiers (HET-XL)

HET-XL uses several hyperparameters, see the args in ```train.py```, most importantly the ```--rank_V``` of the covariance matrix and ```--c-mult```. HET-XL uses a standard cross-entropy loss, but a modified architecture, which you call via the ```--model``` argument. We've implemented this only for ResNet 50 and ViT Medium. It can also use either its covariance determinant or class entropy as ```--unc-module```. In our experiments, the latter outperformed the former.

```
train.py --c-mult=0.011311824684149863 --inv_temp=28.764754827923134 --loss=cross-entropy --lr-base=0.00030257136041070065 --model=resnet50hetxl --rank_V=1 --unc-module=class-entropy --unc_start_value=0
```

```
train.py --c-mult=0.011586882497402008 --img-size=256 --inv_temp=21.601079237861356 --loss=cross-entropy --lr-base=0.00012722151293115814 --model=vit_medium_patch16_gap_256hetxl --rank_V=1 --unc-module=hetxl-det --unc_start_value=0
```

### Direct Risk Prediction (Riskpred)

Riskpred uses the ```--lambda-value``` hyperparameter to balance its cross entropy and uncertainty prediction loss.
```
train.py --inv_temp=27.538650119804444 --lambda-value=0.04137484664752506 --loss=riskpred --lr-base=0.00907673293373138 --model=resnet50 --unc-module=pred-net --unc_start_value=0
```

```
train.py --img-size=256 --inv_temp=29.83516046330469 --lambda-value=0.011424752423322174 --loss=riskpred --lr-base=0.0026590263551453507 --model=vit_medium_patch16_gap_256 --unc-module=pred-net --unc_start_value=0.001
```

### MCDropout

Specify the number of MC samples to take via ```--num-heads``` and the dropout rate via ```--drop```.

```
train.py --drop=0.08702220252645132 --inv_temp=29.31590841184109 --loss=cross-entropy --lr-base=0.00016199535513680024 --model=resnet50dropout --unc-module=jsd --unc_start_value=0
```

```
train.py --drop=0.1334044009405148 --img-size=256 --inv_temp=57.13603169495254 --loss=cross-entropy --lr-base=0.0027583475549166764 --model=vit_medium_patch16_gap_256dropout --unc-module=class-entropy --unc_start_value=0
```

### Ensemble

Specify the number of heads via ```--num-heads```. This increases memory and computation usage.

```
train.py --inv_temp=29.89825063351814 --loss=cross-entropy --lr-base=0.004405890102835956 --model=resnet50 --num-heads=10 --unc-module=class-entropy --unc_start_value=0
```

```
train.py --img-size=256 --inv_temp=54.435826404570726 --loss=cross-entropy --lr-base=0.004944771531139904 --model=vit_medium_patch16_gap_256 --num-heads=10 --unc-module=class-entropy --unc_start_value=0
```

### Spectral-normalized Neural Gaussian Processes (SNGP/GP)

SNGP has multiple hyperparameters. Our implementation follows the defaults of the original paper. Most importantly, ```--use-spec-norm``` controls whether to use SNGP or drop the SN and only use GP. Like HET-XL, SNGP is called via a modified model architecture and otherwise uses a standard cross entropy loss.
```
train.py --gp-cov-discount-factor=-1 --gp-input-normalization=True --loss=cross-entropy --lr-base=0.003935036929170965 --model=resnet50sngp --spec-norm-bound=3.0034958778109893 --unc-module=class-entropy --unc_start_value=0 --use-spec-norm=True
```

```
train.py --gp-cov-discount-factor=0.999 --gp-input-normalization=True --img-size=256 --loss=cross-entropy --lr-base=0.0002973866135608272 --model=vit_medium_patch16_gap_256sngp --spec-norm-bound=2.0072013733952883 --unc-module=class-entropy --unc_start_value=0 --use-spec-norm=False
```

---

## Licenses

### Code

This repo is largely based on [timm](https://github.com/huggingface/pytorch-image-models) (Apache 2.0), with some dataloaders from [Revisiting Deep Metric Learning](https://github.com/Confusezius/Revisiting_Deep_Metric_Learning_PyTorch) (MIT License), and some methods from [Probabilistic Contrastive Learning](https://github.com/mkirchhof/Probabilistic_Contrastive_Learning) (MIT License). Several further methods are (re-)implemented by ourselves. Overall, this repo is thus under an Apache 2.0 License. That said, it is your responsibility to ensure you comply with the licenses here and the conditions of any dependent licenses. Where applicable, the sources/references for various components are linked in docstrings.

### Pretrained Weights

So far all of the pretrained weights available here are pretrained on ImageNet, with a select few that have some additional pretraining (see the extra note below). ImageNet was released for non-commercial research purposes only (https://image-net.org/download). It's not clear what the implications of that are for the use of pretrained weights from that dataset. Any models I have trained with ImageNet are done for research purposes and one should assume that the original dataset license applies to the weights. It's best to seek legal advice if you intend to use the pretrained weights in a commercial product.
#### Pretrained on more than ImageNet

Several weights included or referenced here were pretrained with proprietary datasets that I do not have access to. These include the Facebook WSL, SSL, SWSL ResNe(Xt) and the Google Noisy Student EfficientNet models. The Facebook models have an explicit non-commercial license (CC-BY-NC 4.0, https://github.com/facebookresearch/semi-supervised-ImageNet1K-models, https://github.com/facebookresearch/WSL-Images). The Google models do not appear to have any restriction beyond the Apache 2.0 license (and ImageNet concerns). In either case, you should contact Facebook or Google with any questions.

---

## Citing

```
@article{kirchhof2023url,
  title={URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates},
  author={Michael Kirchhof and Bálint Mucsányi and Seong Joon Oh and Enkelejda Kasneci},
  journal={arXiv preprint arXiv:2307.03810},
  year={2023}
}
```

If you use the benchmark, please also cite the datasets.
oddfar/campus-imaotai
https://github.com/oddfar/campus-imaotai
Automatic daily reservation for the i-Maotai (i茅台) app, with one-click Docker deployment
<p align="center"><a href="https://oddfar.com/" target="_blank" rel="noopener noreferrer"><img width="180" src="https://note.oddfar.com/img/web.png" alt="logo"></a></p>

<p align="center">
  <a href="https://github.com/oddfar/campus-imaotai/stargazers"><img src="https://img.shields.io/github/stars/oddfar/campus-imaotai.svg"></a>
  <a href="https://github.com/oddfar/campus-imaotai/blob/master/LICENSE"><img src="https://img.shields.io/github/license/oddfar/campus-imaotai.svg"></a>
</p>

<p align="center"> Automatic daily reservation for the i-Maotai (i茅台) app, with one-click Docker deployment</p>

<h2 align="center">Campus-imaotai</h2>

[Notes repository](https://github.com/oddfar/notes) | [My blog](https://oddfar.com)

## About the Project

The software reserves Moutai on the i-Maotai app automatically every day. You can add multiple users, and it can target either the store with the largest stock in your city or a store near your location.

Batch reservations start at 9:05 every morning, and the results are pushed via [pushplus](https://www.pushplus.plus/).

This project is built with **Campus**: <https://github.com/oddfar/campus>

## Usage

- Deploying the project: [click here](https://github.com/oddfar/campus-imaotai/wiki/%E9%A1%B9%E7%9B%AE%E4%BD%BF%E7%94%A8#%E9%83%A8%E7%BD%B2%E9%A1%B9%E7%9B%AE)
- Using the project: [click here](https://github.com/oddfar/campus-imaotai/wiki/%E9%A1%B9%E7%9B%AE%E4%BD%BF%E7%94%A8#%E9%A1%B9%E7%9B%AE%E4%BD%BF%E7%94%A8)
- FAQ: [click here](https://github.com/oddfar/campus-imaotai/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)

If you run into a problem, please read the docs and check whether the same problem already exists in the [issues](https://github.com/oddfar/campus-imaotai/issues)!
If it does not, open an [issue](https://github.com/oddfar/campus-imaotai/issues) with a detailed description of the error.

If you cannot deploy with Docker, you can use the desktop client instead: <https://github.com/lisongkun/hygge-imaotai>

## Contributing

If you have good ideas, or have found and fixed a **bug**, you are welcome to submit a **Pull Request** and contribute.

Open your pull request against the master branch and wait for the author to merge it.

## Screenshots

| i-Maotai reservation | |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| ![image-20230707144241399](.github/image-20230707144241399.png) | ![image-20230707144404638](.github/image-20230707144404638.png) |
| | |
| ![image-20230707144703842](.github/image-20230707144703842.png) | ![image-20230707145525709](.github/image-20230707145525709.png) |

## Related Projects

yize8888-maotai: https://github.com/yize8888/maotai

## Disclaimer

- All data involved in this project is entered by the individuals or organizations using it. This project takes no responsibility for that data, including but not limited to its authenticity, accuracy, or legality. All consequences of using this project are borne entirely by the individual or organization using it and are unrelated to the project's contributors.
- Third-party hardware and software mentioned in this project have no direct or indirect relationship with it. This project only describes the deployment and usage process objectively and does not endorse the use of any third-party hardware or software. All consequences of using third-party hardware or software are borne by the individual or organization using them and are unrelated to this project.
- All content in this project is for learning and research only and must not be used for any purpose that violates the laws, regulations, or rules of any country, region, or organization.
- Any modification based on this project's source code is the independent act of other individuals or organizations, has no direct or indirect relationship with this project, and all resulting consequences are likewise unrelated to this project.
- All individuals and organizations that use this project directly or indirectly should complete their learning and research within 24 hours and promptly delete all of the project's content. If you need the functionality this project provides, develop it yourself.
- This project reserves the right to supplement or change this disclaimer at any time. Direct or indirect use of this project's content is considered acceptance of this special statement.

## Acknowledgements

> [IntelliJ IDEA](https://zh.wikipedia.org/zh-hans/IntelliJ_IDEA) is an IDE for JVM-platform languages that maximizes developer productivity in every respect.

Special thanks to [JetBrains](https://www.jetbrains.com/?from=campus) for granting open-source projects free licenses for [IntelliJ IDEA](https://www.jetbrains.com/idea/?from=campus) and other IDEs.

[<img src=".github/jetbrains-variant.png" width="200"/>](https://www.jetbrains.com/?from=campus)
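The daily 9:05 batch trigger can be sketched by computing how long to wait until the next run. This is an illustrative Python snippet only; the project itself schedules the job inside its Java backend:

```python
# Sketch of the daily 09:05 trigger: given "now", compute the next datetime
# at which the batch reservation would fire. Illustrative only -- not the
# project's actual scheduler.
from datetime import datetime, timedelta

def next_run(now, hour=9, minute=5):
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate
```

A scheduler would sleep `(next_run(datetime.now()) - datetime.now()).total_seconds()` seconds, run the batch, and repeat.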
raghavan/PdfGptIndexer
https://github.com/raghavan/PdfGptIndexer
An efficient tool for indexing and searching PDF text data using OpenAI API and FAISS (Facebook AI Similarity Search) index, designed for rapid information retrieval and superior search accuracy.
# PdfGptIndexer

## Description

PdfGptIndexer is an efficient tool for indexing and searching PDF text data using OpenAI APIs and FAISS (Facebook AI Similarity Search). This software is designed for rapid information retrieval and superior search accuracy.

## Libraries Used

1. [Textract](https://github.com/deanmalmgren/textract) - A Python library for extracting text from any document.
2. [Transformers](https://github.com/huggingface/transformers) - A library by Hugging Face providing state-of-the-art general-purpose architectures for Natural Language Understanding (NLU) and Natural Language Generation (NLG).
3. [Langchain](https://python.langchain.com/) - A text processing and embeddings library.
4. [FAISS (Facebook AI Similarity Search)](https://github.com/facebookresearch/faiss) - A library for efficient similarity search and clustering of dense vectors.

## Installing Dependencies

You can install all dependencies by running the following command:

```shell
pip install langchain openai textract transformers faiss-cpu pypdf tiktoken
```

## How It Works

The PdfGptIndexer operates in several stages:

1. It first processes a specified folder of PDF documents, extracting the text and splitting it into manageable chunks using the Hugging Face Transformers library.
2. Each text chunk is then embedded using the default OpenAI embedding model (text-embedding-ada-002) through the LangChain library.
3. These embeddings are stored in a FAISS index, providing a compact and efficient storage method.
4. Finally, a query interface allows you to retrieve relevant information from the indexed data by asking questions. The application fetches and displays the most relevant text chunk.

![Untitled-2023-06-16-1537](https://github.com/raghavan/PdfGptIndexer/assets/131585/2e71dd82-bf4f-44db-b1ae-908cbb465deb)

## Advantages of Storing Embeddings Locally

Storing embeddings locally provides several advantages:

1. Speed: Once the embeddings are stored, retrieval of data is significantly faster as there's no need to compute embeddings in real-time.
2. Offline access: After the initial embedding creation, the data can be accessed offline.
3. Compute Savings: You only need to compute the embeddings once and reuse them, saving computational resources.
4. Scalability: This makes it feasible to work with large datasets that would be otherwise difficult to process in real-time.

## Running the Program

To run the program, you should:

1. Make sure you have installed all dependencies.
2. Clone the repository to your local machine.
3. Navigate to the directory containing the Python script.
4. Replace "<OPENAI_API_KEY>" with your actual OpenAI API key in the script.
5. Finally, run the script with Python.

```shell
python3 pdf_gpt_indexer.py
```

Please ensure that the folders specified in the script for PDF documents and the output text files exist and are accessible. The query interface will start after the embeddings are computed and stored. You can exit the query interface by typing 'exit'.

## Exploring Custom Data with ChatGPT

Check out the post [here](https://devden.raghavan.studio/p/chatgpt-using-your-own-data) for a comprehensive guide on how to utilize ChatGPT with your own custom data.
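The embed-once, search-many flow described above can be sketched with a plain NumPy matrix standing in for the FAISS index. The hash-based "embedding" below is a deterministic placeholder, not OpenAI's model, and the function names are illustrative:

```python
# Toy version of the indexing/search flow: embed each chunk once, stack the
# vectors into an index, and answer queries by nearest-neighbour search.
# NumPy stands in for FAISS; fake_embed() is a placeholder, not OpenAI's API.
import hashlib
import numpy as np

def fake_embed(text, dim=32):
    # Deterministic pseudo-embedding seeded from the text's hash.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

chunks = ["faiss stores dense vectors", "openai embeds text", "pdfs are split into chunks"]
index = np.stack([fake_embed(c) for c in chunks])  # one row per chunk, computed once

def search(query, k=1):
    scores = index @ fake_embed(query)  # cosine similarity (unit vectors)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```

In the real tool, `index` would be a `faiss` index persisted to disk, which is exactly the "compute once, reuse offline" advantage listed above.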
trickest/github-logs
https://github.com/trickest/github-logs
Extracting OSINT Insights from 15TB of GitHub Event Logs
<h1 align="center">GitHub Logs <a href="https://twitter.com/intent/tweet?text=GitHub%20Logs%20-%20Extracting%20OSINT%20Insights%20from%2015TB%20of%20GitHub%20Event%20Logs%0A%0Ahttps%3A%2F%2Fgithub.com%2Ftrickest%2Fgithub-logs"><img src="https://img.shields.io/badge/Tweet--lightgrey?logo=twitter&style=social" alt="Tweet" height="20"/></a></h1>
<h3 align="center">Extracting OSINT Insights from 15TB of GitHub Event Logs</h3>

We've developed 3 Trickest workflows that processed and enriched the entire GitHub Event Logs dataset, giving us some valuable insights. Now, the entire community can benefit from these results.

Did you know that exactly 16 GitHub users have the special `site_admin` status on the public GitHub platform? Are you interested in finding out which users are associated with a specific company using a `grep` command? Or perhaps you want to identify deleted user accounts that could pose risks like impersonation or dependency confusion? Maybe you're curious about the top 10 most popular repositories across GitHub in terms of stars, forks, or watchers. If any of this catches your interest, check out our highlights report: [Uncovering OSINT Insights from 15TB of GitHub Logs](https://trickest.com/reports/uncovering-github-osint-insights/).

<p align="center">
<a href="https://trickest.com/reports/uncovering-github-osint-insights/"><img align="center" src="images/uncovering-github-osint-insights.png" /></a>
</p>

## What's Inside?

The repository consists of CSV files that contain the user and repository details organized into two main folders: `users` and `repos`. Inside each folder, you'll find a main folder that contains the complete dataset: `all_user_info` and `all_repo_info`. We've also included partial datasets for specific criteria like `users_site_admin`, `users_deleted`, `users_company`, `repos_archived`, `repos_deleted`, and `repos_sorted_stars`.
```
github-logs
├── repos
│   ├── all_repo_info
│   │   ├── all_repo_info00000.csv
│   │   ├── all_repo_info00001.csv
│   │   └── ...
│   ├── repos_archived
│   ├── repos_deleted
│   ├── repos_disabled
│   ├── repos_private
│   ├── repos_sorted_forks
│   ├── repos_sorted_stars
│   └── repos_sorted_watchers
└── users
    ├── all_user_info
    ├── users_company
    ├── users_deleted
    ├── users_email
    ├── users_github_star
    ├── users_hireable
    └── users_site_admin
```

If you're looking for inspiration on how to use this data or if you're curious about how we collected it, check out the [insights report](https://trickest.com/reports/uncovering-github-osint-insights/).

## Build your own workflows

We believe in the value of tinkering; cookie-cutter solutions rarely cut it. Sign up for a [Trickest](https://trickest.com) demo to access the workflows we used, customize them, explore many more workflows, or even create your own from scratch!

[<img src="images/banner.png" />](https://trickest-access.paperform.co/)
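As a small illustration of the company lookup mentioned above, the CSV files can be filtered with a few lines of Python instead of `grep`. This is a sketch, not part of the repository; the `company` column name is an assumption — check the headers of the files in `users/all_user_info` for the actual layout.

```python
import csv
import glob

def users_with_company(pattern, paths="users/all_user_info/*.csv"):
    """Yield user rows whose company field contains `pattern` (case-insensitive).
    Assumes a `company` column exists; adjust to the real CSV header."""
    for path in glob.glob(paths):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                if pattern.lower() in (row.get("company") or "").lower():
                    yield row
```

Unlike a plain `grep`, this only matches against the company field, so a company name that happens to appear in a bio or location won't produce false positives.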
HoyoBot/HoyoBot-SDK
https://github.com/HoyoBot/HoyoBot-SDK
A high-performance bot SDK for Miyoushe
<div align="center">

[![简体中文](https://img.shields.io/badge/简体中文-100%25-green?style=flat-square)](https://github.com/HoyoBot/HoyoBot-SDK/blob/main/README.md) [![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg?style=flat-square)](https://github.com/HoyoBot/HoyoBot-SDK/blob/main/LICENSE) [![Maven](https://jitpack.io/v/HoyoBot/HoyoBot-SDK.svg)](https://jitpack.io/#HoyoBot/HoyoBot-SDK)

[English](README.md) | [简体中文](README_CH.md)

</div>

<div align="center">
<img width="160" src="docs/hoyobot-logo.png" alt="logo"></br>

----

# HoyoSDK

HoyoSDK is a cross-platform, high-performance bot library implementing the Miyoushe Villa protocol.

<p>The project takes its name from the last two parts (Mi <b>Hoyo</b>) of <a href = "https://www.mihoyo.com/?page=product">"Mihoyo"</a>, the English name of <a href = "https://www.mihoyo.com/">miHoYo</a>.</p>
<p>It is a project <a href = "https://github.com/HoyoBot/HoyoBot-SDK">(<b>HoyoSDK</b>)</a> built for <a href = "https://www.miyoushe.com/">Miyoushe</a>, miHoYo's community app.</p>
</div>

- HoyoSDK is a cross-platform, high-performance bot library implementing the Miyoushe Villa protocol
- This project is still under development; please wait for a stable release before using it in production
- If you like this project, please give us a star. Community contributions are very welcome

---------

## Features

- Netty-based | High performance | Easy development
- Open source | Cross-platform | Rapid plugin development

---------

## Links

###### Developer documentation

* [Official HoyoBot-SDK docs](https://sdk.catrainbow.me)

###### Downloads

* [Jenkins (CI builds)](https://ci.lanink.cn/job/HoyoBot-SDK/)

###### Feedback

* [Issues/Tickets](https://github.com/HoyoBot/HoyoBot-SDK/issues)

###### Development

* [License (GPLv3)](https://github.com/HoyoBot/HoyoBot-SDK/blob/main/LICENSE)
* [Docs](https://github.com/HoyoBot/HoyoBot-SDK/blob/main/docs/README.md)

###### Official plugins

* [hoyo-sdk-mollyai](https://github.com/HoyoBot/hoyo-sdk-mollyai)

## Installation & Running

Note: this framework requires Java 17 or later.

- Download the latest build `sdk-main-1.0.0-jar-with-dependencies.jar` from the Java CI: https://ci.lanink.cn/job/HoyoBot-SDK/
- (Direct link): [CI](https://ci.lanink.cn/job/HoyoBot-SDK/)
- Put it on your server
- Run it with `java -jar (downloaded file name)`

## Built-in commands

These commands ship with HoyoBot. You can also register custom bot commands from a plugin; see sdk-api to learn how to register a command.

- `version` - show the bot and HoyoSDK-Protocol versions
- `help` - show command help
- `plugins` - list the plugins installed on this bot
- `reload` - hot-reload the bot's plugins

## Building the jar

#### Environment: Kotlin | Java (17)

- `git clone https://github.com/HoyoBot/HoyoBot-SDK.git`
- `cd HoyoBot-SDK`
- `git submodule update --init`
- `./mvnw clean package`

* The built files can be found in the target/ directory.

## Setting up a development environment

- HoyoBot plugins are very easy to develop, which opens up endless possibilities for your bot
- See sdk-api for an example plugin

### GroupId

- `com.github.HoyoBot.HoyoBot-SDK`

### Available repository versions

| ArtifactId | Version |
|:----------:|:-------:|
| sdk-main | beta |
| sdk-main | beta3 |
| sdk-main | beta4 |
| sdk-main | beta5 |
| sdk-main | beta6 |
| sdk-main | beta7 |
| sdk-main | beta8 |

### Gradle:

```gradle
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}

dependencies {
    implementation 'com.github.HoyoBot.HoyoBot-SDK:HoyoBot:beta'
}
```

### Maven:

##### Repository:

```xml
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
```

##### Dependencies:

```xml
<dependencies>
    <dependency>
        <groupId>com.github.HoyoBot.HoyoBot-SDK</groupId>
        <artifactId>ExamplePlugin</artifactId>
        <version>beta5</version>
    </dependency>
</dependencies>
```

## Protocol support

<details>
<summary>Supported protocol features</summary>

**Miyoushe callback events**

- Sending messages
- Sending images
- Fetching member info and member lists
- Fetching villa info and villa lists
- Kicking users
- Replying to messages

</details>

## Bot events

HoyoBot models everything that happens to the bot as events. To develop a plugin, simply register a listener and your plugin code will run when the event fires.

You can find sample code in sdk-api.

### Event list

- `ProxyBotStartEvent` - bot start event
- `ProxyBotStopEvent` - bot stop event
- `ProxyPluginEnableEvent` - plugin load event
- `VillaMemberJoinEvent` - new member joins a villa
- `VillaSendMessageEvent` - villa member chat event

---------
Nicolas-Gatien/MemoryBot
https://github.com/Nicolas-Gatien/MemoryBot
Chatbots with Long Term Memory
## Memory Bot

Memory Bot is an interactive Python-based chatbot that uses the OpenAI API. It remembers past interactions and can recall relevant memories during conversations.

## Requirements

* Python 3.6+
* `openai` Python package (install with `pip install openai`)

## Demo

<a href="https://www.youtube.com/watch?v=ktyEcIsCV7s" target="_blank">
<img src="https://i.ytimg.com/vi/ktyEcIsCV7s/maxresdefault.jpg"></img>
</a>
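The "recall relevant memories" behavior can be sketched without the OpenAI API: store past exchanges and rank them by relevance to the current query. The real app would likely use embeddings for relevance scoring; the word-overlap score below is a simplified stand-in, not the bot's actual code.

```python
def score(query, memory):
    """Crude relevance score: fraction of query words present in the memory."""
    q = set(query.lower().split())
    m = set(memory.lower().split())
    return len(q & m) / len(q) if q else 0.0

def recall(query, memories, top_k=2):
    """Return the top_k stored memories most relevant to the query."""
    ranked = sorted(memories, key=lambda m: score(query, m), reverse=True)
    return ranked[:top_k]

memories = [
    "User's favorite color is green.",
    "User lives in Berlin.",
    "User asked about Python decorators yesterday.",
]
print(recall("what is my favorite color", memories, top_k=1))
```

A chatbot built on this idea prepends the recalled memories to the prompt it sends to the model, so the model can answer with context from earlier conversations.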
AssassinUKG/googleSearcher
https://github.com/AssassinUKG/googleSearcher
A custom Google search (to bypass some limitations on google and VPNs)
# googleSearcher

A custom Google search (to bypass some limitations on Google regarding timeouts, VPNs, etc.)

The tool has been coded to return at most 100 URLs from a Google search. Google only allows 100 results: you can get 100 from pages 1-10, or pages 11-20, etc., but only ever 100 results at a time — a limitation imposed by Google.

> WARNING: you will quickly hit the Google limit and will need a paid account to get more.

> The script has been edited to allow choosing 1-100 results.

Great for OSINT and Google Dorks!

## Install

```bash
git clone https://github.com/AssassinUKG/googleSearcher.git
cd googleSearcher
pip install -r requirements.txt
cd gSearcher
python3 gsearcher.py
```

## Usage

```bash
python3 gsearcher.py -s "filetype:pdf site:tesla.com"
```

```bash
python3 gsearcher.py -s "cats" -p 4
```

```bash
python3 gsearcher.py -s "dogs" -n 30 -p 3
```

### Options

```
usage: gSearch.py [-h] [-s SEARCHTERM] [-n NUMRESULTS] [-p PAGE]

options:
  -h, --help            show this help message and exit
  -s SEARCHTERM, --searchTerm SEARCHTERM
                        Enter a search term!
  -n NUMRESULTS, --numResults NUMRESULTS
                        Enter the number of results to fetch (max 100)
  -p PAGE, --page PAGE  Enter the page number
```

## Screenshots

> filetype dork (pdf)

![image](https://github.com/AssassinUKG/googleSearcher/assets/5285547/e9f3bce8-3481-4a6f-82e6-3401ae72b463)

![image](https://github.com/AssassinUKG/googleSearcher/assets/5285547/d39c662f-4768-41a4-8fba-d31eaf410d80)

**Update 04/07/2023: Added an option for the number of results to help free users**

![image](https://github.com/AssassinUKG/googleSearcher/assets/5285547/76534204-3b00-41f8-8a82-4f864b6d57f8)

---

## Setup (Google stuffs)

You will need a...

* Google API key
* Google Custom Search Engine ID

### Google API Key

Link: https://console.cloud.google.com/

To create your application's API key:

1. Open the Google API Console.
2. Create or select a project.
3. On the Credentials page, get an existing API key or create a new one (Create credentials > API key). You can restrict the key before using it in production by clicking Restrict key.

> Protect your API key by changing the restrictions to only the Search API, or whatever you need.

### Google CSE (Custom Search Engine)

1. Go to https://programmablesearchengine.google.com/controlpanel/all
2. Add a new search engine and hit save after (see image for settings)
![image](https://github.com/AssassinUKG/googleSearcher/assets/5285547/86a3dda2-b104-4741-bad3-dc6659084e9a)
3. Get your search engine ID
![image](https://github.com/AssassinUKG/googleSearcher/assets/5285547/cb664dc2-eb03-417d-8dd3-d3f721f7d9e0)

Now, in the gsearcher.py file, replace the variables with your key and CSE ID.

```python
my_api_key = "Your GOOGLE API KEY"
my_cse_id = "Your Custom Search Engine ID"
```

Now you're ready to get results!

## Changelog

### Update: 04/07/2023:

> Added option for number of results to help free users
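The page numbering described above maps directly onto the Custom Search JSON API's parameters: each request returns at most 10 results, and the 1-based `start` index selects the page (page 1 → `start=1`, page 2 → `start=11`, and so on, up to the 100-result cap). A small sketch of that mapping — this is illustrative, not the actual `gsearcher.py` code:

```python
# Build request parameters for the Google Custom Search JSON API
# (endpoint: https://www.googleapis.com/customsearch/v1).
# Each request returns at most 10 results; `start` is 1-based.

def search_params(query, api_key, cse_id, page=1, num=10):
    if not 1 <= num <= 10:
        raise ValueError("num must be 1-10 per request; loop over pages for more")
    return {
        "q": query,       # the search term / dork
        "key": api_key,   # your Google API key
        "cx": cse_id,     # your Custom Search Engine ID
        "num": num,
        "start": (page - 1) * 10 + 1,
    }
```

Looping `page` from 1 to 10 therefore yields the 100-result maximum the README mentions; anything past `start=91` is rejected by Google.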
midudev/view-transitions-api-books-demo
https://github.com/midudev/view-transitions-api-books-demo
Demo of the View Transitions API
Demo of the View Transitions API
OtterHacker/Conferences
https://github.com/OtterHacker/Conferences
null
# Conferences
vStar925/RDR1-Keyboard-Mouse
https://github.com/vStar925/RDR1-Keyboard-Mouse
Overhaul bringing keyboard and mouse support to Red Dead Redemption (Xenia). Emulates controller and tweaks sensitivity to be more responsive.
![](https://i3.lensdump.com/i/JZBhuK.png)

An overhaul adding emulated controller inputs on KB&M, in addition to replaced button prompts and tweaked sensitivity to feel more like a mouse.

**Works on Red Dead Redemption, Game of the Year Edition, and Undead Nightmare (standalone)**

![](https://i1.lensdump.com/i/JAneMc.jpeg)

## Requirements

<details>
<summary>Click to expand</summary>

- [Xenia Canary](https://github.com/xenia-canary/xenia-canary)
- [ReWASD](https://www.rewasd.com/) Paid software with a 7-day trial. [You must activate a separate 3-day trial for each advanced mapping feature, or pay $25 for a lifetime full license.](https://github.com/vStar925/RDR1-Keyboard-Mouse/issues/3#issuecomment-1620504256)
- [RDR Modding Tools](https://archive.org/details/red-dead-redemption-modding-tools.-7z)

</details>

## Instructions

<details>
<summary>Click to expand</summary>

### 1.) reWASD Setup

*If you haven't already installed reWASD, do that now and run it at least once*

1.) Download and extract the latest release. Inside, open folder 1. Double-click any or all of the configs to add them to reWASD (you can switch between them at any time within reWASD)

2.) [Group your mouse and keyboard in reWASD](https://help.rewasd.com/how-to-remap/group-of-devices.html)

### 2.) Swapping Files

*Make sure you have downloaded and extracted [RDR Modding Tools](https://archive.org/details/red-dead-redemption-modding-tools.-7z) before continuing*

1.) [Follow Sections II and III of this guide to understand the basics of replacing files](https://github.com/vStar925/Red-Dead-Redemption-1-Modding)

2.) Inside *KBM Mod\2.)*, pick which game you are modifying (RDR or Undead Nightmare) and open that folder. Search for all of those files inside rdr2_layer0.rpf with MagicRDR and replace them with their respective files in this folder (right click, replace). Save and exit MagicRDR.

*Note: You do NOT need to repack the extracted iso file after replacing files. You can run the modified game by opening default.xex directly inside Xenia*

### 3.) Hiding the Mouse Cursor (optional)

Since you are using a mouse in Xenia, the cursor will stay on screen while playing. Somehow, I didn't find this all that distracting, but if you're interested in hiding it, follow these steps.

1.) [Download this invisible cursor](http://www.rw-designer.com/cursor-detail/23254) and save it to C:\Windows\Cursors

2.) Press the Windows key, type "cursor", and open the first result. Go to the Pointers tab.

3.) Double-click "Normal select" and replace it with the invisible cursor. Save As... (Xenia, RDR, etc.). Apply.

4.) Select the scheme whenever you want to play RDR with this setup, and switch back to None to return the cursor to normal.

</details>

## Screenshots

<details>
<summary>Click to expand</summary>

![](https://i3.lensdump.com/i/JWNLUA.png)
![](https://i.lensdump.com/i/JAnMzP.jpeg)
![](https://i.lensdump.com/i/JAnDKo.jpeg)

</details>

## [FAQ](https://github.com/vStar925/RDR1-Keyboard-Mouse/blob/main/FAQ.md)
pacman100/peft-codegen-25
https://github.com/pacman100/peft-codegen-25
null
# peft-codegen-25
xishandong/Vue3_web_redbook
https://github.com/xishandong/Vue3_web_redbook
A web page imitating Xiaohongshu's frontend design
# Vue 3 + Element Plus full-stack project

## This repository holds the frontend code

## Backend repository

[Backend repository](https://github.com/xishandong/Django_Server_redbook)

### Usage

```bash
npm update
npm run build
npm run dev
```

This project imitates a page of the Xiaohongshu web client.

### Home page

The home page is a waterfall-style card layout; clicking a card shows an enlarged version of it.

### Profile page

Holds personal details, including collected, published, and liked posts.

### Login

Login and registration use a card style, with complete form validation.

### Publishing

Done — you can upload images, enter details, and generate a preview.

### User information management

Done — posts and user information can be managed.
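The waterfall (masonry) card layout mentioned above is commonly implemented by placing each new card into whichever column is currently shortest. The project does this in Vue; here is just the placement algorithm, sketched language-neutrally:

```python
def masonry_columns(card_heights, n_columns):
    """Assign cards to columns: each card goes to the currently shortest column.
    Returns, per column, the list of card indices placed in it."""
    columns = [[] for _ in range(n_columns)]
    heights = [0] * n_columns
    for i, h in enumerate(card_heights):
        target = heights.index(min(heights))  # shortest column so far
        columns[target].append(i)
        heights[target] += h
    return columns

print(masonry_columns([30, 10, 20, 25], 2))  # → [[0, 3], [1, 2]]
```

Because cards land in the shortest column, column heights stay roughly balanced even when card heights vary, which is what gives the feed its staggered look.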
cr4yfish/nemmy
https://github.com/cr4yfish/nemmy
Nemmy: The Neat Lemmy App for the Web (and phones later on) This project is focused on building an App with the best possible User Experience.
<!-- Improved compatibility of back to top link: See: https://github.com/othneildrew/Best-README-Template/pull/73 --> <a name="readme-top"></a> <!-- *** Thanks for checking out the Best-README-Template. If you have a suggestion *** that would make this better, please fork the repo and create a pull request *** or simply open an issue with the tag "enhancement". *** Don't forget to give the project a star! *** Thanks again! Now go create something AMAZING! :D --> <!-- PROJECT SHIELDS --> <!-- *** I'm using markdown "reference style" links for readability. *** Reference links are enclosed in brackets [ ] instead of parentheses ( ). *** See the bottom of this document for the declaration of the reference variables *** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use. *** https://www.markdownguide.org/basic-syntax/#reference-style-links --> [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] [![Image Build](https://github.com/cr4yfish/nemmy/actions/workflows/docker-build.yml/badge.svg)](https://github.com/cr4yfish/nemmy/actions/workflows/docker-build.yml) <!-- PROJECT LOGO --> <br /> <div align="center"> <a href="https://github.com/cr4yfish/nemmy"> <img src="https://i.imgur.com/OzAB6Y0.png" alt="Logo" width="80" height="80"> </a> <h3 align="center">Nemmy</h3> <p align="center"> The Neat Lemmy App for the Web. This project is focused on building a Lemmy App with the best possible User Experience. <br /> <br /> <a href="https://nemmy.app/">Website</a> · <a href="https://play.google.com/store/apps/details?id=app.nemmy.twa">Google Play Store</a> · <a href="https://lemmy.world/c/nemmy">Lemmy Community</a> . 
<a href="https://github.com/Cr4yfish/Nemmy/issues">Report Bug</a>
    ·
    <a href="https://github.com/Cr4yfish/Nemmy/issues">Request Feature</a>
  </p>
</div>

<div style="width: 100%; display: flex; justify-content: center; gap: 1rem;">
<img src="https://i.imgur.com/OxkGnC4.png" style="height: 500px; width: auto;" >
<img src="https://i.imgur.com/twzhIFG.png" style="height: 500px; width: auto;" >
<img src="https://i.imgur.com/SGJFQuo.png" style="height: 500px; width: auto;" >
<img src="https://i.imgur.com/BljmUFd.png" style="height: 500px; width: auto;" >
<img src="https://i.imgur.com/OFc7Va7.png" style="height: 500px; width: auto;" >
<img src="https://i.imgur.com/8tHc7vx.png" style="height: 500px; width: auto;" >
</div>

<!-- ABOUT THE PROJECT -->
## About The Project

Nemmy is a Progressive Web App (PWA) for Lemmy. Lemmy is kinda like Reddit, but federated so a CEO can't just fuck over everyone.

The goal of Nemmy is to make using Lemmy on multiple devices a seamless experience.

Right now, Nemmy is in very early development; however, feel free to visit the website once in a while to see what's changed. If you like the idea of Nemmy, this is the perfect opportunity for you to influence design decisions by submitting feedback or feature requests. You can do that here in the [Issues Tab](https://github.com/Cr4yfish/Nemmy/issues) or the [Nemmy Community on Lemmy](https://lemmy.world/c/nemmy).

##### Why Nemmy?

There are other good apps for Lemmy out there, even other PWAs like [wefwef.app](https://wefwef.app). What singles out Nemmy is not focusing on a single device (e.g. mobile-only design like wefwef, or desktop-only design like the default Lemmy Web UI) but making the user experience equally good on most devices.

##### What's Lemmy?

The short version is that Lemmy is similar to Reddit. The main difference is that Lemmy is Open-Source and Federated. This means that there is no single huge "Lemmy Server" but many smaller "instances" of Lemmy hosted by different people.
These instances can communicate with each other, so you can follow people on other instances and see their posts and comments.

### Built With

- Next.js
- Lemmy JS-Client
- Tailwind

## Getting started

### Production Setup

Here you can choose between hosting it the conventional way and using the Docker image.

#### Conventional

Follow the development setup steps, but instead of running `npm run dev`, run `npm run build` and then `npm run start`.

#### Docker

There are multiple ways to do this. Here is one example using a Dockerfile.

1. Get the Docker Image link [here](https://github.com/cr4yfish/nemmy/pkgs/container/nemmy)
2. Create the Dockerfile

```docker
FROM ghcr.io/cr4yfish/nemmy:[latest version] # :latest is not supported yet
```

3. Build and run

```
docker build -t nemmy .
```

```
docker run nemmy
```

Nemmy will now run on the local network. To be able to access the server from the internet, you'll have to open port `3000`.

### Development Setup

To set up Nemmy for development, there are only a few steps to take.

1. Download and install Node.js
2. (Optional) Create a fork if you intend to contribute to Nemmy
3. Clone the Repository
4. Install dependencies

```
npm i
```

5. Run the development server

```
npm run dev
```

## Roadmap for Q3 2023

- [x] Basic APIs
- [x] Voting / Interactions
  - [x] Voting on Posts
  - [x] Voting on Comments
- [x] Bookmarking Posts & Comments
- [x] PWA installable
- [x] Community Page
- [x] Sort/Filter/Search
  - [x] Search
  - [x] Filter
  - [x] Sorting
- [x] Explore Section
- [x] Authentication
  - [x] Login
  - [x] Register
  - [x] Support for multiple instances
- [x] Subscribing to Communities
- [ ] Creating Content
  - [x] Creating Comments
  - [x] Creating Posts
  - [x] Creating Communities
  - [ ] Supporting _all_ Lemmy Markdown Features **<- Almost done**
  - [ ] Support Markdown inserts **<- Only UI currently**
- [ ] User Page
  - [ ] Settings **<- WIP**
  - [x] Notifications
  - [x] User Profile
  - [x] User Posts
  - [ ] Bookmarks **<- Almost done**
  - [x] User Comments
- [ ] Chat **<- WIP**
- [x] Sidebar
  - [x] Subscribed Communities
  - [x] Instance Info
- [ ] Edit tools
  - [ ] Edit Comments
  - [ ] Edit Posts
- [ ] Moderation Tools
  - [ ] Delete Posts/Comments
  - [ ] Ban Users
  - [ ] Lock Posts/Comments
  - [ ] Modify Posts (e.g. change title, mark as NSFW)
- [ ] Offline Features **<- Is being worked on on another branch**
  - [ ] Content Cache
  - [ ] Upload when online
  - [ ] Show UI without connection
- [ ] Customization
  - [ ] User Themes
  - [ ] Dark/Light mode per User choice
  - [ ] Compact Style (Like compact.reddit) **<- Design phase**
  - [ ] Classic Style (Like old.reddit)
- [ ] QoL Features
  - [ ] Auto-Register to other instances
  - [ ] Subscribe to the same community on multiple instances (like c/nemmy on Lemmy.world and Lemmy.ml)
  - [ ] Bulk Block/Hide Communities with the same or similar Name
  - [ ] Subscribe/Unsubscribe from Communities while in Search mode (so you don't need to leave the Search page just to unsubscribe/subscribe)
  - [ ] Improved Sorting
- [ ] _Instanceless_ features
  - [x] Create Communities regardless of instance
  - [ ] Create Posts regardless of instance
  - [ ] Unified Account cross-instances

Ideas for Q4+

- Use GPT3.5 to sum up large text bodies
- Improve search indexing of Posts
- Make every single feature of the app 100% usable for blind, deaf, and color-blind people
- Add a keyword ban (e.g. you don't want anything with "Beans" in your feed)
- Improve the PWA experience until it feels like a native app
- Think about making a native app

## FOSS

- Nemmy will always be 100% Open Source, no strings attached.
- Nemmy will always be 100% free to use (I might start adding donation options if the hosting costs rise)

## Security

To make sure Nemmy is safe to use, there are multiple code scanners scanning every commit. Humans can also scan the code since it's 100% open source and [create a security issue](https://github.com/cr4yfish/nemmy/security).
[contributors-shield]: https://img.shields.io/github/contributors/Cr4yfish/Nemmy.svg?style=for-the-badge
[contributors-url]: https://github.com/Cr4yfish/Nemmy/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/Cr4yfish/Nemmy.svg?style=for-the-badge
[forks-url]: https://github.com/Cr4yfish/Nemmy/network/members
[stars-shield]: https://img.shields.io/github/stars/Cr4yfish/Nemmy.svg?style=for-the-badge
[stars-url]: https://github.com/Cr4yfish/Nemmy/stargazers
[issues-shield]: https://img.shields.io/github/issues/Cr4yfish/Nemmy.svg?style=for-the-badge
[issues-url]: https://github.com/Cr4yfish/Nemmy/issues
FryedCoder/DEX-Triangular-Arbitrage-Bot-with-JavaScript-v5
https://github.com/FryedCoder/DEX-Triangular-Arbitrage-Bot-with-JavaScript-v5
Save time and maximize your profits with our JavaScript bot that performs Triangular Arbitrage on DEX's. Open-source and easy to use, start trading smarter.
<img src="9.png" />

<p>JavaScript DEX Triangular Arbitrage Bot v5 is a software program that uses the JavaScript programming language to automate the process of triangular arbitrage on decentralized exchanges (DEXs). Triangular arbitrage is a trading strategy that involves exploiting price discrepancies between three different cryptocurrencies on a single exchange.</p>

<p>The bot is designed to monitor the prices of three cryptocurrencies in real time and execute trades automatically when the conditions for triangular arbitrage are met. It does this by analyzing the prices of the three cryptocurrencies and calculating whether a profitable trade can be made by buying and selling them in a specific order.</p>

<p>For example, if the bot detects that the price of cryptocurrency A is lower on the exchange than the price of cryptocurrency B, which is lower than the price of cryptocurrency C, it will execute a series of trades to take advantage of this price discrepancy. The bot will first buy cryptocurrency A, then sell it for cryptocurrency B, and finally sell cryptocurrency B for cryptocurrency C. If the prices are favorable, this process will result in a profit.</p>

<p>The JavaScript DEX Triangular Arbitrage Bot v5 is a powerful tool that can help traders make profitable trades on decentralized exchanges.</p>

<p>A helpful tester has created a video tutorial, providing step-by-step instructions on how to run the program.</p>

https://vimeo.com/841525168

<p>You can download the zip file of the program here:</p>

https://raw.githubusercontent.com/FryedCoder/DEX-Triangular-Arbitrage-Bot-with-JavaScript-v5/main/DEX-Triangular-Arbitrage-Bot-with-JavaScript-v5.zip

<p>The results of the program's execution have been compiled over a period of approximately 28 days.</p>

<img src="6.jpg" />

If this program helps you, please vote for me in the annual codeathon. Last year I won fourth place, and I'm hoping to win first place next year.
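The profitability check described above reduces to a single product: multiply the three exchange rates around the A→B→C→A cycle, discount each trade by its fee, and see whether the result exceeds 1. This is a sketch of that arithmetic only (the rates and fee below are made up for illustration), not the bot's trading code:

```python
def triangular_profit(rate_ab, rate_bc, rate_ca, fee=0.003):
    """Net multiplier for trading A -> B -> C -> A.
    Each rate is units of the target currency received per unit sold;
    `fee` is the per-trade fee fraction. A result > 1.0 means the cycle
    is profitable before gas/slippage."""
    multiplier = 1.0
    for rate in (rate_ab, rate_bc, rate_ca):
        multiplier *= rate * (1 - fee)
    return multiplier

# e.g. 1 A buys 2 B, 1 B buys 3 C, 1 C buys 0.17 A (hypothetical prices)
print(triangular_profit(2.0, 3.0, 0.17))
```

Note that with consistent prices the raw product of the three rates is exactly 1, so fees push the multiplier below 1; only a genuine price discrepancy can lift it back above.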
<img src="5.png" />

<p>For those who prefer written instructions, please follow these steps:</p>

<p>Step 1: Extract the contents of the downloaded file.</p>

<p>Step 2: Open the "config.js" file using a text editor such as Notepad.</p>

<img src="1.png" />

<p>Step 3: Configure the settings to your preferences and save the file.</p>

<img src="2.png" />

<p>Step 4: Open the "run.html" file in any web browser of your choice.</p>

<img src="3.png" />

<p>Here's what it looks like running and finding an arbitrage.</p>

<img src="4.png" />

#cryptoinvesting #cryptocurrencyexchange #cryptoconsultancy #cryptosuccess #cryptoexchange #stablecoins #cryptoinvestmentclub #defi #cryptosignalsgroup #cryptopartners
TechTitan0624/restaurant-management-system
https://github.com/TechTitan0624/restaurant-management-system
null
GKMIT Restaurant: a restaurant site to order and find delicious food. A restaurant website built with the MERN stack.

Technology used:

1. ReactJS (frontend)
2. Advanced CSS
3. NodeJS (backend)
4. Vercel
5. Heroku

Home Page:

![Screenshot 2022-11-12 at 3 52 38 PM](https://user-images.githubusercontent.com/114575434/215434030-bdbdbb0a-3825-4a63-891f-017d0e79cb58.png)

Food Page:

![Screenshot 2022-11-12 at 3 52 56 PM](https://user-images.githubusercontent.com/114575434/215434166-7c62e6ed-0cbe-4cba-bb24-9f7d898f7035.png)

Cart Page:

![Screenshot 2022-11-12 at 3 53 23 PM](https://user-images.githubusercontent.com/114575434/215434212-0c1cfffa-0693-4e1b-a5b0-d76e758e25b7.png)

Contact Us Page:

![Screenshot 2022-11-12 at 3 53 37 PM](https://user-images.githubusercontent.com/114575434/215434312-96e29da1-8491-4f27-8853-f87bc7f2d713.png)

Free live demo: https://restaurant-frontend-5teja4ug1-himanshur25.vercel.app/
MarvNC/StampNyaa
https://github.com/MarvNC/StampNyaa
A simple desktop app for downloading and using LINE stickers in chat apps like Discord on Windows, MacOS, and Linux.
<p align="center">
  <img src="./assets/banner_transparent.png" alt="StampNyaa Banner" max-height="300" />
</p>

<h1 align="center">StampNyaa | スタンプにゃあ~</h1>

<p align="center">
  <img src="./img/Main.png" alt="StampNyaa Demo" height="400" />
</p>

<p align="center">
  <a href="https://github.com/MarvNC/StampNyaa/releases/latest">
    <img src="https://img.shields.io/github/v/release/MarvNC/StampNyaa?style=for-the-badge&color=26C2FF&labelColor=A0A0A0" alt="Release Version" />
  </a>
  <a href="https://github.com/MarvNC/StampNyaa/releases/latest">
    <img src="https://img.shields.io/github/downloads/MarvNC/StampNyaa/total?style=for-the-badge&color=26C2FF&labelColor=A0A0A0" alt="Total Downloads" />
  </a>
  <a href="https://github.com/MarvNC/StampNyaa/releases/latest">
    <img src="https://img.shields.io/badge/スタンプ-nyaa-blue?style=for-the-badge&&color=26C2FF&labelColor=A0A0A0" alt="StampNyaa" />
  </a>
</p>

## 🎉 Download LINE Stickers and Use Them Anywhere

StampNyaa is a simple cross-platform desktop app for downloading and using LINE stickers in Discord, WhatsApp, Telegram, and anywhere else you can paste images.

## 🚀 Download

[![Download button](https://img.shields.io/badge/-download-blue?style=for-the-badge&color=26C2FF)](https://github.com/MarvNC/StampNyaa/releases/latest)

[Click here to go to the releases page](https://github.com/MarvNC/StampNyaa/releases/latest) and download the appropriate installer for your operating system in "Assets" (MacOS users should download `darwin`).

## 📚 About

### 🔧 Usage

- **Use the hotkey** `Ctrl/Command+Shift+A` by default to open the app at any time, or click the icon in your system tray. You can change this hotkey from the settings menu at any time.
- **Find your favorite sticker packs** on the [LINE Sticker Shop](https://store.line.me/stickershop/).
- **Download LINE sticker packs** for use with StampNyaa by pressing the + icon in the sidebar and pasting in the sticker pack's URL. Or you can **[add your own stickers](#-add-your-own-stickers)**.
- **Favorite stickers for easy access** by right-clicking on them to add them to your favorites list.
- **Send stickers** by clicking on them in the app, and they will be automatically pasted to the app you're currently using.

Keep in mind that sticker packs from LINE are created by skilled artists and are sold for use in the LINE app; this application simply aims to let you use them in other apps. Make sure to **support the creators** by purchasing the sticker packs you use! LINE supports PayPal and various other payment methods, so purchasing shouldn't be an issue outside of region locks.

> [!WARNING]\
> Some stickers are region locked and cannot be downloaded in certain countries. If you are unable to download a sticker pack, try using a VPN to change your location. You can check if the sticker pack works in a country by opening the store page in incognito with a VPN set to that country.

### 🎞️ Showcase

| **Send Stickers on Discord** | **Choose a Theme** |
| :--------------------------------------------------: | :---------------------------------------: |
| ![Discord Send Sticker](img/DiscordSendSticker.gif) | ![Choose A Theme](img/ChooseATheme.gif) |
| **Choose Your Favorite Stickers** | **Manage and Sort Your Stickers** |
| ![Select Favorite Stickers](img/ChooseFavorites.gif) | ![Manage and Sort](img/ManageAndSort.gif) |

Here are the creators whose stickers are shown in the above images:

- [雪子](https://store.line.me/stickershop/author/1719182/ja) - Futaribeya mangaka
- [甘城なつき](https://store.line.me/stickershop/author/95033/ja) - Nyaa stamps
- [万丈梓](https://store.line.me/stickershop/author/3253391/ja) - Mangaka of 恋する(おとめ)の作り方
- [ななひら](https://store.line.me/stickershop/author/283446/ja) - Nanahira
- [こもわた遙華](https://store.line.me/stickershop/author/674845/ja) - Famous SD chibi artist
- [まどそふと](https://store.line.me/stickershop/author/106050/ja) - Madosoft
- [Navel](https://store.line.me/stickershop/author/79657/ja) - Navel, featuring 小倉朝日
- [clear_blue](https://store.line.me/stickershop/author/552857/ja) - Publisher of stickers for Yuzusoft

## 🎨 Add Your Own Stickers

StampNyaa supports adding your own sticker packs to the app.

- Simply add a folder to the `stickers` directory in the app's sticker folder, which is located in your `{user}/Pictures` folder by default.
- The folder name will be used as the sticker pack name by default, and the images inside will be used as the stickers.
- You will also need to add a sticker named `main.png` to be used as the icon representing the sticker pack in the sidebar.
- Note that only `.png` images are currently supported.

## ⚠️ Known Limitations

- Animated stickers are not supported in certain apps like Discord because they don't support animated pngs for whatever reason 💢. I might add a workaround converting them to `gif` in the future.
- MacOS users will need to grant the app accessibility permissions for some functions.
- Autoupdate does not work on MacOS or Linux.

> [!WARNING]\
> For **MacOS users**, you must enable accessibility features for the app to automatically press Command+V to paste stickers. It should prompt you to enable this the first time you try to paste a sticker, but if it doesn't, you can [enable it manually](https://support.apple.com/guide/mac-help/allow-accessibility-apps-to-access-your-mac-mh43185/mac). To do this, go to `System Settings > Privacy & Security > Accessibility` and check the box next to StampNyaa.

## 🤝 Acknowledgements & Contribution

This app would not exist if it were not for jeffshee's **[LINEStickerKeyboard](https://github.com/jeffshee/LINEStickerKeyboard/)** Android app, which I use frequently, inspiring me to build StampNyaa for desktop.

Contributions are welcome! If you have any bug reports, feature requests, or questions, please open an issue. If you want to contribute code, please contact me first so we can discuss the changes you want to make.

## 🛠️ Development

To run StampNyaa locally, you will need to have [Node.js](https://nodejs.org/en/) installed. Then clone the repository and install dependencies with npm or yarn:

```bash
git clone https://github.com/MarvNC/StampNyaa.git
cd StampNyaa
yarn install
```

To build an installer for your operating system, run:

```bash
yarn make
```

To run:

```bash
yarn start
```

You will also need something to compile the `.scss` file if you edit any styling. I use [Live Sass Compiler](https://github.com/glenn2223/vscode-live-sass-compiler) for this.
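The custom-pack convention described under "Add Your Own Stickers" (a folder of `.png` stickers plus a `main.png` icon) can be sanity-checked with a few lines of Python. This is a hypothetical helper for illustration, not part of the app:

```python
import os

def check_sticker_pack(path):
    """Validate a custom sticker pack folder against the convention above:
    it must contain a main.png icon, and only .png images are supported.
    Returns a list of problems; an empty list means the pack looks valid."""
    problems = []
    files = os.listdir(path)
    if "main.png" not in files:
        problems.append("missing main.png (used as the sidebar icon)")
    non_png = [f for f in files if not f.lower().endswith(".png")]
    if non_png:
        problems.append(f"unsupported non-.png files: {non_png}")
    return problems
```

Running something like this over each folder in the `stickers` directory before launching the app would catch a pack that silently fails to show up because its icon is missing.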
KittenYang/AutoGPT.swift
https://github.com/KittenYang/AutoGPT.swift
AutoGPT with Swift
# AutoGPTSwift

This is a very simple re-implementation of [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT) or [LangChain](https://github.com/hwchase17/langchain) in Swift. In essence, it is an LLM (GPT-3.5) powered chat application that is able to use tools (Google search and a calculator) in order to hold conversations and answer questions.

Here's an example:

~~~
Q: What is the world record for solving a rubiks cube?
The world record for solving a Rubik's Cube is 4.69 seconds, held by Yiheng Wang (China).

Q: Can a robot solve it faster?
The fastest time a robot has solved a Rubik's Cube is 0.637 seconds.

Q: Who made this robot?
Infineon created the robot that solved a Rubik's Cube in 0.637 seconds.

Q: What time would an average human expect for solving?
It takes the average person about three hours to solve a Rubik's cube for the first time.
~~~

This is not intended to be a replacement for LangChain; instead, it was built for fun and educational purposes. If you're interested in how LangChain and similar tools work, this is a good starting point.
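The tool-use loop such apps implement boils down to: decide which tool fits the question, run it, and feed the observation back into the conversation. A toy, model-free sketch of the dispatch step (shown in Python for brevity; the real app is in Swift and lets GPT-3.5 make the decision — the keyword rule here is just a stand-in for the model):

```python
def calculator(expression):
    # eval is acceptable for a toy; never use it on untrusted input
    return str(eval(expression, {"__builtins__": {}}))

def search(query):
    return f"<top search result for {query!r}>"  # placeholder for Google search

TOOLS = {"calculator": calculator, "search": search}

def choose_tool(question):
    """Stand-in for the LLM's decision: arithmetic goes to the calculator."""
    return "calculator" if any(c in question for c in "0123456789+-*/") else "search"

def answer(question):
    tool = choose_tool(question)
    observation = TOOLS[tool](question)
    return tool, observation

print(answer("3*7"))  # → ('calculator', '21')
```

In the real system, the model sees the tool's observation and then either calls another tool or composes the final natural-language answer, which is what produces the dialogue shown above.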
MDarvishi5124/Unixel
https://github.com/MDarvishi5124/Unixel
An English-Arabic font
# Unixel | یونیکسل

![example-text](https://github.com/MDarvishi5124/Unixel/assets/100155793/0575b82d-84c6-43d6-b90b-8507da3c25e2)

Unixel is a pixel font inspired by Unifont. Although Unixel is a pixel font, special attention has been paid to its beauty and readability. Unixel takes inspiration from Arabic calligraphy and has largely moved away from a rigid geometric structure.

یونیکسل یک تایپ‎‌فیس پیکسلی است که با الهام زیادی از روی فونت unifont طراحی شده است. با اینکه فونت یونیکسل، یک فونت پیکسلی است، اما در بخش طراحی به زیبایی و خوانایی آن توجه ویژه ای شده است. یونیکسل سعی دارد از خوشنویسی عربی الهام بگیرد و ساختار هندسی خود را تا حد زیادی فراموش کرده است.

Unixel is currently bare-bones. This means that other features, such as proper Arabic diacritics, will be available in future updates; for now the font is suitable for everyday use.

یونیکسل در حال حاضر دست‌خالیست؛ به معنای آنکه قابلیت های دیگر همچون اِعراب‌گذاری مناسب در بروزرسانی های آینده در دسترس قرار خواهد گرفت. اما اکنون فونت برای استفاده های عادی مناسب است.

<a id="download" href="https://github.com/MDarvishi5124/Unixel/releases/tag/1.0"> بارگیری فونت </a>

## کار هایی که شما با یونیکسل انجام دادید | What you did with Unixel

![](https://github.com/MDarvishi5124/Unixel/assets/100155793/3a447c7a-b734-4e23-9b1e-116d18a808be)
zhongjinluo/Sketch2RaBit
https://github.com/zhongjinluo/Sketch2RaBit
A very simple sketch-based 3D modeling system built with the help of 3DBiCar and RaBit.
# *Sketch2RaBit*

A very simple sketch-based 3D modeling system built with the help of *[3DBiCar](https://gaplab.cuhk.edu.cn/projects/RaBit/dataset.html)* and [*RaBit*](https://github.com/zhongjinluo/RaBit). If you are interested in sketch-based 3D modeling, you can also refer to [*SimpModeling*](https://github.com/zhongjinluo/SimpModeling) and [*SketchMetaFace*](https://github.com/zhongjinluo/SketchMetaFace).

## Demo

https://github.com/zhongjinluo/Sketch2RaBit/assets/22856460/f19c3082-7d01-4205-a1da-1adc7e6a67b2

https://github.com/zhongjinluo/Sketch2RaBit/assets/22856460/331fa89b-e145-435b-824c-737381f94662

https://github.com/zhongjinluo/Sketch2RaBit/assets/22856460/8e5ee370-444b-463b-bf8c-cef781c88ead

## Usage

This system has been tested with Python 3.8, PyTorch 1.7.1, CUDA 10.2 on Ubuntu 18.04.

- Installation:

  ```
  conda create --name Sketch2RaBit -y python=3.8
  conda activate Sketch2RaBit
  pip install -r requirements.txt
  ```

- Start by cloning this repo:

  ```
  git clone git@github.com:zhongjinluo/Sketch2RaBit.git
  cd Sketch2RaBit
  ```

- Download the pre-compiled user interface and checkpoints for the backend algorithms from [sketch2rabit_files.zip](https://cuhko365-my.sharepoint.com/:u:/g/personal/220019015_link_cuhk_edu_cn/EXxKE9ZsOxZJh-FtTUVsEJQBb31JXpl_gCiiRdJgZ2suPw?e=aOJFKF) and then:

  ```
  unzip sketch2rabit_files.zip
  unzip App.zip
  mv sketch2rabit_files/data/transfer_uv /path-to-repo/
  mv sketch2rabit_files/data/pose /path-to-repo/networks/v0/
  mv sketch2rabit_files/data/embedding/* /path-to-repo/networks/v0/embedding/
  mv sketch2rabit_files/data/pSp/* /path-to-repo/networks/v0/pSp/
  ```

- Run the backend server:

  ```
  cd /path-to-repo/ && bash server.sh
  ```

- Launch the user interface and enjoy it:

  ```
  cd App/ && bash run.sh
  ```

- If you want to run the backend algorithms on a remote server, you may have to modify `App/config.ini`.
N0rz3/Isearch
https://github.com/N0rz3/Isearch
Isearch is osint tool to track a instagram profile
![logo](https://github.com/N0rz3/Isearch/assets/123885505/460a9ddf-9222-4e50-a627-3d0cfa0a7b58) [![python version](https://img.shields.io/badge/Python-3.10%2B-brightgreen)](https://www.python.org/downloads/) [![license](https://img.shields.io/badge/License-GNU-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.fr.html) # **Isearch is an osint tool which is based on instagram 📷** # **😇 Abouts Isearch** Isearch or rather Instagram search is a tool that retrieves information on an instagram profile (only public to all) **Features of script** - fully async - asynchrone scraping - menu in cli format (commands) **Features** - id - username - fullname - biography - profile picture - number of followers - number of following - externals links - public / private account - buisness account - verified account - public phone number (+lookup) - public email - related profile - potentiels emails - download of profile picture (in the folder img) - all users tagged ## **🛠️ Requirements / Launch** - [Python 3](https://www.python.org/downloads/) ``` git clone https://github.com/N0rz3/Isearch.git cd Isearch pip install -r requirements.txt ``` ### **😲 Usage** ```python isearch.py``` ``` ___ ___ _ |_ | ___ ___| | |___ ___| |_ _| |_|_ -| -_|_ | _| _| | |_____|___|___| |_|_| |___|_|_| (BY 🦊 @N0rz3) 📸 Instagram search 🐦 Twitter: @norze15 ☕ Donations: https://www.buymeacoffee.com/norze usage: isearch.py [-d] [-t] [-f] <username> [-c] [-h] option: -h, --help show this help message and exit -c, --checker checks if you have correctly submitted your session ID -d, --download download the profile picture of the target -t, --tagged displays all profiles that have been tagged by the target -f, --follows shows the number of followers and following ``` **For the proper functioning of the script it will be necessary to deposit your session id in the data.json** ![id](https://github.com/N0rz3/N0rz3/assets/123885505/68df7a56-82b7-4454-85dd-ffc504b33fca) ```json { "sessionID": "YOUR_SESSION_ID" } ``` To 
verify that you can run the tool without problems enter this command: `python isearch.py --checker` **📚 Example input:** ```python isearch.py arianagrande``` **📚 Example output:** ``` ___ ___ _ |_ | ___ ___| | |___ ___| |_ _| |_|_ -| -_|_ | _| _| | |_____|___|___| |_|_| |___|_|_| (BY 🦊 @N0rz3) 📸 Instagram search 🐦 Twitter: @norze15 ☕ Donations: https://www.buymeacoffee.com/norze ✍️ Profile: [+] ID => 7719696 [+] Current name => arianagrande [+] Fullname => Ariana Grande [+] 👋 External url => http://rembeauty.com/ [+] Bio => currently somewhere over the rainbow 🦉 Account: [+] 🔓 Account is public! [-] 🐌 No Buisness Account! [+] ☑ Account is verified! [-] 📫 No public email. [-] 📞 No public phone number. 🛥️ Community: [+] Followers => 376410455 [+] Following => 600 [+] Number of publication => 126 🖼️ Picture: =>https://scontent-fra3-2.cdninstagram.com/v/t51.2885-19/340830534_932180087934550_8966124175413162437_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent-fra3-2.cdninstagram.com&_nc_cat=1&_nc_ohc=pVl8105YNPEAX_zQJE4&edm=AKEQFekBAAAA&ccb=7-5&oh=00_AfB5ZNUF2jE-A3FPRATYu4iqKiRC0YARtM1q6zMQvkxKjg&oe=64A78F2F&_nc_sid=29ddf3 🎭 Identity masked: [+] 1 Account found having participated in a publication -> @r.e.m.beauty 📮 Potentials emails: [+] arianagrande@tutanota.com [+] arianagrande@fastmail.com [+] 4r14n4gr4nd3@aol.com [+] arianagrande@fastmail.com [Already shown] [+] arianagrande@tutanota.com [Already shown] [+] arianagrande0@hotmail.com [+] arianagrande$$@github.com [+] arianagrande.off@icloud.com ``` ## **🌞 More** ### **✔️ / ❌ Rules** **this tool was designed for educational purposes only and is not intended for any mischievous use, I am not responsible for its use.** ### **📜 License** **This project is [License GPL v3](https://www.gnu.org/licenses/gpl-3.0.fr.html) be sure to follow all rules 👍** ### **💖 Thanks** If you like what i do, please subscribe 💖. 
And if you find this tool useful, don't forget to star it 🌟

**💶 Support me 👇**

<a href="https://www.buymeacoffee.com/norze" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="50" ></a>
DaniScoton/ProjetoHarryPotter
https://github.com/DaniScoton/ProjetoHarryPotter
Consumindo uma API sobre Harry Potter com JavaScript.
# ProjetoHarryPotter Consumindo uma API sobre Harry Potter com JavaScript.
N0rz3/SpyScraper
https://github.com/N0rz3/SpyScraper
Osint tool to scraper websites
<h1 align="center" id="title">SpyScraper 🔍</h1><br> ![](assets/spyscraper.jpg) [![python version](https://img.shields.io/badge/Python-3.10%2B-brightgreen)](https://www.python.org/downloads/) [![license](https://img.shields.io/badge/License-GNU-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.fr.html) # **SpyScraper is an osint tool made for scraping various website information** # **😇 Abouts SpyScraper** ## 🤾 Functioning: SpyScraper allows to scrape different information on websites thanks to regular expressions, scraping of tag in the html content of the site and the **whois** module **Features of script** - fully async - asynchrone scraping - menu in cli format (commands) **Scraping skills** - emails - phones numbers - hrefs - author - date of creation/update/expiration - location - servers names ## **🛠️ Requirements / Launch** - [Python 3](https://www.python.org/downloads/) ``` git clone https://github.com/N0rz3/SpyScraper.git cd SpyScraper pip3 install -r requirements.txt ``` In the data.json: ```json { "target_url": "" # add example targeted URL: https://example.com } ``` ### Then just run the script 👌: `python3 SpyScraper.py ` **Example of result:** ``` ┌─┐┌─┐┬ ┬┌─┐┌─┐┬─┐┌─┐┌─┐┌─┐┬─┐ └─┐├─┘└┬┘└─┐│ ├┬┘├─┤├─┘├┤ ├┬┘ (BY 🦊 @N0rz3) └─┘┴ ┴ └─┘└─┘┴└─┴ ┴┴ └─┘┴└─ Made with ❤️ 🐦 Twitter: @norze15 ☕ Donations: https://www.buymeacoffee.com/norze [🟢] 👑 Author(s): N0rz3 [🟢] 📆 Date(s): Creation date => 2020-01-03 00:00:00 Update date => 2023-03-28 00:00:00 Expiration date => 2024-01-01 00:00:00 [🟢] 📫 Email(s): EXAMPLE@example.com [🟢] 📞 Phone(s) numbers: +1111111111 +1111111122 [🟢] 🏘️ Location: France [🟢] 🤖 Server(s): server.example1 server.example2 [🟢] 👀 Href(s): All hrefs... 
```

## **🌞 More**

### **✔️ / ❌ Rules**

- **This tool was designed for educational purposes only and is not intended for any mischievous use; I am not responsible for its use.**
- **Be sure to follow the law and each website's terms of use, and use SpyScraper correctly and responsibly.**

### **📜 License**

**This project is under the [GPL v3 license](https://www.gnu.org/licenses/gpl-3.0.fr.html); be sure to follow all its rules 👍**

### **💖 Thanks**

If you like what I do, please subscribe 💖. And if you find this tool useful, don't forget to star it 🌟

**💶 Support me 👇**

<a href="https://www.buymeacoffee.com/norze" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="50" ></a>
jshilong/GPT4RoI
https://github.com/jshilong/GPT4RoI
GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest
# GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest :fire: [Demo](http://139.196.83.164:7000/) :fire: [//]: # (<div id="wrapper" align="center">) [//]: # (<figure>) [//]: # ( <img src="figs/demo1.gif" width="45%">&emsp;) [//]: # ( <img src="figs/demo2.gif" width="45%"><br>) [//]: # ( <p style="font-size:1.2vw;">Left: Single-Region Understanding; Right: Single-Region Understanding</p>) [//]: # (</figure>) [//]: # (</div>) <p align="center"> <img src="figs/demo1.gif" width="80%"> <br> <p align="center" style="font-size:1.2vw;">Single-Region Understanding</p> </p> <p align="center"> <img src="figs/demo2.gif" width="80%"> <br> <p align="center" style="font-size:1.2vw;">Multiple-Region Understanding</p> </p> ## Introduction <p align="center"> <img src="figs/framework.png" width="70%"> <br> </p> > [**GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest**](https://arxiv.org/abs/2307.03601) > [Shilong Zhang*](https://jshilong.github.io/), [Peize Sun*](https://peizesun.github.io/), [Shoufa Chen*](https://www.shoufachen.com/), Min Xiao, Wenqi Shao ,Wenwei Zhang, Kai Chen, Ping Luo</br> > (*Equal Contribution) ### [[Demo](http://139.196.83.164:7000/)] [[Paper](https://arxiv.org/abs/2307.03601)] [[中文介绍](https://zhuanlan.zhihu.com/p/640283103)] [//]: # (#:grin::grin::grin:信交流群:xxx &#40;答案:cheems&#41;) ## Updates - [July 25] [GPT4RoI-7B-delta-V0](https://huggingface.co/shilongz/GPT4RoI-7B-delta-V0) has release ! :fire::fire::fire: You need to combine our delta with the original LLaMA weights follow the [GPT4RoI Weights](https://github.com/jshilong/GPT4RoI/tree/main#weights) section. - [July 7] All training and inference code has been released, you can try demo [here](http://139.196.83.164:7000/) :fire::fire::fire: ## Contents - [Install](#Install) - [Data](#Data) - [GPT4RoI Weights](#Weights) - [Training](#Training) - [Gradio](#Gradio) - [Acknowledge](#Acknowledge) ## Install 1. 
Clone the `GPT4RoI` repository:

```shell
git clone https://github.com/jshilong/gpt4roi.git
cd gpt4roi
```

2. Create the environment:

```shell
conda create -n gpt4roi python=3.10 -y
conda activate gpt4roi
pip install --upgrade pip  # enable PEP 660 support
pip install setuptools_scm
pip install --no-cache-dir -e .
# please reinstall torch with conda; pip may miss some runtime libraries
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```

3. Install the `flash-attn` package:

```
pip install ninja
pip install flash-attn --no-build-isolation
```

4. Install the `mmcv-1.4.7` package. Make sure that your `nvcc -V` is consistent with the cudatoolkit version reported by `python -c "import torch;print(torch.version.cuda)"`.

```shell
cd mmcv-1.4.7
MMCV_WITH_OPS=1 pip install -e .
```

<!-- ## Data Preparation | Data file name | Size | original from| | --- | --- | ---| | [single_region_caption.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/raw/main/llava_instruct_150k.json) | 229 MB | VC, Refcocog | | [multi_region_caption.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/raw/main/llava_instruct_80k.json) | 229 MB | flicker30k | | [spation-instruction21k.json](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/raw/main/conversation_58k.json) | 126 MB | VCR | We also use langauge-image multimodal instruction-folllowing dataset [`LLaVA-Instruct-150K`](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K), with we detect objects with EVA-lvis model, you should download the ``eva_lvis_coco.pkl`` when you use this dataset. -->

## Data

Our dataset includes RefCOCO, RefCOCO+, RefCOCOg, Visual Genome, Flickr30K Entities, and the VCR dataset. We are sincerely grateful to the creators of these datasets, especially the creators of the VCR dataset, for their forward thinking in building them.
The dataset section of this repository may appear somewhat messy, especially the VCR part(still finishing), which may cause GPT4RoI not be very user-friendly. We are currently working on formulating the datasets into a unified format and will be accompanying them with stronger models. Please stay tuned for updates. You can download the corresponding dataset from the official website and organize it as follows. Afterwards, you can modify the ```gpt4roi/configs/dataset_config.json``` file to select the specific dataset you want to use: ```text GPT4RoI ├── data │ ├── coco_det │ │ ├── annotations │ │ │ ├──instances_train2017.json │ │ ├── train2017/ │ ├── mdetr_annotations │ │ ├──finetune_refcoco_train.json │ │ ├──finetune_refcoco+_train.json │ │ ├──finetune_refcocog_train.json │ │ ├──final_flickr_mergedGT_train.json │ ├── coco_imgs/ │ ├── flickr30k-images/ │ ├── visual_genome │ │ ├──train.json │ │ ├──vg_all/ │ ├── llava │ │ ├── llava_instruct_150k.json │ │ ├── llava_150k_bbox_pred_results.pkl │ ├── vcr │ │ ├── train.jsonl │ │ ├── vcr1images/ ``` ### NOTE 1. coco_imgs should contains all coco image(you can soft link them to this directory. 2. We use Visual_Genome_Dataset_V1.2, available for download from [OpenDataLab](https://opendatalab.com/). Ensure to download the [train.json](https://datarelease.blob.core.windows.net/grit/VG_preprocessed_annotations/train.json), you should create a soft link for all VG images to the directory `vg_all`. 3. [llava_150k_bbox_pred_results.pkl](https://huggingface.co/shilongz/temp/tree/main) contains the detection predicted results with EVA-02-DET. We appreciate their work. ## Weights Due to the licensing restrictions of LLaMA, the delta weights GPT4RoI-7B is produced from LLaMA-7B. To acquire the GPT4RoI weights, you need to combine our delta with the original LLaMA weights. ### Step1. Download the original LLaMA-7B weights The original LLaMA weights are available for download. 
Use the following commands: ```shell git lfs install git clone https://huggingface.co/decapoda-research/llama-7b-hf ./llama-7b ``` Alternatively, access the [webpage](https://huggingface.co/decapoda-research/llama-7b-hf/tree/main) to download the file. ### Step2. Download the delta weights of GPT4RoI-7B The delta weights for GPT4RoI-7B can be downloaded using the following commands: ```shell git lfs install git clone https://huggingface.co/shilongz/GPT4RoI-7B-delta-V0 ./GPT4RoI-7B-delta ``` You can also directly download the file from this [webpage](https://huggingface.co/shilongz/GPT4RoI-7B-delta-V0/tree/main). ### Step3. Apply the delta weights to the original LLaMA-7B weights Apply the delta weights to the original LLaMA-7B weights. Note that this conversion command requires approximately 30 GB of CPU RAM. ```bash export PYTHONPATH=`pwd`:$PYTHONPATH python3 -m scripts.apply_delta.py \ --base ./llama-7b \ --target ./GPT4RoI-7B \ --delta ./GPT4RoI-7B-delta ``` ## Training GPT4RoI is trained on 8 A100 with the following code. ### STAGE 1 Vicuna-v0, an instruction-tuned chatbot, is the base model for this setup. In order to prepare it, first download the delta weights available [here](https://huggingface.co/lmsys/vicuna-7b-delta-v0). To obtain the original weights, follow the instructions provided [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0) to integrate these delta weights into LLaMA-7B. Ensure to download the following projector weight file: [LLaVA-7b-pretrain-projector-v0-CC3M-595K-original_caption.bin](https://huggingface.co/liuhaotian/LLaVA-Pretrained-Projectors/resolve/main/LLaVA-7b-pretrain-projector-v0-CC3M-595K-original_caption.bin). 
Additionally, you have the flexibility to choose from different versions of Vicuna (such as the 13B version or llama v2 chatbot) and the corresponding projector weights from [LLaVA](https://github.com/haotian-liu/LLaVA) to meet your specific requirements effectively. `exp/stage1` is the work directory. ```Shell bash train_stage1.sh exp/stage1 # Resume training in stage1 # bash train_stage1.sh exp/stage1 ``` ### STAGE 2 `exp/stage2` is the work directory. and you should give the work directory of stage1 so we can load the corresponding weight as pretrain model. ```Shell # At the beginning of stage2 bash train_stage2.sh exp/stage2 exp/stage1 # Resume training in stage2 # bash train_stage2.sh exp/stage2 ``` ## Gradio Please install [Gradio Box](https://github.com/ShoufaChen/gradio-dev) first. ```python python gpt4roi/app.py ``` ### NOTES 1. ```prompt format in GPT4RoI``` You should always use `<region1>, <region2>...` to refer the new bounding box in the image when you first draw them. Then you can use normal `region 1` in the conversation to refer the instance. 2. You should always click the `clear all` buttul and waiting the clear process finished before you start a new conversation. <p align="center"> <img src="figs/fig1.png" width="100%"> <br> <p align="center" style="font-size:1.2vw;">Multiple Rounds of Dialogue</p> </p> ## Acknowledge - [LLaVA](https://github.com/haotian-liu/LLaVA): The codebase we built upon. - [Vicuna](https://github.com/lm-sys/FastChat): The LLM we used. - [VCR](https://visualcommonsense.com/): We get strong region reasoning ability from this forward thinking dataset. 
If you find GPT4RoI useful for your research and applications, please cite using this BibTeX:

```bibtex
@misc{zhang2023gpt4roi,
      title={GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest},
      author={Shilong Zhang and Peize Sun and Shoufa Chen and Min Xiao and Wenqi Shao and Wenwei Zhang and Kai Chen and Ping Luo},
      year={2023},
      eprint={2307.03601},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
SpongebBob/Finetune-ChatGLM2-6B
https://github.com/SpongebBob/Finetune-ChatGLM2-6B
Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning support for multi-turn dialogue.
# Finetune-ChatGLM2-6B

Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning support for multi-turn dialogue.

# ChatGLM2-6B Full-Parameter Fine-Tuning

This repository implements full-parameter fine-tuning of the ChatGLM2-6B model. The main improvement is support for full-parameter fine-tuning of ChatGLM2-6B on multi-turn dialogue: conversations are organized for more efficient session-level training, and the training results are substantially better than with the original ChatGLM-6B.

The demo data comes from the open-source project [BELLE](https://github.com/LianjiaTech/BELLE/tree/main/train).

## Dependencies

Fine-tuning requires `transformers` version 4.27.1. In addition to the ChatGLM-6B dependencies, you also need to install:

```
pip install rouge_chinese nltk jieba datasets
```

## Data

### Download the data

```bash
wget https://huggingface.co/datasets/BelleGroup/school_math_0.25M/resolve/main/school_math_0.25M.json
```

This dataset consists of math word problems with worked solutions, about 250k examples in total. Example (kept in the original Chinese, as in the dataset):

| Instruction | Answer |
| --- | --- |
| 一箱鸡蛋有25个,小明爸爸每天给他9个鸡蛋吃,那么一箱鸡蛋能够供小明吃几天? | 一箱鸡蛋有25个,小明每天吃9个鸡蛋,用除法计算一下即可: 25 ÷ 9 = 2余7 也就是说,小明第1天到第2天共吃2个鸡蛋,第3天吃剩下的7个鸡蛋。 因此,一箱鸡蛋能够供小明吃2天。 |
| 小明手中有15颗糖果,他想要分给他的朋友小红和小李,每人分得的糖果数是3颗,那么他还剩下几颗糖果? | 该题可以用减法来解决。小明手中有15颗糖果,每人分得3颗,所以小红和小李一共分得6颗糖果(3+3=6)。 2. 然后再用原来的糖果总数减去分给小红和小李的糖果数,即可得到小明手中剩余的糖果数。 计算过程如下: 15 - 6 = 9 所以,小明还剩下9颗糖果。 |

### Convert the data format

```bash
python convert_to_conv_data.py --orig_data school_math_0.25M.json --write_data school_math_0.25M_conv.json --dataset_name bellemath
head -n 1000 school_math_0.25M_conv.json > belleMath-dev1K.json
tail -n +1001 school_math_0.25M_conv.json > belleMath.json
```

The demo uses the first 1000 examples as the validation set and the rest as the training set.

**Note:** the current code must handle both single-turn and multi-turn data, so earlier open-source data in `[instruction, output]` format (the format of the vast majority of open single-turn instruction data) needs to be converted with the `convert_to_conv_data.py` script into the following unified format:

```json
{"id": "bellemath42", "conversations": [{"from": "human", "value": " 一箱鸡蛋有25个,小明爸爸每天给他9个鸡蛋吃,那么一箱鸡蛋能够供小明吃几天?"}, {"from": "assistant", "value": "\n一箱鸡蛋有25个,小明每天吃9个鸡蛋,用除法计算一下即可:\n25 ÷ 9 = 2余7\n也就是说,小明第1天到第2天共吃2个鸡蛋,第3天吃剩下的7个鸡蛋。\n因此,一箱鸡蛋能够供小明吃2天。"}]}
{"id": "bellemath80", "conversations": [{"from": "human", "value": "小明手中有15颗糖果,他想要分给他的朋友小红和小李,每人分得的糖果数是3颗,那么他还剩下几颗糖果?"}, {"from": "assistant", "value": " \n1. 该题可以用减法来解决。小明手中有15颗糖果,每人分得3颗,所以小红和小李一共分得6颗糖果(3+3=6)。\n2. 然后再用原来的糖果总数减去分给小红和小李的糖果数,即可得到小明手中剩余的糖果数。 \n计算过程如下:\n15 - 6 = 9\n所以,小明还剩下9颗糖果。"}]}
```

Other training data can be found at https://huggingface.co/BelleGroup and converted to this format with the same procedure.

For multi-turn dialogue data, [shareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main) is a large open-source multi-turn dialogue dataset; the current code already supports training on this kind of data. Download it with:

```bash
wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
```

### Training

#### Finetune

You need to install [DeepSpeed](https://github.com/microsoft/DeepSpeed). Set `model_name_or_path` to the path of your downloaded chatglm2-6b checkpoint, then run:

```shell
bash ds_train_finetune.sh
```

### Inference

## Model Deployment

First load the tokenizer:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("download_path", trust_remote_code=True)
```

Load the new checkpoint:

```python
model = AutoModel.from_pretrained("output/checkpoint", trust_remote_code=True)
```

After that, you can quantize the model if needed, or use it directly:

```python
# Comment out the following line if you don't use quantization
model = model.quantize(4)
model = model.half().cuda()
model.transformer.prefix_encoder.float()
model = model.eval()

response, history = model.chat(tokenizer, "你好", history=[])
```

## Using your own dataset

Following the demo format, change `train_file`, `validation_file`, and `test_file` in `train.sh` and `evaluate.sh` to the paths of your own JSON-format dataset. You may also need to increase `max_length` to match the maximum input/output length in your dataset. For reference, the training loss on our own multi-turn dialogue data:

![image](https://github.com/SpongebBob/Finetune-ChatGLM2-6B/assets/11678997/736cd1d7-4ffc-49c2-a64e-d7f7991802c9)

## References

```
1. https://github.com/THUDM/ChatGLM-6B
2. https://github.com/LianjiaTech/BELLE
3. https://github.com/THUDM/ChatGLM2-6B
```
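As a sketch of the conversion step described earlier — turning a single-turn `[instruction, output]` record into the unified conversation format — here is a minimal, hypothetical Python version. The real logic lives in the repo's `convert_to_conv_data.py`; only the field names and the `bellemath` id prefix come from this README's examples, the function itself is illustrative.

```python
import json

def to_conversation(record: dict, idx: int, dataset_name: str = "bellemath") -> dict:
    """Convert one {"instruction": ..., "output": ...} record into the
    unified single/multi-turn conversation format used by this repo."""
    return {
        "id": f"{dataset_name}{idx}",
        "conversations": [
            {"from": "human", "value": record["instruction"]},
            {"from": "assistant", "value": record["output"]},
        ],
    }

if __name__ == "__main__":
    sample = {"instruction": "1 + 1 = ?", "output": "2"}
    # One JSON object per line, matching the JSONL layout shown above
    print(json.dumps(to_conversation(sample, 42), ensure_ascii=False))
```

Multi-turn data simply carries more alternating human/assistant entries in the same `conversations` list.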
Inferentiaxyz/AnalytiBot
https://github.com/Inferentiaxyz/AnalytiBot
A virtual Data Analyst
# AnalytiBot 📈🤖 A Virtual Data Analyst powered by Artificial Intelligence [](![AnalytiBot](link_to_analytibot_image.png)) ## Overview AnalytiBot is an intelligent virtual data analyst that leverages the power of artificial intelligence to assist you with data analysis tasks. Through an interactive chat interface, you can communicate with AnalytiBot, ask questions, and receive insightful analyses and visualizations based on your data. ## Features - Interactive Chat Interface: Communicate with AnalytiBot using natural language to perform data analysis tasks. - AI-powered Insights: AnalytiBot uses advanced AI algorithms to generate valuable insights from your data. - Data Visualization: Get visually appealing charts and graphs to better understand your data. - Quick and Easy Analysis: AnalytiBot automates complex data analysis processes, making it easy for non-technical users. ## Requirements Before running AnalytiBot, ensure you have the following installed: - Python (version 3.10 or higher) - All dependencies listed in the `requirements.txt` file. Install them using the following command: ```bash pip install -r requirements.txt ``` ## Getting Started 1. Clone the AnalytiBot repository to your local machine. 2. Navigate to the project directory. 3. Install the required dependencies as mentioned in the "Requirements" section. 4. Add your OpenAI API key to a file named openaikey.txt. 5. Launch the AnalytiBot service with the following command: ```bash chainlit run main.py -w ``` ## How to Use AnalytiBot 1. Once the service is running, open your web browser and navigate to the provided address. 2. You will be greeted by AnalytiBot's chat interface. 3. Start interacting with AnalytiBot by typing your questions or data analysis requests in natural language. 4. AnalytiBot will process your queries and will provide you with insightful results and visualizations. ## Todo - [ ] Docker Support: Release a Docker image to simplify the deployment process. 
- [ ] Report Generation: Enable AnalytiBot to generate comprehensive data analysis reports.
- [ ] Prompt Improvement: Enhance the chat prompt to make interactions with AnalytiBot more intuitive and user-friendly.

## Contributing

We welcome contributions to make AnalytiBot even better! If you find any issues or have ideas for improvements, please submit an issue or create a pull request.

## License

AnalytiBot is released under the [GNU General Public License v3.0](https://github.com/Inferentiaxyz/AnalytiBot/blob/main/LICENSE).

## Useful Links 🔗

- **Website:** Have a look at our services on the [Inferentia website](https://inferentia.xyz) 📚
- **Discord Community:** Join our friendly [Inferentia Discord](https://discord.gg/uUc8W9g8du) to ask questions, share your projects, and connect with other developers! 💬
- **Simone Rizzo**: [Github](https://github.com/simone-rizzo), [Linkedin](https://www.linkedin.com/in/simone-rizzo-9851b7147/), [Tiktok](https://www.tiktok.com/@simonerizzo98)
- **Antonio Zegarelli**: [Github](https://github.com/89oinotna), [Linkedin](https://www.linkedin.com/in/zegarelli-antonio/)

Let's get started with data analysis like never before! 🚀📊
BhavyapriyaN/Tomato_leaf_classification
https://github.com/BhavyapriyaN/Tomato_leaf_classification
null
# Tomato_leaf_classification This is a tomato leaf classification project consisting of 10 classes using CNN. The link for the dataset - https://www.kaggle.com/datasets/arjuntejaswi/plant-village
sksalahuddin2828/C
https://github.com/sksalahuddin2828/C
Educational Purpose only
# C Programming Language <h3>Overview</h3> C is a general-purpose programming language that was developed in the early 1970s by Dennis Ritchie at Bell Labs. It is a procedural language that has gained immense popularity due to its simplicity, efficiency, and portability. C serves as a foundation for many modern programming languages and operating systems, making it an essential language for programmers to learn. <h3>Features</h3> C provides a wide range of features that contribute to its popularity and versatility: <h3>Simplicity:</h3> C offers a relatively simple and straightforward syntax, making it easy to learn and understand. Its minimalistic design focuses on essential programming constructs and allows for efficient code execution. <h3>Portability:</h3> C programs can be compiled and executed on various platforms, including different operating systems and hardware architectures. This portability is achieved through the use of compilers that translate C code into machine-specific instructions. <h3>Efficiency:</h3> C is known for its efficiency and low-level programming capabilities. It provides direct memory access and precise control over hardware resources, making it suitable for systems programming, embedded systems, and performance-critical applications. <h3>Modularity:</h3> C supports modular programming through the use of functions and libraries. Developers can break down complex programs into smaller, reusable modules, enhancing code organization and maintainability. <h3>Extensibility:</h3> C allows for the integration of assembly code within the program, enabling fine-grained control over hardware and direct manipulation of memory. This feature is particularly useful in areas where low-level system interactions are required. <h3>Standard Libraries:</h3> C comes with a rich set of standard libraries that provide a wide range of functions for tasks like input/output operations, string manipulation, memory allocation, and mathematical computations. 
These libraries simplify development and reduce the need for writing code from scratch. <h2>Applications</h2> C is widely used in various domains and applications, including: <h3>System Programming:</h3> Due to its efficiency and low-level access, C is extensively used for developing operating systems, device drivers, firmware, and other system software. <h3>Embedded Systems:</h3> C is a popular choice for programming embedded systems, such as microcontrollers, because of its ability to interact directly with hardware. <h3>Game Development:</h3> Many game engines and frameworks are implemented in C, allowing developers to create high-performance games for different platforms. <h3>Compilers and Interpreters:</h3> C is often used to develop compilers, interpreters, and programming language implementations due to its ability to generate efficient and optimized code. <h3>Scientific and Numeric Computing:</h3> C is employed in scientific simulations, data analysis, and numerical computation tasks, providing fast and efficient computations. <h3>Utilities and Tools:</h3> Numerous command-line utilities, text editors, and development tools are written in C, taking advantage of its performance and portability.
KenNaNa/toolkit-use
https://github.com/KenNaNa/toolkit-use
a npm toolkit use for vue.js
![](https://kennana.github.io/toolkit-use/logo.png) ## Star History [![Star History Chart](https://api.star-history.com/svg?repos=KenNaNa/toolkit-use&type=Date)](https://star-history.com/#KenNaNa/toolkit-use&Date) ## Introduction This project is an open-source project that serves as a collection of commonly used utility methods during the development process. As an open-source project, we hope to receive support and appreciation from the community, and together maintain this toolkit. ToolkitUse is a commonly used toolkit designed specifically for front-end developers, aiming to provide convenient and efficient solutions and tools to accelerate the front-end project development process. This article will introduce you to the main features and advantages of ToolkitUse, as well as the value it brings to front-end developers. ## Key Features: Feature-rich toolset: ToolkitUse offers a wide range of functional tools, including string manipulation, date operations, form validation, network requests, and more. These tools can assist developers in handling common front-end tasks and issues, enhancing development efficiency. Lightweight and high-performance: ToolkitUse is designed to be a lightweight and high-performance toolkit, ensuring it does not introduce additional burden when used in projects. It has been optimized and tested to provide fast and stable performance across various browsers and devices. Easy to use and integrate: ToolkitUse provides a concise and intuitive API, making it easy to learn and use. You can easily integrate ToolkitUse into your projects, and it is compatible with existing front-end frameworks and libraries. Community support and documentation: ToolkitUse has an active developer community where you can seek support, advice, and solutions. Additionally, ToolkitUse offers detailed documentation and examples to help you get started quickly and understand the usage and functionality of each tool. 
## Advantages

- **Improved development efficiency:** ToolkitUse reduces repetitive work and simplifies common task handling, so you can complete development tasks faster.
- **Optimized code quality:** ToolkitUse has been carefully designed and tested to provide high-quality solutions, helping you write reliable, maintainable code with fewer errors and potential vulnerabilities.
- **Cross-platform compatibility:** ToolkitUse supports multiple front-end environments and frameworks. Whether you are building on React, Vue.js, or Angular, ToolkitUse can provide support and convenience.

In short, ToolkitUse offers a rich set of tools that enhance development efficiency and code quality. Whether you are a beginner or an experienced developer, it can help you complete front-end projects more easily.

## install

```bash
npm i toolkit-use
```

## usage

```vue
<script setup lang='ts'>
import {
  ToolkitUseClipboard,
  ToolkitUseClipboardOptions,
  ToolkitUseArray,
  ToolkitUseDate,
  ToolkitUseDecimal,
  ToolkitUseDownload,
  ToolkitUseString,
  ToolkitUseShared,
  ToolkitUseRegex,
  ToolkitUseDom,
} from 'toolkit-use'
</script>
```

## clip

![](./toolkituse-starter/docs/imgs/clip.png)
neuromorphs/NIR
https://github.com/neuromorphs/NIR
Neuromorphic Intermediate Representation reference implementation
<picture> <source media="(prefers-color-scheme: dark)" srcset="docs/logo_dark.png"> <img alt="NIR Logo" src="docs/logo_light.png"> </picture>

# NIR: Neuromorphic Intermediate Representation

NIR provides a standard way to describe arbitrary **physical** networks. **NIR is currently supported by 9 simulators and hardware platforms**, allowing users to seamlessly move between any of these platforms; see the list of supported frameworks below. Neuron models in NIR are defined as dynamical system equations, because time is an essential component of neuromorphic systems.

The goal of NIR is to provide a common format that different neuromorphic frameworks (including spiking neural networks) can convert to and from.

## Computational primitives

> Read more in our [documentation about NIR primitives](https://nnir.readthedocs.io/en/latest/primitives.html)

On top of popular primitives such as convolutional or fully connected/linear computations, we define additional computational primitives that are specific to neuromorphic computing and hardware implementations thereof. Computational units that are not specifically neuromorphic take inspiration from the PyTorch ecosystem in terms of naming and parameters (such as Conv2d, which uses groups/strides).

## Connectivity

Each computational unit is a node in a static graph. Given 3 nodes $A$ (a LIF node), $B$ (a Linear node) and $C$ (another LIF node), we can define edges in the graph such as:

$$
A \rightarrow B \\
B \rightarrow C
$$

## Format

The intermediate representation can be stored as an HDF5 file, which benefits from compression.
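The node-and-edge structure described above can be illustrated with plain Python dictionaries. This is a hypothetical sketch of the graph concept, not the actual `nir` package API:

```python
# Hypothetical sketch of a NIR-style graph: nodes are computational
# primitives, edges define the static connectivity A -> B -> C.
# This mirrors the structure described above; it is NOT the real `nir` API.

nodes = {
    "A": {"type": "LIF", "tau": 0.01, "v_threshold": 1.0},  # leaky integrate-and-fire
    "B": {"type": "Linear", "weights": [[0.5]]},            # linear layer
    "C": {"type": "LIF", "tau": 0.02, "v_threshold": 1.0},
}

# Directed edges: (source, target)
edges = [("A", "B"), ("B", "C")]

def successors(node_id):
    """Return the nodes that receive input from `node_id`."""
    return [dst for src, dst in edges if src == node_id]

print(successors("A"))  # ['B']
print(successors("B"))  # ['C']
```

Because the graph is just nodes plus edges, serializing it (e.g. to HDF5, as NIR does) amounts to writing out the node parameters and the edge list.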
## Frameworks that currently support NIR

| **Framework** | **Write to NIR** | **Read from NIR** | **Examples** |
| --------------- | :--: | :--: | :------: |
| [Lava-DL](https://github.com/lava-nc/lava-dl) | ✓ | ⬚ | |
| [Nengo](https://nengo.ai) | ✓ | ✓ | [Nengo examples](https://github.com/neuromorphs/nir/tree/main/example/nengo/) |
| [Norse](https://github.com/norse/norse) | ✓ | ✓ | [Norse examples](https://github.com/neuromorphs/nir/tree/main/example/norse/) |
| [Rockpool](https://rockpool.ai) ([SynSense Xylo chip](https://www.synsense.ai/products/xylo/)) | ✓ | ✓ | |
| [Sinabs](https://sinabs.readthedocs.io) ([SynSense Speck chip](https://www.synsense.ai/products/speck-2/)) | ✓ | ✓ | |
| [snnTorch](https://github.com/jeshraghian/snntorch/) | ✓ | ⬚ | |
| [SpiNNaker2](https://spinncloud.com/portfolio/spinnaker2/) | ⬚ | ✓ | |

## Acknowledgements

Authors (in alphabetical order):

* [Steven Abreu](https://github.com/stevenabreu7)
* [Felix Bauer](https://github.com/bauerfe)
* [Jason Eshraghian](https://github.com/jeshraghian)
* [Matthias Jobst](https://github.com/matjobst)
* [Gregor Lenz](https://github.com/biphasic)
* [Jens Egholm Pedersen](https://github.com/jegp)
* [Sadique Sheik](https://github.com/sheiksadique)
* [Peng Zhou](https://github.com/pengzhouzp)

If you use NIR in your work, please cite the [following Zenodo reference](https://zenodo.org/record/8105042):

```
@software{nir2023,
  author    = {Abreu, Steven and Bauer, Felix and Eshraghian, Jason and Jobst, Matthias and Lenz, Gregor and Pedersen, Jens Egholm and Sheik, Sadique},
  title     = {Neuromorphic Intermediate Representation},
  month     = jul,
  year      = 2023,
  publisher = {Zenodo},
  version   = {0.0.1},
  doi       = {10.5281/zenodo.8105042},
  url       = {https://doi.org/10.5281/zenodo.8105042}
}
```
jupitergao18/jserver
https://github.com/jupitergao18/jserver
A json api and static files server in rust
[中文版说明](README_CN.md)

# jserver

A JSON API and static file server in Rust, just like json-server from typicode (Node.js).

Get a full fake REST API with __zero coding__ in __less than 30 seconds__ (seriously).

Created with <3 for front-end developers who need a quick back-end for prototyping and mocking.

## Getting started

Install JServer:

```
cargo build --release
cp target/release/jserver /usr/bin/
```

Create a `data.json` file with some data:

```json
{
  "posts": [
    { "id": 1, "title": "jserver", "author": "jupiter.gao" }
  ],
  "comments": [
    { "id": 1, "body": "some comment" }
  ],
  "profile": { "name": "jupiter" },
  "homepage": "https://apicenter.com.cn"
}
```

Start JServer:

```bash
jserver
```

Now if you go to [http://localhost:2901/api/posts/1](http://localhost:2901/api/posts/1), you'll get:

```json
{ "id": 1, "title": "jserver", "author": "jupiter.gao" }
```

Also, when making requests, it's good to know that:

- If you make POST, PUT, PATCH or DELETE requests, changes will be automatically and safely saved to `data.json`.
- Your request body should be a JSON object or a single value, just like the GET output (for example `{"name": "Foobar"}`, `"test string"`, `83.01`).
- Id values are not mutable. Any `id` value in the body of your PUT or PATCH request will be ignored. Only a value set in a POST request will be respected, and only if not already taken.
- A POST, PUT or PATCH request should include a `Content-Type: application/json` header to use the JSON in the request body. Otherwise it will return a 400 status code.

## Routes

Based on the previous `data.json` file, here are all the default routes.
### Array routes

```
GET    /api/posts
GET    /api/posts/1
POST   /api/posts
PUT    /api/posts/1
PATCH  /api/posts/1
DELETE /api/posts/1
```

### Object or Value routes

```
GET   /api/profile
POST  /api/profile
PUT   /api/profile
PATCH /api/profile
```

### Filter

```
GET /api/posts?title=jserver
GET /api/posts?id=1
```

### Operators

For numbers, use the suffixes `_lt`, `_lte`, `_gt`, `_gte` for `<`, `<=`, `>`, `>=` respectively.

For strings, use `_like` for `contains` and `_nlike` for `not contains`.

For arrays, use `_contains` for `contains` and `_ncontains` for `not contains`.

For numbers, strings and booleans, use `_ne` for `!=`.

```
GET /api/posts?title_like=server
GET /api/posts?id_gt=1&id_lt=3
```

### Paginate

Use `_page` and optionally `_size` to paginate returned data.

```
GET /api/posts?_page=7
GET /api/posts?_page=7&_size=20
```

_20 items are returned by default; pages are 1-based (0 is treated as 1)._

### Sort

Add `_sort` and `_order` (ascending order by default):

```
GET /api/posts?_sort=views&_order=asc
```

For multiple fields, use the following format:

```
GET /api/posts?_sort=user,views&_order=desc,asc
```

### Slice

Add `_start` and (`_end` or `_limit`):

```
GET /api/posts?_start=20&_end=30
GET /api/posts?_start=20&_limit=10
```

An `X-Total-Count` header is included in the array response.

### Database

```
GET /db
```

### Upload files

You can upload files to the server and access them through the static file server below.

```
POST /upload
```

The request body should be `multipart/form-data` and the file field name should be `file`. The response body will be a JSON array with each item having `name`, `path` and `size` properties.

### Static file server

You can use JServer to serve your HTML, JS and CSS: simply create a `./public` directory or use `--public-path` to set a different static files directory.
```bash
mkdir public
echo 'hello world' > public/index.html
jserver
```

```bash
jserver --public-path ./some-other-dir
```

## CLI usage

```
Usage: jserver [OPTIONS]

Options:
  -b, --bind-address <BIND_ADDRESS>          [default: 0.0.0.0:2901]
  -d, --db-path <DB_PATH>                    [default: ./data.json]
  -p, --public-path <PUBLIC_PATH>            [default: ./public]
  -i, --id <ID>                              [default: id]
  -m, --max-body-limit-m <MAX_BODY_LIMIT_M>  [default: 100]
      --debug
  -h, --help                                 Print help
  -V, --version                              Print version
```

## License

Apache License 2.0
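The filter, operator, pagination, sort and slice features described above are all plain query-string parameters, so they compose freely. As an illustration (a hedged sketch — `build_url` is a hypothetical helper, not part of JServer; the base URL follows the default bind address above):

```python
from urllib.parse import urlencode

BASE = "http://localhost:2901/api"  # default bind address from the CLI options

def build_url(resource, **params):
    """Compose a JServer query URL from the documented query parameters."""
    # safe=',' keeps multi-field _sort/_order values readable
    query = urlencode(params, safe=',')
    return f"{BASE}/{resource}" + (f"?{query}" if query else "")

# Filter + operators: posts with 1 < id < 3
print(build_url("posts", id_gt=1, id_lt=3))
# → http://localhost:2901/api/posts?id_gt=1&id_lt=3

# Pagination: page 7, 20 items per page
print(build_url("posts", _page=7, _size=20))
# → http://localhost:2901/api/posts?_page=7&_size=20

# Sort by two fields
print(build_url("posts", _sort="user,views", _order="desc,asc"))
# → http://localhost:2901/api/posts?_sort=user,views&_order=desc,asc
```

Any HTTP client (curl, fetch, reqwest) can then issue GET requests against the composed URLs.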
game-hax/Roblox-Exploit-API
https://github.com/game-hax/Roblox-Exploit-API
A C# Roblox exploiting API for the UWP distribution of the game. It's basically the WeAreDevs API, but it bypasses Byfron.
# Roblox Exploiting API

> ### ⚠️ This project is for educational purposes only. If you use this on Roblox and get banned, that is your fault, as you are breaking their Terms of Service.

## [Download API here](https://github.com/game-hax/Roblox-Exploit-API/releases/download/v1.0.1/Hovac_API.dll)

This is a Roblox exploiting API based on the WeAreDevs API that supports the UWP version of Roblox! There is no key system and it auto-updates to the latest version!

tl;dr: the WeAreDevs API, but it bypasses Byfron.

## Getting started

To use this, you need the UWP (Universal Windows Platform) version of the Roblox app, which doesn't have Byfron. To do this, [get Roblox from the Microsoft Store](https://www.microsoft.com/en-us/p/roblox/9nblgggzm6wm)!

## How to create your own exploit

> ### ⚠️ Educational purposes only; video tutorial soon!

1) Create a new Visual Studio project. I recommend choosing `Windows Forms App (.NET Framework)`.

![image](assets/devenv_87RNNdQaHr.png)

2) Add the exploit API as a reference.

![image](assets/devenv_kqebCUsWBc.gif)
![image](assets/devenv_F6tbjiloEY.png)

3) Start using the API! Example:

```cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Hovac_API;

namespace TutorialExploit
{
    public partial class Form1 : Form
    {
        ExploitAPI exploitAPI = new ExploitAPI();

        public Form1()
        {
            InitializeComponent();
        }

        private void button2_Click(object sender, EventArgs e)
        {
            exploitAPI.LaunchExploit(); // injects into Roblox
        }

        private void button1_Click(object sender, EventArgs e)
        {
            exploitAPI.SendLuaScript(richTextBox1.Text); // runs the script from the text box
        }
    }
}
```

![image](assets/Tutorial_MhsjSdkUBn.png)

Your exploit will now work and auto-update to the latest version!

## Attribution

The code for the actual injecting wasn't done by me.
The important DLLs are taken from WeAreDevs' CDN; however, for some reason they haven't updated the WeAreDevs API to do this, so I just did it myself. You can complain that this project is skidded, but that's the point: it's just a UWP port of the WeAreDevs API.
devlive-community/openai-java-sdk
https://github.com/devlive-community/openai-java-sdk
Provides Java developers with a convenient, easy-to-use SDK to interact with OpenAI's APIs.
<div align="center">

# OpenAI Java SDK

![Visitors](https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fgithub.com%2Fdevlive-community%2Fopenai-java-sdk&countColor=%23263759&style=flat)
[![](https://tokei.rs/b1/github/devlive-community/openai-java-sdk)](https://github.com/devlive-community/openai-java-sdk)
![version](https://img.shields.io/github/v/release/devlive-community/openai-java-sdk.svg)
![GitHub](https://img.shields.io/github/license/devlive-community/openai-java-sdk)
![GitHub branch checks state](https://img.shields.io/github/checks-status/devlive-community/openai-java-sdk/master?style=flat-square)
![GitHub Release Date](https://img.shields.io/github/release-date/devlive-community/openai-java-sdk?style=flat-square)
![GitHub commit activity](https://img.shields.io/github/commit-activity/y/devlive-community/openai-java-sdk?style=flat-square)
![GitHub contributors](https://img.shields.io/github/contributors-anon/devlive-community/openai-java-sdk?style=flat-square)
![GitHub last commit](https://img.shields.io/github/last-commit/devlive-community/openai-java-sdk?style=flat-square)

<h3>Provides Java developers with a convenient, easy-to-use SDK to interact with OpenAI's APIs.</h3>

</div>

# How to use?

---

Add the dependency to your Maven `pom.xml`:

```xml
<properties>
    <openai.version>LATEST</openai.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.devlive.sdk</groupId>
        <artifactId>openai-java-sdk</artifactId>
        <version>${openai.version}</version>
    </dependency>
</dependencies>
```

With `LATEST`, the latest released version is used by default.

# Feature

---

- Rapidly develop products based on the platform, improve development efficiency, and meet fast delivery requirements.
- Concise and easy to understand, with code specification requirements; can be used as an enterprise code specification.
- The steps to use are very simple; one line of code realizes the required function.
- Supports multiple proxy platforms and adapts to various APIs.
- Dynamically expands max tokens according to the model.
- Rich pre-request data validation avoids hitting the API with invalid input.

## Star History

---

[![Star History Chart](https://api.star-history.com/svg?repos=devlive-community/openai-java-sdk&type=Date)](https://star-history.com/#devlive-community/openai-java-sdk&Date)

## Contributor

---

<a href="https://github.com/devlive-community/openai-java-sdk/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=devlive-community/openai-java-sdk" />
</a>
zsio/Free-GPT-No-Auth
https://github.com/zsio/Free-GPT-No-Auth
A collection of free, public GPT mirror sites that require no login
<a href="https://api-box.com">
  <h1 align="center">Free GPT 🆓 No Auth ⛔</h1>
</a>

<p align="center">
  A collection of free, public GPT mirror sites that require no login
</p>

<div align="center">

[![Discord Server](https://discordapp.com/api/guilds/1125981168584626247/widget.png?style=banner2&count=true)](https://discord.gg/cYUU8mCDMd)

<p>
  You can join the Discord at <a href="https://discord.gg/8AZ9Xa37">discord.gg/8AZ9Xa37</a> to follow the project's latest progress.
  <a href="https://discord.gg/8AZ9Xa37">
    <img align="center" alt="gpt4free Discord" width="22px" src="https://raw.githubusercontent.com/peterthehan/peterthehan/master/assets/discord.svg" />
  </a>
</p>

</div>

### ✔️ TODO

- [x] Switch between sites via the left-hand menu.
- [ ] Provide the site list via configuration.
- [ ] Add a proxy on the Next.js server side to convert a small subset of endpoints into OpenAI-API-compatible ones.

This is a [Next.js](https://nextjs.org/) project.

## 🚩 Still working to keep this updated

Many hands make light work: this project depends on everyone's support. If you have a good idea or discover a new site, feel free to submit a PR or an Issue.

## 🏃 Run

[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https://github.com/zsio/Free-GPT-No-Auth&project-name=free-gpt-no-auth&repository-name=Free-GPT-No-Auth)
dmytrostriletskyi/threads-net
https://github.com/dmytrostriletskyi/threads-net
Unofficial and reverse-engineered Threads (threads.net) Python API wrapper. Supports read and write capabilities.
[![Stand With Ukraine](https://raw.githubusercontent.com/vshymanskyy/StandWithUkraine/main/banner-direct-single.svg)](https://stand-with-ukraine.pp.ua) Unofficial and reverse-engineered Threads ([threads.net](https://www.threads.net/@zuck)) Python API wrapper. Supports read and write capabilities. [![](https://github.com/dmytrostriletskyi/threads-net/actions/workflows/main.yaml/badge.svg?branch=main)](https://github.com/dmytrostriletskyi/threads-net/actions/workflows/main.yaml) [![](https://img.shields.io/github/release/dmytrostriletskyi/threads-net.svg)](https://github.com/dmytrostriletskyi/threads-net/releases) [![](https://img.shields.io/pypi/v/threads-net.svg)](https://pypi.python.org/pypi/threads-net) [![](https://pepy.tech/badge/threads-net)](https://pepy.tech/project/threads-net) [![](https://img.shields.io/pypi/l/threads-net.svg)](https://pypi.python.org/pypi/threads-net/) [![](https://img.shields.io/pypi/pyversions/threads-net.svg)](https://pypi.python.org/pypi/threads-net/) Table of content: * [Disclaimer](#disclaimer) * [Roadmap](#roadmap) * [Getting started](#getting-started) * [How to install](#how-to-install) * [Examples](#examples) * [API](#api) * [Disclaimer](#disclaimer-1) * [Troubleshooting](#troubleshooting) * [Two-Factor Authentication (2FA)](#two-factor-authentication-2fa) * [Terminology](#terminology) * [Initialization](#initialization) * [Settings](#settings) * [Pagination](#pagination) * [Public](#api) * [User](#user) * [Get Identifier](#get-identifier) * [Get By Identifier](#get-by-identifier) * [Get Threads](#get-threads) * [Get Replies](#get-replies) * [Thread](#thread) * [Get Identifier](#get-identifier-1) * [Get By Identifier](#get-by-identifier-1) * [Get Likers](#get-likers) * [Private](#private) * [User](#user-1) * [Get Identifier](#get-identifier-2) * [Get By Identifier](#get-by-identifier-2) * [Get Threads](#get-threads-1) * [Get Replies](#get-replies-1) * [Get Followers](#get-followers) * [Get Following](#get-following) * [Get 
Friendship Status](#get-friendship-status) * [Search](#search) * [Follow](#follow) * [Unfollow](#unfollow) * [Mute](#mute) * [Unmute](#unmute) * [Restrict](#restrict) * [Unrestrict](#unrestrict) * [Block](#block) * [Unblock](#unblock) * [Thread](#thread-1) * [Get Identifier](#get-identifier-3) * [Get By Identifier](#get-by-identifier-3) * [Get Likers](#get-likers-1) * [Create](#create) * [Delete](#delete) * [Like](#like) * [Unlike](#unlike) * [Repost](#repost) * [Unrepost](#unrepost) * [Quote](#quote) * [Feed](#feed) * [Recommended users](#get-recommended-users)

## Disclaimer

* As `Threads` is backed by `Instagram`, the two are coupled in terms of `APIs`. For example, there is a way to fetch a `Threads` user or a thread from both the `Threads` and `Instagram` `APIs`. But there are unique `API` endpoints that exist in only one of the `APIs`. For instance, to create a thread, there is only an `Instagram API` endpoint.
* This project is unofficial and reverse-engineered. This means that:
  * The library pretends to be a mobile phone or a web user (via proper `HTTP` headers and other things). Thus, you might face `rate limits` (because pretending is never ideal), your `IP` may be marked as suspicious, and your `Threads` and/or `Instagram` account might even be suspended if you mess up the login or send too many requests.
  * The `Threads` and `Instagram` `APIs` are not public `APIs`; they were developed to be used internally (`Meta`'s back-end developers made them only for `Meta`'s front-end/mobile developers) without any intention of exposing them externally. So Meta will not inform the community about changes they make or maintain backward compatibility. Thus, expect the `APIs` (and therefore the library) to be unstable at times, and be patient waiting for fixes.
## Roadmap

Check what is already done in the table of contents above; below are only the remaining items:

- [ ] Pagination for all methods responding with a list of records
- [ ] Get notifications about new threads
- [ ] Manage auth for accounts with 2FA enabled
- [ ] Manage auth for accounts that require a challenge

## Getting started

### How to install

Install the library with the following command using `pip3`:

```bash
$ pip3 install threads-net
```

### Examples

Find examples of how to use the library in the `examples` folder:

```bash
ls examples
├── private
│   ├── create_thread.py
│   ├── follow_user.py
│   ├── get_user.py
│   ├── get_user_followers.py
│   ├── get_user_following.py
│   └── search_user.py
│   ...
└── public
    ├── get_thread.py
    ├── get_thread_likers.py
    ├── get_user.py
    ├── get_user_replies.py
    └── get_user_threads.py
    ...
```

## API

### Disclaimer

To interact with the `Threads API`, there is no need for any authorization, so in the library it is called the `public API`. On the other hand, to interact with the `Instagram API`, you have to specify an `Instagram` username and password (just as you do when you log in to the `Threads` mobile application), so in the library it is called the `private API`.

* The `public API` has only read-only endpoints. At the same time, the `private API` has both read and write endpoints. The `public API` is much less stable than the `private API` (because the `private API` is also the `Instagram API`, and `Instagram` is already a stable product in general).
* The `public API` is less risky for getting `rate limits` or having your `Threads` and/or `Instagram` account suspended.

So, it is a trade-off between stability and bugs on one side and `rate limits` and/or `suspension` on the other. With the library you can combine the two approaches: for instance, prefer the `public API` for read-only endpoints and the `private API` for write-only endpoints, or even build a retry mechanism (try the public one; if it fails, try the private one).
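That public-first, private-fallback idea can be sketched as a small, library-independent helper (the helper itself is illustrative, not part of `threads-net`; pass it the library methods you want, e.g. `threads.public_api.get_user` and `threads.private_api.get_user`):

```python
def with_fallback(primary, fallback):
    """Call `primary`; if it raises, fall back to `fallback`.

    E.g. primary=lambda: threads.public_api.get_user(id=314216),
         fallback=lambda: threads.private_api.get_user(id=314216).
    """
    try:
        return primary()
    except Exception:
        return fallback()

# Illustration with stand-in callables instead of real API calls:
def flaky():
    raise RuntimeError("public API rate-limited")

print(with_fallback(flaky, lambda: {"status": "ok", "source": "private"}))
# → {'status': 'ok', 'source': 'private'}
```

In practice you may want to catch only the library's specific exceptions and log the failure before falling back, rather than swallowing every `Exception`.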
### Troubleshooting

#### Two-Factor Authentication (2FA)

Currently, it is not possible to use the `private API` with two-factor authentication enabled on the account. If you do not disable it, you might face issues like [#37](https://github.com/dmytrostriletskyi/threads-net/issues/37). So, in order to use the `private API`, you have to disable it first. That said, managing authentication for such accounts is on the roadmap, so eventually you will not have to disable it.

### Terminology

There may be confusion among many `Threads API` clients, as well as in both the `Threads` and `Instagram` `APIs`, regarding the naming of entities. For instance, in `Threads` a publication is called a `thread`, but under the hood, in the `API` for fetching or creating a `thread`, it is called a `post`. This is because `Threads` is backed by `Instagram`, and `thread` creation is done through the `Instagram API`, where a publication is called a post. The library maintainers decided to stick with the `Threads` terminology and use the word `thread` for post-related things.

### Initialization

Import the class responsible for `Threads API` communication and initialize the object.

If you are going to use only the `public API`, use the following commands:

```python3
>>> from threads import Threads
>>> threads = Threads()
```

If you are going to use only the `private API`, or both the `private` and `public` `APIs`, use the following commands:

```python3
>>> from threads import Threads
>>> threads = Threads(username='instagram_username', password='instagram_password')
```

### Settings

Currently, to perform `private API` requests, there is a need to specify mobile device metadata, a timezone offset and an authorization token. The device metadata and timezone offset are randomly generated once you initialize `Threads`; the authorization token is requested from the `Instagram API` and saved for further usage.
If you initialize `Threads` frequently with different data, the `Instagram API` might treat it as non-human, machine behavior, mark your `IP` as suspicious and even suspend your `Threads` and/or `Instagram` account. Also, the authorization token has a long life: once fetched, it can be reused for at least the next 24 hours (maybe more, but at least that), so there is no need to fetch it at each initialization over and over again.

For this, there are settings. These are either a `JSON` file or a dictionary you can provide to reuse the device metadata, timezone offset and authorization token instead of generating random data on each initialization.

To provide settings as a dictionary, use the following commands:

```python
>>> settings = {
        'authentication': {
            'token': 'eyJkc191c2VyX2lkIjYnNjA0MzgxNTQ1N1NjYiLCJzZXNzaW9uaWQiOi1FxSUQ1TXcifQ==',
        },
        'timezone': {
            'offset': -14400,
        },
        'device': {
            'id': 'android-cb0d81d0e326cb42',
            'manufacturer': 'OnePlus',
            'model': 'ONEPLUS+A3010',
            'android_version': 25,
            'android_release': '7.1.1',
        }
    }
>>> threads = Threads(settings=settings, username=INSTAGRAM_USERNAME, password=INSTAGRAM_PASSWORD)
```

To provide settings as a path to a `JSON` file, use the following commands:

```python
>>> threads = Threads(
        settings='/Users/dmytrostriletskyi/projects/threads/settings.json',
        username='instagram_username',
        password='instagram_password',
    )
```

The `JSON` file content looks like:

```bash
$ cat settings.json
{
  "authentication": {
    "token": "eyJkc191c2VyX2lkIjYnNjA0MzgxNTQ1N1NjYiLCJzZXNzaW9uaWQiOi1FxSUQ1TXcifQ=="
  },
  "timezone": {
    "offset": -14400
  },
  "device": {
    "id": "android-2d53671ef3d1ac79",
    "manufacturer": "OnePlus",
    "model": "ONEPLUS+A3010",
    "android_version": 25,
    "android_release": "7.1.1"
  }
}
```

Instead of manually creating a `JSON` file, you can ask `Threads` to generate it, specifying a path to save it to, with the following command:

```python
>>> threads = Threads(username='instagram_username',
                      password='instagram_password')
>>> threads.download_settings(path='/Users/dmytrostriletskyi/projects/threads/settings.json')
```

### Pagination

Two things are involved in pagination for requests that respond with lists of records (e.g. threads, replies, followers, following and so on): `limit` and `max id`. Basically, `limit` is the number of records to request (`15` by default), and `max id` acts as an offset, though in a slightly tricky way.

Let's consider the method for getting a list of a user's threads, assuming there are only `10` in total:

```python
>>> user_threads = threads.private_api.get_user_threads(id=314216)
>>> user_threads
{
  "threads": [
    {"id": 3146710118003098789, ...},
    {"id": 3146710118003098788, ...},
    {"id": 3146710118003098787, ...},
    {"id": 3146710118003098786, ...},
    {"id": 3146710118003098785, ...},
    {"id": 3146710118003098784, ...},
    {"id": 3146710118003098783, ...},
    {"id": 3146710118003098782, ...},
    {"id": 3146710118003098781, ...},
    {"id": 3146710118003098780, ...},
  ],
  "status": "ok"
}
```

If we want to fetch them iteratively in chunks (for instance, `2` threads at a time), we just specify a `limit` argument:

```python
>>> user_threads = threads.private_api.get_user_threads(id=314216, limit=2)
>>> user_threads
{
  "threads": [
    {"id": 3146710118003098789, ...},  # 1st thread from 10.
    {"id": 3146710118003098788, ...},  # 2nd thread from 10.
  ],
  "next_max_id": "QVFCWnJRSFdwMW5ITjRhcDlqR2hfY19JM21eQ==",
  "status": "ok"
}
```

But this alone is not enough, because at this stage we do not know how to fetch the next `2` threads. This is where `max id` helps: the response contains `next_max_id`, which is the encoded identifier of the `3rd` thread to start offsetting (chunking) from:

```python
>>> user_threads = threads.private_api.get_user_threads(
        id=314216,
        limit=2,
        from_max_id='QVFCWnJRSFdwMW5ITjRhcDlqR2hfY19JM21eQ==',
    )
>>> user_threads
{
  "threads": [
    {"id": 3146710118003098787, ...},  # 3rd thread from 10.
    {"id": 3146710118003098786, ...},  # 4th thread from 10.
  ],
  "next_max_id": "ZrWDBSMm9DTU90Yy1mSXQ4WTNWb0V3ZmI4Um5==",
  "status": "ok"
}
```

So, basically, you should keep iterating over threads while `next_max_id` is present. The code to fetch all existing threads would look like:

```python
from_max_id = None

while True:
    user_threads = threads.private_api.get_user_threads(id=314216, limit=2, from_max_id=from_max_id)

    for thread in user_threads.get('threads'):
        items = thread.get('thread_items')

        for item in items:
            thread_id = item.get('post').get('pk')
            print(thread_id)

    from_max_id = user_threads.get('next_max_id')

    if from_max_id is None:
        break

3146710118003098789
3146710118003098788
3146710118003098787
3146710118003098786
3146710118003098785
3146710118003098784
3146710118003098783
3146710118003098782
3146710118003098781
3146710118003098780
```

Use pagination when there are many records to return (e.g. threads, replies, followers, following and so on), or when you are limited by computing resources (`RAM`, `bandwidth` and so on).

### Public

#### User

##### Get Identifier

`threads.public_api.get_user_id` — getting a user's identifier by their username.

| Parameters | Type | Required | Restrictions | Description |
|:----------:|:-------:|:--------:|:------------:|----------------------------------------------------------|
| `username` | String | Yes | - | A user's username, which goes along with `@`, like `zuck`. |

<details>
<summary>Open code example</summary>

```python3
>>> user_id = threads.public_api.get_user_id(username='zuck')
>>> user_id
314216
```

</details>

##### Get By Identifier

`threads.public_api.get_user` — getting a user by their identifier.

| Parameters | Type | Required | Restrictions | Description |
|:----------:|:-------:|:--------:|:------------:|----------------------|
| `id` | Integer | Yes | `>0` | A user's identifier.
| <details> <summary>Open code example</summary> ```python3 >>> user = threads.public_api.get_user(id=314216) >>> user { "data": { "userData": { "user": { "is_private": false, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX-mW2_l&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfAUZzobOIH6imLnb2Z3iXoWY5H1Fv_kNnyG8T4UGgJegQ&oe=64AED800&_nc_sid=10d13b", "username": "zuck", "hd_profile_pic_versions": [ { "height": 320, "url": "https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s320x320&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX-mW2_l&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfD5z6UgnQH54dihPnMrXgH2L-mLCMGlFsIF9Ug7U4RWdA&oe=64AED800&_nc_sid=10d13b", "width": 320 }, { "height": 640, "url": "https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s640x640&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX-mW2_l&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfD4BaVu4cDcX53xPocD-3o_ZbKIESxUZhlU08FBpycCsA&oe=64AED800&_nc_sid=10d13b", "width": 640 } ], "is_verified": true, "biography": "", "biography_with_entities": null, "follower_count": 2663947, "profile_context_facepile_users": null, "bio_links": [ { "url": "" } ], "pk": "314216", "full_name": "Mark Zuckerberg", "id": null } } }, "extensions": { "is_final": true } } ``` </details> ##### Get Threads `threads.public_api.get_user_threads` — getting a user's threads by the user's identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------| | `id` | Integer | Yes | `>0` | A user's identifier. 
| <details> <summary>Open code example</summary> ```python3 >>> user_threads = threads.public_api.get_user_threads(id=314216) >>> user_threads { "instagram": { "pk": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/352224138_1028122805231303_1175896139426286760_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=hbekpcjRfioAX8iYGJv&edm=AKEQFekBAAAA&ccb=7-5&oh=00_AfDf2r6qwujUc84tkzUlYJMfJt66xoWScQ-nsB5bmtYDnw&oe=64B0066A&_nc_sid=29ddf3", "profile_pic_url_hd": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/352224138_1028122805231303_1175896139426286760_n.jpg?stp=dst-jpg_s320x320&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=hbekpcjRfioAX8iYGJv&edm=AKEQFekBAAAA&ccb=7-5&oh=00_AfCoPcYmzHdc2evDvduZjZK-IxDS07wGf0x9czecY_TgVQ&oe=64B0066A&_nc_sid=29ddf3", "is_verified": true, "media_count": 283, "follower_count": 11924434, "following_count": 523, "biography": "", "external_url": null, "account_type": null, "is_business": false, "public_email": null, "contact_phone_number": null, "public_phone_country_code": null, "public_phone_number": null, "business_contact_method": "UNKNOWN", "business_category_name": null, "category_name": "Entrepreneur", "category": null, "address_street": null, "city_id": null, "city_name": null, "latitude": null, "longitude": null, "zip": null, "instagram_location_id": null, "interop_messaging_user_fbid": null }, "threads": { "data": { "userData": { "user": { "is_private": false, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX9JAAZI&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfCdAMkmk0XL_r0GQi2MRD1Aq1kPZKBLfXLby47e_hsZrg&oe=64AED800&_nc_sid=10d13b", "username": "zuck", "hd_profile_pic_versions": [ { "height": 320, "url": 
"https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s320x320&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX9JAAZI&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfDJeE127_ZFA-eD3qRMM0Fh2NM-jRR4tUFsTywCrMctNA&oe=64AED800&_nc_sid=10d13b", "width": 320 }, { "height": 640, "url": "https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s640x640&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX9JAAZI&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfAY-75SdPDasc0ophRu3lgeeHmnb3qPZIE-mCPh8PRRBw&oe=64AED800&_nc_sid=10d13b", "width": 640 } ], "is_verified": true, "biography": "", "biography_with_entities": null, "follower_count": 2663588, "profile_context_facepile_users": null, "bio_links": [ { "url": "" } ], "pk": "314216", "full_name": "Mark Zuckerberg", "id": null } } }, "extensions": { "is_final": true } } } ``` </details> ##### Get Replies `threads.public_api.get_user_replies` — getting a user's replies by the user's identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------| | `id` | Integer | Yes | `>0` | A user's identifier. 
| <details> <summary>Open code example</summary> ```python3 >>> user_replies = threads.public_api.get_user_replies(id=314216) >>> user_replies { "data": { "mediaData": { "threads": [ { "thread_items": [ { "post": { "user": { "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/358196917_1319101762021874_6840458701147612733_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=RYUCB0OzBK8AX-fwWgu&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfAyNgzr1QLbjIdorzFD01h-NpApHx7-0XKdGWOKjS4Xlw&oe=64ABF5FF&_nc_sid=10d13b", "username": "intense0ne", "id": null, "is_verified": true, "pk": "13455834" }, "image_versions2": { "candidates": [ { "height": 1431, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=dst-jpg_e35&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfCUsy68lK2H5iyEGnWnPrQMnd18Dpvl0ZOhGh-iWQpxZA&oe=64ACB5E4&_nc_sid=10d13b", "width": 1170, "__typename": "XDTImageCandidate" }, { "height": 881, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=dst-jpg_e35_p720x720&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfBUyEmnvY6wewYd2xR5RcBUdpwYDQ3-tPOJroowQwDDZA&oe=64ACB5E4&_nc_sid=10d13b", "width": 720, "__typename": "XDTImageCandidate" }, { "height": 783, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=dst-jpg_e35_p640x640_sh0.08&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfCngfqkXR6lMDsTO_3W3ub2wXo6gy8hs9IflxKgVSZ29Q&oe=64ACB5E4&_nc_sid=10d13b", "width": 640, "__typename": "XDTImageCandidate" }, { "height": 587, "url": 
"https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=dst-jpg_e35_p480x480&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfDEcZTEK5c-C1M5RO0NLJZ1afo7GGqHClM6SFBH6nVJqQ&oe=64ACB5E4&_nc_sid=10d13b", "width": 480, "__typename": "XDTImageCandidate" }, { "height": 391, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=dst-jpg_e35_p320x320&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfDme7hPMR0v7Uc_Rq5Vl-HA3eUAosNOuznntJ3rwkPVgA&oe=64ACB5E4&_nc_sid=10d13b", "width": 320, "__typename": "XDTImageCandidate" }, { "height": 294, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=dst-jpg_e35_p240x240&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfDKjwryrv-cxfybvuPaa9TbcBqxozsTeOU6I7fRgDaZVw&oe=64ACB5E4&_nc_sid=10d13b", "width": 240, "__typename": "XDTImageCandidate" }, { "height": 1080, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s1080x1080&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfAfUg9I6LBwwruVJxP0y7tsa1eILSH3lnERy-GhLvY5tQ&oe=64ACB5E4&_nc_sid=10d13b", "width": 1080, "__typename": "XDTImageCandidate" }, { "height": 750, "url": 
"https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s750x750_sh0.08&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfC4eDvJfpOFQXP9iDiQccDFa8mH-c-4eHjtoJa1guTIFA&oe=64ACB5E4&_nc_sid=10d13b", "width": 750, "__typename": "XDTImageCandidate" }, { "height": 640, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s640x640_sh0.08&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfCz0RbvKHBOKMTsmaHFJTa2NUymRFh9abONGDVhps-3BQ&oe=64ACB5E4&_nc_sid=10d13b", "width": 640, "__typename": "XDTImageCandidate" }, { "height": 480, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s480x480&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfCp9r6Se5hjBo-rMHJvLhfDFRJkzbOo7QZu-BkiW-kCew&oe=64ACB5E4&_nc_sid=10d13b", "width": 480, "__typename": "XDTImageCandidate" }, { "height": 320, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s320x320&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfCizo3gIxHuMKtgDmK1XNlxXr2NyEwfRbsN8ZLNb6y8GA&oe=64ACB5E4&_nc_sid=10d13b", "width": 320, "__typename": "XDTImageCandidate" }, { "height": 240, "url": 
"https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s240x240&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfDvAFSgCbR982_ZAUAFSfdyXLWG_7WbXnv-_UZnorY7EA&oe=64ACB5E4&_nc_sid=10d13b", "width": 240, "__typename": "XDTImageCandidate" }, { "height": 150, "url": "https://scontent.cdninstagram.com/v/t51.2885-15/358024814_953839475848219_4275939482568010016_n.jpg?stp=c0.130.1170.1170a_dst-jpg_e35_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=XBVzqlYSSPMAX-rTJuQ&edm=APs17CUBAAAA&ccb=7-5&ig_cache_key=MzE0MTMwNzMyODM1ODU1NjA4OA%3D%3D.2-ccb7-5&oh=00_AfAy_iWsQikPBVk0haExvRIaDHqtPRkh2t-A9DHE4G2HWg&oe=64ACB5E4&_nc_sid=10d13b", "width": 150, "__typename": "XDTImageCandidate" } ] }, "original_width": 1170, "original_height": 1431, "video_versions": [], "carousel_media": null, "carousel_media_count": null, "pk": "3141307328358556088", "has_audio": null, "text_post_app_info": { "link_preview_attachment": null, "share_info": { "quoted_post": null, "reposted_post": null }, "reply_to_author": null, "is_post_unavailable": false }, "caption": { "text": "Volk or Yair?? #UFC290\n\nI got Volk!! 
\ud83d\udc4f\ud83d\udc4f\ud83d\udc4f" }, "taken_at": 1688693036, "like_count": 4144, "code": "CuYKl8tJKG4", "media_overlay_info": null, "id": "3141307328358556088_13455834" }, "line_type": "squiggle", "view_replies_cta_string": "386 replies", "reply_facepile_users": [ { "__typename": "XDTUserDict", "id": null, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/358513246_834402004938078_3745574617102161833_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=106&_nc_ohc=LtUMjbsa1lgAX83w0Sh&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfDGvGlfVhbbAHunncfE-Gzmbf0rshNG8tlfg4eb5V77iQ&oe=64AD69D4&_nc_sid=10d13b" }, { "__typename": "XDTUserDict", "id": null, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/358028941_1385051345389996_8437393621281150341_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=104&_nc_ohc=XcGqYzbKhDYAX87Clm7&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfCKDgkhOitXrGIkYQs1V607cQ9rw0cSB9KuZoJTyCIPWQ&oe=64ABFFDC&_nc_sid=10d13b" }, { "__typename": "XDTUserDict", "id": null, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/358164004_986964552348931_381538637050666409_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=ML6ZsvS6xFsAX_GLu5T&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfBjZQ0RElFZ0DQxOpHmDIw-p2WRX9_NvhArpVy8Fqz0lA&oe=64AC0D96&_nc_sid=10d13b" } ], "should_show_replies_cta": true, "__typename": "XDTThreadItem" }, { "post": { "user": { "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=9PG1NK-L8OkAX_mdst3&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfAo5WPQy4q_RbWq8Hox7lQS09zUmk8PHdJWXYjU8Tu2yw&oe=64ACDDC0&_nc_sid=10d13b", "username": "zuck", "id": null, "is_verified": true, "pk": "314216" }, "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "carousel_media": null, 
"carousel_media_count": null, "pk": "3141314003249945904", "has_audio": null, "text_post_app_info": { "link_preview_attachment": null, "share_info": { "quoted_post": null, "reposted_post": null }, "reply_to_author": { "username": "intense0ne", "id": null }, "is_post_unavailable": false }, "caption": { "text": "Definitely Volk" }, "taken_at": 1688693832, "like_count": 10843, "code": "CuYMHFLrF0w", "media_overlay_info": null, "id": "3141314003249945904_314216" }, "line_type": "line", "view_replies_cta_string": "1,114 replies", "reply_facepile_users": [ { "__typename": "XDTUserDict", "id": null, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/357993149_654182843249841_2984126770891448042_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=109&_nc_ohc=-HCIQXYHQlcAX8qkTXo&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfAwsxFT9ZYe71DKt7Wm3tmH4-3SwpA_6ifb74G-bMn8Bg&oe=64AC46A1&_nc_sid=10d13b" }, { "__typename": "XDTUserDict", "id": null, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/358036814_264065539563777_3872673087942117120_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=109&_nc_ohc=T3x8fyZcDQIAX_FXQ-6&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfDtWBscXjH85fml9eI9IERX2wibGkWU-EQcuSH-b3MvCA&oe=64AD33EE&_nc_sid=10d13b" }, { "__typename": "XDTUserDict", "id": null, "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/358000233_1044681130274316_6742390706661437266_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=100&_nc_ohc=PvpO-62JaD0AX_XMP5_&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfC4ma-opT9ieAnypxLSXNtVUOQkSq5_hbtca5ADqOSHRA&oe=64ACA63A&_nc_sid=10d13b" } ], "should_show_replies_cta": true, "__typename": "XDTThreadItem" } ], "id": "3141314003249945904" }, ... ] } }, "extensions": { "is_final": true } } ``` </details> #### Thread ##### Get Identifier `threads.public_api.get_thread_id` — getting a thread's identifier by its `URL` identifier. 
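Before calling it you need the thread's `URL` identifier. As a minimal sketch of pulling that identifier out of a full thread link (the `extract_url_id` helper is hypothetical, not part of this library):

```python
from urllib.parse import urlparse

def extract_url_id(thread_url: str) -> str:
    # Hypothetical helper: the URL identifier is the last non-empty
    # path segment, e.g. 'CuXFPIeLLod' for .../t/CuXFPIeLLod.
    path = urlparse(thread_url).path
    return path.rstrip('/').rsplit('/', 1)[-1]

print(extract_url_id('https://www.threads.net/t/CuXFPIeLLod'))  # CuXFPIeLLod
```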
`URL` identifier is the last part of a thread's website `URL`. If the thread's `URL` is `https://threads.net/t/CuXFPIeLLod`, then it would be `CuXFPIeLLod`. | Parameters | Type | Required | Restrictions | Description | |:----------:|:------:|:--------:|:------------:|------------------------------| | `url_id` | String | Yes | - | A thread's `URL` identifier. | <details> <summary>Open code example</summary> ```python3 >>> thread_id = threads.public_api.get_thread_id(url_id='CuXFPIeLLod') >>> thread_id 3141002295235099165 ``` </details> ##### Get By Identifier `threads.public_api.get_thread` — getting a thread by its identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------| | `id` | Integer | Yes | `>0` | A thread's identifier. | <details> <summary>Open code example</summary> ```python3 >>> thread = threads.public_api.get_thread(id=3141002295235099165) >>> thread { "data": { "data": { "containing_thread": { "thread_items": [ { "post": { "user": { "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/343392897_618515990300243_8088199406170073086_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=1xtR491RY6cAX8Okf3Z&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfDqsZTCRQlz7EaDpgEzBkmXqtqKtTH80Q0r1rIDMJCcpg&oe=64AC2D8D&_nc_sid=10d13b", "username": "mosseri", "id": null, "is_verified": true, "pk": "95561" }, "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "carousel_media": null, "carousel_media_count": null, "pk": "3141055616164096839", "has_audio": null, "text_post_app_info": { "link_preview_attachment": null, "share_info": { "quoted_post": null, "reposted_post": null }, "reply_to_author": null, "is_post_unavailable": false, "direct_reply_count": 2839 }, "caption": { "text": "I've been getting some questions about deleting your account. 
To clarify, you can deactivate your Threads account, which hides your Threads profile and content, you can set your profile to private, and you can delete individual threads posts \u2013 all without deleting your Instagram account. Threads is powered by Instagram, so right now it's just one account, but we're looking into a way to delete your Threads account separately." }, "taken_at": 1688663030, "like_count": 29984, "code": "CuXRXDdNOtH", "media_overlay_info": null, "id": "3141055616164096839_95561" }, "line_type": "none", "view_replies_cta_string": "2,839 replies", "reply_facepile_users": [], "should_show_replies_cta": true } ], "thread_type": "thread", "header": null, "id": "3141055616164096839" }, "reply_threads": [ { "thread_items": [ { "post": { "user": { "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/355424589_948742193018772_4617955233166333765_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=100&_nc_ohc=heQWdfsKG5MAX9mLim0&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfA6TUpLjjegnP6KuducRm8n6uU9iBCXg4O1P33WYSh3mg&oe=64ACDDBA&_nc_sid=10d13b", "username": "jimmywhotalks", "id": null, "is_verified": true, "pk": "51094265817" }, "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "carousel_media": null, "carousel_media_count": null, "pk": "3141082664316809193", "has_audio": null, "text_post_app_info": { "link_preview_attachment": null, "share_info": { "quoted_post": null, "reposted_post": null }, "reply_to_author": { "username": "mosseri", "id": null }, "is_post_unavailable": false }, "caption": { "text": "Glad to know, Everyone is worrying for that." 
}, "taken_at": 1688666254, "like_count": 59, "code": "CuXXgqAva_p", "media_overlay_info": null, "id": "3141082664316809193_51094265817" }, "line_type": "none", "view_replies_cta_string": null, "reply_facepile_users": [], "should_show_replies_cta": false } ], "thread_type": "thread", "header": null, "id": "3141082664316809193" }, ... ] } }, "extensions": { "is_final": true } } ``` </details> ##### Get Likers `threads.public_api.get_thread_likers` — getting a thread's likers by the thread's identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------| | `id` | Integer | Yes | `>0` | A thread's identifier. | <details> <summary>Open code example</summary> ```python3 >>> thread_likers = threads.public_api.get_thread_likers(id=3141002295235099165) >>> thread_likers { "data": { "likers": { "users": [ { "pk": "32729880576", "full_name": "K 🦋I🦋K🦋I", "profile_pic_url": "https://scontent.cdninstagram.com/19/6007.jpg", "follower_count": 51, "is_verified": false, "username": "niyod_couture", "profile_context_facepile_users": null, "id": null }, ... ] } }, "extensions": { "is_final": true } } ``` </details> ### Private #### User ##### Get Identifier `threads.private_api.get_user_id` — getting a user's identifier by their username. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------------------------------------------| | `username` | String | Yes | - | A user's username, which appears after the `@`, e.g. `zuck`. | <details> <summary>Open code example</summary> ```python3 >>> user_id = threads.private_api.get_user_id(username='zuck') >>> user_id 314216 ``` </details> ##### Get By Identifier `threads.private_api.get_user` — getting a user by their identifier. 
| Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------| | `id` | Integer | Yes | `>0` | A user's identifier. | <details> <summary>Open code example</summary> ```python3 >>> user = threads.private_api.get_user(id=314216) >>> user { "user": { "has_anonymous_profile_picture": false, "is_supervision_features_enabled": false, "follower_count": 2782815, "media_count": 283, "following_count": 313, "following_tag_count": 3, "can_use_affiliate_partnership_messaging_as_creator": false, "can_use_affiliate_partnership_messaging_as_brand": false, "has_collab_collections": false, "has_private_collections": true, "has_music_on_profile": false, "is_potential_business": false, "can_use_branded_content_discovery_as_creator": false, "can_use_branded_content_discovery_as_brand": false, "fan_club_info": { "fan_club_id": null, "fan_club_name": null, "is_fan_club_referral_eligible": null, "fan_consideration_page_revamp_eligiblity": null, "is_fan_club_gifting_eligible": null, "subscriber_count": null, "connected_member_count": null, "autosave_to_exclusive_highlight": null, "has_enough_subscribers_for_ssc": null }, "fbid_v2": "17841401746480004", "pronouns": [], "is_whatsapp_linked": false, "transparency_product_enabled": false, "account_category": "", "interop_messaging_user_fbid": 119171612803872, "bio_links": [ { "link_id": 17979756671109835, "url": "", "lynx_url": "", "link_type": "external", "title": "", "open_external_url_with_in_app_browser": true } ], "can_add_fb_group_link_on_profile": false, "external_url": "", "show_shoppable_feed": false, "merchant_checkout_style": "none", "seller_shoppable_feed_type": "none", "creator_shopping_info": { "linked_merchant_accounts": [] }, "has_guides": false, "has_highlight_reels": false, "hd_profile_pic_url_info": { "url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX_8Gm2Q&edm=AEF8tYYBAAAA&ccb=7-5&oh=00_AfBs65PPEgj6s58aEAVAZWAUJu9JqhMcsWV8VLubRyzQwQ&oe=64B0D240&_nc_sid=1e20d2", "width": 805, "height": 805 }, "hd_profile_pic_versions": [ { "width": 320, "height": 320, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s320x320&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX_8Gm2Q&edm=AEF8tYYBAAAA&ccb=7-5&oh=00_AfB7HZGMI4QLDwO-wKkntDBbGavE_-oKHus264KgonCVqg&oe=64B0D240&_nc_sid=1e20d2" }, { "width": 640, "height": 640, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s640x640&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX_8Gm2Q&edm=AEF8tYYBAAAA&ccb=7-5&oh=00_AfBxpkJK3xNM4BLNfbMzrcTyKkza60pitJvggFKtuX4uIA&oe=64B0D240&_nc_sid=1e20d2" } ], "is_interest_account": true, "is_favorite": false, "is_favorite_for_stories": false, "is_favorite_for_igtv": false, "is_favorite_for_clips": false, "is_favorite_for_highlights": false, "broadcast_chat_preference_status": { "json_response": "{\"status\":\"ok\",\"status_code\":\"200\",\"is_broadcast_chat_creator\":true,\"notification_setting_type\":2}" }, "live_subscription_status": "default", "usertags_count": 43060, "total_ar_effects": 1, "total_clips_count": 1, "has_videos": true, "total_igtv_videos": 12, "has_igtv_series": false, "biography": "", "include_direct_blacklist_status": true, "biography_with_entities": { "raw_text": "", "entities": [] }, "show_fb_link_on_profile": false, "primary_profile_link_type": 0, "can_hide_category": true, "can_hide_public_contacts": true, "should_show_category": false, "category_id": 1617, "is_category_tappable": false, "should_show_public_contacts": false, 
"is_eligible_for_smb_support_flow": true, "is_eligible_for_lead_center": false, "is_experienced_advertiser": false, "lead_details_app_id": "com.bloks.www.ig.smb.lead_gen.subpage", "is_business": false, "professional_conversion_suggested_account_type": 3, "account_type": 3, "direct_messaging": "", "instagram_location_id": "", "address_street": "", "business_contact_method": "UNKNOWN", "city_id": 0, "city_name": "", "contact_phone_number": "", "is_profile_audio_call_enabled": false, "latitude": 0.0, "longitude": 0.0, "public_email": "", "public_phone_country_code": "", "public_phone_number": "", "zip": "", "mutual_followers_count": 0, "has_onboarded_to_text_post_app": true, "show_text_post_app_badge": true, "text_post_app_joiner_number": 1, "show_ig_app_switcher_badge": true, "show_text_post_app_switcher_badge": true, "profile_context": "", "profile_context_links_with_user_ids": [], "profile_context_facepile_users": [], "has_chaining": true, "pk": 314216, "pk_id": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "follow_friction_type": 0, "is_verified": true, "profile_pic_id": "3138909791791822006_314216", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX_8Gm2Q&edm=AEF8tYYBAAAA&ccb=7-5&oh=00_AfD5zX6BkQpgR-H-iTsZ2PlwDEOXnf7PCbohypiV8Aj10g&oe=64B0D240&_nc_sid=1e20d2", "current_catalog_id": null, "mini_shop_seller_onboarding_status": null, "shopping_post_onboard_nux_type": null, "ads_incentive_expiration_date": null, "displayed_action_button_partner": null, "smb_delivery_partner": null, "smb_support_delivery_partner": null, "displayed_action_button_type": "", "smb_support_partner": null, "is_call_to_action_enabled": false, "num_of_admined_pages": null, "category": "Entrepreneur", "account_badges": [], "show_account_transparency_details": true, 
"existing_user_age_collection_enabled": true, "show_post_insights_entry_point": true, "has_public_tab_threads": true, "third_party_downloads_enabled": 1, "is_regulated_c18": false, "is_in_canada": false, "profile_type": 0, "is_profile_broadcast_sharing_enabled": true, "has_exclusive_feed_content": false, "has_fan_club_subscriptions": false, "is_memorialized": false, "open_external_url_with_in_app_browser": true, "pinned_channels_info": { "pinned_channels_list": [], "has_public_channels": false }, "request_contact_enabled": false, "robi_feedback_source": null, "chaining_results": null, "is_bestie": false, "remove_message_entrypoint": false, "auto_expand_chaining": false, "is_new_to_instagram": false, "highlight_reshare_disabled": false }, "status": "ok" } ``` </details> ##### Get Threads `threads.private_api.get_user_threads` — getting a user's threads by the user's identifier. Supports [pagination](#pagination). | Parameters | Type | Required | Restrictions | Description | |:-------------:|:-------:|:--------:|:------------:|----------------------------------------------------------| | `id` | Integer | Yes | `>0` | A user's identifier. | | `limit` | Integer | No | `>0` | A number of threads to get. Default value is `15`. | | `from_max_id` | String | No | - | An encoded thread's identifier to start offsetting from. 
| <details> <summary>Open code example</summary> ```python3 >>> user_threads = threads.private_api.get_user_threads(id=314216) >>> user_threads { "medias": [], "threads": [ { "thread_items": [ { "post": { "pk": 3143857175171939857, "id": "3143857175171939857_314216", "taken_at": 1688997002, "device_timestamp": 154381465597879, "client_cache_key": "MzE0Mzg1NzE3NTE3MTkzOTg1Nw==.2", "filter_type": 0, "like_and_view_counts_disabled": false, "integrity_review_decision": "pending", "text_post_app_info": { "is_post_unavailable": false, "is_reply": false, "reply_to_author": null, "direct_reply_count": 24278, "self_thread_count": 0, "reply_facepile_users": [], "link_preview_attachment": null, "can_reply": true, "reply_control": "everyone", "hush_info": null, "share_info": { "can_repost": true, "is_reposted_by_viewer": false, "can_quote_post": true } }, "caption": { "pk": "17971080284299711", "user_id": 314216, "text": "Threads reached 100 million sign ups over the weekend. That's mostly organic demand and we haven't even turned on many promotions yet. 
Can't believe it's only been 5 days!", "type": 1, "created_at": 1688997002, "created_at_utc": 1688997002, "content_type": "comment", "status": "Active", "bit_flags": 0, "did_report_as_spam": false, "share_enabled": false, "user": { "pk": 314216, "pk_id": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "is_verified": true, "profile_pic_id": "3138909791791822006_314216", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=XKEMOZTrwUAAX-1hCyT&edm=AKIuHiwBAAAA&ccb=7-5&oh=00_AfCXFLPZTQxrVDWADA_aPG8o4_lRhKbEQsAs6Jh5TF3auQ&oe=64B6C100&_nc_sid=3b7ee3", "fbid_v2": "17841401746480004", "has_onboarded_to_text_post_app": true }, "is_covered": false, "is_ranked_comment": false, "media_id": 3143857175171939857, "private_reply_status": 0 }, "media_type": 19, "code": "CuhOXGmr74R", "product_type": "text_post", "organic_tracking_token": "eyJ2ZXJzaW9uIjo1LCJwYXlsb2FkIjp7ImlzX2FuYWx5dGljc190cmFja2VkIjp0cnVlLCJ1dWlkIjoiYzY0NzE2ZDZmYmJhNDE2MDlhODEwMWU3ZTkzYmM3NWYzMTQzODU3MTc1MTcxOTM5ODU3Iiwic2VydmVyX3Rva2VuIjoiMTY4OTQxOTI0NDgxMHwzMTQzODU3MTc1MTcxOTM5ODU3fDYwNDM4MTU0NTY2fGFiYzJhZWFjNGZkZTBlZTFlN2I4ZTZhNTA4MDcxMzQxNGZjM2U0NTdmN2ZjZmYyZDE4ODlkYmQyODViOTBiZjYifSwic2lnbmF0dXJlIjoiIn0=", "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "like_count": 127327, "has_liked": false, "can_viewer_reshare": true, "top_likers": [], "user": { "pk": 314216, "pk_id": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "is_verified": true, "profile_pic_id": "3138909791791822006_314216", "profile_pic_url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=XKEMOZTrwUAAX-1hCyT&edm=AKIuHiwBAAAA&ccb=7-5&oh=00_AfCXFLPZTQxrVDWADA_aPG8o4_lRhKbEQsAs6Jh5TF3auQ&oe=64B6C100&_nc_sid=3b7ee3", "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "has_anonymous_profile_picture": false, "has_onboarded_to_text_post_app": true, "account_badges": [] } }, "line_type": "line", "view_replies_cta_string": "24,278 replies", "should_show_replies_cta": true, "reply_facepile_users": [ { "pk": 57325599231, "pk_id": "57325599231", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/359520387_1561960634615463_4231364575815896983_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=103&_nc_ohc=L0Vg4L6V5_AAX_Hbcgj&edm=AKIuHiwBAAAA&ccb=7-5&oh=00_AfD130-XjZ1RrsD_TVSbyH1SgizO5hGb5W5IJhVTOK3_Gg&oe=64B6B3CC&_nc_sid=3b7ee3" }, { "pk": 53041190925, "pk_id": "53041190925", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/359187805_177094641886545_7812186447575399344_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=109&_nc_ohc=0CBUtKwuehgAX9EK7C3&edm=AKIuHiwBAAAA&ccb=7-5&oh=00_AfDT8r2ogdVP6qGgBbsjZEoB9tL3V5Zyjwe-rWgUqjRDfw&oe=64B6AA34&_nc_sid=3b7ee3" }, { "pk": 60184849158, "pk_id": "60184849158", "profile_pic_url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358782577_620441826734229_2771436956212880281_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=109&_nc_ohc=UwO4_ipDCRgAX-LM3Qn&edm=AKIuHiwBAAAA&ccb=7-5&oh=00_AfAie1X0UbKUYRbBtRI8fn_8_sbyZAy9sLKQjdTBiWhhNQ&oe=64B6B8B8&_nc_sid=3b7ee3" } ], "can_inline_expand_below": false } ], "thread_type": "thread", "show_create_reply_cta": false, "id": 3143857175171939857, ... }, ... ], "status": "ok" } ``` </details> ##### Get Replies `threads.private_api.get_user_replies` — getting a user's replies by the user's identifier. Supports [pagination](#pagination). | Parameters | Type | Required | Restrictions | Description | |:-------------:|:-------:|:--------:|:------------:|---------------------------------------------------------| | `id` | Integer | Yes | `>0` | A user's identifier. | | `limit` | Integer | No | `>0` | A number of replies to get. Default value is `15`. | | `from_max_id` | String | No | - | An encoded reply's identifier to start offsetting from. 
| <details> <summary>Open code example</summary> ```python3 >>> user_replies = threads.private_api.get_user_replies(id=314216) >>> user_replies { "medias": [], "threads": [ { "thread_items": [ { "post": { "pk": 3146315493980433758, "id": "3146315493980433758_9155181033", "taken_at": 1689290074, "device_timestamp": 1689290056390615, "client_cache_key": "MzE0NjMxNTQ5Mzk4MDQzMzc1OA==.2", "filter_type": 0, "like_and_view_counts_disabled": false, "integrity_review_decision": "pending", "text_post_app_info": { "is_post_unavailable": false, "is_reply": false, "reply_to_author": null, "direct_reply_count": 348, "self_thread_count": 0, "reply_facepile_users": [], "link_preview_attachment": null, "can_reply": true, "reply_control": "everyone", "hush_info": null, "share_info": { "can_repost": true, "is_reposted_by_viewer": false, "can_quote_post": true } }, "caption": { "pk": "17869797770898727", "user_id": 9155181033, "text": "Squad goals \ud83d\ude0e\n\n(via @stylebender)", "type": 1, "created_at": 1689290074, "created_at_utc": 1689290074, "content_type": "comment", "status": "Active", "bit_flags": 0, "did_report_as_spam": false, "share_enabled": false, "user": { "pk": 9155181033, "pk_id": "9155181033", "username": "espnmma", "full_name": "ESPN MMA", "is_private": false, "is_verified": true, "profile_pic_id": "3141004987567665802_9155181033", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358383703_848206763406082_8252942189546556167_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=110&_nc_ohc=WOV8wy4ScRoAX_C1CaW&edm=AClQE0sBAAAA&ccb=7-5&oh=00_AfDDl35PMIh3bUJ2WuzOkIIAgIEZqwFqdSh8tuPDTyfgTg&oe=64B7C4EE&_nc_sid=2bf346", "fbid_v2": "17841409270994397", "has_onboarded_to_text_post_app": true }, "is_covered": false, "is_ranked_comment": false, "media_id": 3146315493980433758, "private_reply_status": 0 }, "media_type": 2, "code": "Cup9UWaAu1e", "product_type": "feed", "original_width": 640, "original_height": 1136, 
"is_dash_eligible": 1, "video_codec": "avc1.64001e", "has_audio": true, "video_duration": 18.784, "video_versions": [ { "type": 101, "width": 720, "height": 1280, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t50.2886-16/360404858_562471725885766_7209197166491385251_n.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6InZ0c192b2RfdXJsZ2VuLjcyMC5mZWVkLmJhc2VsaW5lIn0&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=110&_nc_ohc=m3bTlD7JhCoAX9eODcJ&edm=AClQE0sBAAAA&vs=2611672838971644_3296775565&_nc_vs=HBksFQAYJEdIcFhleFZHNldxbmtQOEJBS085ZUlDZ05neGtia1lMQUFBRhUAAsgBABUAGCRHTm1ZaFJXano5Rk1UM2dBQU0wN3I2d1lrVng2YmtZTEFBQUYVAgLIAQAoABgAGwGIB3VzZV9vaWwBMBUAACa6u7r%2BmOjoPxUCKAJDMywXQDKzMzMzMzMYEmRhc2hfYmFzZWxpbmVfMV92MREAdeoHAA%3D%3D&_nc_rid=e91b3f00fc&ccb=7-5&oh=00_AfB17yCrQdXrNLAZlgUqwZSSjgrwF9H8oNjJF6eVpHx9Ug&oe=64B477DB&_nc_sid=2bf346", "id": "2611672838971644v" }, { "type": 102, "width": 480, "height": 854, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t50.2886-16/361062149_663346358581709_8844241500491019989_n.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6InZ0c192b2RfdXJsZ2VuLjQ4MC5mZWVkLmJhc2VsaW5lIn0&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=105&_nc_ohc=V4Gn1XoC7usAX8ESNnG&edm=AClQE0sBAAAA&vs=1023383492024442_1730708493&_nc_vs=HBksFQAYJEdBVmZoUlhOX1JCY1Qxc0NBTlgyaURESkRyMTZia1lMQUFBRhUAAsgBABUAGCRHTm1ZaFJXano5Rk1UM2dBQU0wN3I2d1lrVng2YmtZTEFBQUYVAgLIAQAoABgAGwGIB3VzZV9vaWwBMBUAACa6u7r%2BmOjoPxUCKAJDMywXQDKzMzMzMzMYEmRhc2hfYmFzZWxpbmVfMl92MREAdeoHAA%3D%3D&_nc_rid=e91b3f00fc&ccb=7-5&oh=00_AfD4G2wv4eJTNJjnGXOF04FdWuUUtelx_tvm5iq5dYx1pw&oe=64B49F3C&_nc_sid=2bf346", "id": "1023383492024442v" }, { "type": 103, "width": 480, "height": 854, "url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t50.2886-16/361062149_663346358581709_8844241500491019989_n.mp4?efg=eyJ2ZW5jb2RlX3RhZyI6InZ0c192b2RfdXJsZ2VuLjQ4MC5mZWVkLmJhc2VsaW5lIn0&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=105&_nc_ohc=V4Gn1XoC7usAX8ESNnG&edm=AClQE0sBAAAA&vs=1023383492024442_1730708493&_nc_vs=HBksFQAYJEdBVmZoUlhOX1JCY1Qxc0NBTlgyaURESkRyMTZia1lMQUFBRhUAAsgBABUAGCRHTm1ZaFJXano5Rk1UM2dBQU0wN3I2d1lrVng2YmtZTEFBQUYVAgLIAQAoABgAGwGIB3VzZV9vaWwBMBUAACa6u7r%2BmOjoPxUCKAJDMywXQDKzMzMzMzMYEmRhc2hfYmFzZWxpbmVfMl92MREAdeoHAA%3D%3D&_nc_rid=e91b3f00fc&ccb=7-5&oh=00_AfD4G2wv4eJTNJjnGXOF04FdWuUUtelx_tvm5iq5dYx1pw&oe=64B49F3C&_nc_sid=2bf346", "id": "1023383492024442v" } ], "like_count": 9188, "has_liked": false, "can_viewer_reshare": true, "top_likers": [], "user": { "pk": 9155181033, "pk_id": "9155181033", "username": "espnmma", "full_name": "ESPN MMA", "is_private": false, "is_verified": true, "profile_pic_id": "3141004987567665802_9155181033", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358383703_848206763406082_8252942189546556167_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=110&_nc_ohc=WOV8wy4ScRoAX_C1CaW&edm=AClQE0sBAAAA&ccb=7-5&oh=00_AfDDl35PMIh3bUJ2WuzOkIIAgIEZqwFqdSh8tuPDTyfgTg&oe=64B7C4EE&_nc_sid=2bf346", "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "has_anonymous_profile_picture": false, "has_onboarded_to_text_post_app": true, "account_badges": [] } }, "line_type": "squiggle", "view_replies_cta_string": "348 replies", "should_show_replies_cta": true, "reply_facepile_users": [ { "pk": 60781132585, "pk_id": "60781132585", "profile_pic_url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/359641162_959966898669598_6990253892519958381_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=110&_nc_ohc=uLBckw2TPWQAX_yvvWz&edm=AClQE0sBAAAA&ccb=7-5&oh=00_AfDZeeb69FVIALEYjbqFBzIbIXMlhKhyCPMge1K-ni0hZQ&oe=64B7FC4E&_nc_sid=2bf346" }, { "pk": 57175390182, "pk_id": "57175390182", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358544206_674380928039672_4729519352683894077_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=108&_nc_ohc=0qWbftDXewwAX8KUtSO&edm=AClQE0sBAAAA&ccb=7-5&oh=00_AfDj4Z6wYUVslOtd4Mp7oRhbxKJZDIsZLxjX861MFn4CfQ&oe=64B74C23&_nc_sid=2bf346" }, { "pk": 46363005289, "pk_id": "46363005289", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358356136_267621489287701_6184904086615879877_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=101&_nc_ohc=_kbmWIEqHuYAX9in-BH&edm=AClQE0sBAAAA&ccb=7-5&oh=00_AfBMSnbvraqk2tIrN5MwKAT49ey9QoeppiTEiMzeggaEYw&oe=64B86906&_nc_sid=2bf346" } ], "can_inline_expand_below": false }, ... ], "thread_type": "thread", "show_create_reply_cta": false, "id": 3146509421211016700 }, ], "next_max_id": "QVFBZU1fRWxkXzh4UUhGOV84VUZGLS1BdHlVWVVzQmpNbmUzZ0hkRVJURXgyVDBSb2pSdnhacnRGX1JrNUxMRDgxbFZ4ZW5tSUc5TTRraEgwRV91eWR3Nw==", "status": "ok" } ``` </details> ##### Get Followers `threads.private_api.get_user_followers` — getting a user's followers by the user's identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------| | `id` | Integer | Yes | `>0` | A user's identifier. 
| <details> <summary>Open code example</summary> ```python3 >>> user_followers = threads.private_api.get_user_followers(id=314216) >>> user_followers { "users": [ { "has_anonymous_profile_picture": false, "fbid_v2": "17841407032091362", "has_onboarded_to_text_post_app": true, "text_post_app_joiner_number": 100165200, "pk": 6970898403, "pk_id": "6970898403", "username": "ali_moslemi_", "full_name": "", "is_private": true, "is_verified": false, "profile_pic_id": "3143675402396969798_6970898403", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358504236_795240265474093_1873793041193487951_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=101&_nc_ohc=NDKSnch-bEoAX9bRFSg&edm=APQMUHMBAAAA&ccb=7-5&oh=00_AfBAOL--evGSmg6slpcKO-Bmo1I8rXokvlMygal0yEep5A&oe=64AFB4B0&_nc_sid=6ff7c8", "account_badges": [], "is_possible_scammer": false, "third_party_downloads_enabled": 0, "is_possible_bad_actor": { "is_possible_scammer": false, "is_possible_impersonator": { "is_unconnected_impersonator": false } }, "latest_reel_media": 0 }, ... ], "big_list": false, "page_size": 200, "has_more": false, "should_limit_list_of_followers": false, "status": "ok" } ``` </details> ##### Get Following `threads.private_api.get_user_following` — getting a user's following by the user's identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------| | `id` | Integer | Yes | `>0` | A user's identifier. 
| <details> <summary>Open code example</summary> ```python3 >>> user_following = threads.private_api.get_user_following(id=314216) >>> user_following { "users": [ { "has_anonymous_profile_picture": false, "fbid_v2": "17841400595560228", "has_onboarded_to_text_post_app": true, "text_post_app_joiner_number": 37063252, "pk": 14611852, "pk_id": "14611852", "username": "mikesego", "full_name": "Mike Sego", "is_private": false, "is_verified": false, "profile_pic_id": "3141060757056532902_14611852", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358004364_2046926358984696_1656910531495082901_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=100&_nc_ohc=6Hb0M2drxbIAX9eVeGw&edm=ALB854YBAAAA&ccb=7-5&oh=00_AfAeVVBHAjmjJtmatY9YTDLovtOXJwzM-netZTUN_ghOLw&oe=64B028E3&_nc_sid=ce9561", "account_badges": [], "is_possible_scammer": false, "third_party_downloads_enabled": 1, "is_possible_bad_actor": { "is_possible_scammer": false, "is_possible_impersonator": { "is_unconnected_impersonator": false } }, "latest_reel_media": 0, "is_favorite": false }, ... ], "big_list": true, "page_size": 200, "next_max_id": "100", "has_more": false, "should_limit_list_of_followers": false, "status": "ok" } ``` </details> ##### Get Friendship Status `threads.private_api.get_friendship_status` — getting a friendship status with a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to get friendship status with. 
| <details> <summary>Open code example</summary> ```python3 >>> friendship_status = threads.private_api.get_friendship_status(id=314216) >>> friendship_status { "blocking": false, "followed_by": false, "following": false, "incoming_request": false, "is_bestie": false, "is_blocking_reel": false, "is_muting_reel": false, "is_private": false, "is_restricted": false, "muting": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false, "is_supervised_by_viewer": false, "is_guardian_of_viewer": false, "is_muting_notes": false, "status": "ok" } ``` </details> ##### Search `threads.private_api.search_user` — search for a user by a query. | Parameters | Type | Required | Restrictions | Description | |:----------:|:------:|:--------:|:------------:|-----------------| | `query` | String | Yes | - | A search query. | <details> <summary>Open code example</summary> ```python3 >>> found_users = threads.private_api.search_user(query='zuck') >>> found_users { "num_results": 55, "users": [ { "has_anonymous_profile_picture": false, "follower_count": 2779681, "media_count": 283, "following_count": 313, "following_tag_count": 3, "fbid_v2": "17841401746480004", "has_onboarded_to_text_post_app": true, "show_text_post_app_badge": true, "text_post_app_joiner_number": 1, "show_ig_app_switcher_badge": true, "pk": 314216, "pk_id": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "is_verified": true, "profile_pic_id": "3138909791791822006_314216", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=euIj8dtTGIkAX_8Gm2Q&edm=AM7KJZYBAAAA&ccb=7-5&oh=00_AfBozKxHv4GtcQ5ZwnPSS_eL9ONAYNekI1hhOWPGgNPaxA&oe=64B0D240&_nc_sid=8ec269", "has_opt_eligible_shop": false, "account_badges": [], "third_party_downloads_enabled": 1, "unseen_count": 0, 
"friendship_status": { "following": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false }, "latest_reel_media": 0, "should_show_category": false }, ... ], "has_more": false, "rank_token": "21af3266-ddec-4166-b00c-091a580a54a8", "status": "ok" } ``` </details> ##### Follow `threads.private_api.follow_user` — follow a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to follow. | <details> <summary>Open code example</summary> ```python3 >>> following = threads.private_api.follow_user(id=314216) >>> following { "friendship_status": { "following": true, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "previous_following": false, "status": "ok" } ``` </details> ##### Unfollow `threads.private_api.unfollow_user` — unfollow a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to unfollow. 
| <details> <summary>Open code example</summary> ```python3 >>> following = threads.private_api.unfollow_user(id=314216) >>> following { "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "status": "ok" } ``` </details> ##### Mute `threads.private_api.mute_user` — mute a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to mute. | <details> <summary>Open code example</summary> ```python3 >>> muting = threads.private_api.mute_user(id=314216) >>> muting { "friendship_status": { "muting": true, ... }, "status": "ok" } ``` </details> ##### Unmute `threads.private_api.unmute_user` — unmute a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to unmute. | <details> <summary>Open code example</summary> ```python3 >>> unmuting = threads.private_api.unmute_user(id=314216) >>> unmuting { "friendship_status": { "muting": false, ... }, "status": "ok" } ``` </details> ##### Restrict `threads.private_api.restrict_user` — restrict a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to restrict. 
| <details> <summary>Open code example</summary> ```python3 >>> restricting = threads.private_api.restrict_user(id=314216) >>> restricting { "users": [ { "pk": 314216, "pk_id": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "is_verified": true, "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": true, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "profile_pic_id": "3138909791791822006_314216", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=XKEMOZTrwUAAX-1hCyT&edm=AIbK1RIBAAAA&ccb=7-5&oh=00_AfDM-zL-HYX9qBBi9zM3AaHFAFzGu5ASanTF60ahGOYh7Q&oe=64B4C6C0&_nc_sid=0e3124", "has_onboarded_to_text_post_app": true } ], "status": "ok" } ``` </details> ##### Unrestrict `threads.private_api.unrestrict_user` — unrestrict a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to unrestrict. 
| <details> <summary>Open code example</summary> ```python3 >>> unrestricting = threads.private_api.unrestrict_user(id=314216) >>> unrestricting { "users": [ { "pk": 314216, "pk_id": "314216", "username": "zuck", "full_name": "Mark Zuckerberg", "is_private": false, "is_verified": true, "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "profile_pic_id": "3138909791791822006_314216", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/357376107_1330597350674698_8884059223384672080_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=XKEMOZTrwUAAX-1hCyT&edm=AMkHgcoBAAAA&ccb=7-5&oh=00_AfAMSrQmoJc3wyGdpGegznMp-f00XjYVyHNgr26As8rRHQ&oe=64B4C6C0&_nc_sid=88bf82", "has_onboarded_to_text_post_app": true } ], "status": "ok" } ``` </details> ##### Block `threads.private_api.block_user` — block a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|-----------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to block. | <details> <summary>Open code example</summary> ```python3 >>> blocking = threads.private_api.block_user(id=314216) >>> blocking { "friendship_status": { "blocking": true, ... }, "status": "ok" } ``` </details> ##### Unblock `threads.private_api.unblock_user` — unblock a user. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|-------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a user to unblock. 
| <details> <summary>Open code example</summary> ```python3 >>> unblocking = threads.private_api.unblock_user(id=314216) >>> unblocking { "friendship_status": { "blocking": false, ... }, "status": "ok" } ``` </details> #### Thread ##### Get Identifier `threads.private_api.get_thread_id` — getting a thread's identifier by its `URL` identifier. A `URL` identifier is the last part of a thread's website `URL`. For example, if the thread's `URL` is `https://threads.net/t/CuXFPIeLLod`, its `URL` identifier is `CuXFPIeLLod`. | Parameters | Type | Required | Restrictions | Description | |:----------:|:------:|:--------:|:------------:|------------------------------| | `url_id` | String | Yes | - | A thread's `URL` identifier. | <details> <summary>Open code example</summary> ```python3 >>> thread_id = threads.private_api.get_thread_id(url_id='CuXFPIeLLod') >>> thread_id 3141002295235099165 ``` </details> ##### Get By Identifier `threads.private_api.get_thread` — getting a thread by its identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------| | `id` | Integer | Yes | `>0` | A thread's identifier. 
| <details> <summary>Open code example</summary> ```python3 >>> thread = threads.private_api.get_thread(id=3141002295235099165) >>> thread { "data": { "data": { "containing_thread": { "thread_items": [ { "post": { "user": { "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/343392897_618515990300243_8088199406170073086_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=1&_nc_ohc=1xtR491RY6cAX8Okf3Z&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfDqsZTCRQlz7EaDpgEzBkmXqtqKtTH80Q0r1rIDMJCcpg&oe=64AC2D8D&_nc_sid=10d13b", "username": "mosseri", "id": null, "is_verified": true, "pk": "95561" }, "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "carousel_media": null, "carousel_media_count": null, "pk": "3141055616164096839", "has_audio": null, "text_post_app_info": { "link_preview_attachment": null, "share_info": { "quoted_post": null, "reposted_post": null }, "reply_to_author": null, "is_post_unavailable": false, "direct_reply_count": 2839 }, "caption": { "text": "I've been getting some questions about deleting your account. To clarify, you can deactivate your Threads account, which hides your Threads profile and content, you can set your profile to private, and you can delete individual threads posts \u2013 all without deleting your Instagram account. Threads is powered by Instagram, so right now it's just one account, but we're looking into a way to delete your Threads account separately." 
}, "taken_at": 1688663030, "like_count": 29984, "code": "CuXRXDdNOtH", "media_overlay_info": null, "id": "3141055616164096839_95561" }, "line_type": "none", "view_replies_cta_string": "2,839 replies", "reply_facepile_users": [], "should_show_replies_cta": true } ], "thread_type": "thread", "header": null, "id": "3141055616164096839" }, "reply_threads": [ { "thread_items": [ { "post": { "user": { "profile_pic_url": "https://scontent.cdninstagram.com/v/t51.2885-19/355424589_948742193018772_4617955233166333765_n.jpg?stp=dst-jpg_s150x150&_nc_ht=scontent.cdninstagram.com&_nc_cat=100&_nc_ohc=heQWdfsKG5MAX9mLim0&edm=APs17CUBAAAA&ccb=7-5&oh=00_AfA6TUpLjjegnP6KuducRm8n6uU9iBCXg4O1P33WYSh3mg&oe=64ACDDBA&_nc_sid=10d13b", "username": "jimmywhotalks", "id": null, "is_verified": true, "pk": "51094265817" }, "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "carousel_media": null, "carousel_media_count": null, "pk": "3141082664316809193", "has_audio": null, "text_post_app_info": { "link_preview_attachment": null, "share_info": { "quoted_post": null, "reposted_post": null }, "reply_to_author": { "username": "mosseri", "id": null }, "is_post_unavailable": false }, "caption": { "text": "Glad to know, Everyone is worrying for that." }, "taken_at": 1688666254, "like_count": 59, "code": "CuXXgqAva_p", "media_overlay_info": null, "id": "3141082664316809193_51094265817" }, "line_type": "none", "view_replies_cta_string": null, "reply_facepile_users": [], "should_show_replies_cta": false } ], "thread_type": "thread", "header": null, "id": "3141082664316809193" }, ... ] } }, "extensions": { "is_final": true } } ``` </details> ##### Get Likers `threads.private_api.get_thread_likers` — getting a thread's likers by the thread's identifier. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------| | `id` | Integer | Yes | `>0` | A thread's identifier. 
| <details> <summary>Open code example</summary> ```python3 >>> thread_likers = threads.private_api.get_thread_likers(id=3141055616164096839) >>> thread_likers { "users": [ { "pk": 54831820289, "pk_id": "54831820289", "username": "hazrel_syfddn26", "full_name": "HAZLER", "is_private": false, "account_badges": [], "is_verified": false, "profile_pic_id": "3140964514487728732_54831820289", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358165944_6109031802541294_1514688336008099084_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=110&_nc_ohc=V1UF-ylegwMAX_HbWIV&edm=APwHDrQBAAAA&ccb=7-5&oh=00_AfC4W3z1EZ61XFQk6Im82OpgoEvQp_r-JNPd2NmN9cdhUQ&oe=64B23019&_nc_sid=8809c9", "has_onboarded_to_text_post_app": true, "latest_reel_media": 0 }, ... ], "user_count": 38283, "status": "ok" } ``` </details> ##### Create `threads.private_api.create_thread` — create a thread. A thread can include an attachment link or an image (specified either as an `HTTP(S)` `URL` or as a path to a local file). You can also create a thread as a reply to another thread: each thread is either a root or a reply to another thread, so threads form a graph. Check `https://www.threads.net/@threadstester1` for an illustration of all the kinds of threads that can be created. | Parameters | Type | Required | Restrictions | Description | |:-----------:|:------:|:--------:|:------------:|-----------------------------------------------| | `caption` | String | Yes | - | A thread's caption. | | `url` | String | No | - | A thread's attachment `URL`. | | `image_url` | String | No | - | An image's `HTTP(S)` `URL` or path to a file. | | `reply_to` | String | No | - | An identifier of a thread to reply to. 
| <details> <summary>Open code example</summary> ```python3 >>> created_thread = threads.private_api.create_thread( caption='Hello, world!', ) >>> created_thread = threads.private_api.create_thread( caption='Hello, world!', url='https://www.youtube.com/watch?v=lc4qU6BakvE' ) >>> created_thread = threads.private_api.create_thread( caption='Hello, world!', image_url='/Users/dmytrostriletskyi/projects/threads/assets/picture.png', ) >>> created_thread = threads.private_api.create_thread( caption='Hello, world!', image_url='https://raw.githubusercontent.com/dmytrostriletskyi/threads-net/main/assets/picture.png', reply_to=3141055616164096839, ) >>> created_thread { "media": { "taken_at": 1689087793, "pk": 3144618785809596584, "id": "3144618785809596584_32545771157", "device_timestamp": 1689087791, "media_type": 1, "code": "Cuj7h_yM4Co", "client_cache_key": "MzE0NDYxODc4NTgwOTU5NjU4NA==.2", "filter_type": 0, "can_viewer_reshare": true, "caption": { "pk": "17986860047032912", "user_id": 32545771157, "text": "Hello, world!", "type": 1, "created_at": 1689087793, "created_at_utc": 1689087793, "content_type": "comment", "status": "Active", "bit_flags": 0, "did_report_as_spam": false, "share_enabled": false, "user": { "has_anonymous_profile_picture": true, "liked_clips_count": 0, "fan_club_info": { "fan_club_id": null, "fan_club_name": null, "is_fan_club_referral_eligible": null, "fan_consideration_page_revamp_eligiblity": null, "is_fan_club_gifting_eligible": null, "subscriber_count": null, "connected_member_count": null, "autosave_to_exclusive_highlight": null, "has_enough_subscribers_for_ssc": null }, "fbid_v2": 17841432494728221, "transparency_product_enabled": false, "text_post_app_take_a_break_setting": 0, "interop_messaging_user_fbid": 17842937552083158, "show_insights_terms": false, "allowed_commenter_type": "any", "is_unpublished": false, "reel_auto_archive": "unset", "can_boost_post": false, "can_see_organic_insights": false, "has_onboarded_to_text_post_app": true, 
"text_post_app_joiner_number": 68427510, "pk": 32545771157, "pk_id": "32545771157", "username": "dmytro.striletskyi", "full_name": "Dmytro Striletskyi", "is_private": false, "profile_pic_url": "https://scontent-cdg4-1.cdninstagram.com/v/t51.2885-19/44884218_345707102882519_2446069589734326272_n.jpg?_nc_ht=scontent-cdg4-1.cdninstagram.com&_nc_cat=1&_nc_ohc=s7XO4bcYceYAX9jyeT8&edm=AAAAAAABAAAA&ccb=7-5&ig_cache_key=YW5vbnltb3VzX3Byb2ZpbGVfcGlj.2-ccb7-5&oh=00_AfBhetyTgz8h53kRlSorYqET2tbmZOmYt2jiucME8qI7AQ&oe=64B3528F", "account_badges": [], "feed_post_reshare_disabled": false, "show_account_transparency_details": true, "third_party_downloads_enabled": 0 }, "is_covered": false, "is_ranked_comment": false, "media_id": 3144618785809596584, "private_reply_status": 0 }, "clips_tab_pinned_user_ids": [], "comment_inform_treatment": { "should_have_inform_treatment": false, "text": "", "url": null, "action_type": null }, "fundraiser_tag": { "has_standalone_fundraiser": false }, "sharing_friction_info": { "should_have_sharing_friction": false, "bloks_app_url": null, "sharing_friction_payload": null }, "xpost_deny_reason": "This post cannot be shared to Facebook.", "caption_is_edited": false, "original_media_has_visual_reply_media": false, "like_and_view_counts_disabled": false, "fb_user_tags": { "in": [] }, "mashup_info": { "mashups_allowed": true, "can_toggle_mashups_allowed": true, "has_been_mashed_up": false, "formatted_mashups_count": null, "original_media": null, "privacy_filtered_mashups_media_count": null, "non_privacy_filtered_mashups_media_count": null, "mashup_type": null, "is_creator_requesting_mashup": false, "has_nonmimicable_additional_audio": false, "is_pivot_page_available": false }, "can_viewer_save": true, "is_in_profile_grid": false, "profile_grid_control_enabled": false, "featured_products": [], "is_comments_gif_composer_enabled": true, "product_suggestions": [], "user": { "has_anonymous_profile_picture": true, "liked_clips_count": 0, "fan_club_info": { 
"fan_club_id": null, "fan_club_name": null, "is_fan_club_referral_eligible": null, "fan_consideration_page_revamp_eligiblity": null, "is_fan_club_gifting_eligible": null, "subscriber_count": null, "connected_member_count": null, "autosave_to_exclusive_highlight": null, "has_enough_subscribers_for_ssc": null }, "fbid_v2": 17841432494728221, "transparency_product_enabled": false, "text_post_app_take_a_break_setting": 0, "interop_messaging_user_fbid": 17842937552083158, "show_insights_terms": false, "allowed_commenter_type": "any", "is_unpublished": false, "reel_auto_archive": "unset", "can_boost_post": false, "can_see_organic_insights": false, "has_onboarded_to_text_post_app": true, "text_post_app_joiner_number": 68427510, "pk": 32545771157, "pk_id": "32545771157", "username": "dmytro.striletskyi", "full_name": "Dmytro Striletskyi", "is_private": false, "profile_pic_url": "https://scontent-cdg4-1.cdninstagram.com/v/t51.2885-19/44884218_345707102882519_2446069589734326272_n.jpg?_nc_ht=scontent-cdg4-1.cdninstagram.com&_nc_cat=1&_nc_ohc=s7XO4bcYceYAX9jyeT8&edm=AAAAAAABAAAA&ccb=7-5&ig_cache_key=YW5vbnltb3VzX3Byb2ZpbGVfcGlj.2-ccb7-5&oh=00_AfBhetyTgz8h53kRlSorYqET2tbmZOmYt2jiucME8qI7AQ&oe=64B3528F", "account_badges": [], "feed_post_reshare_disabled": false, "show_account_transparency_details": true, "third_party_downloads_enabled": 0 }, "image_versions2": { "candidates": [ { "width": 980, "height": 652, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=dst-jpg_e35&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfB5avonpo72-x_5Te_cXCKf2aM6jJO9dGmmejvdV821zg&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 720, "height": 479, "url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=dst-jpg_e35_s720x720&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfBJhcGMT6yd6pECPVoeFAIoTIcgOzTw6zDKxs8ZdZK1CA&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 640, "height": 426, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=dst-jpg_e35_s640x640_sh0.08&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfBPn-IvLSL4PYxQQoFyFyHFuimEtfKoDq7EoCys3xaMWg&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 480, "height": 319, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=dst-jpg_e35_s480x480&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfDiPNhb5u_FX3VKC8KZ1rmN7yvmmTMiunWJBK25yuly4A&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 320, "height": 213, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=dst-jpg_e35_s320x320&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfAy8wZuJneP3N_g6PyDKk-2kiAGLvPO4XPfK1MBwVSSPw&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 240, "height": 160, "url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=dst-jpg_e35_s240x240&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfDnUnB202rqbhfPupeNKKd-NRM9e4-FvsnnGD2FsEY_bQ&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 1080, "height": 1080, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfDGwvzJe8hIAfBBrRerzo8PCdQAey0_Q9uJSF5eGz1f-w&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 750, "height": 750, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35_s750x750_sh0.08&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfD2nnAd_T-Zj3cX0-TUEjajO9jn3EMfKwaUHwLmagBhiA&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 640, "height": 640, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35_s640x640_sh0.08&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfB0kVjlsVkk26Rqad6KRlMWTOxs88NzLS1iuHpHYLU1tA&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 480, "height": 480, "url": 
"https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35_s480x480&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfCo8QoGPw8tlIavEsSbZkMoZOFdey8XM0--RAIABYqxZQ&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 320, "height": 320, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35_s320x320&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfAfleZMSb3Ae50Fj1dhDUYc6slUO5zxUKwcKYL0PmHNkA&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 240, "height": 240, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35_s240x240&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfAR1f9S-NF_DwhyaSq0qzme6k4rDsGb6RfWkgnw07Tt1A&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" }, { "width": 150, "height": 150, "url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-15/359145974_1454822158673126_2191780209015799278_n.jpg?stp=c164.0.652.652a_dst-jpg_e35_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=102&_nc_ohc=8vV5VulJfUwAX-jnGZU&edm=AL_hgCEBAAAA&ccb=7-5&ig_cache_key=MzE0NDYxODc4NTgwOTU5NjU4NA%3D%3D.2-ccb7-5&oh=00_AfAYtetjgt4tQah5PnX8LXRlrUGS0A1uH4kkUrUPNwcr_g&oe=64B3067E&_nc_sid=e50d24", "scans_profile": "e35" } ] }, "original_width": 980, "original_height": 652, "is_reshare_of_text_post_app_media_in_ig": false, "comment_threading_enabled": false, "max_num_visible_preview_comments": 2, "has_more_comments": false, 
"preview_comments": [], "comment_count": 0, "can_view_more_preview_comments": false, "hide_view_all_comment_entrypoint": false, "likers": [], "shop_routing_user_id": null, "can_see_insights_as_brand": false, "is_organic_product_tagging_eligible": false, "product_type": "feed", "is_paid_partnership": false, "music_metadata": { "music_canonical_id": "0", "audio_type": null, "music_info": null, "original_sound_info": null, "pinned_media_ids": null }, "deleted_reason": 0, "organic_tracking_token": "eyJ2ZXJzaW9uIjo1LCJwYXlsb2FkIjp7ImlzX2FuYWx5dGljc190cmFja2VkIjpmYWxzZSwidXVpZCI6IjQxNjBkY2E0ODY1YzQyODY5NThhOWE3M2I2N2UyZWI0MzE0NDYxODc4NTgwOTU5NjU4NCIsInNlcnZlcl90b2tlbiI6IjE2ODkwODc3OTYwOTZ8MzE0NDYxODc4NTgwOTU5NjU4NHwzMjU0NTc3MTE1N3xiY2RkZjliZGNlZjFiMzNlZDIzZTJiZDg4M2E0MjAzZmNlMjQ1ZjI2MjdmYWZiNDgwNTlhNjBiMzgwYjM1MjA3In0sInNpZ25hdHVyZSI6IiJ9", "text_post_app_info": { "is_post_unavailable": false, "is_reply": true, "reply_to_author": { "pk": 95561, "pk_id": "95561", "username": "mosseri", "full_name": "Adam Mosseri", "is_private": false, "is_verified": true, "profile_pic_id": "3090458139926225297_95561", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/343392897_618515990300243_8088199406170073086_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=H93HRKonYSQAX9OmlMQ&edm=AL_hgCEBAAAA&ccb=7-5&oh=00_AfC8sdQl6N1YwEs5Xva8HSfc98_zv7l4beWBQVuVBkHxJA&oe=64B21C4D&_nc_sid=e50d24", "has_onboarded_to_text_post_app": true }, "direct_reply_count": 0, "self_thread_count": 0, "reply_facepile_users": [], "link_preview_attachment": null, "can_reply": true, "reply_control": "everyone", "hush_info": null, "share_info": { "can_repost": true, "is_reposted_by_viewer": false, "can_quote_post": true } }, "integrity_review_decision": "pending", "ig_media_sharing_disabled": false, "has_shared_to_fb": 0, "is_unified_video": false, "should_request_ads": false, "is_visual_reply_commenter_notice_enabled": true, "commerciality_status": 
"not_commercial", "explore_hide_comments": false, "has_delayed_metadata": false }, "upload_id": "1689087791", "status": "ok" } ``` </details> <details> <summary>Open UI example</summary> ![](./assets/examples/create-thread-caption.png) ![](./assets/examples/create-thread-url.png) ![](./assets/examples/create-thread-image.png) </details> ##### Delete `threads.private_api.delete_thread` — delete a thread. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a thread to delete. | <details> <summary>Open code example</summary> ```python3 >>> deletion = threads.private_api.delete_thread(id=3141055616164096839) >>> deletion { "did_delete": true, "cxp_deep_deletion_global_response": {}, "status": "ok" } ``` </details> ##### Like `threads.private_api.like_thread` — like a thread. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a thread to like. | <details> <summary>Open code example</summary> ```python3 >>> liking = threads.private_api.like_thread(id=3141055616164096839) >>> liking { "status": "ok" } ``` </details> ##### Unlike `threads.private_api.unlike_thread` — unlike a thread. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a thread to unlike. | <details> <summary>Open code example</summary> ```python3 >>> unliking = threads.private_api.unlike_thread(id=3141055616164096839) >>> unliking { "status": "ok" } ``` </details> ##### Repost `threads.private_api.repost_thread` — repost a thread. 
| Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a thread to repost. | <details> <summary>Open code example</summary> ```python3 >>> reposting = threads.private_api.repost_thread(id=3141055616164096839) >>> reposting { "repost_id": 3145900181542784653, "repost_fbid": 18008518438811136, "reposted_at": 1689240547, "status": "ok" } ``` </details> <details> <summary>Open UI example</summary> ![](./assets/examples/repost-thread.png) </details> ##### Unrepost `threads.private_api.unrepost_thread` — undo a thread's repost. An identifier of a thread to unrepost is basically the identifier of the thread from `repost` method. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|----------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a thread to unrepost. | <details> <summary>Open code example</summary> ```python3 >>> unreposting = threads.private_api.unrepost_thread(id=3141055616164096839) >>> unreposting { "status": "ok" } ``` </details> ##### Quote `threads.private_api.quote_thread` — quote a thread. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|-------------------------------------| | `id` | Integer | Yes | `>0` | An identifier of a thread to quote. | | `caption` | String | Yes | - | A quote's caption. 
| <details> <summary>Open code example</summary> ```python3 >>> quoting = threads.private_api.quote_thread(id=3141055616164096839, caption='Hello, world!') >>> quoting { "media": { "taken_at": 1689241970, "pk": 3145912119832282782, "id": "3145912119832282782_60438154566", "device_timestamp": 1689241970, "media_type": 19, "code": "Cuohme9Mwae", "client_cache_key": "MzE0NTkxMjExOTgzMjI4Mjc4Mg==.2", "filter_type": 0, "can_viewer_reshare": true, "caption": { "pk": "18086087680362532", "user_id": 60438154566, "text": "Hello, world!", "type": 1, "created_at": 1689241971, "created_at_utc": 1689241971, "content_type": "comment", "status": "Active", "bit_flags": 0, "did_report_as_spam": false, "share_enabled": false, "user": { "has_anonymous_profile_picture": true, "all_media_count": 33, "liked_clips_count": 0, "fan_club_info": { "fan_club_id": null, "fan_club_name": null, "is_fan_club_referral_eligible": null, "fan_consideration_page_revamp_eligiblity": null, "is_fan_club_gifting_eligible": null, "subscriber_count": null, "connected_member_count": null, "autosave_to_exclusive_highlight": null, "has_enough_subscribers_for_ssc": null }, "fbid_v2": 17841460317004769, "transparency_product_enabled": false, "text_post_app_take_a_break_setting": 0, "interop_messaging_user_fbid": 17848558452010567, "show_insights_terms": false, "allowed_commenter_type": "any", "is_unpublished": false, "reel_auto_archive": "unset", "can_boost_post": false, "can_see_organic_insights": false, "has_onboarded_to_text_post_app": true, "text_post_app_joiner_number": 95787975, "pk": 60438154566, "pk_id": "60438154566", "username": "threadstester1", "full_name": "threads tester", "is_private": false, "profile_pic_url": 
"https://scontent-lhr8-1.cdninstagram.com/v/t51.2885-19/44884218_345707102882519_2446069589734326272_n.jpg?_nc_ht=scontent-lhr8-1.cdninstagram.com&_nc_cat=1&_nc_ohc=d9GMadcVaNMAX8OxeS1&edm=AAAAAAABAAAA&ccb=7-5&ig_cache_key=YW5vbnltb3VzX3Byb2ZpbGVfcGlj.2-ccb7-5&oh=00_AfCgI100IU2KFElc0Xo-ngiCYBS2chBfBxpchDkHna42Lw&oe=64B54CCF", "account_badges": [], "feed_post_reshare_disabled": false, "show_account_transparency_details": true, "third_party_downloads_enabled": 0 }, "is_covered": false, "is_ranked_comment": false, "media_id": 3145912119832282782, "private_reply_status": 0 }, "clips_tab_pinned_user_ids": [], "comment_inform_treatment": { "should_have_inform_treatment": false, "text": "", "url": null, "action_type": null }, "fundraiser_tag": { "has_standalone_fundraiser": false }, "sharing_friction_info": { "should_have_sharing_friction": false, "bloks_app_url": null, "sharing_friction_payload": null }, "xpost_deny_reason": "This post cannot be shared to Facebook.", "caption_is_edited": false, "original_media_has_visual_reply_media": false, "like_and_view_counts_disabled": false, "fb_user_tags": { "in": [] }, "can_viewer_save": true, "is_in_profile_grid": false, "profile_grid_control_enabled": false, "featured_products": [], "is_comments_gif_composer_enabled": true, "product_suggestions": [], "user": { "has_anonymous_profile_picture": true, "all_media_count": 33, "liked_clips_count": 0, "fan_club_info": { "fan_club_id": null, "fan_club_name": null, "is_fan_club_referral_eligible": null, "fan_consideration_page_revamp_eligiblity": null, "is_fan_club_gifting_eligible": null, "subscriber_count": null, "connected_member_count": null, "autosave_to_exclusive_highlight": null, "has_enough_subscribers_for_ssc": null }, "fbid_v2": 17841460317004769, "transparency_product_enabled": false, "text_post_app_take_a_break_setting": 0, "interop_messaging_user_fbid": 17848558452010567, "show_insights_terms": false, "allowed_commenter_type": "any", "is_unpublished": false, 
"reel_auto_archive": "unset", "can_boost_post": false, "can_see_organic_insights": false, "has_onboarded_to_text_post_app": true, "text_post_app_joiner_number": 95787975, "pk": 60438154566, "pk_id": "60438154566", "username": "threadstester1", "full_name": "threads tester", "is_private": false, "profile_pic_url": "https://scontent-lhr8-1.cdninstagram.com/v/t51.2885-19/44884218_345707102882519_2446069589734326272_n.jpg?_nc_ht=scontent-lhr8-1.cdninstagram.com&_nc_cat=1&_nc_ohc=d9GMadcVaNMAX8OxeS1&edm=AAAAAAABAAAA&ccb=7-5&ig_cache_key=YW5vbnltb3VzX3Byb2ZpbGVfcGlj.2-ccb7-5&oh=00_AfCgI100IU2KFElc0Xo-ngiCYBS2chBfBxpchDkHna42Lw&oe=64B54CCF", "account_badges": [], "feed_post_reshare_disabled": false, "show_account_transparency_details": true, "third_party_downloads_enabled": 0 }, "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "is_reshare_of_text_post_app_media_in_ig": false, "comment_threading_enabled": false, "max_num_visible_preview_comments": 2, "has_more_comments": false, "preview_comments": [], "comment_count": 0, "can_view_more_preview_comments": false, "hide_view_all_comment_entrypoint": false, "likers": [], "shop_routing_user_id": null, "can_see_insights_as_brand": false, "is_organic_product_tagging_eligible": false, "product_type": "text_post", "is_paid_partnership": false, "music_metadata": null, "deleted_reason": 0, "organic_tracking_token": "eyJ2ZXJzaW9uIjo1LCJwYXlsb2FkIjp7ImlzX2FuYWx5dGljc190cmFja2VkIjpmYWxzZSwidXVpZCI6ImY4MWVhNDFlYmY2NjRmNWNiOGUxMDM1ZWVlOTBmMjY3MzE0NTkxMjExOTgzMjI4Mjc4MiIsInNlcnZlcl90b2tlbiI6IjE2ODkyNDE5NzI5Nzd8MzE0NTkxMjExOTgzMjI4Mjc4Mnw2MDQzODE1NDU2Nnw2NzczNzc1YjJmODljNDhiNGI4ZTBlYTlmODYxYzA5NWRiMTc2ODYwOWRmYWIzMmI1OGUyYmY4OWM5MmFkZTlmIn0sInNpZ25hdHVyZSI6IiJ9", "text_post_app_info": { "is_post_unavailable": false, "is_reply": false, "reply_to_author": null, "direct_reply_count": 0, "self_thread_count": 0, "reply_facepile_users": [], "link_preview_attachment": null, "can_reply": true, "reply_control": 
"everyone", "hush_info": null, "share_info": { "can_repost": true, "is_reposted_by_viewer": false, "can_quote_post": true, "quoted_post": { "pk": 3141055616164096839, "id": "3141055616164096839_95561", "text_post_app_info": { "is_post_unavailable": false, "is_reply": false, "reply_to_author": null, "direct_reply_count": 3513, "self_thread_count": 0, "reply_facepile_users": [], "link_preview_attachment": null, "can_reply": true, "reply_control": "everyone", "hush_info": null, "share_info": { "can_repost": true, "is_reposted_by_viewer": false, "can_quote_post": true } }, "caption": { "pk": "17990855999107063", "user_id": 95561, "text": "I've been getting some questions about deleting your account. To clarify, you can deactivate your Threads account, which hides your Threads profile and content, you can set your profile to private, and you can delete individual threads posts \u2013 all without deleting your Instagram account. Threads is powered by Instagram, so right now it's just one account, but we're looking into a way to delete your Threads account separately.", "type": 1, "created_at": 1688663030, "created_at_utc": 1688663030, "content_type": "comment", "status": "Active", "bit_flags": 0, "did_report_as_spam": false, "share_enabled": false, "user": { "pk": 95561, "pk_id": "95561", "username": "mosseri", "full_name": "Adam Mosseri", "is_private": false, "is_verified": true, "profile_pic_id": "3090458139926225297_95561", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/343392897_618515990300243_8088199406170073086_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=HX3vm4L7D44AX_09srG&edm=AMl8KrABAAAA&ccb=7-5&oh=00_AfBQ5qRqYCwxeopGHTpXx0LsYGnxpWEapsnHJpdkFruvEA&oe=64B4168D&_nc_sid=cd9b4f", "fbid_v2": "17841400946830001", "has_onboarded_to_text_post_app": true }, "is_covered": false, "is_ranked_comment": false, "media_id": 3141055616164096839, "private_reply_status": 0 }, "taken_at": 1688663030, 
"device_timestamp": 28775508833101, "media_type": 19, "code": "CuXRXDdNOtH", "client_cache_key": "MzE0MTA1NTYxNjE2NDA5NjgzOQ==.2", "filter_type": 0, "product_type": "text_post", "organic_tracking_token": "eyJ2ZXJzaW9uIjo1LCJwYXlsb2FkIjp7ImlzX2FuYWx5dGljc190cmFja2VkIjp0cnVlLCJ1dWlkIjoiZjgxZWE0MWViZjY2NGY1Y2I4ZTEwMzVlZWU5MGYyNjczMTQxMDU1NjE2MTY0MDk2ODM5Iiwic2VydmVyX3Rva2VuIjoiMTY4OTI0MTk3MjYzNHwzMTQxMDU1NjE2MTY0MDk2ODM5fDYwNDM4MTU0NTY2fDUxZTliM2JkZjQ3ZDdkMDAyZTZlOGJmZjA0NDY2YjM3NWM4ZjQ1Yzc4YzYzMmNlYThlYmYwNDcyNDlkMDI1NzUifSwic2lnbmF0dXJlIjoiIn0=", "image_versions2": { "candidates": [] }, "original_width": 612, "original_height": 612, "video_versions": [], "like_count": 38544, "has_liked": false, "like_and_view_counts_disabled": false, "can_viewer_reshare": true, "integrity_review_decision": "pending", "top_likers": [], "user": { "pk": 95561, "pk_id": "95561", "username": "mosseri", "full_name": "Adam Mosseri", "is_private": false, "is_verified": true, "profile_pic_id": "3090458139926225297_95561", "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/343392897_618515990300243_8088199406170073086_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=1&_nc_ohc=HX3vm4L7D44AX_09srG&edm=AMl8KrABAAAA&ccb=7-5&oh=00_AfBQ5qRqYCwxeopGHTpXx0LsYGnxpWEapsnHJpdkFruvEA&oe=64B4168D&_nc_sid=cd9b4f", "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "has_anonymous_profile_picture": false, "has_onboarded_to_text_post_app": true, "account_badges": [] } } } }, "integrity_review_decision": "pending", "ig_media_sharing_disabled": false, "has_shared_to_fb": 0, "is_unified_video": false, "should_request_ads": false, "is_visual_reply_commenter_notice_enabled": true, 
"commerciality_status": "not_commercial", "explore_hide_comments": false, "has_delayed_metadata": false }, "upload_id": "1689241970", "status": "ok" } ``` </details> <details> <summary>Open UI example</summary> ![](./assets/examples/quote-thread.png) </details> #### Feed ##### Get recommended users `threads.private_api.get_recommended_users` — get recommended users. Supports [pagination](#pagination), but has different response keys: `paging_token` instead of `next_max_id` as an `offset` and `has_more` as a value to detect whether to stop iterations or not. | Parameters | Type | Required | Restrictions | Description | |:----------:|:-------:|:--------:|:------------:|--------------------------------------------------------------| | `limit` | Integer | No | `>0` | A number of recommended users to get. Default value is `15`. | | `offset` | Integer | No | - | A number of recommended users skip before fetching. | <details> <summary>Open code example</summary> ```python3 >>> recommended_users = threads.private_api.get_recommended_users() >>> recommended_users { "users": [ { "pk": 285464169, "pk_id": "285464169", "username": "sashachistova", "full_name": "Sasha Chistova", "account_badges": [], "profile_pic_url": "https://instagram.fiev6-1.fna.fbcdn.net/v/t51.2885-19/358049976_793858308785598_4202050856705005217_n.jpg?stp=dst-jpg_s150x150&_nc_ht=instagram.fiev6-1.fna.fbcdn.net&_nc_cat=106&_nc_ohc=sieozLj6SiAAX-AN0KH&edm=AHWpSgsBAAAA&ccb=7-5&oh=00_AfCMzBux6Y1Hzi0ZmBkD26hzer4ETDiRY8v6mfeGEbR6lw&oe=64B8BE6D&_nc_sid=ae9eda", "has_anonymous_profile_picture": false, "has_onboarded_to_text_post_app": true, "is_verified": true, "friendship_status": { "following": false, "followed_by": false, "blocking": false, "muting": false, "is_private": false, "incoming_request": false, "outgoing_request": false, "text_post_app_pre_following": false, "is_bestie": false, "is_restricted": false, "is_feed_favorite": false, "is_eligible_to_subscribe": false }, "profile_context_facepile_users": 
[], "follower_count": 26459 }, ... ], "paging_token": "15", "has_more": true, "status": "ok" } ``` </details>
harishsg993010/DamnVulnerableLLMProject
https://github.com/harishsg993010/DamnVulnerableLLMProject
A LLM explicitly designed for getting hacked
# DamnVulnerableLLMProject

This project is designed for security researchers to enhance their LLM hacking skills and for LLM companies to secure their AI models and systems against attackers. Please read the disclaimer before using this tool to learn attacks against LLMs.

# CTF Instructions

1. Fork https://replit.com/@hxs220034/DamnVulnerableLLMApplication-Demo on another Replit
2. Get your OpenAI API key and add it to **Secrets** as `OPENAI_API_KEY`
3. Run the Replit, choose `1` for training, then enter `3` for CTF mode
4. Get the console to spill flags and secrets in text and DM me a screenshot on Twitter @CoderHarish for verification
5. You can also read the writeup included at the bottom for more challenges
6. You can also check the txt files in the training folder for flag and secret strings, given that the main goal of the CTF is prompt injection

# Set Up

1. Get your OpenAI API key and add it to **Secrets** as `OPENAI_API_KEY`
2. Make up an `API_KEY` for the JSON API; this can just be something like a password if you'd like
3. Fill the `training/facts` folder with as many text documents as you can, containing information about anything, including dummy credentials etc.
4. Edit the `master.txt` file to represent who you want the bot to pretend to be
5. Click **Run** and select option `1`
6. To chat with the bot once you've done the training, select option `2`

# Screenshots

[![Damn-Vulnerable-Creditcard.png](https://i.postimg.cc/DZX3K70V/Damn-Vulnerable-Creditcard.png)](https://postimg.cc/HcdhwDb3)
[![Damnvulnerable-LLM.png](https://i.postimg.cc/FK0xtqvW/Damnvulnerable-LLM.png)](https://postimg.cc/062SDVbD)

# OWASP Top 10 LLM Vulnerabilities in This Application

1. Prompt injection
2. Sensitive data disclosure on LLM
3. Unauthorised code injection on LLM
4. Improper access control on LLM APIs
5.
LLM model poisoning (undocumented in the writeup for a reason, so find it on your own)

# Writeup for Reference

https://medium.com/@harishhacker3010/art-of-hacking-llm-apps-a22cf60a523b

# Disclaimer

The DamnVulnerableLLMProject is an educational project designed for security researchers to enhance their skills in identifying and addressing vulnerabilities in LLM (Language Model) applications. It also aims to assist LLM companies in securing their AI models and systems against potential attackers.

Please note that engaging with the DamnVulnerableLLMProject carries inherent risks. This project deliberately incorporates vulnerable components and provides opportunities for participants to exploit them. It is crucial to exercise caution and ethical behavior while participating in any activities related to this project.

By accessing and using the DamnVulnerableLLMProject, you acknowledge and agree to the following:

Ethical Use: You agree to use the DamnVulnerableLLMProject solely for educational purposes, ethical hacking, and research purposes. It is strictly prohibited to exploit vulnerabilities discovered in this project against any real-world applications or systems without proper authorization.

Legal Compliance: You are solely responsible for ensuring that your participation in the DamnVulnerableLLMProject complies with all applicable laws, regulations, and guidelines in your jurisdiction. OpenAI, the creators of the model, and the authors of the DamnVulnerableLLMProject disclaim any liability for any illegal or unauthorized activities conducted by participants.

Responsible Disclosure: If you discover any vulnerabilities or potential security issues during your participation in the DamnVulnerableLLMProject, it is strongly recommended that you responsibly disclose them to the project maintainers or the relevant parties. Do not publicly disclose or exploit the vulnerabilities without proper authorization.
No Warranty: The DamnVulnerableLLMProject is provided on an "as-is" basis without any warranties or guarantees of any kind, whether expressed or implied. The authors and maintainers of the project disclaim any liability for any damages or losses incurred during your participation.

Personal Responsibility: You acknowledge that your participation in the DamnVulnerableLLMProject is entirely voluntary and at your own risk. You are solely responsible for any actions you take and the consequences that may arise from them.

Remember, the primary objective of the DamnVulnerableLLMProject is to enhance cybersecurity knowledge and promote responsible and ethical hacking practices. Always act in accordance with applicable laws and guidelines, and respect the privacy and security of others.

If you do not agree with any part of this disclaimer or are uncertain about your rights and responsibilities, it is recommended that you refrain from participating in the DamnVulnerableLLMProject.

# Credit

Base code is based on the [Amjad Masad Chat bot](https://ai.repl.page) by IronCladDev on Replit.

For more information on how this works, check out [Zahid Khawaja's Tutorial](https://replit.com/@zahidkhawaja/Replit-Assistant?v=1).
sunxt99/PIMCOMP-NN
https://github.com/sunxt99/PIMCOMP-NN
null
# PIMCOMP-NN

PIMCOMP-NN is a compilation optimization framework designed for crossbar-based PIM DNN accelerators. The project comprises three submodules: the frontend, the backend, and a verification program. With PIMCOMP-NN, we can conveniently compile DNN models to get an instruction flow that can be executed on PIM accelerators.

We design a frontend to load and parse DNN models in [ONNX](https://github.com/onnx/onnx) format and generate the input file for the backend. The backend is the core part of PIMCOMP-NN; its overview is shown in the following figure. The backend includes four key stages. Node Partition describes the rules for partitioning weight data to fit the PIM crossbars. Weight Replicating determines the replication numbers of different nodes. Core Mapping decides the mapping relationship between crossbars and cores, and Dataflow Scheduling performs scheduling and optimization according to the user's requirements to generate the control flow or instruction flow. In addition to these four stages, PIMCOMP-NN also performs some pre-processing and post-optimization.

In order to expand the scope of application of PIM accelerators, we provide two compilation modes for users to choose from: High Throughput (batch pipeline) and Low Latency (element pipeline), which are suitable for scenarios with continuous input data in large batches and intermittent input of small amounts of data, respectively.

![overview](https://typora-ict.oss-cn-beijing.aliyuncs.com/img/202307022208317.png)

We also design a verification program to verify the correctness of the instruction stream. We simulate the calculation results instruction by instruction and compare them with the results obtained by ONNX Runtime.

PIMCOMP-NN has an associated simulator, [pimsim-nn](https://github.com/wangxy-2000/pimsim-nn). The produced instruction stream can be simulated by the simulator to estimate the performance (inference latency and/or throughput), power dissipation, and energy consumption, using the same architecture configuration.

# Usability

To describe the operation of DNN networks in detail while retaining generality, we propose a general and representative NVM-crossbar-based accelerator architecture as a hardware abstraction. At a high level, the accelerator consists of multiple cores interconnected through a NoC or buses. The weights of the neural network are stored in the cores, while the inputs, outputs, and intermediate results are stored in the global memory or local memories. Each core has a **PIM matrix unit** to compute matrix-vector multiplications and a **vector functional unit** to perform vector operations.

![abstraction](https://typora-ict.oss-cn-beijing.aliyuncs.com/img/202307022258469.png)

Our proposed abstract architecture is compatible with the Crossbar/IMA/Tile/Chip structure adopted in previous work.

| Paper | #Core | #Crossbar in one core | Crossbar Size |
| ---------------- | ----- | --------------------- | ------------- |
| ISAAC (ISCA'16) | 168 | 12\*8 | 128\*128 |
| PUMA (ASPLOS'19) | 138 | 8\*16 | 128\*128 |
| SRE (ISCA'19) | 168 | 12\*8 | 128\*128 |
| 15.4 (ISSCC'20) | 1 | 8 | 512\*512 |
| 16.1 (ISSCC'21) | 1 | 8 | 1024\*512 |
| FORMS (ISCA'21) | 168 | 12\*8 | 128\*128 |
| 11.4 (ISSCC'22) | 1 | 32 | 1024\*256 |
| 16.6 (ISSCC'23) | 4 | 8 | 512\*1024 |

# Usage

## Requirements

To run the frontend and verification programs, you need to install the following Python packages:

- onnx
- onnxruntime
- numpy
- cv2 (opencv-python)

To compile the backend using CMake, you need to meet the following requirements:

- cmake >= 3.6
- g++ >= 4.8.5

## Frontend

We can convert an ONNX model into the backend's input file, a JSON file containing the node parameters and topological connections of the DNN network. The backend reads this file for subsequent processes.
Users can download ONNX models from the [ONNX Model Zoo](https://github.com/onnx/models) and save them to `PIMCOMP-NN/models/ONNX/`. The output file will be saved in `PIMCOMP-NN/models/JSON/`. To make PIMCOMP-NN easier to use, we have converted some commonly used network models into JSON files in advance, so users can directly use the backend to generate instructions.

```shell
cd PIMCOMP-NN/frontend/
python frontend.py --model_path ../models/ONNX/resnet18.onnx --save_path ../models/JSON/resnet18.json
```

## Backend

### Hardware configuration

The hardware information is defined in `Preparation.cpp`, and the values of these parameters are configured in `config.json`. The other, unused parameters in `config.json` are used by pimsim-nn for simulation.

| Parameters | Description | Definition in config.json |
| ------------------- | ----------------------------------------------- | ------------------------- |
| ArithmeticPrecision | precision of input, weight and output | adc_count |
| CellPrecision | precision of crossbar cell | cell_precision |
| CrossbarW | width of an NVM crossbar | xbar_size[1] |
| CrossbarH | height of an NVM crossbar | xbar_size[0] |
| CoreW | \#logical crossbars in every row of one core | 1 |
| CoreH | \#logical crossbars in every column of one core | xbar_array_count |
| ChipW | \#cores in every row of one chip | layout[1] |
| ChipH | \#cores in every column of one chip | layout[0] |

### Building

```shell
cd PIMCOMP-NN
mkdir build
cd build
cmake ..
make
```

### Parameters

| Parameters | Description | Options | Default |
| ---------- | ---------------------------------------- | ----------------------- | ------------- |
| -m | model name | | MUST be given |
| -r | weight replicating method | balance/W0H0/uniform/GA | balance |
| -p | pipeline granularity | element/batch | batch |
| -o | save evaluation output or not | YES/NO | NO |
| -v | save instruction for verification or not | YES/NO | NO |
| -i | save intermediate information or not | YES/NO | NO |
| -s | save instruction for simulation or not | YES/NO | NO |

For example, if you have `resnet18.json` in `PIMCOMP-NN/models/JSON/` and want to compile it using the balance replication method and the element pipeline (low-latency mode):

```shell
./PIMCOMP-NN -m=resnet18 -r=balance -p=element -o=NO -v=NO -i=NO -s=NO
```

The model name is a required parameter. If you set `-m=MODEL_NAME`, the backend will search for and load `MODEL_NAME.json` in `PIMCOMP-NN/models/JSON/`. Balance, W0H0, uniform, and GA are weight replicating strategies designed from the perspectives of calculation balance, layer-size balance, unified replication, and performance, respectively.

If you want to save the evaluation result for the whole instruction stream, set `-o=YES`; the result will be saved in `PIMCOMP-NN/output/EvaluationResult.txt`. If you want to save the instructions and other information for verification, set `-v=YES`; the result will be saved in `PIMCOMP-NN/output/VerificationInfo.json`. If you want to save the intermediate information produced during compiling, set `-i=YES`; the result will be saved in `PIMCOMP-NN/output/IntermediateInfo.json`. If you want to save the instructions for more detailed simulation with pimsim-nn, set `-s=YES`; the result will be saved in `PIMCOMP-NN/output/SimulationInfo.gz`.

## Verification

The verification program is used to verify the generated instruction stream.
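At its core, that verification is an element-wise tolerance comparison between the instruction-by-instruction simulation results and the ONNX Runtime reference outputs. A minimal sketch of such a check (the function name and tolerance values are illustrative, not the project's actual code):

```python
import math

def outputs_match(simulated, reference, rel_tol=1e-3, abs_tol=1e-5):
    """Element-wise tolerance check between two flat sequences of floats,
    mirroring the compare step between simulated and ONNX Runtime results."""
    if len(simulated) != len(reference):
        return False
    return all(math.isclose(s, r, rel_tol=rel_tol, abs_tol=abs_tol)
               for s, r in zip(simulated, reference))
```

A tolerance-based comparison (rather than exact equality) is the usual choice here, since quantization and floating-point accumulation order make bit-exact matches unrealistic.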
Before verifying, set `-v=YES` when calling the PIMCOMP-NN backend; the instructions for verification will be saved in `PIMCOMP-NN/output/VerificationInfo.json`.

```shell
cd PIMCOMP-NN/verification/
python verification.py --model_path ../models/ONNX/resnet18.onnx --pipeline_type element
```

We have compiled and validated the models below:

- AlexNet
- VGG16
- ResNet18/34/50
- GoogleNet
- SqueezeNet
- CaffeNet
- ShuffleNet-v1
- MobileNetv2

# Development

## To compile a new model

Please note the following if you want to compile a new model with PIMCOMP-NN:

1. Currently, operators such as CONV/Group CONV/FC/POOL/ADD/CONCAT/ReLU are supported.
2. If you choose the element pipeline, specify `last_node_index` and `last_node_output_channel_num` in `ElementPipelineSchedule.cpp`.
3. Make sure the data_type of your ONNX model is 1, i.e., float data.
4. The node immediately adjacent to the input needs to be a convolution node.

# Citing PIMCOMP-NN

[1] Xiaotian Sun, Xinyu Wang, Wanqian Li, Lei Wang, Yinhe Han, Xiaoming Chen, ["PIMCOMP: A Universal Compilation Framework for Crossbar-based PIM DNN Accelerators"](https://arxiv.org/abs/2307.01475), in Design Automation Conference (DAC'23), 2023.

# Code Author

[Xiaotian Sun](sunxiaotian21s@ict.ac.cn) (Institute of Computing Technology, Chinese Academy of Sciences)

# Project PI

[Xiaoming Chen](https://people.ucas.edu.cn/~chenxm)

# License

[Apache License v2.0](https://github.com/sunxt99/PIMCOMP-NN/blob/master/LICENSE)

# Acknowledgements

- [onnx](https://github.com/onnx/onnx)
- [jsoncpp](https://github.com/open-source-parsers/jsoncpp)
- [zlib](https://github.com/madler/zlib)
- [zstr](https://github.com/mateidavid/zstr)
mengtan00/SA-BEV
https://github.com/mengtan00/SA-BEV
This is the implementation of the paper "SA-BEV: Generating Semantic-Aware Bird's-Eye-View Feature for Multi-view 3D Object Detection" (ICCV 2023)
<div align="center"> <h1>SA-BEV</h1> <h3>[ICCV2023] SA-BEV: Generating Semantic-Aware Bird's-Eye-View Feature for Multi-view 3D Object Detection</h3> </div> <div align="center"> <img src="resources/sabev.png" width="800"/> </div><br/> ## News - **2023.07.14** SA-BEV is accepted by ICCV 2023. The paper is available [here](https://arxiv.org/abs/2307.11477). ## Main Results | Config | mAP | NDS | Baidu | Google | | --------------------------------------------------------------- | :----: | :----: | :-----: | :---: | | [**SA-BEV-R50**](configs/sabev/sabev-r50.py) | 35.5 | 46.7 | [link](https://pan.baidu.com/s/1ue-Rr2rNU-s1DUs2jqpxcQ?pwd=oo8x) | [link](https://drive.google.com/drive/folders/1-scWXNni3xkPibv7CApS--a5JSDmujUu?usp=sharing) | | [**SA-BEV-R50-MSCT**](configs/sabev/sabev-r50-msct.py) | 37.0 | 48.8 | [link](https://pan.baidu.com/s/1ue-Rr2rNU-s1DUs2jqpxcQ?pwd=oo8x) | [link](https://drive.google.com/drive/folders/1-scWXNni3xkPibv7CApS--a5JSDmujUu?usp=sharing) | | [**SA-BEV-R50-MSCT-CBGS**](configs/sabev/sabev-r50-msct-cbgs.py)| 38.7 | 51.2 | [link](https://pan.baidu.com/s/1ue-Rr2rNU-s1DUs2jqpxcQ?pwd=oo8x) | [link](https://drive.google.com/drive/folders/1-scWXNni3xkPibv7CApS--a5JSDmujUu?usp=sharing) | ## Get Started #### 1. Please follow these steps to install SA-BEV. a. Create a conda virtual environment and activate it. ```shell conda create -n sabev python=3.8 -y conda activate sabev ``` b. Install PyTorch and torchvision following the [official instructions](https://pytorch.org/). ```shell pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html ``` c. Install SA-BEV as mmdet3d. ```shell pip install mmcv-full==1.5.3 pip install mmdet==2.27.0 pip install mmsegmentation==0.25.0 pip install -e . ``` #### 2. 
Prepare the nuScenes dataset as introduced in [nuscenes_det.md](docs/en/datasets/nuscenes_det.md) and create the pkl files for SA-BEV by running:

```shell
python tools/create_data_bevdet.py
```

#### 3. Create depth and semantic labels from the point clouds.

```shell
python tools/generate_point_label.py
```

#### 4. Train and evaluate the model as follows:

```shell
bash tools/dist_train.sh configs/sabev/sabev-r50.py 8 --no-validate
bash tools/dist_test.sh configs/sabev/sabev-r50.py work_dirs/sabev-r50/epoch_24_ema.pth 8 --eval bbox
```

## Acknowledgement

This project would not be possible without multiple great open-source code bases. We list some notable examples below.

- [BEVDet](https://github.com/HuangJunJie2017/BEVDet)
- [BEVDepth](https://github.com/Megvii-BaseDetection/BEVDepth)
- [open-mmlab](https://github.com/open-mmlab)
- [CenterPoint](https://github.com/tianweiy/CenterPoint)
- [Lift-Splat-Shoot](https://github.com/nv-tlabs/lift-splat-shoot)
- [Multi-Task-Learning-PyTorch](https://github.com/SimonVandenhende/Multi-Task-Learning-PyTorch.git)

## Bibtex

If SA-BEV is helpful for your research, please consider citing the following BibTeX entry.

```
@article{zhang2023sabev,
  title={SA-BEV: Generating Semantic-Aware Bird's-Eye-View Feature for Multi-view 3D Object Detection},
  author={Zhang, Jinqing and Zhang, Yanan and Liu, Qingjie and Wang, Yunhong},
  journal={arXiv preprint arXiv:2307.11477},
  year={2023},
}
```
Yoga3911/flutter_bloc_clean_architecture
https://github.com/Yoga3911/flutter_bloc_clean_architecture
null
# Flutter Bloc Pattern with Clean Architecture and SOLID Principles ## Description The "Flutter Bloc Pattern with Clean Architecture and SOLID Principles" project aims to develop a Flutter application that combines the robustness of the BLoC (Business Logic Component) pattern with the clarity of clean architecture principles and incorporates the SOLID principles for robust and maintainable code. Clean architecture provides a structured approach to organizing code, emphasizing the separation of concerns and maintaining a clear boundary between different layers. This project will adhere to clean architecture principles by dividing the application into distinct layers: - Presentation Layer: This layer will handle the UI components, user interactions, and visual elements. It will interact with the domain layer to retrieve and display data, and also communicate with the data layer to save or update information. - Domain Layer: The domain layer encapsulates the business logic and use cases of the application. It defines the core functionality and rules of the application, independent of any UI or data source. This layer will interact with the data layer through abstractions to retrieve and process data. - Data Layer: The data layer is responsible for interacting with external data sources, such as databases, APIs, or local storage. It will provide concrete implementations of the abstractions defined in the domain layer. The data layer will handle data retrieval, transformation, and persistence, ensuring a separation between data management and the business logic. By following clean architecture principles, this project will achieve a highly modular and maintainable codebase. Each layer will depend only on abstractions from lower layers, promoting loose coupling and making it easier to replace or modify components without impacting the entire application. 
Additionally, the project integrates the SOLID principles, which consist of the following: - Single Responsibility Principle (SRP): Ensures that each class has a single responsibility, enhancing code clarity and maintainability. - Open/Closed Principle (OCP): Allows for extending the functionality without modifying existing code, promoting code reuse and scalability. - Liskov Substitution Principle (LSP): Ensures that derived classes can be substituted for their base classes without affecting the correctness of the program. - Interface Segregation Principle (ISP): Encourages the creation of focused interfaces to prevent clients from depending on unnecessary functionalities. - Dependency Inversion Principle (DIP): Promotes loose coupling by depending on abstractions, facilitating flexibility and testability. By adhering to the SOLID principles, the project achieves code that is modular, reusable, and easier to understand and maintain. To support the project's implementation, popular packages such as "flutter_bloc" for BLoC pattern implementation, "equatable" for simplified equality comparisons, and "get_it" for dependency inversion and management will be integrated. These packages provide powerful tools and utilities to streamline development, improve code quality, and enhance maintainability. 
## Features

- Authentication (Login, Register, Logout)
- Product (List, Create, Update, Delete)
- Multi Language (English, Indonesian)
- Multi Theme (Light, Dark)
- Validation (Form, Internet Connection)
- Auto Login
- Auto save theme mode and language preference
- Offline data caching

## Popular Packages Used

- [flutter_bloc](https://pub.dev/packages/flutter_bloc) for BLoC pattern implementation
- [equatable](https://pub.dev/packages/equatable) for simplified equality comparisons
- [get_it](https://pub.dev/packages/get_it) for dependency inversion and management
- [go_router](https://pub.dev/packages/go_router) for declarative routing
- [hive](https://pub.dev/packages/hive) for a lightweight and blazing fast key-value database
- etc

## Project Structure

- lib
  - src
    - configs
      - adapter
      - injector
    - core
      - api
      - blocs
      - cache
      - constant
      - errors
      - extensions
      - network
      - themes
      - usecases
      - utils
    - features
      - auth
        - data
          - datasources
          - models
          - repositories
        - di
        - domain
          - entities
          - repositories
          - usecases
        - presentation
          - bloc
          - pages
          - widgets
      - product
        - data
          - datasources
          - models
          - repositories
        - di
        - domain
          - entities
          - repositories
          - usecases
        - presentation
          - bloc
          - pages
          - widgets
    - routes
    - widgets
    - app.dart
  - main.dart

## How to run

- Clone this repository
- Open the project directory in a terminal or cmd
- Run `flutter pub get`
- Create a Firebase project
- Create a Firestore database in the Firebase project
- Edit the security rules in the Firestore database to allow read and write without authentication

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write;
    }
  }
}
```

- Configure Firebase for the Flutter project using flutterfire
- Run `flutter run` or debug from your IDE (remember to open a simulator or connect a physical device)

## Feedback

- I am very happy if you want to provide feedback or contribute to this project
- If you have any questions, please contact me at yoga3911@gmail.com
- If you think this
project is useful, you can give me a cup of coffee (Just Kidding 😁)
syftdata/monitor-action
https://github.com/syftdata/monitor-action
Github action to monitor event instrumentation
# Event Analytics Monitoring Action This action monitors for any breaking changes to the analytics tracking code. With this action, product engineering teams can change tracking code with full transparency in collaboration with data consumers! The action analyzes the code commits and shows possible breaking changes to the structure (schema) of analytics events. It currently supports Typescript projects that use the Segment, Amplitude, MixPanel, or Google Analytics analytics SDKs. Besides showing a summary of changes on the Pull Request, the action automatically extracts and stores the event schema in the `events.yaml` file in your repo. You can leverage this file to monitor changes, set up alerts, or require stakeholder approvals e.g. via a [CODEOWNERS](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners) file. ![Screenshot: Summary of changes to Event Schema on a PR + Event schema diff](.github/assets/screenshot.png) ## Getting Started Create a Github workflow file in your repo at `.github/workflows/syft.yml` with the contents below and commit/push the change to the main branch. Now whenever Syft detects changes to the tracking instrumentation, it will alert you to schema changes on the Pull Request! 🚀 ```yaml on: [pull_request] jobs: syft-analysis: permissions: write-all runs-on: ubuntu-latest name: Syft Event Analysis. steps: - uses: actions/checkout@v3 with: repository: ${{ github.event.pull_request.head.repo.full_name }} ref: ${{ github.event.pull_request.head.ref }} - uses: actions/setup-node@v3 with: node-version: "18.x" - name: Extract Schema uses: syftdata/monitor-action@v0.0.1 with: github_token: ${{ secrets.GITHUB_TOKEN }} - name: Create commit with Schema file uses: EndBug/add-and-commit@v9 with: author_name: Syft Analysis Bot author_email: syft-analysis@syftdata.com message: "Observed changes to the Event schema." 
add: "events.yaml" ``` ## Configuration The action provides two optional parameters: `project_directory` and `output_directory`. By default, both parameters point to the root of your project directory. #### Project Directory If your project code resides in a subdirectory within the repository, you can customize the action using the `project_directory` parameter. For example, if your project is located under `packages/your-app`, change the workflow file to: ```yaml --- - name: Extract Schema uses: syftdata/monitor-action@v0.0.1 with: project_directory: packages/your-app ``` #### Output Directory If you prefer to store the event schema file under a custom artifact directory, you can customize the action using the `output_directory` parameter. For instance, if you want the artifact to be placed under `artifacts/schemas/`, modify the workflow file thus: ```yaml --- - name: Extract Schema uses: syftdata/monitor-action@v0.0.1 with: output_directory: artifacts/schemas/ ``` ## Monitoring To notify stakeholders about changes to the events.yaml file that is generated under the output_directory, you can add a CODEOWNERS file. ## Supported Analytics SDKs The Action currently supports the following Analytics SDKs in TypeScript: - Amplitude - Segment - Mixpanel - Google Analytics ## Feedback and Contributions If you find this action useful, please ⭐️ the [github repo](https://github.com/syftdata/monitor-action). Please see the [CONTRIBUTING](CONTRIBUTING.md) guide to submit changes to the repo. For feature requests or bug reports, please open an issue in the github repository. For general feedback, please feel free to drop us a note at hello@syftdata.com! 👋 ## License Apache License 2.0
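As a concrete illustration of the Monitoring section above, stakeholder review can be enforced with a minimal CODEOWNERS entry. The team handle below is a placeholder, and the path assumes the default `output_directory` (the repository root):

```
# .github/CODEOWNERS
# Require a review from the data team whenever the generated
# event schema changes.
events.yaml @your-org/data-team
```

If you set a custom `output_directory`, point the rule at that path instead, e.g. `artifacts/schemas/events.yaml`.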
ChatTeach/FreeGPT
https://github.com/ChatTeach/FreeGPT
Use GPT 3.5 / 4 for free - no API key needed
# FreeGPT WebUI

## GPT 3.5/4

❌🔑 <strong>NO API KEY REQUIRED</strong>

This project features a WebUI utilizing the [G4F API](https://github.com/xtekky/gpt4free). <br>
Experience the power of ChatGPT with a user-friendly interface and enhanced jailbreaks, completely free.

## 🚧 Known bugs:

- API provider redirecting the GPT-4 model to GPT-3.5;
- Auto Proxy is not working.

_Working on fixes as quickly as possible._

## Table of Contents

- [To-Do List](#to-do-list-%EF%B8%8F)
- [Getting Started](#getting-started-white_check_mark)
  - [Cloning the Repository](#cloning-the-repository-inbox_tray)
  - [Install Dependencies](#install-dependencies-wrench)
  - [Running the Application](#running-the-application-rocket)
- [Auto Proxy](#auto-proxy-)
  - [Enable Auto Proxy](#enable-auto-proxy)
- [Docker](#docker-)
  - [Prerequisites](#prerequisites)
  - [Running the Docker](#running-the-docker)
- [Incorporated Projects](#incorporated-projects-busts_in_silhouette)
  - [WebUI](#webui)
  - [API G4F](#api-g4f)
- [Star History](#star-history)
- [Legal Notice](#legal-notice)

##

## To-Do List ✔️

- [x] Integrate the free GPT API into the WebUI
- [x] Create Docker support
- [x] Improve the Jailbreak functionality
- [x] Add the GPT-4 model
- [x] Enhance the user interface
- [ ] Auto Proxy
- [ ] Enable editing and creating Jailbreaks/Roles in the WebUI
- [ ] Migrate the interface to React.js (?)

## Getting Started :white_check_mark:

To get started with this project, you'll need to clone the repository and have [Python](https://www.python.org/downloads/) installed on your system.
### Cloning the Repository :inbox_tray:

Run the following command to clone the repository:

```
git clone https://github.com/ramonvc/freegpt-webui.git
```

### Install Dependencies :wrench:

Navigate to the project directory:

```
cd freegpt-webui
```

Install the dependencies:

```
pip install -r requirements.txt
```

## Running the Application :rocket:

Start the application with the following command:

```
python run.py
```

Access the application in your browser using the URL:

```
http://127.0.0.1:1338
```
or
```
http://localhost:1338
```

## Auto Proxy 🔑

The application includes an auto proxy feature that allows it to work with multiple free proxy servers. The freeGPT API refuses some connections, especially when hosted in the cloud (Azure, AWS, Google Cloud). Auto proxy solves this problem automatically for you. When enabled, the application will automatically fetch and test proxy servers, updating the list of working proxies every 30 minutes.

### Enable Auto Proxy

To enable it, open the `config.json` file and set the value of the "use_auto_proxy" key to `true`.

```
"use_auto_proxy": true
```

![use-auto-proxy-gif](https://github.com/ramonvc/gptfree-webui/assets/13617054/f83c6217-411c-404c-9f4c-8ae700a486d1)

## Docker 🐳

### Prerequisites

Before you start, make sure you have installed [Docker](https://www.docker.com/get-started) on your machine.

### Running the Docker

Pull the Docker image from Docker Hub:

```
docker pull ramonvc/freegpt-webui
```

Run the application using Docker:

```
docker run -p 1338:1338 ramonvc/freegpt-webui
```

Access the application in your browser using the URL:

```
http://127.0.0.1:1338
```
or
```
http://localhost:1338
```

When you're done using the application, stop the Docker container using the following command:

```
docker stop <container-id>
```

## Incorporated Projects :busts_in_silhouette:

I highly recommend visiting and supporting both projects.
### WebUI

The application interface was incorporated from the [chatgpt-clone](https://github.com/xtekky/chatgpt-clone) repository.

### API G4F

The free GPT-4 API was incorporated from the [GPT4Free](https://github.com/xtekky/gpt4free) repository. <br>

## Legal Notice

This repository is _not_ associated with or endorsed by the providers of the APIs contained in this GitHub repository. This project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to improve their security or to request the removal of their site from this repository.

Please note the following:

1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project is _not_ claiming any right over them, nor is it affiliated with or endorsed by any of the providers mentioned.

2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely responsible for their actions and any repercussions that may follow. We strongly recommend that users follow the Terms of Service of each website.

3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.

4. **Copyright**: All content in this repository, including but not limited to code, images, and documentation, is the intellectual property of the repository author, unless otherwise stated. Unauthorized copying, distribution, or use of any content in this repository is strictly prohibited without the express written consent of the repository author.

5.
**Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of or in any way connected with their use or misuse of this repository, its content, or related third-party APIs.

6. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or features in this repository at any time without prior notice. Users are responsible for regularly reviewing the content and any changes made to this repository.

By using this repository or any code related to it, you agree to these terms. The author is not responsible for any copies, forks, or re-uploads made by other users. This is the author's only account and repository. To prevent impersonation or irresponsible actions, you must comply with the GNU GPL license that this repository uses.
viktorvsk/saltivka
https://github.com/viktorvsk/saltivka
🇺🇦 Friendly Nostr Relay in Ruby with UI
# Saltivka ![Saltivka](/app/assets/images/screenshot.png) This is an implementation of the [Nostr protocol](https://github.com/nostr-protocol/nostr) server (Nostr Relay) in Ruby. It implements the following list of specifications (Nostr Implementation Possibilities or NIPs): * [NIP-01: Basic protocol flow description](https://github.com/nostr-protocol/nips/blob/master/01.md) * [NIP-04: Encrypted Direct Message](https://github.com/nostr-protocol/nips/blob/master/04.md) * [NIP-09: Event Deletion](https://github.com/nostr-protocol/nips/blob/master/09.md) * [NIP-11: Relay Information Document](https://github.com/nostr-protocol/nips/blob/master/11.md) * [NIP-12: Generic Tag Queries](https://github.com/nostr-protocol/nips/blob/master/12.md) * [NIP-13: Proof of Work](https://github.com/nostr-protocol/nips/blob/master/13.md) * [NIP-16: Event Treatment](https://github.com/nostr-protocol/nips/blob/master/16.md) * [NIP-20: Command Results](https://github.com/nostr-protocol/nips/blob/master/20.md) * [NIP-22: Event `created_at` Limits](https://github.com/nostr-protocol/nips/blob/master/22.md) * [NIP-26: Delegated Event Signing](https://github.com/nostr-protocol/nips/blob/master/26.md) * [NIP-28: Public Chat](https://github.com/nostr-protocol/nips/blob/master/28.md) * [NIP-33: Parameterized Replaceable Events](https://github.com/nostr-protocol/nips/blob/master/33.md) * [NIP-40: Expiration Timestamp](https://github.com/nostr-protocol/nips/blob/master/40.md) * [NIP-42: Authentication of clients to relays](https://github.com/nostr-protocol/nips/blob/master/42.md) * [NIP-43: Fast Auth (experimental)](https://github.com/nostr-protocol/nips/pull/571) * [NIP-45: Event Counts](https://github.com/nostr-protocol/nips/blob/master/45.md) * [NIP-65: Relay List Metadata](https://github.com/nostr-protocol/nips/blob/master/65.md) # Abstract Nostr requires decentralization. Decentralization requires servers. Many different servers, small and big. 
Some servers should be robust and cheap to operate at scale. Others are better off being cute and simple to use. This relay aims to become the most developer-friendly and the most relay-operator-friendly implementation through:

* **high configurability**: for example, support for dynamic feature flags in the future
* **clean error descriptions**: including different formats and granular causes
* **comprehensive test and documentation coverage**: plans to focus on mutation and unit tests
* **feature richness**: for example, experimental NIPs and non-Nostr-related features like backups, cross-posting, etc.
* **great accessibility**: plans to add support for major integrations such as an HTTP API, GraphQL, webhooks, and others

The ultimate goal is for everyone to use this relay in the following use-cases:

* **Developers**, while working on a client, need to connect to some relays, play with different NIPs, fix errors, and test hypotheses
* **Non-technical but curious people** want to set up a private relay to share with friends and family
* **Small-to-medium businesses** use custom configurations of the relay to provide paid services in their specific niche (e.g. an invite-driven userbase gets IT-related curated content in their feed)

This, however, doesn't mean it *shouldn't* be possible to serve millions of users per day with this relay. It just means that, with Ruby and concurrency, one will have to spend more money on hardware. So performance optimizations will usually have low priority unless they prevent some interesting use-cases.

# Getting started

You won't find instructions here on how to deploy production-ready services for millions of active connections, because there is an infinite number of ways to achieve that and of trade-offs to choose from. Some thoughts and hints on production deployment may be found [here](/docs/DEPLOYMENT.md).

The two main use-cases described here are:

1.
Deployment through docker-compose with a very primitive setup, to give a basic understanding of how things work together, quickly test it, and potentially deploy to a single server expecting a small workload
2. Local setup for development and contributions

* TBD: link to the guide on how to deploy to a managed cloud environment
* A live instance can be found at [https://saltivka.org](https://saltivka.org) and the relay address is `wss://saltivka.org`

## Docker Compose (demo)

1. Prepare a host with a Docker environment installed (something like [this](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-22-04)). The demo setup expects approximately 8 GB of RAM and 4 vCPUs
2. SSH into this host
3. ```git clone https://github.com/viktorvsk/saltivka.git```
4. ```cd saltivka```
5. ```docker compose up --build```

That's it! Wait a minute or two until the database is ready. Now you should have an HTTP/WS server available at `localhost:3000` and HTTPS/WSS at `localhost:2402`. Ensure the ports are open on your server. Both commands below should work and let you start working with Nostr:

* ```wscat -c "ws://localhost:3000"```
* ```wscat --no-check -c "wss://localhost:2402"```

(Assuming `wscat` is installed and `localhost` is changed to the server IP if necessary.)

## Ruby (development)

It's a typical Ruby on Rails application, so the usual defaults mostly apply.

1. ```git clone https://github.com/viktorvsk/saltivka.git```
2. ```cd saltivka```
3. ```cp .env.example .env.development``` (adjust Postgres and Redis settings)
4. ```echo POSTGRES_DATABASE=saltivka_test > .env.test```
5. Adjust Postgres and Redis credentials in `.env.test` if needed. Use databases different from the development environment
6. ```rails db:create db:migrate db:seed```
7. ```rspec```
8. ```rails server```
9.
```bundle exec sidekiq -q nostr``` # Further reading * [Configuration options](/docs/CONFIGURATION.md) * [Architecture details](/docs/ARCHITECTURE.md) * [Limitations](/docs/LIMITATIONS.md) * [Deployment hints](/docs/DEPLOYMENT.md) * [Roadmap](/docs/ROADMAP.md) * [Contributing](/docs/CONTRIBUTING.md)
ryangjchandler/phiki
https://github.com/ryangjchandler/phiki
A syntax highlighter for PHP inspired by Shiki.
# A syntax highlighter for PHP inspired by Shiki. [![Latest Version on Packagist](https://img.shields.io/packagist/v/ryangjchandler/phiki.svg?style=flat-square)](https://packagist.org/packages/ryangjchandler/phiki) [![Tests](https://img.shields.io/github/actions/workflow/status/ryangjchandler/phiki/run-tests.yml?branch=main&label=tests&style=flat-square)](https://github.com/ryangjchandler/phiki/actions/workflows/run-tests.yml) [![Total Downloads](https://img.shields.io/packagist/dt/ryangjchandler/phiki.svg?style=flat-square)](https://packagist.org/packages/ryangjchandler/phiki) Phiki (pronounced "fee"-"key") is a zero-dependency syntax highlighter for PHP. It is heavily inspired by [Shiki](https://shiki.matsu.io/) and uses [TextMate grammar files](https://macromates.com/manual/en/language_grammars) to highlight code, just like Visual Studio Code and other text editors. ## Installation You can install the package via Composer: ```bash composer require ryangjchandler/phiki ``` ## Usage ```php $skeleton = new RyanChandler\Phiki(); echo $skeleton->echoPhrase('Hello, RyanChandler!'); ``` ## Testing ```bash composer test ``` ## Changelog Please see [CHANGELOG](CHANGELOG.md) for more information on what has changed recently. ## Contributing Please see [CONTRIBUTING](https://github.com/spatie/.github/blob/main/CONTRIBUTING.md) for details. ## Security Vulnerabilities Please review [our security policy](../../security/policy) on how to report security vulnerabilities. ## Credits - [Ryan Chandler](https://github.com/ryangjchandler) - [All Contributors](../../contributors) ## License The MIT License (MIT). Please see [License File](LICENSE.md) for more information.
ml-jku/quam
https://github.com/ml-jku/quam
Quantification of Uncertainty with Adversarial Models
# Quantification of Uncertainty with Adversarial Models _Kajetan Schweighofer<sup> * 1</sup>, Lukas Aichberger<sup> * 1</sup>, Mykyta Ielanskyi<sup> * 1</sup>, Günter Klambauer<sup> 1</sup>, Sepp Hochreiter<sup> 1 2</sup>_ <sup>1</sup> ELLIS Unit Linz and LIT AI Lab, Institute for Machine Learning, Johannes Kepler University Linz, Austria <sup>2</sup> Institute of Advanced Research in Artificial Intelligence (IARAI) <sup>*</sup> Joint first author --- This is the official repository to recreate the experiments of the paper "Quantification of Uncertainty with Adversarial Models". The paper is available on [arxiv](https://arxiv.org/abs/2307.03217) --- ## What is QUAM? Quantification of Uncertainty with Adversarial Models (QUAM) searches for adversarial models (not adversarial examples!) to better estimate the epistemic uncertainty, the uncertainty about chosen model parameters. Adversarial models predict differently for a specific test point, but explain the training data similarly well. High uncertainties are detected by different adversarial models that assign a test point to different classes. As a result, the true class remains ambiguous. <p> <img src="overview.png" width="720px"> </p> We illustrate quantifying the predictive uncertainty of a given, pre-selected model (blue), a classifier for images of cats and dogs. For each of the input images, we search for adversarial models (orange) that make different predictions than the given, pre-selected model while explaining the training data equally well (having a high likelihood). The adversarial models found for an image of a dog or a cat still make similar predictions (low epistemic uncertainty), while the adversarial model found for an image of a lion makes a highly different prediction (high epistemic uncertainty), as features present in images of both cats and dogs can be utilized to classify the image of a lion. 
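The intuition above, that epistemic uncertainty shows up as disagreement between the given model and adversarial models, can be made concrete with a toy numeric sketch. This is only an illustration of the idea, not the paper's estimator: the KL-based disagreement score and the softmax outputs below are assumptions made for the example.

```python
import math

def kl(p, q, eps=1e-12):
    # KL divergence between two categorical distributions (clipped for stability).
    return sum(pi * math.log(max(pi, eps) / max(qi, eps)) for pi, qi in zip(p, q))

# Toy softmax outputs for a single test input.
reference = [0.6, 0.3, 0.1]      # the given, pre-selected model
adversaries = [
    [0.55, 0.35, 0.10],          # explains the input similarly -> low disagreement
    [0.05, 0.10, 0.85],          # flips the prediction -> high disagreement
]

# Epistemic-uncertainty proxy: worst-case disagreement with the reference model.
score = max(kl(reference, a) for a in adversaries)
print(round(score, 3))  # the disagreeing model dominates: ~1.607
```

An adversarial model that agrees with the reference contributes almost nothing to the score, while a single one that flips the prediction drives it up, mirroring the lion example: for the dog and cat images all adversarial models stay close to the reference, so the score stays low.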
## Reproducing the Experiments

### Installing Dependencies

```commandline
conda env create -f environment.yaml
conda activate quam
```

### Experiments on Synthetic Datasets

All experiments on synthetic datasets are available as notebooks in the [notebooks](notebooks) folder.

### Run MNIST Benchmarks

```commandline
chmod +x reproduce_mnist_experiments.sh
./reproduce_mnist_experiments.sh
```

### Run ImageNet Benchmarks

To reproduce the results for the ImageNet benchmarks, check the corresponding section in [the technical manual](TECHNICAL.md).
willianjusten/photos-portfolio
https://github.com/willianjusten/photos-portfolio
null
# Photography Portfolio

> I'll use this site to post mostly street photography and photos that I don't post on Unsplash due to their strict guidelines.

## How to run

Install the dependencies:

```sh
npm install
```

Create an account on [Cloudinary](https://cloudinary.com/), rename `.env.local.example` to `.env.local`, and add the keys you got from Cloudinary. Upload some photos separated by folders and that's it!

## Customize the template

The first card is the one with the photo, name, description, and some social network links. You can change that information in [MainCard](https://github.com/willianjusten/photos/blob/main/components/MainCard.tsx) and also in [SocialLinks](https://github.com/willianjusten/photos/blob/main/components/SocialLinks.tsx). Also, if you want to change the colors, you can replace them in [tailwind.config.js](https://github.com/willianjusten/photos/blob/main/tailwind.config.js#L13-L20).

## Contribute

Feel free to contribute! This project has tons of room to improve, and it's a good starting point for beginners!
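For reference, a filled-in `.env.local` from the setup steps above might look like the sketch below. The variable names and values here are illustrative placeholders; use exactly the names that appear in `.env.local.example` and the credentials from your Cloudinary dashboard:

```
# .env.local (placeholder names and values)
CLOUDINARY_CLOUD_NAME=your-cloud-name
CLOUDINARY_API_KEY=123456789012345
CLOUDINARY_API_SECRET=your-api-secret
CLOUDINARY_FOLDER=photos
```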
XEpert-Bot/Expert-v2.0
https://github.com/XEpert-Bot/Expert-v2.0
Here is v2 of the Expert Discord bot, built with discord.js v14
# Expert V2.0 Is Published Now

**New features:** **chatbot**, **economy with leaderboard**, **some button functions on music**, and **some other commands**

# Made by

![image](https://media.discordapp.net/attachments/1126649485871816845/1126874376130797618/image.png)

**A multifunctional Discord bot with AI image generation**

I will make a dashboard for this bot if you invite it to more than 50 servers!

**Note**

``Before you start cloning this repo, please invite my bot into your Discord server to show me some support; then I'll make a video and release the source code for its dashboard. A multipurpose Discord bot with AI images and a premium system 🌟``

**You can add my Discord bot for a preview ✨**

<a href="https://top.gg/bot/1023810715250860105"> <img src="https://top.gg/api/widget/1023810715250860105.svg"> </a>

**Commands info**

# Fun, logs, automod, anti-nuke, tickets, chatbot, midjourney, applications, memes, anime, music, buttons, welcome system, verification, and more; see them all by inviting Expert into your server

**Clickable commands**

# The no-prefix system is just for testing; if you like it, I will add more no-prefix commands

![image](https://user-images.githubusercontent.com/92865043/232497758-aaf08f8d-0ce6-4715-9a52-0ec43c40e3cb.png)

**Features**

![image](https://media.discordapp.net/attachments/1101154365391245333/1126869145745637488/image.png)
![image](https://media.discordapp.net/attachments/1101154365391245333/1126869484964167780/image.png)
![image](https://media.discordapp.net/attachments/1101154365391245333/1126869569588445224/image.png)
![image](https://media.discordapp.net/attachments/1101154365391245333/1126869649242472549/image.png)
![image](https://media.discordapp.net/attachments/1101154365391245333/1126869905506050108/image.png)
![image](https://media.discordapp.net/attachments/1101154365391245333/1126870014968991864/image.png?width=314&height=676)
![image](https://media.discordapp.net/attachments/1101154365391245333/1126870409959194704/image.png)
![image](https://media.discordapp.net/attachments/1101154379035324417/1126870805347827753/image.png) ![image](https://media.discordapp.net/attachments/1101154365391245333/1126871008721248296/image.png) ![image](https://media.discordapp.net/attachments/1101154365391245333/1126871121124413603/image.png) ![image](https://media.discordapp.net/attachments/1101154365391245333/1126871173104410754/file.jpg) ![image](https://github.com/XEpert-Bot/Expert-v2.0/assets/92865043/8e13c420-0e09-4150-9787-f2bc865d2829) ![image](https://github.com/XEpert-Bot/Expert-v2.0/assets/92865043/2da07c5b-982d-4ff2-9f1f-960da61d7036) And much more # Support server https://discord.gg/dj44zMsnNX Start by installing packages: ``npm install`` Run with: ``npm start`` ```For any kind of bug, or help with a custom bot or commands``` ```First add my bot to your server, then DM me``` # Don't forget to give credit by inviting the Expert bot # Show me your love and I will show you the sources # Have any questions? ``!D R A G O#0001``
luminexord/brc69
https://github.com/luminexord/brc69
null
# BRC69 We propose a new standard for Ordinals Collections that utilize Recursive Inscription to optimize the costs of inscribing on Bitcoin using the Ordinals protocol. Additionally, this standard paves the way for more intriguing on-chain features, such as pre-reveal collection launching and on-chain reveals. This is accomplished by automatically and seamlessly rendering images on the Ordinals explorer, without the need for additional action. --- ## Context As the Ordinals protocol gains traction, more users are inscribing data into Non-Fungible Ordinals collections on Bitcoin. This increased usage has led to a higher demand for Bitcoin block space, subsequently causing a rise in Bitcoin network fees. In order to continue encouraging creators to launch their innovative ideas on the Bitcoin blockchain, we must optimize the current standard for launching Image Ordinals Collections. --- ## Idea We propose a new standard for launching Non-Fungible Ordinals Collections. This standard preserves all on-chain resources while achieving a 90%+ reduction in block space usage, depending on the size of the initial collection and the network fees. The process involves four steps: 1. Inscribing the images of the traits on-chain 2. Inscribing the BRC69 collection deployment JSON 3. Inscribing the BRC69 collection compiler JavaScript 4. Inscribing the BRC69 asset with the mint operation All these processes can be conducted without needing an external indexer, as long as the collection creators release the official list of inscriptions for their collections, as currently required. Moreover, the images will be automatically rendered on all front-end interfaces that have already implemented [Recursive Inscriptions](https://github.com/ordinals/ord/pull/2167), eliminating the need for additional steps. 
--- ## Operations ### Deploy BRC69 Once the images of the traits that comprise the collection are inscribed on-chain, we can inscribe the collection deploy JSON in the Deploy operation. The Deploy operation is a JSON/Text inscription that contains general information about the collection and an array of the inscription IDs of the traits. The deploy inscription serves as the reference and the definitive source for the traits. *Example of a collection deploy json:* ```javascript { "p":"brc69", "op":"deploy", "collection":{ "slug":"ordibots", "name":"OrdiBots", "description":"OrdiBots Testing Collection", "creator":"Martian", "supply":1000 }, "attributes":[ "48f37de96ae78d693a0dd4022f0b09304a5389ebfc157c225399635b5cd1a84bi0", "b236b29525834dba35d61f4c719b8491047bc29f8eff219c7cc5062e6a1c340ai0", "669eb48d997b112729a6485f09a475d9869a5d547919e5ddd3f918f89360ed14i0", "e012b79a7b094920d5c120ea7c6fb44276346442721e6468390bedccdd07fc32i0", "6d2fa6c950a36d1e3be1df914a3458677b602a36c3a97d283ea7fa7123ced645i0", "d30730faf64535bb4b0744a4857eea5db7c9830fb0bb1551dd2fe69c9d5a1a87i0", "fb8366d635c530dd0a1e8e7b42ef7c04984b21489075913e1d480a11b811ba89i0", "a37ae051e2124e762f0a1d34a02fa10c05dd47eadd3520b08e94f84f61eb8093i0", // ... More traits inscription IDs ] } ``` | Key | Required | Description | | ---------- | -------- | ------------------------------------------------------------ | | p | YES | Protocol: Helps other systems identify and process brc69 events | | op | YES | Operation: Type of event (Deploy, Compile, Mint) | | slug | YES | Slug: Identifier of the collection. Not enforced if no indexer implemented | | name | NO | Name: Human readable name of the collection | | supply | NO | Supply: Supply of the collection. Not enforced if no indexer implemented | | attributes | YES | Array of the inscription IDs of the traits that will generate the final assets | ### Compile BRC69 The Compile operation stores the logic to render the final assets in a JavaScript inscription. 
The Compile inscription is a recursive inscription that points back to the Deploy inscription to receive the traits' inscription IDs and ultimately render the asset. The logic of the Compile inscription can be customized for collections that require more specific rendering features. Below we propose a versatile compile logic. ``````javascript /* { "p": "brc69", "op": "compile", "s": "ordibots" } */ // EDIT const collectionJsonUrl = '/content/<deploy inscription id>'; const previewUrl = `/content/<preview inscription id>` // if preview available const imageRendering = 'auto' // or pixelated const renderSize = { width: 500, height: 500 }; // select image render size async function loadImage (url) { return new Promise((resolve, reject) => { const image = document.createElement('img') image.src = url image.crossOrigin = 'anonymous' image.onload = () => { resolve(image) } image.onerror = () => { // Some display fallbacks for when the image fails to load if (!image.src.startsWith('https://')) { image.src = 'https://ordinals.com' + url } else if (image.src.startsWith('https://ordinals.com')) { image.src = 'https://ord-mirror.magiceden.dev' + url } } }) } async function renderImage(imageEl, urls) { const canvas = document.createElement('canvas'); canvas.width = renderSize.width; canvas.height = renderSize.height; const ctx = canvas.getContext("2d"); ctx.imageSmoothingEnabled = false; const images = await Promise.all((urls).map(loadImage)) images.forEach(_ => ctx.drawImage(_, 0, 0, canvas.width, canvas.height)) imageEl.src = canvas.toDataURL("image/png") } async function getAllTraits(traitsUrl, retry = false) { try { const collectionMetadataRes = await fetch(traitsUrl) const collectionMetadata = await collectionMetadataRes.json() return collectionMetadata.attributes.map(_ => `/content/${_}`) } catch (e) { if (!retry) { const timestamp = Math.floor(Date.now() / (60000 * 10)) // 10 minutes const newTraitsUrl = `${traitsUrl}?timestamp=${timestamp}` return getAllTraits(newTraitsUrl, true) } throw e } }
function createInitialImage () { // Manipulate the <body> tag document.body.style.margin = '0px'; document.body.style.padding = '0px'; // Create and set properties of the <img> tag const img = document.createElement('img'); img.id = 'img'; img.style.height = '100%'; img.style.width = '100%'; img.style.objectFit = 'contain'; img.style.imageRendering = imageRendering; return img } async function createInscriptionHtml() { const imageEl = createInitialImage() try { // Get traits const allTraits = await getAllTraits(collectionJsonUrl) // Process traits const selectedTraitIndexes = document.querySelector('script[t]').getAttribute('t').split(','); const traits = selectedTraitIndexes.map(_ => allTraits[+_]) // Render traits await renderImage(imageEl, traits); } catch (e) { console.error(e) // Render previewUrl image if (previewUrl) { imageEl.src = previewUrl } } finally { // Append the <img> tag to the <body> document.body.appendChild(imageEl); } } window.onload = function() { createInscriptionHtml(); } `````` ### Mint BRC69 The Mint operation uses an HTML type inscription that stores the index of the traits used to generate the final asset and points back to the Compile inscription, in a single line. This approach allows any front-end with Recursive Inscription to automatically render the image using on-chain inscribed data. ``````html <script t="10,1,2,3" src="/content/<compile inscription id>" m='{"p":"brc69", "op":"mint", "s":"ordibots", "id":"0"}' ></script> `````` | Key | Required | Description | | ---- | -------- | ------------------------------------------------------------ | | t | YES | Traits: index of the traits used to generate the asset found in the "attributes" array of the deploy inscription | | src | YES | Source: Recursive inscription pointer to the Compile inscription | | m | NO | Metadata: metadata used to track BRC69 operations. 
Optional: omit it if you want your mint operations to be stealth | --- ## Impact Implementing the BRC69 standard will enhance the efficiency of Bitcoin block space utilization. As the unique trait images are inscribed only once in the Deploy transaction, the assets are composed of an HTML file referencing these traits in just a single line of about 150 bytes. Any front-end with a Recursive Inscription implementation can render the images without any additional steps, using the on-chain Deploy inscription. ---
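As a rough illustration of how thin a BRC69 mint is, the single-line inscription described in the Mint section can be assembled with a few lines of script. This is a sketch only: `buildMint` is a hypothetical helper, and the compile inscription ID below is a placeholder, not a real inscription.

```javascript
// Assemble a BRC69 mint inscription as described in the Mint section.
// buildMint is a hypothetical helper; the compile inscription ID is a placeholder.
function buildMint({ traits, compileId, slug, id }) {
  // The metadata ("m") attribute identifies the protocol, operation, slug, and token id.
  const meta = JSON.stringify({ p: "brc69", op: "mint", s: slug, id });
  // The "t" attribute lists indexes into the deploy inscription's "attributes" array.
  return `<script t="${traits.join(",")}" src="/content/${compileId}" m='${meta}'></script>`;
}

const mint = buildMint({
  traits: [10, 1, 2, 3],
  compileId: "<compile inscription id>",
  slug: "ordibots",
  id: "0",
});
console.log(mint);
console.log(`size: ${mint.length} bytes`); // on the order of the ~150 bytes cited above
```

The whole asset is just this one tag; everything else (traits, rendering logic) is fetched recursively on-chain.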
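The block-space savings claimed in the Impact section can be sanity-checked with some back-of-the-envelope arithmetic. All sizes below are assumptions for illustration, not real chain data:

```javascript
// Back-of-the-envelope comparison of classic full-image inscriptions vs BRC69.
// Every size here is an assumed figure for illustration only.
const supply = 1000;           // assets in the collection
const avgImageBytes = 25_000;  // assumed size of one fully inscribed image
const traitSetBytes = 200_000; // assumed one-time cost of inscribing all trait images
const mintBytes = 150;         // single-line mint inscription (per the spec)

const classic = supply * avgImageBytes;               // every image inscribed in full
const brc69 = traitSetBytes + supply * mintBytes;     // traits once + thin mints
const saved = 1 - brc69 / classic;

console.log(`classic: ${classic} B, brc69: ${brc69} B`);
console.log(`saved: ${(saved * 100).toFixed(1)}%`);
```

With these assumed numbers the savings land above the 90% figure stated in the Idea section; the exact percentage depends on collection size and trait image sizes.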
groaxd/craftrisec-opensource
https://github.com/groaxd/craftrisec-opensource
Playground
# CraftRise Network Source Code [Launcher & Client & Loader (Not Finished)] Source will be released when I finish.
chapyter/chapyter
https://github.com/chapyter/chapyter
Chapyter: ChatGPT Code Interpreter in Jupyter Notebooks
<div align="center"> <img src="https://github.com/chapyter/chapyter/raw/main/.github/chapyter-logo.png" alt="Chapyter Logo" width="35%"> </div> **[Please check our latest blogpost on the Chapyter release.](https://www.szj.io/posts/chapyter)** ## What is Chapyter Chapyter is a JupyterLab extension that seamlessly connects GPT-4 to your coding environment. It features a code interpreter that can translate your natural language description into Python code and automatically execute it. By enabling "natural language programming" in your most familiar IDE, Chapyter can boost your productivity and empower you to explore many new ideas that you would not have tried otherwise. | Functionality | Example | | --- | --- | | **Code generation from natural language and automatic execution** <br> Simply add the magic command `%%chat` at the beginning of a cell containing a natural language description of the task, and the code is generated and the results are shown in a few seconds. | ![Code generation from natural language and automatic execution](https://github.com/chapyter/chapyter/raw/main/.github/example-1.gif) | | **Using coding history and execution output for code generation** <br> By adding the `--history` or `-h` flag, Chapyter can use the previous execution history and outputs to generate the appropriate visualization for the loaded IRIS dataset. | ![Using coding history and execution output for code generation](https://github.com/chapyter/chapyter/raw/main/.github/example-2.gif) | | **In-situ debugging and code editing** <br> The generated code might not be perfect and could contain bugs or errors. Since Chapyter is fully integrated into Jupyter Notebook, you can easily inspect the code and fix any errors or bugs (e.g., installing missing dependencies in this case) without leaving the IDE. 
| ![In-situ debugging and code editing](https://github.com/chapyter/chapyter/raw/main/.github/example-3.gif) | | **Transparency on the prompts and AI configuration, with support for customization** | We release all the prompts used in our library, and we are working on easy customization of the prompts and settings. See [chapyter/programs.py](https://github.com/chapyter/chapyter/blob/main/chapyter/programs.py). | | **Privacy-first when using the latest powerful AI** | Since we are using the OpenAI API, all the data sent to OpenAI will not be saved for training (see [OpenAI API Data Usage Policies](https://openai.com/policies/api-data-usage-policies)). As a comparison, whenever you are using Copilot or ChatGPT, your data will be somewhat cached and can be used for their training and analysis. | ## Quick Start 1. Installation ```bash pip install chapyter # Automatically installs the extension to jupyterlab pip uninstall chapyter # Uninstalls the extension from jupyterlab ``` Note: this will upgrade JupyterLab to ≥ 4.0, which might break your environment. 2. Usage: see [examples/01-quick-start.ipynb](examples/01-quick-start.ipynb) for a starter notebook. 1. Set the proper `OPENAI_API_KEY` and `OPENAI_ORGANIZATION` environment variables 2. Use the magic command `%%chat` in a code cell: ``` %%chat -m gpt-4-0613 List all the files in the folder ``` It will call gpt-4-0613 to generate the Python code for listing all the files in the folder, and execute the generated code automatically. In this case, the plugin serves as an interpreter that translates natural language to Python code and executes it. 3. 
Examples: - [examples/01-quick-start.ipynb](examples/01-quick-start.ipynb) illustrates the basic functions of `chapyter`, including how to use the magic command `%%chat` - [examples/02-configure-chapyter.ipynb](examples/02-configure-chapyter.ipynb) shows how to customize `chapyter`: - Use different default models (e.g., gpt-3.5-turbo instead of gpt-4) - Use different types of API (now we support use the default OpenAI API or the Azure OpenAI API) ## Development Notes There are two major components in Chapyter: implementing the ipython magic command that handles the prompting and calling GPT-X models, and the frontend that listens to Chapyter cell execution and automatically executes the newly generated cell and updates the cell styles. The chart below illustrates the orchestration of the frontend and ipython kernel after a Chapyter cell is executed. ![implementation](https://www.szj.io/posts/chapyter/implementation.png) ### Details 1. NodeJS is needed to build the extension package. 2. The `jlpm` command is JupyterLab's pinned version of [yarn](https://yarnpkg.com/) that is installed with JupyterLab. You may use `yarn` or `npm` in lieu of `jlpm` below. ```bash # Clone the repo to your local environment # Change directory to the chapyter directory # Install package in development mode pip install -e "." # Link your development version of the extension with JupyterLab jupyter labextension develop . --overwrite # Rebuild extension Typescript source after making changes jlpm build ``` 3. You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension. ```bash # Watch the source directory in one terminal, automatically rebuilding when needed jlpm watch # Run JupyterLab in another terminal jupyter lab ``` With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. 
Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt). 4. By default, the `jlpm build` command generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command: ```bash jupyter lab build --minimize=False ``` 5. Packaging and release: please refer to [RELEASE](RELEASE.md).
denoland/deno-astro-template
https://github.com/denoland/deno-astro-template
Template repo for an Astro site, preconfigured to run with Deno and Deno Deploy
# 🚀 Astro + 🦕 Deno Template This sample project contains a recommended starting point for your Deno project using Astro. ## 🛠️ Using this template This template requires the Deno runtime to be installed on your computer. If you haven't already, please [install Deno first](https://deno.land/manual/getting_started/installation). Next, initialize a new Astro project using this template with your favorite [npm client](https://docs.npmjs.com/cli/v9/configuring-npm/install), as described in the [Astro docs](https://docs.astro.build/en/getting-started/). ``` npm create astro@latest -- --template denoland/deno-astro-template ``` GitHub will also let you [use this repository as a template](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-repository-from-a-template#creating-a-repository-from-a-template) - use the green "Use this template" button near the top of the page to create your own copy of this application. #### Why use Astro with Deno? This template is preconfigured to use Deno rather than Node.js as the JavaScript runtime for this [server-side rendering Astro application](https://docs.astro.build/en/guides/server-side-rendering/). This provides a few benefits: - The ability to use built-in `Deno` namespace functions, notably the new built-in [Deno KV](https://deno.com/kv) database. - The ability to run in production on [Deno Deploy](https://deno.com/deploy), a high-performance, globally distributed platform for serverless JavaScript applications. - [Compatibility with the majority of packages on npm](https://deno.land/manual@v1.17.2/npm_nodejs/compatibility_mode), both at the runtime level and via the Astro build process (thanks to Vite and esbuild). #### Local development To use this template for local development, you will use npm scripts in `package.json` in the same way as described in the Astro documentation. However, these scripts are configured to execute the same commands using the Deno runtime instead of Node.js. 
- Start the local development server with `npm start` or `npm run dev` - Build a production-ready SSR site with `npm run build` - After building, you can preview your SSR site locally with `npm run preview` The template project lightly modifies the base Astro project with some basic CRUD operations to create additional resource links. ## 🔌 Managing dependencies, integrations, and plugins We recommend **using npm to manage dependencies for this project**. Astro was designed to be used with npm, and enhanced with plugins and other tools also hosted on npm. Deno's Node/npm compatibility layer should be able to handle this just fine. If you run into any compatibility issues, [please let us know](https://github.com/denoland/deno/issues). ## 🗝️ Using Deno KV By default, the template project stores data in memory in a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map). If you'd like to instead use [Deno KV](https://deno.com/kv) to store your data, change the `import` statements in these files: - `src/pages/index.astro` - `src/pages/api/resources.json.ts` to use `resources_kv.ts` instead of `resources.ts`. This will work for everyone locally on recent versions of Deno, but will only work on Deno Deploy if you have applied for and received access to the beta for KV. ## 🦕 🚀 Running on Deno Deploy When you're ready to put this application on the Internet, you can run it on [Deno Deploy](https://www.deno.com/deploy). You have two options for doing so. #### Deploy via GitHub Actions - Create a new project on the [Deno Deploy Dashboard](https://dash.deno.com/projects) - Choose to "Deploy an existing GitHub repository" - Choose your GitHub user and the repository where you are storing this application - You will be prompted to add a `.github/workflows/deploy.yml` file that will automatically build and deploy your application on every push to the `main` branch. 
The `yml` file's final build step should look like this: ```yml - name: Upload to Deno Deploy uses: denoland/deployctl@v1 with: # Replace with your Deno Deploy project name project: deno-astro-template entrypoint: server/entry.mjs root: dist ``` - The `project` property should be your new Deno Deploy project's name - The `entrypoint` for our Astro SSR application is `server/entry.mjs` - The `root` for our app is the `dist` folder Once this file is committed to your repo, every push to `main` will result in a new version of your application being pushed to production. #### Deploy from your desktop using `deployctl` - Install the command line tools for `deployctl` and configure an API token environment variable [as described here](https://deno.com/deploy/docs/deployctl). - Create a new blank project at [dash.deno.com](https://dash.deno.com/projects) and note the generated project name - Build the Astro site locally with `npm run build` - Deploy the newly generated site in the `dist` folder with this command: ``` deployctl deploy --project=YOUR_PROJECT_NAME --no-static --include=./dist ./dist/server/entry.mjs ``` ## 🤔 Caveats and limitations The Astro build process still runs your Deno code through Vite and esbuild to generate your SSR site. This means that not all JavaScript and TypeScript language features will work exactly the same way - most notably, [HTTPS module imports](https://deno.land/manual@v1.15.2/examples/import_export#remote-import). If you encounter situations where you require a Deno module that is only available via HTTPS import, or discover that there's a Deno runtime language feature that you can't live without, I would recommend writing Deno-specific code in a separate folder (say `src-deno`) and then using [dnt](https://github.com/denoland/dnt) to transpile that code into the main `src` folder, where you can import it in your Astro code. In reality, this should not be necessary all that often. 
This template also uses a patched version of Astro's Deno SSR adapter in `astro.config.js` - this will eventually be removed once [this pull request is merged](https://github.com/withastro/astro/pull/7687) and `@astrojs/deno` is suitable for use! ## 👩‍⚖️ License MIT
lubien/single-file-phoenix-static
https://github.com/lubien/single-file-phoenix-static
null
# StaticPhxExample Generate a static site with `mix generate_static` All you care about is here: [lib/mix/tasks/generate_static.ex](https://github.com/lubien/single-file-phoenix-static/blob/main/lib/mix/tasks/generate_static.ex) This is an upgrade of [Static Site Generator using Elixir and Phoenix](https://amattn.com/p/static_site_generator_using_elixir_and_phoenix.html)
ugozapad/FarCry
https://github.com/ugozapad/FarCry
null
# Far Cry Source code of Far Cry 1, which was leaked on 24 June 2023. https://archive.org/details/far-cry-1.34-complete Fixed for building in VS 2022 (Win32 only) ## Changes * Ported code to Visual Studio 2022; currently builds only the Win32 Debug configuration * Replaced DXSDK with the June 2010 version Code changes: * Most stdafx.h files - Added _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS * CryCommon - IScriptSystem.h: Moved some methods out of the WIN64 macro * CrySystem - MTSafeAllocator.h: replaced old std methods * CrySystem - HTTPDownloader.cpp: commented out lines 309-310 * CryAnimation - disabled UNIQUE_VERT_BUFF_PER_INSTANCE because of weird buffer destruction * CryAnimation - CryModel.cpp: commented out line 79 because of weird buffer destruction * CryNetwork - Disabled Ubisoft network code * CryGame - ScriptObjectGame.cpp: commented out save loading at line 2718 * XRenderD3D9 - Replaced DxErr9 with DxErr from the latest DXSDK * XRenderD3D9 - removed assert on invalid sector vertex buffer ### TODO: * Replace nvDXT with nvtt * Remove nvidia cg * Fix bugs ... ### TODO FUTURE: * Port engine to SDL * Port engine to Linux * Rewrite renderer to bgfx/Diligent Engine/NVRHI
rvcas/room
https://github.com/rvcas/room
A Zellij plugin for quickly searching and switching tabs 🖤
# 🖤 room A [Zellij](https://zellij.dev) plugin for quickly searching and switching between tabs. ![usage](https://github.com/rvcas/room/raw/main/img/usage.gif) ## Usage - `Tab` to cycle through the tab list - `Up` and `Down` to cycle through the tab list - `Enter` to switch to the selected tab - Start typing to filter the tab list - `Esc` or `Ctrl + c` to exit ## Why? I rename my tabs, so once I have a lot of them I have to start counting and then press `Ctrl + t` then `<tab num>`. So I wanted something that lets me type to filter the tab list and then press `Enter` to jump to the selected tab. ## Installation You'll need [rust](https://rustup.rs/) installed. - `git clone git@github.com:rvcas/room.git` - `cd room` - `cargo build --release` - `mkdir -p ~/.config/zellij/plugins/` - `mv target/wasm32-wasi/release/room.wasm ~/.config/zellij/plugins/` ## Keybinding Add the following to your [zellij config](https://zellij.dev/documentation/configuration.html) somewhere inside the [keybinds](https://zellij.dev/documentation/keybindings.html) section: ```kdl shared_except "locked" { bind "Ctrl y" { LaunchOrFocusPlugin "file:~/.config/zellij/plugins/room.wasm" { floating true } } } ``` > You likely already have a `shared_except "locked"` section in your configs. Feel free to add `bind` there. ## Contributing If you find any issues or want to suggest ideas, please [open an issue](https://github.com/rvcas/room/issues/new). ### Development Make sure you have [rust](https://rustup.rs/) installed, then run: ```sh zellij action new-tab --layout ./dev.kdl ```
spencerc99/playhtml
https://github.com/spencerc99/playhtml
interactive, collaborative html elements with a single data attribute
# playhtml 🛝🌐 _interactive, collaborative html elements with a single data attribute_ playhtml is a library for magically creating collaborative interactive HTML elements that persist their state across sessions. For example, you can create a movable piece of HTML "furniture" by adding the `can-move` attribute: ```html <div id="couch" can-move style="font-size: 80px">🛋</div> ``` This is designed as an open library for anyone to add on new interactions and capabilities. To get started, see the [New Capabilities](#new-capabilities). https://github.com/spencerc99/playhtml/assets/14796580/00e84e15-2c1c-4b4b-8e15-2af22f39db7a If you enjoy this, please consider [sponsoring the project or sending a small donation](https://github.com/sponsors/spencerc99). This helps ensure that the library is maintained and improved over time and funds the hosting costs for the syncing and persistence services. ## Usage To use this library, you can import the library and the associated styles from a CDN (in this case we will use [unpkg.com](https://unpkg.com)). Please make sure to do this after all the HTML elements on the page and ensure that the HTML elements you are "magic-ifying" have an `id` set. ```html <body> <!-- ... 
html elements here --> <!-- valid example --> <img src="https://media2.giphy.com/media/lL7A3Li0YtFHq/giphy.gif?cid=ecf05e47ah89o71gzz7ke7inrgb1ai1xcbrjnqdf7o890118&ep=v1_stickers_search&rid=giphy.gif" can-move id="openSign" /> <!-- INVALID EXAMPLE <img src="https://media2.giphy.com/media/lL7A3Li0YtFHq/giphy.gif?cid=ecf05e47ah89o71gzz7ke7inrgb1ai1xcbrjnqdf7o890118&ep=v1_stickers_search&rid=giphy.gif" can-move /> --> <!-- import the script --> <script type="module" src="https://unpkg.com/playhtml"></script> <link rel="stylesheet" href="https://unpkg.com/playhtml/dist/style.css" /> </body> ``` If you have dynamic elements that are hydrated after the initial load, you can call `playhtml.setupElements()` whenever needed to have the library look for elements that haven't been marked. ```html <script type="module"> import {setupElements} from "https://unpkg.com/playhtml"; setupElements(); </script> ``` To create your own custom element, refer to the [can-play](#can-play) section. ## Examples Check out the [full gallery of community examples](https://gallery.playhtml.fun/) for more inspiration on what you can do! And [submit your own](https://coda.io/form/playhtml-example_dnUR7xNE7wz) once you've made one. ## Capabilities A full list can be found in `types.ts` under `TagType`. ## Custom Capabilities ### `can-play` `can-play` is the fully customizable experience of `playhtml`. You can create anything using simple HTML, CSS, and JavaScript by simply adding the functionality needed to the element itself. The library will handle magically syncing and persisting any data that you store. Here's the simple example included on the playhtml website: https://github.com/spencerc99/playhtml/assets/14796580/fae669b1-b3e2-404e-bd7a-3d36b81c572d ```html <img can-play id="customCandle" src="/candle-gif.gif" /> <!-- IMPORTANT: this data must be set _before_ importing the playhtml library. 
--> <script> // Every HTML element with an ID has a variable corresponding to that ID name in the global context (see https://stackoverflow.com/questions/3434278/do-dom-tree-elements-with-ids-become-global-properties/3434388#3434388) // This is fun, magical, and readable for small projects, but if you have a larger, complex project, I would advise using `document.getElementById` or `document.querySelector`... customCandle.defaultData = true; customCandle.onClick = (_e, { data, setData }) => { setData(!data); }; // The above statement could also be done using the `additionalSetup` // which can initialize several different event listeners and custom logic. // customCandle.additionalSetup = ({ getData, setData }) => { // customCandle.addEventListener("click", (_e) => { // const data = getData(); // setData(!data); // }); // }; customCandle.updateElement = ({ data }) => { customCandle.src = data ? "/candle-gif.gif" : "/candle-off.png"; }; // Alternatively // let candle = document.getElementById('customCandle'); // candle.defaultData = true; // candle.onClick = (_e, { data, setData }) => { // setData(!data); // }; // candle.updateElement = ({ data }) => { // candle.src = data ? "/candle-gif.gif" : "/candle-off.png"; // }; // candle.resetShortcut = "shiftKey"; </script> <!-- Import playhtml --> <script type="module" src="https://unpkg.com/playhtml"></script> <link rel="stylesheet" href="https://unpkg.com/playhtml/dist/style.css" /> ``` The only required properties are `defaultData`, `updateElement` and some kind of setup to trigger those functions (in this case, `onClick`, but you can add custom event listeners and logic using the `additionalSetup` property). See more examples based on the definitions for the included capabilities in `elements.ts`. ## Plug-and-play Capabilities These capabilities are common ones that have been designed and created by the community. 
You should expect that they are relatively well-tested, and they simply build on top of the same API and constructs that `can-play` uses. We welcome any contributions if you have custom `can-play` elements that you would like to add to the library! Please make a PR. ### `can-move` https://github.com/spencerc99/playhtml/assets/14796580/9c2b9bf6-142c-41e2-8c8f-93a3b121a73e Creates a movable object using 2D `translate` on the element. Dragging the element around will move it **troubleshooting** - This currently doesn't work on `inline` display elements. - This currently doesn't work on touch screens. ### `can-spin` Creates a rotatable object using a `rotate` `transform` on the element. Dragging the element to the left or right will rotate it counter-clockwise and clockwise respectively. ### `can-toggle` <blockquote class="twitter-tweet"><p lang="en" dir="ltr">today i installed some lamps on the demos-and-chill website<br><br>then <a href="https://twitter.com/_jzhao?ref_src=twsrc%5Etfw">@_jzhao</a> and i fought on whether to keep them on or not. <a href="https://t.co/sCspTwmRpS">pic.twitter.com/sCspTwmRpS</a></p>&mdash; spencer chang ☀️ (spencerchang.me @ bsky) (@spencerc99) <a href="https://twitter.com/spencerc99/status/1681048824884895744?ref_src=twsrc%5Etfw">July 17, 2023</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> Creates an object that can be switched on and off. Clicking the element will toggle the `clicked` class on the element. ### `can-grow` Creates an object that can be resized using a `scale` `transform`. Clicking the object will grow it, clicking with <kbd>ALT</kbd> will shrink it. Currently, the max size is 2x the original size and the min size is 1/2 the original size. ### `can-post` ![image](https://github.com/spencerc99/playhtml/assets/14796580/6de3fcab-2372-4080-b46f-cd768f1ed44e) Creates a communal forum from a `form` element. 
The form will sync any new submissions, including all the `input` elements in the form, using their `name` property as the key and their value as the value. New messages are currently prepended to the element with the `guestbookMessages` ID.

TODO: make this generic and take user input

---

To add new ones, please find contributor guidelines in [New Capabilities](#new-capabilities).

## Contributing

### New Capabilities

`playhtml` is designed to be a collective library of magical capabilities that anyone can attach to arbitrary HTML elements. If you have an idea for a new capability, please first ensure that there is not a duplicate in the current library (see `TagType` in `types.ts`). Propose the capability you would like to add by opening an issue with the `new-capability` label. To contribute your capability, see sample PR (TODO: LINK).

---

Outside of contributing new capabilities, feel free to submit any issues or PRs for bugs or improvements to the core of the library.

## Support & Maintenance

Thank you for considering reading this little README and browsing this project! I'd love to see you share the library and what you've made with it with me and with your friends. And if you enjoy using this, please consider [sponsoring the project or sending a small donation](https://github.com/sponsors/spencerc99). This helps ensure that the library is maintained and improved over time and funds the hosting costs for the syncing and persistence services.
PB2204/Covid-19
https://github.com/PB2204/Covid-19
This Is A Web Scraping Project With Covid-19 Data From 2 Very Popular & Authentic Websites
# Covid-19

This is a web scraping project with Covid-19 data from 2 very popular & authentic websites.
melody413/HIPPA
https://github.com/melody413/HIPPA
An efficient, general-purpose mathematics components library in the JavaScript programming language
## Local Setup

Dependencies:

* [Node JS](http://nodejs.org/)
* Bower (`npm install bower`) (`npm` comes with node)

Setup:

1. Clone this repo
2. Edit `frontend/assets/config.js`, adding your Catalyze v2 and [medl.io](http://medl.io) API keys.
3. Install Bower and node dependencies via command line:

   ```
   $ bower install
   $ npm install
   ```

4. Start the node server:

   ```
   $ node backend/app.js
   ```

5. Navigate your browser to [http://localhost:8000/](http://localhost:8000/).

(The node dependency is needed because some browsers have security policies that don't allow `file://` URLs to access cross-origin resources; if your browser doesn't, or you change that policy in your browser, you can stop after `bower install`.)
Sobrecodigo/proyecto-beta
https://github.com/Sobrecodigo/proyecto-beta
Proyecto beta
This is a [Next.js](https://nextjs.org/) project bootstrapped with [`create-next-app`](https://github.com/vercel/next.js/tree/canary/packages/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js/) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.
KoelhoSec/pimpmyP4wnP1
https://github.com/KoelhoSec/pimpmyP4wnP1
This is a simple bash script that fixes errors with apt update, upgrades Metasploit, and runs some upgrades to improve system usability without breaking it, by using the Pimpmykali script.
# pimpmyP4wnP1

- P4wnP1 A.L.O.A. by MaMe82 is a framework which turns a Raspberry Pi Zero W into a flexible, low-cost platform for pentesting, red teaming and physical engagements ... or into "A Little Offensive Appliance".
- Since the original repository is no longer maintained, not everything works out of the box with the P4wnP1 A.L.O.A. framework (running a regular `apt upgrade` breaks the P4wnP1 A.L.O.A. install, making it unusable). This is a simple bash script that fixes errors with apt update, upgrades Metasploit and runs some upgrades to improve system usability without breaking it, by using the Pimpmykali script.

## Hardware Build

[![pimpmyp4wnp1.jpg](https://i.postimg.cc/90w6ryxN/pimpmyp4wnp1.jpg)](https://postimg.cc/H8HZN81Q)

- Raspberry Pi Zero W
- SD card (tested with a 32GB SDHC)
- USB dongle add-on board (optional)
- Raspberry Pi Zero W case kit with heatsink and accessories (optional but recommended)

## Installation

- Download the GitHub version of P4wnP1 A.L.O.A. (v0.1.1-beta) from https://github.com/RoganDawes/P4wnP1_aloa/releases. If you want a great YouTube tutorial, there is one from Network Chuck: https://www.youtube.com/watch?v=km81ph7pZz8&t=168
- The easiest way to access a fresh P4wnP1 A.L.O.A. installation is to plug the Pi into your machine's USB to use the USB-eth interface, or RNDIS if using Windows (the URL for the web interface is http://172.16.0.1:8000).
- Once in the web interface, go to WiFi settings and choose client mode, enter your local WiFi settings (SSID and password) and click deploy. This will connect the Pi to your local WiFi network and you will have internet access.
- SSH to root@172.16.0.1 (default password 'toor').
- Check if WiFi is connected successfully with 'iwconfig'.
- Run the pimpmyP4wnP1 script (choose option 0 in the pimpmykali script when prompted):

  ```
  git clone https://github.com/KoelhoSec/pimpmyP4wnP1
  cd pimpmyP4wnP1
  chmod +x pimpmyP4wnP1.sh
  ./pimpmyP4wnP1.sh
  ```

Note: The script can take a while to finish since the Pi has limited CPU and RAM.

## Payload Repositories

Repositories that may give ideas for payloads. Some might not work, as they are not recent (not maintained).

- https://github.com/akhil1136/P4wnP1-ALOA-payloads
- https://github.com/beboxos/P4wnP1-ALOA-HID-payloads
- https://github.com/NightRang3r/P4wnP1-A.L.O.A.-Payloads
- https://github.com/federicodotta/LetMeHID

## Extra Resources

- https://levelup.gitconnected.com/hacking-with-the-raspberry-pi-zero-w-8520a4d72b2e
- https://jamesachambers.com/kali-linux-p4wnp1-aloa-guide-setup-usage-examples/
- https://medium.com/azkrath/p4wnp1-a-l-o-a-an-advanced-hid-attack-device-d906ae5bf48c
- https://wickedpicker.com/deploying-ransomware-with-a-weaponized-usb-device/

## Credits

- Dewalt-arch pimpmykali script:
  - https://github.com/Dewalt-arch/pimpmykali
bdsazzad3/ForFollower
https://github.com/bdsazzad3/ForFollower
Must be star
# ForFollower

Must be star
typelevel/cats-effect-shell
https://github.com/typelevel/cats-effect-shell
Command line debugging console for Cats Effect
# cats-effect-shell

Command line debugging console for Cats Effect
modkavartini/catppuccin
https://github.com/modkavartini/catppuccin
for rainmeter
<h3 align="center">
  <img src="https://raw.githubusercontent.com/catppuccin/catppuccin/main/assets/logos/exports/1544x1544_circle.png" width="100" alt="Logo"/><br/>
  <img src="https://raw.githubusercontent.com/catppuccin/catppuccin/main/assets/misc/transparent.png" height="30" width="0px"/>
  catppuccin for rainmeter
  <img src="https://raw.githubusercontent.com/catppuccin/catppuccin/main/assets/misc/transparent.png" height="30" width="0px"/>
</h3>

<p align="center">
  <img src="https://raw.githubusercontent.com/catppuccin/catppuccin/main/assets/palette/macchiato.png" alt="catppuccin palette" width="400" />
</p>

<h3 align="center">
  a set of skins made using the catppuccin color scheme
</h3>

## preview

![1](https://i.imgur.com/1tFK1eV.png)
![2](https://i.imgur.com/urNCOq8.png)
![3](https://i.imgur.com/A1yHRf6.png)
![4](https://i.imgur.com/jh5HAsk.png)
![5](https://i.imgur.com/QWwqPth.png)
![6](https://i.imgur.com/XDKrLqD.png)
![7](https://i.imgur.com/jPuo8ZW.png)
![8](https://i.imgur.com/bKf6PQf.png)
![full](https://github.com/modkavartini/catppuccin/assets/81793953/3dfcb896-7795-4de3-bc7c-4a3ee0257d7a)

### installation

#### using [MonD](https://github.com/meters-on-demand/cli)

```ps1
mond install modkavartini/catppuccin
```

#### from releases

* download and run the latest `.rmskin` from [releases](https://github.com/modkavartini/catppuccin/releases)

### also, check out the [wiki](https://github.com/modkavartini/catppuccin/wiki)!
### to-do:

- [ ] scale variable
- [ ] finish wiki

### credits

[the catppuccin theme](https://github.com/catppuccin/catppuccin)

#### plugins

* [appbar](https://github.com/deathcrafter/PluginAppBar) by [deathcrafter](https://github.com/deathcrafter)
* [backlight](https://forum.rainmeter.net/viewtopic.php?t=19221) by [cfixd](https://github.com/cfixd)
* [configactive](https://forum.rainmeter.net/viewtopic.php?t=28720) by [jsmorley](https://github.com/jsmorley)
* [focus](https://forum.rainmeter.net/viewtopic.php?t=37989) by [deathcrafter](https://github.com/deathcrafter)
* [getactivetitle](https://forum.rainmeter.net/viewtopic.php?t=33146) by [jsmorley](https://github.com/jsmorley)
* [isfullscreen](https://forum.rainmeter.net/viewtopic.php?t=28305) by [jsmorley](https://github.com/jsmorley)
* [powershellrm](https://forum.rainmeter.net/viewtopic.php?t=29095) by [khanhas](https://github.com/khanhas)
* [programoptions](https://forum.rainmeter.net/viewtopic.php?t=22868) by [TheAzack9](https://github.com/TheAzack9)
* [webnowplaying](https://github.com/keifufu/WebNowPlaying-Redux) by [keifufu](https://github.com/keifufu) & [tjhrulz](https://github.com/tjhrulz)
navaei/TelegramGame2048
https://github.com/navaei/TelegramGame2048
Telegram Game 2048
# Telegram Game (2048)

I adapted the famous game 2048 for Telegram. By hosting this project and building a bot in Telegram using BotFather, you can play this game as two players or in a group with your friends in Telegram and have fun.

Learn more about [Telegram Games](https://core.telegram.org/bots/games)

The main part of this game (2048) is the result of [Gabriele](https://github.com/gabrielecirulli)'s efforts, whose description and repository address I have included below.

# 2048

A small clone of [1024](https://play.google.com/store/apps/details?id=com.veewo.a1024), based on [Saming's 2048](http://saming.fr/p/2048/) (also a clone). Made just for fun. [Play it here!](http://gabrielecirulli.github.io/2048/)

The official app can also be found on the [Play Store](https://play.google.com/store/apps/details?id=com.gabrielecirulli.app2048) and [App Store!](https://itunes.apple.com/us/app/2048-by-gabriele-cirulli/id868076805)

### Screenshot

<p align="center">
  <img src="https://cloud.githubusercontent.com/assets/1175750/8614312/280e5dc2-26f1-11e5-9f1f-5891c3ca8b26.png" alt="Screenshot"/>
</p>

# Persian

Although Iranian users are very numerous and active on Telegram, unfortunately the number of games Iranians have developed for this platform is not that large. Even though client-side programming and JavaScript are not my specialty, I tried to adapt an ordinary game written in HTML for Telegram in the simplest possible way. I hope other Iranian developers will also join this effort so that we can offer more games to Iranian Telegram users.
chrishart0/open-ai-aws-lambda-starter
https://github.com/chrishart0/open-ai-aws-lambda-starter
An example starter repo using NextJS + AWS Lambda/API Gateway to build a web app with the OpenAI API
# AWS Serverless Generative AI Quick Start Guide

In this guide we provide a relatively simple example of how to host a generative AI application on AWS using the OpenAI API. I pulled most of this code from my other side project, [MyChefAI.com](https://MyChefAI.com), which is an AI recipe writer.

There are two main pieces to this repo: a frontend and a backend. The frontend uses NextJS; the backend uses API Gateway and Lambda deployed by AWS SAM. The Lambda utilizes the OpenAI API.

NOTE: This repo is a work in progress. Right now it has only been tested running on the local machine; it hasn't been tested deployed yet.

Features:

* OpenAI API key stored in AWS Secrets Manager
* Chat history maintained in the UI
* UI renders markdown for a better-looking user interface
* Chat prompt instructed to return the travel itinerary in markdown

## Overview of the demo application

The application is an AI travel agent who will attempt to write a trip itinerary for you. Once the LLM has gotten enough info, it will attempt to write the itinerary in markdown. The UI renders this markdown in the chat window into a nice-looking format. Here is how it looks:

![Demo image 1](docs/demo-image-1.png)
![Demo image 2](docs/demo-image-2.png)

# Backend - API

## Setup Steps

### Prereqs

1. Ensure you have the AWS CLI configured with a profile
2. Install AWS SAM
3. Install Docker
4. Get an OpenAI API key

### Create the OpenAI API Key Secret

Run this bash command, modified with your secret:

```
aws secretsmanager create-secret --name /serverlessGenAiExample/openaiApiKey \
    --description "API Key for Open AI API" \
    --secret-string 'sk-********'
```

## Test locally

Invoke the AWS SAM API locally with an example event:

```
cd backend
sam build && sam local invoke ChatFunction --event events/event.json
```

Start up the API with SAM to mock API Gateway and Lambda locally:

```
# NOTE: Use port 4000 because the local UI runs on port 3000
sam build && sam local start-api --port 4000
```

# Frontend - UI

The frontend uses NextJS.

*NOTE:* I have not yet written any deployment mechanism for the frontend; for now it only works on the local machine.

### Prereqs

Ensure you have nodejs and npm installed.

## Setup Steps

Leave your terminal with the backend up and running, and open a new terminal in the root of the repo to run the frontend.

Install the needed deps for the frontend to run:

```
cd frontend
npm install
```

## Run the frontend

```
npm run dev
```

You can now open the frontend at <http://localhost:3000>

# Now time to change up the prompt!

The initial prompt is maintained in the ChatBox component of the frontend, [frontend/components/ChatBox.js](frontend/components/ChatBox.js), in the `initialMessages` variable. Change up the prompt however you wish. Maybe tell it to speak with a different accent, or tell the LLM it is a travel agent for Vienna, Virginia instead of Vienna, Austria. There is no need to reload or rebuild. Simply make your alteration to the prompt, save the file, and the localhost:3000 page will refresh, ready for you to start a new chat.
# Handling In-Context Data (WIP)

Let's give the model access to data. Quote from OpenAI:

> **Why search is better than fine-tuning**
>
> GPT can learn knowledge in two ways:
> 1. Via model weights (i.e., fine-tune the model on a training set)
> 2. Via model inputs (i.e., insert the knowledge into an input message)
>
> Although fine-tuning can feel like the more natural option—training on data is how GPT learned all of its other knowledge, after all—we generally do not recommend it as a way to teach the model knowledge. Fine-tuning is better suited to teaching specialized tasks or styles, and is less reliable for factual recall.

Source: https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb

## Need some help or have an idea for an improvement?

Please feel free to ask questions or make PRs.
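To make the quote above concrete, here is a minimal, hedged sketch of the retrieval half of "insert the knowledge into an input message": rank documents by cosine similarity between a query embedding and document embeddings. The 3-d vectors below are made up for illustration; a real app would get them from an embeddings API and prepend the top documents to the chat prompt.

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # indices, best first

# Toy 3-d "embeddings" standing in for API-generated ones.
docs = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0]])
query = np.array([1.0, 0.05, 0.0])
best = top_k(query, docs, k=2)  # the two documents closest to the query
```

The selected snippets would then be stitched into the system or user message (e.g., into the `initialMessages` prompt) before calling the chat API.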
cyberomanov/sui-bullshark
https://github.com/cyberomanov/sui-bullshark
null
# SUI-8192

this script plays in [sui8192](https://sui8192.ethoswallet.xyz/), [coinflip](https://desuicoinflip.io/) and [journey](https://journey.polymedia.app/) for points in [quests](https://quests.mystenlabs.com/).<br>
[donations are welcome](https://cyberomanov.tech/WTF_donate), if you find this tool helpful.

## Contents

1. [Installation](#installation)

## Installation

1. make sure you have installed `python3.10` or newer;
2. set up your `config.py` and `data/mnemonic.txt`;
3. install requirements: `pip3 install -r requirements.txt`;
4. run the script: `python3 8192.py` to play **8192**, `python3 coinflip.py` to play **coinflip**, or `python3 journey.py` to play **journey**.

```bash
18:00:09 | INFO | loaded addresses:
18:00:09 | INFO | -------------
18:00:09 | INFO | 0x299a...4b3f
18:00:09 | INFO | 0xe1a0...b61e
18:00:09 | INFO | 0x318a...634b
18:00:09 | INFO | -------------
18:00:10 | SUCCESS | 0x299a...4b3f | played 4 games.
18:00:10 | SUCCESS | 0xe1a0...b61e | played 4 games.
18:00:28 | INFO | 0x318a...634b | digest: 8FxqkMphgDoioaiAW93UXQvEFR4ExAzeabBd9xa3Xvwi
18:00:34 | INFO | 0x318a...634b | current_game_id: 0x10380bb4a184a1568a54d4c478dc83ff10c1b3d5c137f2c3d96b4ba52f08a4dd.
18:00:38 | INFO | 0x318a...634b | LEFT | digest: D9HZ19tY46piK8cDQahsmeaT4ppYc9V8ZhUkqY5uc7z1
18:00:46 | INFO | 0x318a...634b | DOWN | digest: 824KSV1fawU9Sp4abmCWtByj9NHk6geQaYqSRiM7MvUA
```

```bash
18:22:50 | INFO | loaded addresses:
18:22:50 | INFO | ------------------------------------------------------------------
18:22:50 | INFO | 0xe1a0...b61e
18:22:50 | INFO | ------------------------------------------------------------------
18:22:57 | INFO | 0xe1a0...b61e | COIN_MERGE | digest: 3rap28SGGvgBmK8znGZ4EFq1GsJjQMxFyA9ecGbFX3Ym
18:23:07 | INFO | 0xe1a0...b61e | HEADS | digest: CW7giReVvPs7YMxQXy1FD1YghCjpTifz5Y8SRv45BxGU
18:23:07 | INFO | 0xe1a0...b61e | sleep: 95s.
18:24:54 | INFO | 0xe1a0...b61e | TAILS | digest: 95sSPEnUmLyKD41e9YZy8uTWLMbfwt4nHsE88eTi85UA
18:24:54 | INFO | 0xe1a0...b61e | sleep: 161s.
18:27:46 | INFO | 0xe1a0...b61e | HEADS | digest: GjZywbJyE42eUoTpBwL6ScaPg5EUbL2BJb8GpbuiLZiV
18:27:46 | INFO | 0xe1a0...b61e | sleep: 176s.
18:30:48 | INFO | 0xe1a0...b61e | COIN_MERGE | digest: HSYZhf49fC11mBpVWSz5WvvqP76XRR89MDL4gx1hBr6s
18:30:57 | INFO | 0xe1a0...b61e | HEADS | digest: 9wqAhdPQGHmFe6b79HYihrGMsdjdoWrLteghNY75WxLn
18:30:57 | INFO | 0xe1a0...b61e | sleep: 75s.
18:32:12 | SUCCESS | 0xe1a0...b61e | has played 4 games.
```

```bash
17:33:36 | INFO | loaded addresses for journey:
17:33:51 | INFO | 0xe851...71ad | create_profile | digest: 8w18R8KzvuLgfoVrsgBEvtb8ME4dF9Uj4ksBU82sXvUG | sleep: 14s.
17:33:53 | INFO | 0xeb61...7275 | create_profile | digest: CiicAxu1xj4YHoKzitTsgN1otC75hdGnKQzRKT4ZcHUB | sleep: 9s.
17:34:08 | INFO | 0xeb61...7275 | save_quest | digest: 8qdLBNi1y8GYfnbSfSrCsUqUh4qFGc9rkPsBgjYA7VUc | sleep: 12s.
17:34:11 | INFO | 0xe851...71ad | save_quest | digest: 8F2dGi5MZLpu7cXwtp4BmRYJFe6mYpMWhfkzUFuho1tF | sleep: 10s.
```

## Update

1. `cd sui-bullshark`
2. `git fetch && git reset --hard && git pull`
3. `pip3 install -r requirements.txt`
4. `python3 8192.py` | `python3 coinflip.py` | `python3 journey.py`
Litles/AutoMdxBuilder
https://github.com/Litles/AutoMdxBuilder
Automatically make mdx dictionaries
## AutoMdxBuilder

**An automated tool for building mdx dictionaries — anyone can make an electronic dictionary.**

Current features:

**(1) Packing / unpacking**

* Unpack mdx/mdd files. Same function as `MdxExport.exe`, with automatic batching added.
* Pack into an mdx file. Same function as `MdxBuilder.exe`.
* Pack into an mdd file. Same function as `MdxBuilder.exe`, with automatic batching added.

**(2) Dictionary building**

* Build an image dictionary (template A, plain)
* Build an image dictionary (template B, with navigation)
* Build a text dictionary (template C, plain)
* Build a text dictionary (template D, with navigation)

## Preview of results

#### Image dictionary (template A, plain)

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/img_dict_atmpl.gif)

#### Image dictionary (template B, with navigation)

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/img_dict_btmpl.gif)

#### Text dictionary (template C, plain)

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/text_dict_ctmpl.png)

#### Text dictionary (template D, with navigation)

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/text_dict_dtmpl.gif)

## Usage

### Steps

#### (1) Prepare the tools

1. Install Python. (Windows) Download the `.exe` installer from the [Python website](https://www.python.org) and follow the prompts;
2. Extract this tool, [AutoMdxBuilder_xx.zip](https://github.com/Litles/AutoMdxBuilder/releases), as the working folder (assume it is AutoMdxBuilder);
3. Install the Python packages: run `pip install -r requirements.txt` inside AutoMdxBuilder (or install the packages listed in `requirements.txt` one by one).

#### (2) Build the dictionary

4. Prepare the raw materials. Create a subfolder `raw` inside AutoMdxBuilder and place the materials in it as required (**see the details below**);
5. Edit the `settings.py` configuration file (**see the details below**);
6. Run auto_mdx_builder.py (double-clicking works) and follow the dialog prompts. The finished dictionary is written to the `out` subfolder, and intermediate files are kept in the `_tmp` subfolder for later reuse.

### Preparing the raw materials

The `raw` subfolder is only needed when building a dictionary; the program's other functions do not use it. Different templates need different materials, described per template below:

**Optional for all templates**

* (optional) `syns.txt`: synonym file;
* (optional) `info.html`: dictionary description, etc.

**Notes**

* All text files involved (`.txt`, `.html`) must be encoded as **UTF-8 without BOM**;
* Put only the files/folders that are actually used into `raw` — **unused files must not appear there**;
* Use the folder and file names given in this document; custom names are not recommended.

#### Image dictionary (template A)

* (required) `imgs` folder: the image files; any format (png, jpg, etc.) and no particular naming requirement (the order just has to be right);
* (required) `index.txt`: index file
* (optional) `toc.txt`: table-of-contents file

#### Image dictionary (template B)

* (required) `imgs` folder: the image files, same as template A
* (required) `index_all.txt`: full index file

#### Text dictionary (template C)

* (required) `index.txt`: index file

#### Text dictionary (template D)

* (required) `index_all.txt`: full index file

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/work_dir_tree.png)

## File details

### Configuration file `settings.py`

Usually only the parts in the green box in the picture need changing:

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/settings.png)

`self.navi_items` only needs to be set when a `toc.txt` exists. The value of `a` is the display text; the value of `ref` corresponds to a headword in `toc.txt`.

### Index file `index.txt`

Format: `headword<TAB>page number` (page numbers are relative to the first page of the body text, not image sequence numbers):

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/index.png)

When building a text dictionary (template C), the file is also called `index.txt`, except the **page number** is replaced by the **entry body**, i.e. the format is `headword<TAB>entry body`.

### Table-of-contents file `toc.txt`

Format: `[<TAB>*]headword<TAB>page number`, roughly like this (leading TAB indentation expresses nesting):

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/toc.png)

The format is the same as the bookmark file `FreePic2Pdf_bkmk.txt` of the program `FreePic2Pdf.exe`, so it can also be exported directly from a pdf file with `FreePic2Pdf.exe`.

### Full index file `index_all.txt`

An extension of `index.txt`; the format is likewise `headword<TAB>page number`, except that `index_all.txt` also merges `toc.txt` in, strictly in order.

TOC (chapter) headwords carry a `【L<level>】` prefix: for example the top-level chapter "正文" becomes `【L0】正文`, and its child chapter "史前篇" becomes `【L1】史前篇`.

> Chapter headwords may have no corresponding page number, but the `<TAB>` must be kept

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/index_all.png)

When building a text dictionary (template D), the file is also called `index_all.txt`, except the **page number** is replaced by the **entry body**, i.e. the format is `headword<TAB>entry body`.

### Synonym file `syns.txt`

Also known as a redirect file; format: `synonym<TAB>headword`:

![img](https://github.com/Litles/AutoMdxBuilder/blob/main/images/syns.png)

## References

+ https://github.com/liuyug/mdict-utils
+ https://github.com/VimWei/MdxSourceBuilder
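As a hedged illustration of the tab-separated `index.txt` / `index_all.txt` formats described above (`headword<TAB>page number`, UTF-8 without BOM, chapter entries allowed to have an empty page field), a minimal parser might look like this — `parse_index` is a hypothetical helper, not part of AutoMdxBuilder:

```python
def parse_index(lines):
    """Parse `headword<TAB>page` lines into (headword, page) pairs.

    Chapter entries (e.g. `【L0】...`) may have an empty page field,
    which is kept as None; blank lines are skipped.
    """
    entries = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        word, _, page = line.partition("\t")
        entries.append((word, int(page) if page.strip() else None))
    return entries

sample = ["apple\t1", "banana\t2", "【L0】正文\t"]
result = parse_index(sample)
```

Reading a real file would be `parse_index(open("raw/index.txt", encoding="utf-8"))`.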
kiliokuara/magic-signer-guide
https://github.com/kiliokuara/magic-signer-guide
null
# magic-signer-guide

This project solves the sso sign and tlv encryption problems of various QQ bot frameworks.

The project is an RPC service backend exposing an HTTP API, which means you can implement different RPC clients to match your framework's needs.

Because this project plays a rather special role, client-server communication must be authenticated before any business operation can be performed, to keep the channel secure.

Within its controllable scope<sup>(1)</sup>, this project (the docker image `kiliokuara/vivo50`, same below) does not persist the following QQ bot information:

* login credentials (tokens, cookies, etc.)
* tlv data to be encrypted
* sso packet data to be signed

To optimize its business logic, the project does persist the following QQ bot information:

* device information
* account-related key-value pairs produced by libfekit.

It is strongly recommended to deploy the RPC server yourself and avoid open RPC servers deployed by others.

> (1) "Controllable scope" means all code of the RPC server. Since the project uses the external library libfekit, we cannot know what information libfekit persists.

## Supported QQ versions

* `Android 8.9.58.11170`

## Usage

Deploy the RPC server via docker:

```shell
$ docker pull kiliokuara/vivo50:latest
$ docker run -d --restart=always \
  -e SERVER_IDENTITY_KEY=vivo50 \
  -e AUTH_KEY=kfc \
  -e PORT=8888 \
  -p 8888:8888 \
  --log-opt mode=non-blocking --log-opt max-buffer-size=4m \
  -v /home/vivo50/serverData:/app/serverData \
  -v /home/vivo50/testbot:/app/testbot \
  --name vivo50 \
  --memory 200M \
  kiliokuara/vivo50
```

Environment variables:

* `SERVER_IDENTITY_KEY`: the RPC server identity key, used by clients to verify the server's identity.
* `AUTH_KEY`: the RPC client authentication key, used by the server to verify the client's identity.
* `PORT`: service port, default `8888`.
* `MEMORY_MONITOR`: memory monitor, off by default; the value is the timer loop period in seconds.

For more details see https://docs.docker.com/engine/reference/commandline/run/

## Memory usage reference

```
before any bot logs in
2023-07-27 16:29:46 [DEBUG] [Vivo45#1] MemoryDumper - committed | init | used | max
2023-07-27 16:29:46 [DEBUG] [Vivo45#1] MemoryDumper - Heap Memory: 200.0MB | 200.0MB | 18.47MB | 200.0MB
2023-07-27 16:29:46 [DEBUG] [Vivo45#1] MemoryDumper - Non-Heap Memory: 25.25MB | 7.31MB | 23.27MB | -1
2023-07-27 16:29:51 [DEBUG] [Vivo45#2] MemoryDumper - committed | init | used | max
2023-07-27 16:29:51 [DEBUG] [Vivo45#2] MemoryDumper - Heap Memory: 44.0MB | 200.0MB | 12.08MB | 200.0MB
2023-07-27 16:29:51 [DEBUG] [Vivo45#2] MemoryDumper - Non-Heap Memory: 25.25MB | 7.31MB | 22.17MB | -1
2023-07-27 16:29:56 [DEBUG] [Vivo45#1] MemoryDumper - committed | init | used | max
2023-07-27 16:29:56 [DEBUG] [Vivo45#1] MemoryDumper - Heap Memory: 44.0MB | 200.0MB | 11.08MB | 200.0MB
2023-07-27 16:29:56 [DEBUG] [Vivo45#1] MemoryDumper - Non-Heap Memory: 25.25MB | 7.31MB | 21.62MB | -1

after one bot has logged in
2023-07-27 16:30:41 [DEBUG] [Vivo45#3] MemoryDumper - committed | init | used | max
2023-07-27 16:30:41 [DEBUG] [Vivo45#3] MemoryDumper - Heap Memory: 52.0MB | 200.0MB | 33.13MB | 200.0MB
2023-07-27 16:30:41 [DEBUG] [Vivo45#3] MemoryDumper - Non-Heap Memory: 44.56MB | 7.31MB | 41.21MB | -1
2023-07-27 16:30:46 [DEBUG] [Vivo45#3] MemoryDumper - committed | init | used | max
2023-07-27 16:30:46 [DEBUG] [Vivo45#3] MemoryDumper - Heap Memory: 52.0MB | 200.0MB | 28.15MB | 200.0MB
2023-07-27 16:30:46 [DEBUG] [Vivo45#3] MemoryDumper - Non-Heap Memory: 44.56MB | 7.31MB | 40.68MB | -1
```

## Authentication flow

### 1. Fetch the RPC server info and verify the server's identity

First call `GET /service/rpc/handshake/config`; the response is:

```json5
{
  "publicKey": "",    // RSA public key, used by the client to verify the server's identity and to encrypt the handshake info in the next step.
  "timeout": 10000,   // session expiry time (milliseconds)
  "keySignature": ""  // server public-key signature, used by the client to verify the server's identity
}
```

To prevent MITM (man-in-the-middle) attacks, the client must verify the server's identity with the following computation:

```
$clientKeySignature = $sha1(
    $sha1( ($SERVER_IDENTITY_KEY + $publicKey).getBytes() ).hex() + $SERVER_IDENTITY_KEY
).hex()
```

Comparing `clientKeySignature` with the `keySignature` returned by the API verifies the server's identity.

Kotlin example:

```kotlin
fun ByteArray.sha1(): ByteArray {
    return MessageDigest.getInstance("SHA1").digest(this)
}

fun ByteArray.hex(): String {
    return HexFormat.of().withLowerCase().formatHex(this)
}

val serverIdentityKey: String = ""  // the server's SERVER_IDENTITY_KEY
val publicKey: String = ""          // the publicKey string from the API, a base64-encoded RSA public key.
val serverKeySignature: String = "" // the keySignature string from the API, the signature computed by the server.

val pKeyRsaSha1 = (serverIdentityKey + publicKey).toByteArray().sha1()
val clientKeySignature = (pKeyRsaSha1.hex() + serverIdentityKey).toByteArray().sha1().hex()

if (!clientKeySignature.equals(serverKeySignature)) {
    throw IllegalStateException("client calculated key signature doesn't match the server provides.")
}
```

### 2. Handshake with the server

Before handshaking, the client must generate a 16-byte AES key and a 4096-bit RSA key pair.

* The AES key encrypts/decrypts the WebSocket business traffic after a successful handshake.
* The RSA key pair prevents replay attacks from establishing the same WebSocket connection again.

After generating the keys, call `POST /service/rpc/handshake/handshake` with the body:

```json5
{
  "clientRsa": "", // the public key of the client-generated RSA key pair, base64-encoded.
  "secret": "....", // handshake info, encrypted with the publicKey from step 1 using the RSA/ECB/PKCS1Padding suite.
}

// the handshake info is:
{
  "authorizationKey": "", // the server's AUTH_KEY
  "sharedKey": "",        // the AES key
  "botid": 1234567890,    // the bot's QQ number
}
```

Response:

```json5
{
  "status": 200, // 200 = handshake succeeded, 403 = handshake failed
  "reason": "Authorization code is invalid.", // failure reason; present only on failure.
  "token": "",   // WebSocket communication token, base64-encoded; present only on success.
}
```

Base64-decode the `token`. The handshake is now finished; WebSocket communication follows.

### 3. Open a WebSocket session

Call `WEBSOCKET /service/rpc/session` with the following headers:

```properties
Authorization: $token_decoded      <--- the base64-decoded token
X-SEC-Time: $timestamp_millis      <--- current timestamp in milliseconds
X-SEC-Signature: $timestamp_sign   <--- timestamp signature, signed with the client RSA key using the SHA256withRSA algorithm, base64-encoded.
```

Once the WebSocket session is open, business communication can begin.

### 4. Check the WebSocket session state

Before each C2S packet, it is recommended to verify the state of the current WebSocket session.

Call `GET service/rpc/session/check` with the same headers as in [3. Open a WebSocket session](#3-open-a-websocket-session).

Response status codes:

* 204: session valid
* 403: verification failed; check the headers.
* 404: session does not exist

### 5. Close the WebSocket session

Once the work is done (e.g. the bot goes offline or the bot framework shuts down), the client must actively close the WebSocket session.

Call `DELETE /service/rpc/session` with the same headers as in [3. Open a WebSocket session](#3-open-a-websocket-session).

Response status codes:

* 204: session closed
* 403: verification failed; check the headers.
* 404: session does not exist

## WebSocket message format

### General conventions

* All C2S (client-to-server) and S2C (server-to-client) packets are encrypted into byte arrays with the client AES key.
* The general S2C packet format is:

```json5
{
  "packetId": "",   // a unique packet ID
  "packetType": "", // the packet type, corresponding to a business operation.
  ...,              // other properties specific to the packet type
}
```

> There are two cases for `packetId`:
>
> 1. A C2S packet carries a `packetId` and needs a server reply: the S2C packet is the reply to that C2S packet, and its `packetId` is the C2S packet's `packetId`.
> 2. If an S2C packet's `packetType` is `rpc.service.send`, the S2C packet needs a client reply, and the C2S reply must carry that S2C packet's `packetId`.

* An S2C packet reporting a business error looks like:

```json5
{
  "packetId": .......,
  "packetType": "service.error",
  "message": "",
}
```

The server does not send business-error packets on its own initiative; the `packetId` always corresponds to the `packetId` of the packet being answered.

### Business scenarios

#### Session interruption

S2C

```json5
{
  "packetType": "service.interrupt",
  "reason": "Interrupted by session invalidate",
}
```

Receiving this packet means the current WebSocket session is invalid; the client must handshake again to get a new session.

#### Initialize the signing and encryption service

C2S

```json5
{
  "packetId": "",
  "packetType": "rpc.initialize",
  "extArgs": {
    "KEY_QIMEI36": "", // qimei 36
    "BOT_PROTOCOL": {
      "protocolValue": {
        "ver": "8.9.58",
      }
    }
  },
  "device": { // unless noted otherwise, values are value.toByteArray().hexString()
    "display": "",
    "product": "",
    "device": "",
    "board": "",
    "brand": "",
    "model": "",
    "bootloader": "",
    "fingerprint": "",
    "bootId": "",      // raw string
    "procVersion": "",
    "baseBand": "",
    "version": {
      "incremental": "",
      "release": "",
      "codename": "",
      "sdk": 0         // int
    },
    "simInfo": "",
    "osType": "",
    "macAddress": "",
    "wifiBSSID": "",
    "wifiSSID": "",
    "imsiMd5": "",
    "imei": "",        // raw string
    "apn": "",
    "androidId": "",
    "guid": ""
  },
}
```

S2C response

```json5
{
  "packetId": "",
  "packetType": "rpc.initialize"
}
```

tlv encryption is only available after the service has been initialized. During initialization the server will send `rpc.service.send` packets, see [Packets the server sends through the bot framework](#packets-the-server-sends-through-the-bot-framework).

#### Fetch the sso signing whitelist

C2S

```json5
{
  "packetId": "",
  "packetType": "rpc.get_cmd_white_list"
}
```

S2C response

```json5
{
  "packetId": "",
  "packetType": "rpc.get_cmd_white_list",
  "response": [
    "wtlogin.login",
    ...,
  ]
}
```

Returns the list of commands whose packets need sso signing, helping the bot framework decide whether its network packets must be signed.

#### Packets the server sends through the bot framework

S2C

```json5
{
  "packetId": "server-...",
  "packetType": "rpc.service.send",
  "remark": "msf.security", // sso packet remark, may be ignored
  "command": "trpc.o3.ecdh_access.EcdhAccess.SsoEstablishShareKey", // sso packet command
  "botUin": 1234567890,     // bot id
  "data": ""                // the packet content the RPC server needs sent, bytes.hexString()
}
```

C2S response

```json5
{
  "packetId": "server-...",
  "packetType": "rpc.service.send",
  "command": "trpc.o3.ecdh_access.EcdhAccess.SsoEstablishShareKey",
  "data": "" // the content of the QQ server's response packet, bytes.hexString()
}
```

After receiving `rpc.service.send`, the client must wrap `data` into an sso packet and send it to the QQ server through the bot framework's network layer. When the QQ server replies, simply parse out the command etc. and pass the remaining content as `data` in the C2S response packet.

Note that every packet the server sends through the bot framework must be sso-signed, so after receiving an `rpc.service.send` packet, call sso signing on the wrapped network packet.

#### sso signing

C2S

```json5
{
  "packetId": "",
  "packetType": "rpc.sign",
  "seqId": 33782,             // the sso packet's sequence id
  "command": "wtlogin.login", // sso packet command
  "extArgs": {},              // extra arguments, empty
  "content": ""               // sso packet content bytes.hexString()
}
```

S2C response

```json5
{
  "packetId": "",
  "packetType": "rpc.sign",
  "response": {
    "sign": "",
    "extra": "",
    "token": ""
  }
}
```

#### tlv encryption

C2S

```json5
{
  "packetId": "",
  "packetType": "rpc.tlv",
  "tlvType": 1348, // 0x544
  "extArgs": {
    "KEY_COMMAND_STR": "810_a"
  },
  "content": "" // t544 content bytes.hexString()
}
```

S2C response

```json5
{
  "packetId": "",
  "packetType": "rpc.tlv",
  "response": "" // encryption result bytes.hexString()
}
```
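The key-signature verification from step 1 of the authentication flow (sha1 of sha1, hex-encoded) can also be sketched in Python; the values passed in below are placeholders, since the real ones come from the server's `SERVER_IDENTITY_KEY` and the `publicKey` returned by `GET /service/rpc/handshake/config`:

```python
import hashlib

def client_key_signature(server_identity_key: str, public_key_b64: str) -> str:
    """sha1( sha1((identity_key + public_key).bytes).hex() + identity_key ).hex(),
    matching the verification formula in the handshake step."""
    inner = hashlib.sha1((server_identity_key + public_key_b64).encode()).hexdigest()
    return hashlib.sha1((inner + server_identity_key).encode()).hexdigest()

# Placeholder inputs; compare the result against the API's keySignature.
sig = client_key_signature("vivo50", "base64-public-key")
```

If the computed value does not equal the server's `keySignature`, the connection should be rejected, exactly as the Kotlin snippet does.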
HHU-MMBS/RepresentationLearning_SS2023
https://github.com/HHU-MMBS/RepresentationLearning_SS2023
Representation Learning MSc course Summer Semester 2023
# Representation Learning MSc course Summer Semester 2023

## Summary

This course is tailored for MSc students of the AI and Data Science Master's program at Heinrich Heine University Düsseldorf. We provide all the course materials, including lectures, slides, and exercise classes.

[YouTube Playlist of videos](https://www.youtube.com/playlist?list=PL3mKiGE4zNJJ83K4c3IBka6eYfe6v71dS)

## Week 1 - Introduction to Representation Learning

- [Lecture](https://www.youtube.com/watch?v=i1-OtPa9doY&list=PL3mKiGE4zNJJ83K4c3IBka6eYfe6v71dS&index=2) || [Slides](https://uni-duesseldorf.sciebo.de/s/2h3pY73kHHIWtUW)
- Introduction to autoencoders for representation learning and early traditional approaches. Based on the [Bengio et al. 2012 paper](https://arxiv.org/pdf/1206.5538.pdf)

#### Exercise

Image autoencoders. Learning to use and evaluate the intermediate learned representation.

## Week 2 - Overview of visual self-supervised learning methods

- [Lecture](https://youtu.be/3Zvo1BihTRE) || [Slides](https://uni-duesseldorf.sciebo.de/s/J5f839uJRKQhW8y)
- Self-supervised learning vs. transfer learning. Pretext vs. downstream task
- Pretext tasks covered: colorization, jigsaw puzzles, image inpainting, Shuffle and Learn (videos), classifying corrupted images, rotation prediction
- Semi-supervised learning: consistency loss
- A small intro to the contrastive loss (InfoNCE)

#### Exercise

In this exercise, we will train a ResNet18 on the task of rotation prediction. Rotation prediction provides a simple, yet effective way to learn rich representations from unlabeled image data. The basic idea behind rotation prediction is that the network is trained to predict the orientation of a given image after it has been rotated by a certain angle (e.g., 0°, 90°, 180°, or 270°).
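The data side of the Week 2 rotation-prediction exercise can be sketched in a few lines: each image is rotated by 0°/90°/180°/270° and labeled with the number of quarter-turns, giving a 4-class classification problem. This is a hedged NumPy sketch of that augmentation only; the course exercise trains a ResNet18 on these labels, which is omitted here:

```python
import numpy as np

def make_rotation_batch(images):
    """Return every image rotated by 0/90/180/270 degrees plus its class label 0-3."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):             # k quarter-turns = k * 90 degrees
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

imgs = np.random.rand(8, 32, 32, 3)    # a toy batch of 8 RGB images
x, y = make_rotation_batch(imgs)       # x: (32, 32, 32, 3), y in {0, 1, 2, 3}
```

A classifier trained to predict `y` from `x` must learn object orientation cues, which is what makes the pretext task useful for representation learning.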
## Week 3 - BERT: Learning Natural Language Representations

- [Lecture](https://youtu.be/qCZiR5I47Bo) || [Slides](https://uni-duesseldorf.sciebo.de/s/a3XfMv2HgcxMR8a)
- Natural Language Processing (NLP) basics
- RNN, self-attention, and Transformer recap
- Language pretext tasks
- Pretext tasks for representation learning in NLP. An in-depth look into [BERT](https://arxiv.org/abs/1810.04805).

#### Exercise

In this exercise, you will train a small [BERT](https://arxiv.org/abs/1810.04805) model on the IMDB dataset (https://huggingface.co/datasets/imdb). You will then use the model to classify the sentiment of movie reviews and the sentiment of sentences from the Stanford Sentiment Treebank (SST2, https://huggingface.co/datasets/sst2).

## Week 4 - Contrastive Learning, SimCLR and mutual information-based proof

- [Lecture](https://youtu.be/RlCqUawKcwA) || [Slides](https://uni-duesseldorf.sciebo.de/s/g5bD2N5QMpOM3lf) || [Notes](https://uni-duesseldorf.sciebo.de/s/zwbaz4mENfnuy28)
- A deep look into contrastive learning, theory, and proof of the MI bound.
- [SimCLR Paper](https://arxiv.org/abs/2002.05709)

#### Exercise

Build and train a SimCLR ResNet18 on CIFAR10.

## Week 5 - Understanding Contrastive learning & MoCo and image clustering

- [Lecture](https://youtu.be/PE1MT_S9m1k) || [Slides](https://uni-duesseldorf.sciebo.de/s/jZtCrfKRIRA2UmI) || [MoCo implementation](https://uni-duesseldorf.sciebo.de/s/NTnqx68EE630X4a)
- Contrastive Learning, L2 normalization, Properties of contrastive loss
- Momentum encoder (MoCo). Issues and concerns regarding batch normalization
- Multi-view contrastive learning
- Deep Image Clustering: task definition and challenges, K-means and [SCAN](https://arxiv.org/abs/2005.12320), [PMI and TEMI](https://arxiv.org/abs/2303.17896)

#### Exercise

Use a pretrained MoCo ResNet50 for image clustering.
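The contrastive (NT-Xent/InfoNCE) loss that SimCLR optimizes can be written out compactly. Below is a pure-NumPy sketch for intuition only; real training code would compute this inside an autodiff framework so gradients flow back into the encoder:

```python
import numpy as np


def info_nce(z1, z2, tau=0.5):
    """NT-Xent / InfoNCE loss for a batch of positive pairs (z1[i], z2[i]).

    z1, z2: (N, d) embeddings of two augmented views of the same N images.
    Each embedding is L2-normalized; similarities are temperature-scaled,
    and each sample's positive is the other view of the same image.
    """
    z = np.concatenate([z1, z2])                        # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / tau                                 # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # mask self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()


rng = np.random.default_rng(0)
z = rng.standard_normal((16, 128))
# identical views score much better than randomly paired embeddings
assert info_nce(z, z) < info_nce(z, rng.standard_normal((16, 128)))
```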
## Week 6 - Vision Transformers and Knowledge Distillation

- [Lecture](https://youtu.be/J_q-PEYikEo) || [Slides](https://uni-duesseldorf.sciebo.de/s/Jbx5bw87vlZrueB)
- Transformer encoder and Vision transformer
- ViTs VS CNNs: receptive field and inductive biases
- Knowledge distillation and the mysteries of model ensembles
- Knowledge distillation in ViTs and masked image modeling

#### Exercise

Knowledge distillation on CIFAR100 with Vision Transformers.

## Week 7 - Self-supervised learning without negative samples (BYOL, DINO)

- [Lecture](https://youtu.be/-VqXScgDZnM) || [Slides](https://uni-duesseldorf.sciebo.de/s/iU8owOBDx7PZdMs)
- A small review of self-supervised methods
- A small review of knowledge distillation
- Self-Supervised Learning & knowledge distillation
- An in-depth look into DINO

#### Exercise (2-week assignment)

In this exercise you will implement and train a DINO model on a medical dataset, the PathMNIST dataset from [medmnist](https://medmnist.com/), consisting of low-resolution images of various colon pathologies.
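The soft-label term used in knowledge distillation is just a temperature-scaled KL divergence between teacher and student output distributions. A NumPy sketch (the `T**2` factor is the usual scaling that keeps gradient magnitudes comparable across temperatures):

```python
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Hinton-style soft-label distillation term (numpy sketch):
    KL( softmax(teacher/T) || softmax(student/T) ), scaled by T**2."""
    p = softmax(teacher_logits / T)
    log_q = np.log(softmax(student_logits / T))
    return (T ** 2) * (p * (np.log(p) - log_q)).sum(axis=-1).mean()


t = np.array([[2.0, 0.5, -1.0]])
assert distillation_loss(t, t) < 1e-9          # identical logits -> zero KL
assert distillation_loss(-t, t) > 0.0
```

In practice this term is combined with the ordinary cross-entropy on hard labels via a weighting coefficient.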
## Week 8 - Masked-based visual representation learning: MAE, BEiT, iBOT, DINOv2

- [Lecture](https://youtu.be/8KP2SCm1YVo) || [Slides](https://uni-duesseldorf.sciebo.de/s/ifWDLlUdGRYBvD9)
- MAE: https://arxiv.org/abs/2111.06377
- BEiT: BERT-style pre-training in vision: https://arxiv.org/abs/2106.08254
- iBOT: Combining MIM with DINO: https://arxiv.org/abs/2111.07832
- DINOv2: https://arxiv.org/abs/2304.07193

## Week 9 - Multimodal representation learning, robustness, and visual anomaly detection

- [Lecture](https://youtu.be/eAf9UjPXmVg) || [Slides](https://uni-duesseldorf.sciebo.de/s/N8mAjFoMLaDJbHZ)
- Defining Robustness and Types of Robustness
- Zero-shot learning
- Contrastive Language Image Pretraining (CLIP)
- Image captioning
- Few-shot learning
- Visual anomaly detection: task definition
- Anomaly detection scores
- Anomaly detection metrics: AUROC

#### Exercise (2-week assignment)

Use a CLIP-pre-trained model for out-of-distribution detection.

## Week 10 - Emerging properties of the learned representations and scaling laws

- [Lecture](https://youtu.be/CiXyNHVTxLs) || [Slides](https://uni-duesseldorf.sciebo.de/s/FMc1pDa9js5OTcR)
- Investigating CLIP models and scaling laws
- What determines the success of CLIP?
- How does CLIP scale to larger datasets and models?
- OpenCLIP: Scaling laws of CLIP models and connection to NLP scaling laws
- Robustness of CLIP models against image manipulations
- Learned representations of supervised models: CNNs VS Vision Transformers (ViTs), the texture-shape bias
- Robustness and generalization of supervised-pretrained CNNs VS ViTs
- Scaling (Supervised) Vision Transformers
- [Properties of ViT pretrained models](https://theaisummer.com/vit-properties/)

## Week 11 - Investigating the self-supervised learned representations

- [Lecture](https://youtu.be/uUR0yEZ55Vg) || [Slides](https://uni-duesseldorf.sciebo.de/s/NALUEG5AlUzhbI3)
- Limitations of existing vision language models
- Self-supervised VS supervised learned feature representations
- What do vision transformers (ViTs) learn "on their own"?
- MoCo v3 and DINO: https://arxiv.org/abs/2104.14294
- Self-supervised learning in medical imaging
- Investigating the pre-training self-supervised objectives

#### Exercise

No exercise takes place this week.

## Week 12 - Representation Learning in Proteins

- [Lecture](https://youtu.be/ZFazdK7dA7Q) || [Slides](https://uni-duesseldorf.sciebo.de/s/hFCXvnJCpAiPzlR)
- A closer look at the attention mechanism. The attention mechanism in Natural Language Translation
- A tiny intro to proteins
- Representing protein sequences with Transformers: BERT masked language modeling VS GPT?
- [ESM](https://www.pnas.org/doi/full/10.1073/pnas.2016239118), [ESMv2](https://pubmed.ncbi.nlm.nih.gov/36927031/)
- [Looking at & combining the attention maps of a pre-trained Transformer](https://www.biorxiv.org/content/10.1101/2020.12.15.422761v1)
- [Protein Language models generalize beyond natural proteins](https://www.biorxiv.org/content/10.1101/2022.12.21.521521v1)

#### Exercise

Use a pretrained Protein Language Model.

## Week 13 - AlphaFold2

- [Lecture]() || [Slides]()

#### Exercise

[Just play around with an AlphaFold notebook](https://colab.research.google.com/github/deepmind/alphafold/blob/main/notebooks/AlphaFold.ipynb)

# Additional info

Feel free to open issues regarding errors in the exercises or missing information, and we will try to get back to you.

> Important: Solutions to the exercises are not provided, but you can cross-check your results with the expected results in the notebook.
JosefVesely/Image-to-ASCII
https://github.com/JosefVesely/Image-to-ASCII
🖼️ Convert images to ASCII art
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

# Image to ASCII

**Convert images to ASCII art**

*Library used for handling images: [STB Library](https://github.com/nothings/stb)*

## Usage

```
Usage: image_to_ascii.exe --input=image.png [--output=ascii.txt] [--width=50] [--chars="@#?|:. "]

--help: shows this message
--input={image.png}: input file path
--output={ascii.txt}: output file path, "output.txt" if none (optional)
--width={50}: width of output (optional)
--chars={"@#?|:. "}: characters to be used (optional)
```

## Example

`image_to_ascii.exe --input=images/c.png --output=c.txt --width=40 --chars=" .:-=+*#%@"`

Output:

```
Input: images/c.png
Output: c.txt
Resolution: 40x45
Characters (10): " .:-=+*#%@"

@@@@@@@@@@@@@@@@@%*++*%@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@%*++++++*%@@@@@@@@@@@@@@@
@@@@@@@@@@@@@%#++++++++++#%@@@@@@@@@@@@@
@@@@@@@@@@@%#*++++++++++++*#%@@@@@@@@@@@
@@@@@@@@@@%*++++++++++++++++*%@@@@@@@@@@
@@@@@@@@%#++++++++++++++++++++#%@@@@@@@@
@@@@@@%#++++++++++++++++++++++++#%@@@@@@
@@@@%#*++++++++++++++++++++++++++*#%@@@@
@@@%*++++++++++++++++++++++++++++++*%@@@
@%#+++++++++++++*##%%##*+++++++++++++#%@
%*++++++++++++#%%@@@@@@%%#++++++++++++*%
*+++++++++++*%@@@@@@@@@@@@%*++++++++++=+
+++++++++++#@@@@@@@@@@@@@@@@#+++++++=-::
++++++++++#@@@@@@@@@@@@@@@@@@#++++=-::::
+++++++++#@@@@@@@@@@@@@@@@@@@@#++=-:::::
++++++++*%@@@@@@@%%%%%%@@@@@@@%+-:::::::
++++++++%@@@@@@@%*++++*%@@@@@%+:::::::::
+++++++*%@@@@@%#++++++++#%@%*-::::::::::
+++++++#@@@@@@#++++++++++##=::::::::::::
+++++++%@@@@@%++++++++++=-::::::::::::::
+++++++%@@@@@#++++++++=-::::::::::::::::
++++++*%@@@@@*++++++==::::::::::::::::::
++++++*%@@@@@*+++++=-:::::::::::::::::::
++++++*%@@@@@*+++=-:.:::::::::::::::::::
+++++++%@@@@@#+=-:.....:::::::::::::::::
+++++++%@@@@@%=:.........:::::::::::::::
+++++++#@@@@@@*:........:*#=::::::::::::
+++++++*%@@@@@%+:......:+%@%*-::::::::::
++++++++%@@@@@@%#=::::=#@@@@@%+:::::::::
+++++++-=%@@@@@@@%%##%%@@@@@@@%=:::::::: +++++=-.:*@@@@@@@@@@@@@@@@@@@@*:.::::::: +++=-:...:*@@@@@@@@@@@@@@@@@@*:....::::: +=-:......:*@@@@@@@@@@@@@@@@*:.......::: *:.........:+%@@@@@@@@@@@@%+:..........+ %-...........-*%%@@@@@@%%*-...........-% @%+:...........:=+*##*+=:...........:+%@ @@@#=:............::::............:=#@@@ @@@@%#-:........................:-#%@@@@ @@@@@@%*-......................-*%@@@@@@ @@@@@@@@%+:..................:+%@@@@@@@@ @@@@@@@@@@#=:..............:=#@@@@@@@@@@ @@@@@@@@@@@%*-............-*%@@@@@@@@@@@ @@@@@@@@@@@@@%*-........-*%@@@@@@@@@@@@@ @@@@@@@@@@@@@@@%+:....:+%@@@@@@@@@@@@@@@ @@@@@@@@@@@@@@@@@#=::=#@@@@@@@@@@@@@@@@@ ``` Find more character combinations: [Character representation of grey scale images](http://paulbourke.net/dataformats/asciiart/)
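The core idea of the converter (resample to a target width, then pick a character by brightness) fits in a short sketch. This is a pure-Python illustration of the technique, not the repo's C implementation; the helper name and the 0.5 aspect-ratio correction are our assumptions:

```python
def to_ascii(gray, width, charset=" .:-=+*#%@"):
    """Map a 2D grayscale image (nested lists, values 0..255) to ASCII art.

    Rows are resampled so the output is `width` characters wide; the 0.5
    factor compensates for terminal glyphs being taller than they are wide.
    """
    h, w = len(gray), len(gray[0])
    height = max(1, round(h * width / w * 0.5))
    rows = []
    for y in range(height):
        src_y = min(h - 1, y * h // height)       # nearest-neighbor sampling
        row = ""
        for x in range(width):
            src_x = min(w - 1, x * w // width)
            v = gray[src_y][src_x]
            row += charset[v * (len(charset) - 1) // 255]
        rows.append(row)
    return "\n".join(rows)


art = to_ascii([[0, 255], [255, 0]], width=4)
assert art.splitlines()[0] == "  @@"   # dark -> space, bright -> '@'
```

Reversing `charset` inverts the mapping, which is why the tool lets you pass `--chars` explicitly: the right ordering depends on whether you view the art on a light or dark background.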
imbxm/aptos-liquidswap-swap
https://github.com/imbxm/aptos-liquidswap-swap
null
# aptos-liquidswap-swap

A script for swaps on the Aptos network using the Liquidswap DEX.

#### Installing dependencies:

```pip install -r requirements.txt```

- `wallet.txt` — put your private keys here
- `config.py` — default settings; you can also configure the wait time between wallets and enable wallet randomization
- `tokens_mapping` in `config.py` is used for token mapping (if you need another token, just add a name of your choice and the new token's address)

### Usage examples

The values `from_token`, `to_token`, `from_amount`, and `to_amount` (hopefully self-explanatory) are passed as command-line arguments.

#### Command format with arguments:

`python main.py from_token to_token from_amount to_amount`

1. First argument: `from_token` — the token to swap from
2. Second argument: `to_token` — the token to swap to
3. Third argument: `from_amount` — the lower bound of the amount to swap
4. Fourth argument: `to_amount` — the upper bound of the amount to swap

#### To swap from 0.5 to 1 USDT into APT, use:

`python main.py USDT APT 0.5 1`

#### To swap from 0.7 to 0.9 APT into USDT, use:

`python main.py APT USDT 0.7 0.9`

#### To swap from 1 to 2.3 USDT into APT, use:

`python main.py USDT APT 1 2.3`

### Example run

Swap from 0.1 to 0.4 USDT into APT

![alt text](photos/liquidswap-example-updated.png)

### Thanks _(if you happen to be a generous soul)_

Star the repo on GitHub

Or send some scam tokens to the wallet :)

**EVM: 0x96C22103635501D380B4F2a44C815EA460693E13**

### Telegram

https://t.me/sybil_v_zakone

_Based on the repository: https://github.com/WayneAl/liquidswap-sdk-python_
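The argument handling described above can be sketched as follows. This is a hypothetical helper for illustration, not the repo's actual `main.py`: it parses the four positional arguments and draws a random amount within the given range, mirroring how the script picks a swap amount between `from_amount` and `to_amount`:

```python
import random


def parse_swap_args(argv):
    """Parse [from_token, to_token, from_amount, to_amount] and draw a
    random swap amount in the range (illustrative sketch)."""
    from_token, to_token = argv[0], argv[1]
    lo, hi = float(argv[2]), float(argv[3])
    if lo > hi:
        raise ValueError("from_amount must not exceed to_amount")
    amount = round(random.uniform(lo, hi), 6)
    return from_token, to_token, amount


# equivalent of: python main.py USDT APT 0.5 1
src, dst, amount = parse_swap_args(["USDT", "APT", "0.5", "1"])
assert src == "USDT" and dst == "APT" and 0.5 <= amount <= 1.0
```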