names (stringlengths 1-95) | readmes (stringlengths 0-399k) | topics (stringlengths 0-421) | labels (stringclasses, 6 values) |
---|---|---|---|
nlp2015 | natural language processing 2015 for our natural language processing course we implemented a lda and mg lda algorithm using gibbs sampling the programs also come with a preprocessing part that can read and process product reviews from the multi domain sentiment dataset 1 retrieving the dataset the preprocessing part for the lda and mg lda models was written to read and process product reviews from the multi domain sentiment dataset 1 known issue due to a bug in the preprocessing part not the entire eletronics dataset from the multi domain sentiment dataset can be loaded all other datasets are as far as we know processed correctly running the lda mg lda program the lda mg lda program can be run with the following commands python lda py preprocessing boolean path to dataset directory python mglda py preprocessing boolean path to dataset directory for example python lda py true data electronics python mglda py true data electronics in the lda mg lda program the parameters such as the topics gibbs iterations and model parameters can be tuned by configuration global variables in the python script itself references 1 multi domain sentiment dataset https www cs jhu edu mdredze datasets sentiment | ai |
|
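The entry above describes topic modelling of product reviews with LDA and MG-LDA trained by Gibbs sampling. Below is a minimal sketch of collapsed Gibbs sampling for plain LDA to illustrate the update the entry refers to; it is not the repository's lda.py, and the toy corpus, hyperparameters, and variable names are assumptions made for illustration.

```python
# Minimal collapsed Gibbs sampling for LDA -- illustrative sketch only,
# not the lda.py from the repository above; corpus, K, alpha, beta are assumptions.
import numpy as np

def gibbs_lda(docs, vocab_size, K=5, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    # z[d][i] = topic assigned to the i-th word of document d
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    ndk = np.zeros((len(docs), K))    # document-topic counts
    nkw = np.zeros((K, vocab_size))   # topic-word counts
    nk = np.zeros(K)                  # total words per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1
            nkw[z[d][i], w] += 1
            nk[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z = k | everything else), up to a constant
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

# toy corpus: documents as lists of word ids
docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 0, 1, 4]]
doc_topic, topic_word = gibbs_lda(docs, vocab_size=5)
```

The returned count matrices can be normalised (with the same alpha/beta smoothing) to recover the document-topic and topic-word distributions; MG-LDA adds a second, local set of topics per sliding window, which this sketch does not cover.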
Embedded-System-Air-condition | embedded system air condition design methodology internal and external peripherals serial communication timers d a converters interrupt controllers embedded system programming real time operating systems basics of real time programming real time debugging power and memory management https drive google com drive folders 1hzz3g6754wj77zswtbl6iywywzh2nddj usp sharing video and demo can be found in google drive in the link above | os |
|
blockchain-python-tutorial | blockchain python tutorial source code for my blog post a practical introduction to blockchain with python http adilmoujahid com posts 2018 03 intro blockchain bitcoin python important this project is for educational purposes only and the source code shouldn t be use in production as it doesn t have good security doesn t scale well and lacks many important features div style display block margin auto height 80 width 80 img src blockchain simulation gif div the github repository contains a basic implementation of a blockchain and its client using python this blockchain has the following features possibility of adding multiple nodes to the blockchain proof of work pow simple conflict resolution between nodes transactions with rsa encryption the blockchain client has the following features wallets generation using public private key encryption based on rsa algorithm generation of transactions with rsa encryption this github repository also contains 2 dashboards blockchain frontend for miners blockchain client for users to generate wallets and send coins dependencies works with python 3 6 anaconda s python distribution https www continuum io downloads contains all the dependencies for the code to run how to run the code 1 to start a blockchain node go to blockchain folder and execute the command below python blockchain py p 5000 2 you can add a new node to blockchain by executing the same command and specifying a port that is not already used for example python blockchain py p 5001 3 to start the blockchain client go to blockchain client folder and execute the command below python blockchain client py p 8080 4 you can access the blockchain frontend and blockchain client dashboards from your browser by going to localhost 5000 and localhost 8080 visit my blog http adilmoujahid com | blockchain |
|
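The entry above describes a proof-of-work blockchain. The sketch below shows the core PoW loop only; it is not the tutorial's blockchain.py, and the block layout and difficulty value are assumptions chosen for demonstration.

```python
# Illustrative proof-of-work sketch -- not the blockchain.py from the tutorial above;
# block layout and DIFFICULTY are assumptions for demonstration only.
import hashlib
import json

DIFFICULTY = 4  # number of leading zeros required in the block hash

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(previous_hash: str, transactions: list) -> dict:
    block = {"previous_hash": previous_hash, "transactions": transactions, "nonce": 0}
    # increment the nonce until the hash meets the difficulty target
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block

genesis = mine("0" * 64, [])
block1 = mine(block_hash(genesis), [{"sender": "alice", "recipient": "bob", "amount": 1}])
print(block_hash(block1))
```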
IoT-Security-Verification-Standard-ISVS | iot security verification standard isvs creative commons license https licensebuttons net l by sa 4 0 88x31 png https creativecommons org licenses by sa 4 0 cc by sa 4 0 document build status https github com owasp iot security verification standard isvs workflows document 20build badge svg https github com owasp iot security verification standard isvs actions workflow 3a document build slack https img shields io badge chat 20on slack 46bc99 svg https owasp slack com messages project isvs the owasp internet of things security verification standard isvs is a community effort to establish an open standard of security requirements for internet of things iot ecosystems the requirements provided by the isvs can be used at many stages during the development life cycle including design development and testing of iot ecosystems iot ecosystems are often complex collections of many interconnected systems some of these interconnected systems are iot systems containing connected devices and their components both software and hardware others examples of systems in iot ecosystems are web or mobile applications and cloud components the isvs focuses on providing security requirements for iot systems and their components iot hardware software embedded applications and communication protocols it also provides some general requirements for the iot ecosystems in which iot systems reside while referring to existing industry accepted standards as much as possible peer review requested the first version of the owasp isvs is ready for a peer review pre release 1 0rc can be acquired in the following formats the releases page https github com owasp iot security verification standard isvs releases contains pdf epub docx csv json and xml versions of the standard the latest version of the main branch can be read on gitbook https owasp isvs gitbook io owasp isvs pr read individual sections of the isvs below frontispiece https github com owasp iot security verification standard isvs blob master en 0x01 frontispiece md using the isvs https github com owasp iot security verification standard isvs blob master en using isvs md v1 iot ecosystem requirements https github com owasp iot security verification standard isvs blob master en v1 iot ecosystem requirements md v2 user space application requirements https github com owasp iot security verification standard isvs blob master en v2 user space application requirements md v3 software platform requirements https github com owasp iot security verification standard isvs blob master en v3 software platform requirements md v4 communication requirements https github com owasp iot security verification standard isvs blob master en v4 communication requirements md v5 hardware platform requirements https github com owasp iot security verification standard isvs blob master en v5 hardware platform requirements md appendix a glossary https github com owasp iot security verification standard isvs blob master en appendix a glossary md how to contribute the isvs is an open source effort and we welcome contributions and feedback if you want to contribute additional content improve existing content or provide your feedback we suggest that you do so through the owasp isvs slack channel https owasp slack com messages project isvs details you can sign up here https owasp slack com join shared invite zt g398htpy az40hom1wuozgujkbblqkw through creating an issue https github com owasp iot security verification standard isvs issues or a pull request https 
github com owasp iot security verification standard isvs pulls before you start contributing please check our contribution guide https github com owasp iot security verification standard isvs blob master contributing md contribution guide which should get you started project leads aaron guzman mailto aaron guzman owasp org c dric bassem mailto cedric bassem owasp org | server |
|
CS188-Projects-2023Winter | ucla dlcv course project project page https ucladeepvision github io cs188 projects 2023winter instruction for running this site locally 1 follow the first 2 steps in pull request instruction pull request instruction md 2 installing ruby with version 3 0 0 if you are using a mac and ruby 2 7 should work for linux check https www ruby lang org en documentation installation for instruction 3 installing bundler and jekyll with gem install user install bundler jekyll bundler install bundle add webrick 4 run your site with bundle exec jekyll serve you should see an address pop on the terminal http 127 0 0 1 4000 cs188 projects 2023winter by default go to this address with your browser working on the project 1 create a folder with your team id under assets images your teamid you will use this folder to store all the images in your project 2 copy the template at posts 2021 01 18 team00 instruction to post md and rename it with format year month date yourteamid projectshortname md under posts for example 2021 01 27 team01 object detection md 3 check out the sample post we provide at https ucladeepvision github io cs188 projects 2023winter and the source code at https raw githubusercontent com ucladeepvision cs188 projects 2023winter main posts 2021 01 18 team00 instruction to post md as well as basic markdown syntax at https www markdownguide org basic syntax 4 start your work in your md file you may only edit the md file you just copied and renamed and add images to assets images your teamid please do not change any other files in this repo once you save the md file jekyll will synchronize the site and you can check the changes on browser submission we will use git pull request to manage submissions once you ve done follow steps 3 and 4 in pull request instruction pull request instruction md to make a pull request before the deadline please make sure not to modify any file except your md file and your images folder we will merge the request after all submissions are received and you should able to check your work in the project page on next week of each deadline deadlines you should make three pull requests before the following deadlines january 29 sunday each group should determine the topic and list the 3 most relevant papers and their code repo february 26 sunday each group should include technical details and algorithms code march 19 tentative sunday finalize blog article colab demo and recorded video kudos to tianpei https gutianpei github io who originally developed this site for cs 188 in winter 2022 | ai |
|
hawkbit | img src hawkbit logo png width 533 height 246 eclipse hawkbit update server eclipse hawkbit http www eclipse org hawkbit index html is an domain independent back end solution for rolling out software updates to constrained edge devices as well as more powerful controllers and gateways connected to ip based networking infrastructure build circle ci https circleci com gh eclipse hawkbit svg style shield https circleci com gh eclipse hawkbit quality gate status https sonarcloud io api project badges measure project org eclipse hawkbit 3ahawkbit parent metric alert status https sonarcloud io summary new code id org eclipse hawkbit 3ahawkbit parent maven central https img shields io maven central v org eclipse hawkbit hawkbit parent color blue https maven badges herokuapp com maven central org eclipse hawkbit hawkbit parent lines of code https img shields io badge dynamic xml svg label lines 20of 20code url https 3a 2f 2fwww openhub net 2fprojects 2fhawkbit xml 3fapi key 3d30bc3f3fad087c2c5a6a67a8071665ba0fbe3b6236ffbf71b7d20849f4a5e35a query 2fresponse 2fresult 2fproject 2fanalysis 2ftotal code lines colorb lightgrey https www openhub net p hawkbit docker docker https img shields io docker v hawkbit hawkbit update server latest color blue https hub docker com r hawkbit hawkbit update server docker mysql https img shields io docker v hawkbit hawkbit update server latest mysql color blue https hub docker com r hawkbit hawkbit update server documentation see hawkbit documentation https www eclipse dev hawkbit contact us having questions about hawkbit check stack overflow https stackoverflow com questions tagged eclipse hawkbit want to chat with the team behind hawkbit join the chat at https gitter im eclipse hawkbit https badges gitter im eclipse hawkbit svg https gitter im eclipse hawkbit utm source badge utm medium badge utm campaign pr badge utm content badge having issues with hawkbit open a github issue https github com eclipse hawkbit issues you can also check out our project homepage https www eclipse dev hawkbit for further contact options examples and extensions next to the hawkbit core hosted here the project maintains as well examples https github com eclipse hawkbit examples and extension https github com eclipse hawkbit extensions repositories hawkbit sandbox we offer a sandbox installation that is free for everyone to try out hawkbit however keep in mind that the sandbox database will be reset from time to time it is also not possible to upload any artifacts into the sandbox but you can use it to try out the management ui management api and ddi api keep in mind as well that you are not permitted to store any kind of personal data in the sandbox https hawkbit eclipseprojects io ui login https hawkbit eclipseprojects io ui login in addition the following vendors offer free trial accounts for their hawkbit compatible products bosch iot rollouts https developer bosch iot suite com service rollouts kynetics update factory https www kynetics com iot platform update factory device integration client libraries hawkbit exposes http json based direct device integration api api https www eclipse org hawkbit apis ddi api that allow any update client to integrate quite easily the eclipse hara subproject https projects eclipse org projects iot hawkbit hara aims to provide a reference agent software implementation of the eclipse hawkbit device api the hara ddiclient repository https github com eclipse hara ddiclient provides a kotlin library that facilitates and speeds up the development 
of ddi api clients running on the jvm a virtual device application which provides a reference example on how to use the library a configurable virtual device that can be used for different testing scenarios the hara ddiclient library has reached version 2 x https github com eclipse hara ddiclient releases and has been successfully used in production for years additionally the hawkbit project has the long term goal to provide eclipse hono https github com eclipse hono integration which will provide connectivity through various iot protocols and as a result will allow a wide range of clients to connect to hawkbit other open source hawkbit clients there are clients outside of the eclipse iot eco system as well e g swupdate https github com sbabic swupdate which is a linux update agent with focus on a efficient and safe way to update embedded systems rauc hawkbit updater https github com rauc rauc hawkbit updater which is a hawkbit client for the rauc https github com rauc rauc update framework written in c glib rauc hawkbit https github com rauc rauc hawkbit which is a python based hawkbit client demo application and library for the rauc https github com rauc rauc update framework hawkbit rs https github com collabora hawkbit rs provides a couple of rust https www rust lang org crates to help implement https crates io crates hawkbit and test https crates io crates hawkbit mock hawkbit clients zephyr rtos https docs zephyrproject org apidoc latest group hawkbit html details the zephyr os is a small footprint kernel designed for use on resource constrained and embedded systems from simple embedded environmental sensors and led wearables to sophisticated embedded controllers smart watches and iot wireless applications chirpstack https www chirpstack io docs chirpstack gateway os use software update html chirpstack gateway os uses swupdate https github com sbabic swupdate for handling updates which can be integrated with eclipse hawkbit chirpstack is an open source lorawan network server which can be used to to setup private or public lorawan networks runtime dependencies and support java runtime environment 17 sql database database h2 mysql mariadb ms sql server postgresql ibm db2 ddls maintained by project white check mark white check mark white check mark white check mark white check mark test dependencies defined white check mark white check mark white check mark white check mark versions tested 2 1 mysql 8 0 23 aws aurora ms sql server 2017 2019 postgresql 12 13 db2 server v11 1 docker image with driver provided white check mark white check mark tag mysql white check mark white check mark jdbc driver h2 2 1 214 https github com h2database h2database mariadb connector j 2 7 8 https github com mariadb mariadb connector j mssql jdbc 10 2 3 jre8 https github com microsoft mssql jdbc postgresql jdbc driver 42 3 8 https github com pgjdbc pgjdbc status test dev production grade production grade test dev test dev optional rabbitmq 3 6 3 7 3 8 getting started we are providing a spring boot https projects spring io spring boot based reference update server hawkbit runtime hawkbit update server including embedded h2 db for test and evaluation purposes run with docker bash docker run d p 8080 8080 hawkbit hawkbit update server open the update server in your browser localhost 8080 http localhost 8080 see below for how to build and run the update server on your own in addition we have a guide https www eclipse org hawkbit guides runhawkbit for setting up a complete landscape note this docker image 
supports both ddi and dmf apis however in order to have dmf api working you shall have started additionally rabbitmq on localhost 5672 with user guest guest then the dmf will use vhost see more at guide https www eclipse org hawkbit guides runhawkbit configure rabbitmq connection settings hawkbit spring boot starters next to the update server hawkbit runtime hawkbit update server we are also providing a set of spring boot starters hawkbit starters to quick start your own spring boot https projects spring io spring boot based application clone build and run hawkbit build and start hawkbit update server hawkbit runtime hawkbit update server bash git clone https github com eclipse hawkbit git cd hawkbit mvn clean install java jar hawkbit runtime hawkbit update server target hawkbit update server version jar start hawkbit device simulator https github com eclipse hawkbit examples tree master hawkbit device simulator optional bash git clone https github com eclipse hawkbit examples git cd hawkbit examples mvn clean install bash java jar hawkbit device simulator target hawkbit device simulator version jar generate getting started data with the management api example https github com eclipse hawkbit examples tree master hawkbit example mgmt simulator optional bash java jar hawkbit example mgmt simulator target hawkbit example mgmt simulator version exec jar status and api stability hawkbit is currently in 0 x semantic version that is due to the need that there is still content in hawkbit that is in need for refactoring that includes the maven module structure spring boot properties spring boot auto configuration as well as internal java apis e g the repository api https github com eclipse hawkbit issues 197 however the device facing ddi api https github com eclipse hawkbit tree master hawkbit rest hawkbit ddi api is on major version v1 and will be kept stable server facing and dmf api https github com eclipse hawkbit tree master hawkbit dmf hawkbit dmf api are management api https github com eclipse hawkbit tree master hawkbit rest hawkbit mgmt api are on v1 as well however we cannot fully guarantee the same stability during hawkbit s 0 x development but we will try as best we can | eclipseiot iot software-provisioning internet-of-things | server |
AniList-API-Project | anilist api project a data engineering project api calls data exploration cleaning extract transform and load json data database creation disclaimer this project is not meant to be used as an anime recommendation system only for tutorial purposes not attempting to compete with anilist | server |
|
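The entry above describes an extract-transform-load pipeline over API JSON into a database. The sketch below illustrates that flow with a hypothetical payload and a local SQLite table; the field names and schema are assumptions, not the AniList API's actual response format.

```python
# ETL sketch: hypothetical JSON payload -> cleaned rows -> SQLite table.
# Field names and schema are made up for illustration.
import json
import sqlite3

raw = json.loads('[{"id": 1, "title": "Example Anime", "score": 82},'
                 ' {"id": 2, "title": "Another Show", "score": null}]')

# transform: drop rows with missing scores, normalise score to a 0-10 scale
clean = [(r["id"], r["title"], r["score"] / 10) for r in raw if r["score"] is not None]

# load
con = sqlite3.connect("anime.db")
con.execute("CREATE TABLE IF NOT EXISTS anime (id INTEGER PRIMARY KEY, title TEXT, score REAL)")
con.executemany("INSERT OR REPLACE INTO anime VALUES (?, ?, ?)", clean)
con.commit()
con.close()
```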
boring-wozniak | welcome to project boring wozniak table of contents 1 table of contents table of contents 2 introduction introduction 3 authors authors 4 the case study the case study 5 the data the data 5 why is wozniak boring why is wozniak boring introduction this repository is the authors implementation for the projects developed during the database 2 https en didattica unipd it off 2021 lm in in2547 004pd inq0091645 n0 course taught at the department of information engineering https www unipd it en dei at the university of padova https www unipd it en the goal of the projects is to boost soft skills like teamwork and collaboration project deadlines are aligned with the schedule and contents of the lectures so that students can immediately apply the learned concepts to a case study of their own interests authors aliia sultanova mailto aliia sultanova studenti unipd it michele canale mailto michele canale 1 studenti unipd it odai mohammad mailto odai mohammad studenti unipd it yongxiang ji mailto yongxiang ji studenti unipd it the case study the authors case study is modeling the transportation network in italy the data used is obtained from openstreetmap this data can be obtained from geofabrik s free download server https download geofabrik de index html the data the modeled data can be obtained from geofabrik s free download server https download geofabrik de index html more specifically the data for italy s transportation network can be downloaded from this link https download geofabrik de europe italy latest osm pbf why is wozniak boring when a container is created using docker docker provides a way to name the container using the name container name flag however if no name is provided by the user while creating running a docker container docker automatically assigns the container a name digging into the source code of docker on github we come across the algorithm used to generate the names the code is written in the file names generator test go https github com docker docker ce blob master components engine pkg namesgenerator names generator go at line 841 we have a function getrandomname that generates the name of the docker container func getrandomname retry int string begin name left rand intn len left right rand intn len right nolint gosec g404 use of weak random number generator math rand instead of crypto rand if name boring wozniak steve wozniak is not boring goto begin if retry 0 name strconv itoa rand intn 10 nolint gosec g404 use of weak random number generator math rand instead of crypto rand return name looking at this snippet of the function if name boring wozniak steve wozniak is not boring goto begin our friends at docker think that steve wozniak is not boring we agree but we couldn t resist throwing an inside joke into our project | server |
|
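The entry above quotes the Go getRandomName function from Docker's names generator. The Python sketch below re-creates the described logic (regenerate on "boring_wozniak", append a digit when retrying); the adjective and surname lists are tiny stand-ins for Docker's full tables.

```python
# Re-creation of the name-generation logic described above -- illustrative only.
import random

LEFT = ["boring", "clever", "elated"]    # sample adjectives
RIGHT = ["wozniak", "hopper", "turing"]  # sample surnames

def get_random_name(retry: int = 0) -> str:
    while True:
        name = f"{random.choice(LEFT)}_{random.choice(RIGHT)}"
        if name != "boring_wozniak":  # "Steve Wozniak is not boring"
            break
    if retry > 0:
        name += str(random.randint(0, 9))  # disambiguate on retry, as in the Go code
    return name

print(get_random_name())
```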
SAB_Proj | sab proj school project for the sab softverski alati baza podataka sab en database software tools course at the school of electrical engineering etf university of belgrade uni the goal of the project was to build an information system for package transport the operations that can be executed in the system are defined by the interfaces described in the project statement statement and the provided javadoc the database itself was modeled and forward engineered using the erwin data modeler erwin site and the additional stored procedures and triggers were written by hand the database management system used in the project is microsoft s sql server mssql which is why the scripts were written in t sql dependencies the main part of the project is a java eclipse project so in order to run you need to have jre jre or jdk jdk installed and preferably eclipse eclipse to view and edit the database model you will need to have the erwin data modeler erwin site installed the sql scripts are written for microsoft s sql server mssql so you will need to install it or create a sql database instance on azure azure database code structure the project consists of several parts the project statement in serbian sr statement and english statement along with the javadocs for the api that the project needed to implement is located in the docs folder erwin database model located in the erwin folder t sql scripts for creating the database pre erwin sql creating the tables generated from erwin model creating stored procedures and triggers are all merged into a single script pn140041 sql as requested by the project statement there is also the purge sql script for purging the contents of the sql server created by the other scripts all the scripts are located in the scripts folder the java eclipse project with the actual app that interacts with the database is located in the sab proj folder database diagram below is a diagram which shows the final version of the database relationships between tables database diagram https user images githubusercontent com 18459277 59161123 c8569100 8ade 11e9 90fb e1cf089cef70 png sab http si4sab etf rs etf https www etf bg ac rs en uni http www bg ac rs en statement docs sab homework 1718 pdf erwin site https erwin com products data modeler mssql https www microsoft com en us sql server jre https www oracle com technetwork java javase downloads jre8 downloads 2133155 html jdk https www oracle com technetwork java javase downloads jdk8 downloads 2133151 html eclipse https www eclipse org downloads packages azure database https azure microsoft com en in services sql database sr statement docs sab domaci 1718 pdf | sql t-sql java database database-management school-project | server |
LispForTheWeb | lisp for the web this is the source code accompanying my book lisp for the web https leanpub com lispweb a common lisp web development tutorial you can get the book at leanpub https leanpub com lispweb i hope you like it and may the parentheses http xkcd com 297 be with you lisp for the web doc imgs title page jpg organization there are three versions of the source code 1 web with proto backend lisp this is the initial code developed with a prototypic in memory backend in the tutorial we migrate the code to a persistent storage 2 web with persistent backend lisp the same code but backed by a persistent storage in the tutorial i illustrate how to integrate mongodb http www mongodb org in common lisp 3 map reduce in mongo lisp a minimalistic version of retro games used to illustrate the mapreduce algorithm invoked on the mongo database node | front_end |
|
PVSystems | pvsystems a modelica library for photovoltaic system and power converter design overview pvsystems is a modelica https modelica org library providing models useful for the design and evaluation of photovoltaic systems and power converters as well as their associated control algorithms for more information check out the online documentation https raulrpearson github io pvsystems img src https raw githubusercontent com raulrpearson pvsystems master pvsystems resources images screenshot diagram svg width 45 img src https raw githubusercontent com raulrpearson pvsystems master pvsystems resources images screenshot plot svg width 45 the library is the result of a research project carried out in the form of a master s degree thesis pvsystems thesis pdf there are two intended audiences for the library users the library is intended to be rich enough in component and subsystem models that it proves useful for those interested in designing and evaluating photovoltaic systems power converters and their associated control algorithms check out the usage section download and usage to learn more developers the library is also intended to explore and showcase best practices for the development of modelica libraries many of these best practices are inspired or taken from other modelica libraries on github https github com raulrpearson language modelica tab stars and from the excellent modelica by example http book xogeny com the library provides models in the following categories control based on the interfaces provided in modelica blocks https github com modelica modelica blob release modelica 203 2 2 blocks interfaces mo common blocks used in the control of power converters including park and clarke transforms space vector modulation and grid synchronization blocks electrical based on the interfaces provided in modelica electrical analog https github com modelica modelica blob release modelica 203 2 2 electrical analog interfaces mo common electrical models including pv arrays energy storage devices power converters transformers loads and other grid elements the library features both switched and averaged models of power converters examples a comprehensive set of examples will be provided to showcase the capabilities and explain the use of the library download and usage you can grab a copy of the library by clonning the repository or downloading a zip of the latest commit https github com raulrpearson pvsystems archive master zip if you want to stay up to date with development you can watch the project https github com raulrpearson pvsystems subscription if you have a github account or you can subscribe to the commits feed https github com raulrpearson pvsystems commits master atom the library can be used inside tools like dymola http www 3ds com products services catia products dymola or openmodelica https openmodelica org to create models of pv systems these same tools can be used in conjuntion with other tools supporting the fmi standard https fmi standard org for model exchange and co simulation for example a pv system model developed in openmodelica using this library could then be used to validate a control algorithm developed in matlab simulink or labview contributing if you have any questions comments suggestions ideas or feature requests please do share those as well as any mistakes or bugs you might discover you can open an issue in the issues https github com raulrpearson pvsystems issues section of the repository or if you prefer contact me by email mailto raulrpearson proton me 
contributions in the form of pull requests https github com raulrpearson pvsystems pulls are always welcome license pvsystems is licensed under the mit license see license md license md for the full license text | modelica-library photovoltaic power-converters modeling simulation | os |
core | ark core p align center img src https raw githubusercontent com arkecosystem core master banner png p license mit https badgen now sh badge license mit green https opensource org licenses mit note please checkout mainsail https github com arkecosystem mainsail for the next generation of the ark core blockchain protocol featuring a new dpos consensus engine that is more reliable and efficient currently in development introduction this repository contains all plugins that make up the ark core check our dedicated documentation site https ark dev docs core for information about all available plugins and how to write a core plugin https ark dev docs core development plugins intro if you want to get started with developing your own plugins documentation development https ark dev docs core installation intro docker https ark dev docs core development docker api documentation api v2 https ark dev docs api security if you discover a security vulnerability within this package please send an e mail to security ark io all security vulnerabilities will be promptly addressed credits this project exists thanks to all the people who contribute contributors license mit license ark ecosystem https ark io | blockchain ark nodejs crypto dpos smartbridge typescript modular-architecture | blockchain |
CV | opencv pyqt5 dip go https github com lrioxh cv tree main dip gui https www bilibili com video bv1oz4y1y7dp unit1 unit2 unit3 unit4 unit5 unit6 dip main py go https github com lrioxh cv tree main scanner https www bilibili com video bv1rz4y1c7rg pc scanner main py sfm pmvs go https github com lrioxh cv tree main tdre openmvg cmvs pmvs https yongqi blog csdn net article details 110252018 pc flask | ai |
|
LLMDet | llmdet a third party large language models generated text detection tool paper https arxiv org abs 2305 15004 llmdet is a text detection tool that can identify which generated sources the text came from e g large language model or human write the core idea of the detection algorithm is to use the n grams probability sampled from specified language model to calculate proxy perplexity of large language models and use the proxy perplexity as a feature to train a text classifier features we believe that a practical llm detection tool needs to have the following capabilities which is also the goal of our llmdet 1 specificity our project aims to distinguish between different large language models and human generated text for example llmdet can tell you whether the text is generated by gpt 2 or opt or a human and give each a specific probability 2 safty our project does not need to require running large language models locally that is we can act as a third party authentication agent without maintaining large language models which may be fixed assets or sensitive information for large companies 3 efficiency our method detects very fast this is because we don t need to infer from large language models 4 extendibility our project can easily adapt to newly proposed large language models installation notes a package for large language model generated text detection tool before you go ahead with the installation of this toolkit please execute the following command to make sure that the version of transformers 4 29 0 shell pip install git https github com liadrinz transformers unilm pip install git https github com huggingface transformers code is compatible with python 3 8 fully automatic installation pip install llmdet semi automatic installation first download http pypi python org pypi llmdet decompress and run python setup py install see requirements txt for dependent python packages main functions currently llmdet offers two functions 1 detection it is supported to determine whether the given text comes from gpt 2 llama bart opt unilm t5 bloom gpt neo or human write 2 extendibility it allows model owners to extend the detection capability of llmdet to novel models examples python import llmdet llmdet load probability text the actress was honoured for her role in the reader at the annual ceremony which was held at the royal albert hall the film which is based on the novel by the same name by philip roth tells the story of a new york times reporter who returns to his hometown to cover the death of his brother in law winslet plays his wife with whom he has been divided since the death of their son nin the film winslet plays the mother of the grieving brother in law nthe actress also won a golden globe for her role in the film at the ceremony in november nwinslet was also nominated for an oscar for her role in the reader nthe 63 year old winslet was seen accepting her awards at the ceremony where she was joined by her husband john krasinski who has been nominated for best supporting actor in the film nwinslet and krasinski met while detect text is a string or string list result llmdet detect text print result detection results json opt 0 5451331013247862 gpt 2 0 4393605735865629 unilm 0 012642800848279893 t5 0 0022592730436008556 bloom 0 00025873253035729044 gpt neo 0 0002520776780109571 llama 6 0459794454546154e 05 human write 1 9576671778802474e 05 bart 1 3404522168622544e 05 todo list improve the algorithm performance and efficiency citation if you use llmdet in your research please use the 
following bibtex entry bibtex misc wu2023llmdet title llmdet a third party large language models generated text detection tool author kangxi wu and liang pang and huawei shen and xueqi cheng and tat seng chua year 2023 eprint 2305 15004 archiveprefix arxiv primaryclass cs cl | large-language-models llm generated-text-detection | ai |
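The entry above explains that LLMDet scores text with a "proxy perplexity" computed from pre-sampled n-gram probabilities and feeds that score to a classifier. The sketch below shows the perplexity calculation in simplified form; it is not LLMDet's implementation, and the bigram probability table and back-off value are toy assumptions.

```python
# Simplified proxy-perplexity sketch: score a text against pre-sampled n-gram
# probabilities instead of running the full model. Not LLMDet's implementation.
import math

# hypothetical bigram probabilities sampled offline from some language model
bigram_prob = {("the", "actress"): 0.02, ("actress", "was"): 0.15, ("was", "honoured"): 0.01}
UNSEEN = 1e-6  # back-off probability for bigrams missing from the table

def proxy_perplexity(tokens):
    pairs = list(zip(tokens, tokens[1:]))
    log_sum = sum(math.log(bigram_prob.get(pair, UNSEEN)) for pair in pairs)
    return math.exp(-log_sum / max(len(pairs), 1))

print(proxy_perplexity("the actress was honoured".split()))
```

One such score per candidate source model becomes a feature vector, which is what the text classifier in the entry is trained on.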
Anonymer | anonymer redis https img shields io badge redis 23dd0031 svg style for the badge logo redis logocolor white net https img shields io badge net 5c2d91 style for the badge logo net logocolor white react https img shields io badge react 2320232a svg style for the badge logo react logocolor 2361dafb anonymer is a student project done for the advanced databases subject at the faculty of electronic engineering university of nis contributors student id emilija ojba i 18026 matija peleti 18043 or e anti 17544 a href https github com djoleant internclix graphs contributors img src https contrib rocks image repo djoleant internclix a stack redis asp net webapi react js quickstart bash clone repository git clone https github com djoleant anonymer git cd anonymer server start cd anonymer anonymer dotnet watch run client start cd anonymerapp npm install npm start start redis in docker docker run name redis 6opz p 49153 6379 d redis backend structure key value next post id global id for a confession post next category id global id for a category next person id global id for a person categories all set of ids of all categories category id posts list of ids of posts in that category category id postssorted sorted set of ids of posts in that category category id name name of the category post id post the post object for that id post id comments list of ids of a post s comments post id upvotes set of ids of persons who upvoted post id downvotes set of ids of persons who downvoted comment id comment the comment object comment id upvotes set of ids of persons who upvoted comment id downvotes set of ids of persons who downvoted person id username the person s username person id posts list of ids of that person s posts entities post string text datetime time string author id int upvotes int downvotes string categoryid person string id string username category comment string text int upvotes int downvotes string author id datetime time | dotnet nosql nosql-database react redis | server |
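The entry above lays out a Redis key/value schema for posts, categories, comments and persons. The redis-py sketch below mirrors that layout against the Docker port shown in the entry; the exact key name formats are paraphrased from the description and may differ from the project's real strings, and the project itself is implemented in .NET rather than Python.

```python
# Sketch of the described Redis key layout using redis-py; key formats are paraphrased
# assumptions, and a Redis server is expected on the port mapped by the docker command above.
import redis

r = redis.Redis(host="localhost", port=49153, decode_responses=True)

post_id = r.incr("next_post_id")                      # global post id counter
r.hset(f"post:{post_id}", mapping={"text": "hello", "author_id": "1",
                                   "upvotes": 0, "downvotes": 0})
r.rpush("category:1:posts", post_id)                  # post ids in category 1
r.zadd("category:1:posts_sorted", {post_id: 0})       # sorted set of post ids
r.sadd(f"post:{post_id}:upvotes", "person:1")         # persons who upvoted the post
```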
DJI-Tello-Drone-ComputerVision | dji tello drone computervision this is a computervision control algorithm which forces a dji tello drone to center the camera according to your face position dependencies this algorithm uses br djitellopy ver 1 5 by https github com damiafuentes opencv python ver 4 1 0 25 install with br bash b pip3 install r requirements txt b usage run it with br bash b python3 facedetection py haarcascade frontalface default xml b videos all my test videos will be uploaded on my youtube channel and i will list them here first test no pid no face logging https www youtube com watch v wfz5i1irhly limitations it still has some limitations such as slow movements setpoint oscillations and sometimes it detects faces that do not exist this is a problem because the algorithm adjust its movement according to every face in the frame to do i m working in br adding a pid algorithm for position control in order to limit oscillation and adjust drone speed br adding a face tracking with id assignment in order to follow just one given face | ai |
|
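The entry above describes centring a Tello drone's camera on a detected face. The sketch below shows the detection-and-offset step only (the error a PID controller would act on); it is not the repository's facedetection.py, and it reads a webcam instead of the Tello video stream.

```python
# Face-centring sketch: detect a face and compute its offset from the frame centre.
# Not the repository's facedetection.py; uses a webcam, not the Tello stream.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    h, w = frame.shape[:2]
    for (x, y, fw, fh) in faces:
        error_x = (x + fw // 2) - w // 2   # > 0: face is right of centre -> yaw right
        error_y = (y + fh // 2) - h // 2   # > 0: face is below centre -> move down
        print(error_x, error_y)
cap.release()
```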
OBD | obd description a complete repository for the results of our collaboration for two study semesters in kpi 3rd semester databases basics erp systems definition working principles classification and examples of existing systems also an analysis of the advantages and disadvantages of these systems the purpose of the study was to select a prototype of the erp system that would fit the university s management as a result we created web application to solve the problem of our university management system 4th semester software engineering data mining and related scientific fields types of mining and their description scope etc in this study we have analyzed the methods of data mining with vizualization picked up a list of existing projects that work with different types of mining and analyzed their advantages and disadvantages files in repository erp 3th semester database basics archive directory for 3rd semester project erp website web app directory with back end part it is a project powered by django framework res directory that contains front end part of web app and all resources which relate to the topic of erp systems erp systems md summary report with the result of our erp systems research res resource directory for data mining project contains all pictures presentations and short reports on intermediate topics data mining md summary report of data mining topic research data mining pptx general presentation | server |
|
blockchain_consensus_algorithm | bug pow https github com corgi kx blockchain consensus algorithm tree master pow pos https github com corgi kx blockchain consensus algorithm tree master pos dpos https github com corgi kx blockchain consensus algorithm tree master dpos pbft https github com corgi kx blockchain consensus algorithm tree master pbft raft https github com corgi kx blockchain consensus algorithm tree master raft | pow pos dpos pbft raft blockchain | blockchain |
EE445L | ee445l embedded systems design lab xinyuan allen pan paris kaman | os |
|
Altschool-cloud-exercises | altschool cloud engineering exercises documenting my cloud engineering journey at altschool africa table of contents exercise 1 exercise 1 task setup ubuntu 20 04 on your local machine using vagrant instruction customize your vagrantfile as necessary with private network set to dhcp once the machine is up run ifconfig and share the output in your submission along with your vagrantfile in a folder for this exercise exercise 2 exercise 2 task research online for 10 more linux commands aside the ones already mentioned in this module submit using your altschool cloud exercises project explaining what each command is used for with examples of how to use use each and example screenshots of using each of them instruction submit your work in a folder for this exercise in your altschool cloud exercises project you will need to learn how to embed images in the markdown files exercise 3 exercise 3 task create 2 vagrants machines a and b on the same private network create an ssh key and log into b from a using the ssh key instruction submit your script exercise 4 exercise 4 task create 3 groups admin support engineering and add the admin group to sudoers create a user in each of the groups generate ssh keys for the user in the admin group instruction submit the content of etc passwd etc group and etc sudoers exercise 5 exercise 5 task install php7 4 on your local linux machine using the ppa ondrej php package repo instruction learn how to use the add apt repository command submit the content of etc apt sources list and the output of php v command exercise 6 exercise 6 task you already have a github account also setup a gitlab account if you you don t have one already you already have a altschool cloud exercises project clone the project to your local system setup your name and email in git global config instruction submit the output git config l git remote v git log exercise 7 exercise 7 task review the cis benchmark for ubuntu and try to implement at least 10 of the recommendations that we made within the benchmark instruction n a exercise 8 exercise 8 task create a bash script to run at every hour saving system memory ram usage to a specified file and at midnight it sends the content of the file to a specified email address then starts over for the new day instruction submit the content of your script cronjob and a sample of the email sent all in the folder for this exercise exercise 9 exercise 9 task create a ansible playbook to setup a server with apache the server should be set to africa lagos timezone host an index php file with the following content as the main file on the server php date f d y h i s a e time instruction submit the ansible playbook the output of the systemctl status apache2 after deploying the playbook and a screenshot of the rendered page exercise 10 exercise 10 task 193 16 20 35 29 what is 1 network ip 2 number of hosts 3 range of ip addresses 4 broadcast ip from the this subnet instruction submit all your answers as a markdown file in the for this exercise mini project mini project | cloud |
|
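Exercise 10 in the entry above asks for the network IP, host count, address range, and broadcast IP of 193.16.20.35/29. The standard-library snippet below computes all four; it is a quick way to check the answers rather than the exercise's expected manual working.

```python
# Subnet answers for exercise 10 via the standard-library ipaddress module.
import ipaddress

net = ipaddress.ip_network("193.16.20.35/29", strict=False)
hosts = list(net.hosts())
print("network IP:   ", net.network_address)
print("usable hosts: ", net.num_addresses - 2)          # a /29 leaves 6 usable addresses
print("host range:   ", hosts[0], "-", hosts[-1])
print("broadcast IP: ", net.broadcast_address)
```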
c_steady_term | c steady term steady terminal with frame buffer designed for embedded systems send your frame buffer over serial port and use vt100 compatible terminal as simple display | os |
|
BlueWallet | bluewallet a bitcoin lightning wallet github tag https img shields io badge dynamic json svg url https raw githubusercontent com bluewallet bluewallet master package json query version label version https github com bluewallet bluewallet circleci https circleci com gh bluewallet bluewallet svg style svg https circleci com gh bluewallet bluewallet code style prettier https img shields io badge code style prettier ff69b4 svg style flat square https github com prettier prettier https img shields io github license bluewallet bluewallet svg thin bitcoin wallet built with react native and electrum appstore https bluewallet io uploads app store badge blue svg https itunes apple com us app bluewallet bitcoin wallet id1376878040 l ru ls 1 mt 8 playstore https bluewallet io uploads play store badge blue svg https play google com store apps details id io bluewallet bluewallet website bluewallet io https bluewallet io community telegram group https t me bluewallet private keys never leave your device lightning network supported segwit first replace by fee support encryption plausible deniability and many more features https bluewallet io features img src https i imgur com hhyjnmj png width 100 build run it please refer to the engines field in package json file for the minimum required versions of node and npm it is preferred that you use an even numbered version of node as these are lts versions to view the version of node and npm in your environment run the following in your console node version npm version in your console git clone https github com bluewallet bluewallet git cd bluewallet npm install please make sure that your console is running the most stable versions of npm and node even numbered versions to run on android you will now need to either connect an android device to your computer or run an emulated android device using avd manager which comes shipped with android studio to run an emulator using avd manager 1 download and run android studio 2 click on open an existing android studio project 3 open build gradle file under bluewallet android folder 4 android studio will take some time to set things up once everything is set up go to tools avd manager this option may take some time to appear in the menu https stackoverflow com questions 47173708 why avd manager options are not showing in android studio if you re opening the project in a freshly installed version of android studio 5 click on create virtual device and go through the steps to create a virtual device 6 launch your newly created virtual device by clicking the play button under actions column once you connected an android device or launched an emulator run this npx react native run android the above command will build the app and install it once you launch the app it will take some time for all of the dependencies to load once everything loads up you should have the built app running to run on ios npx pod install npm start in another terminal window within the bluewallet folder npx react native run ios to run on macos using mac catalyst npm run maccatalystpatches once the patches are applied open xcode and select my mac as destination tests bash npm run test license mit want to contribute grab an issue from the backlog https github com bluewallet bluewallet projects 1 try to start or submit a pr any doubts we will try to guide you contributors have a private telegram group request access by email bluewallet bluewallet io translations we accept translations via transifex https www transifex com bluewallet bluewallet 
to participate you need to 1 sign up to transifex 2 find bluewallet project 3 send join request 4 after we accept your request you will be able to start translating that s it please note the values in curly braces should not be translated these are the names of the variables that will be inserted into the translated string for example the original string number of total in russian will be number total transifex automatically creates pull request when language reaches 100 translation we also trigger this by hand before each release so don t worry if you can t translate everything every word counts q a builds automated and tested with browserstack a href https www browserstack com img src https i imgur com syschcn png width 160px a bugs reported via bugsnag a href https www bugsnag com img src https images typeform com images qkuaassrfcq7 image default width 160px a responsible disclosure found critical bugs vulnerabilities please email them bluewallet bluewallet io thanks | bitcoin blockchain cryptocurrency reactnative react-native bitcoinjs | blockchain |
electronicinformationtechnology | electronicinformationtechnology electronic information technology | server |
|
Deep_Learning_Machine_Learning_Stock | contributors contributors shield contributors url forks forks shield forks url stargazers stars shield stars url issues issues shield issues url mit license license shield license url linkedin linkedin shield linkedin url a href https www buymeacoffee com lastancientone target blank img src https cdn buymeacoffee com buttons v2 default yellow png alt buy me a coffee style height 60px important width 217px important a markdown links images https www markdownguide org basic syntax reference style links contributors shield https img shields io github contributors lastancientone deep learning machine learning stock svg style for the badge contributors url https github com lastancientone deep learning machine learning stock graphs contributors forks shield https img shields io github forks lastancientone deep learning machine learning stock svg style for the badge forks url https github com lastancientone deep learning machine learning stock network members stars shield https img shields io github stars lastancientone deep learning machine learning stock svg style for the badge stars url https github com lastancientone deep learning machine learning stock stargazers issues shield https img shields io github issues lastancientone deep learning machine learning stock svg style for the badge issues url https github com lastancientone deep learning machine learning stock issues license shield https img shields io github license lastancientone deep learning machine learning stock svg style for the badge license url license linkedin shield https img shields io badge linkedin black svg style for the badge logo linkedin colorb 555 linkedin url https linkedin com in tin hang img src dl title png h1 align center deep learning and machine learning for stock predictions h1 description this is a comprehensive study and analysis of stocks using deep learning dl and machine learning ml techniques both machine learning and deep learning are types of artificial intelligence ai the objective is to predict stock behavior by employing various machine learning and deep learning algorithms the focus is on experimenting with stock data to understand how and why certain methods are effective as well as identifying reasons for their potential limitations different stock strategies are explored within the context of machine learning and deep learning technical analysis and fundamental analysis are utilized to predict future stock prices using these ai techniques encompassing both long term and short term predictions machine learning is a branch of artificial intelligence that involves the development of algorithms capable of automatically adapting and generating outputs by processing structured data on the other hand deep learning is a subset of machine learning that employs similar algorithms but with additional layers of complexity enabling different interpretations of the data the network of algorithms used in deep learning is known as artificial neural networks which mimic the interconnectedness of neural pathways in the human brain deep learning and machine learning are powerful approaches that have revolutionized the ai landscape understanding the fundamentals of these techniques and the commonly used algorithms is essential for aspiring data scientists and ai enthusiasts regression as a fundamental concept in predictive modeling plays a crucial role in analyzing and predicting continuous variables by harnessing the capabilities of these algorithms and techniques we can 
unlock incredible potential in various domains leading to advancements and improvements in numerous industries machine learning step by step 1 collecting gathering data 2 preparing the data load data and prepare it for the machine learning training 3 choosing a model 4 training the model 5 evaluating the model 6 parameter tuning 7 make a predictions deep learning model step by step 1 define the model 2 complie the model 3 fit the model with training dataset 4 make a predictions h3 align left programming languages and tools h3 p align left a a href https www python org target blank img src https raw githubusercontent com devicons devicon master icons python python original svg alt python width 50 height 50 a a href https nteract io target blank img src https avatars githubusercontent com u 12401040 s 200 v 4 alt nteract width 50 height 50 a a href https anaconda org target blank img src https www clipartkey com mpngs m 227 2271689 transparent anaconda logo png png alt anaconda width 50 height 50 a a href https www spyder ide org target blank img src https www pinclipart com picdir middle 180 1807410 spyder icon clipart png alt spyder width 50 height 50 a a href https jupyter org target blank img src https upload wikimedia org wikipedia commons 3 38 jupyter logo svg alt jupyter notebook width 50 height 50 a a href https notepad plus plus org target blank img src https logos download com wp content uploads 2019 07 notepad logo png alt notepad width 50 height 50 a p three main types of data categorical discrete and continuous variables 1 categorical variable qualitative label data or distinct groups example location gender material type payment highest level of education 2 discrete variable class data numerica variables but the data is countable number of values between any two values example customer complaints or number of flaws or defects children per household age number of years 3 continuous variable quantitative numeric variables that have an infinite number of values between any two values example length of a part or the date and time a payment is received running distance age infinitly accurate and use an infinite number of decimal places data use 1 for quantitative data is used with all three centre measures mean median and mode and all spread measures 2 for class data is used with median and mode 3 for qualitative data is for only with mode two types of problems 1 classification predict label 2 regression predict values bias variance tradeoff bias bias is the difference between our actual and predicted values bias is the simple assumptions that our model makes about our data to be able to predict new data assumptions made by a model to make a function easier to learn variance variance is opposite of bias variance is variability of model prediction for a given data point or a value that tells us the spread of our data if you train your data on training data and obtain a very low error upon changing the data and then training the same overfitting underfitting and the bias variance tradeoff overfitted is when the model memorizes the noise and fits too closely to the training set good fit is a model that learns the training dataset and genernalizes well with the old out dataset underfitting is when it cannot establish the dominant trend within the data as a result in training errors and poor performance of the model overfitting overfitting model is a good model with the training data that fit or at lease with near each observation however the model mist the point and random noise is 
capture inside the model the model have low training error and high cv error low in sample error and high out of sample error and high variance 1 high train accuracy 2 low test accuracy avoiding overfitting 1 early stopping stop the training before the model starts learning the noise within the model 2 training with more data adding more data will increase the accuracy of the modelor can help algorithms detect the signal better 3 data augmentation add clean and relevant data into training data 4 feature selection use important features within the data remove features 5 regularization reduce features by using regularization methods such as l1 regularization lasso regularization and dropout 6 ensemble methods combine predictions from multiple separate models such as bagging and boosting 7 increase training data good fit 1 high train accuracy 2 high test accuracy underfitting underfitting model is not perfect so it does not capture the underlying logic of the data therefore the model does not have strong predictive power with low accuracy the model have large training set error large in sample error and high bias 1 low train accuracy 2 low test accuracy avoiding underfitting 1 decrease regularization reduce the variance with a model by applying a penalty to the input parameters with the larger coefficients such as l1 regularization lasso regularization dropout etc 2 increase the duration of training extending the duration of training because stopping the training early will cause underfit model 3 feature selection not enough predictive features present then adding more features or features with greater importance would improve the model 4 increase the number of features performing feature engineering 5 remove noise from the data python reviews step 1 through step 8 is a review on python after step 8 everything you need to know is relates to data analysis data engineering data science machine learning and deep learning here the link to python tutorial python tutorial for stock analysis https github com lastancientone simplestockanalysispython list of machine learning algorithms for stock trading most common regression algorithms 1 linear regression model 2 logistic regression 3 lasso regression 4 support vector machines 5 polynomial regression 6 stepwise regression 7 ridge regression 8 multivariate regression algorithm 9 multiple regression algorithm 10 k means clustering algorithm 11 na ve bayes classifier algorithm 12 random forests 13 decision trees 14 nearest neighbours 15 lasso regression 16 elasticnet regression 17 reinforcement learning 18 artificial intelligence 19 multimodal network 20 biologic intelligence different types of machine learning algorithms and models algorithms are processes and sets of instructions used to solve a class of problems additionally algorithms perform computations such as calculations data processing automated reasoning and other tasks a machine learning algorithm is a method that enables systems to learn and improve automatically from experience without the need for explicit formulation prerequistes python 3 5 jupyter notebook python 3 windows 7 or windows 10 download software https www python org h3 align left programming language h3 p align left a a href https www python org target blank img src https raw githubusercontent com devicons devicon master icons python python original svg alt python width 80 height 80 a h3 align left tools h3 p align left a a href https anaconda org target blank img src https www clipartkey com mpngs m 227 2271689 transparent 
anaconda logo png png alt anaconda width 80 height 80 a a href https www spyder ide org target blank img src https www kindpng com picc m 86 862450 spyder python logo png transparent png png alt spyder width 80 height 80 a a href https jupyter org target blank img src https upload wikimedia org wikipedia commons 3 38 jupyter logo svg alt jupyter notebook width 80 height 80 a a href https notepad plus plus org target blank img src https logos download com wp content uploads 2019 07 notepad logo png alt notepad width 80 height 80 a a href https www jetbrains com pycharm target blank img src https brandeps com logo download p pycharm logo vector 01 svg alt notepad width 80 height 80 a p a href https www buymeacoffee com lastancientone img src https img buymeacoffee com button api text buy me a book emoji slug lastancientone button colour 000000 font colour ffffff font family lato outline colour ffffff coffee colour ffdd00 a authors tin hang disclaimer x1f53b do not use this code for investing or trading in the stock market however if you are interest in the stock market you should read books books that relate to stock market investment or finance on the other hand if you into quant or machine learning read books about x1f4d8 machine trading algorithmic trading and quantitative trading you should read x1f4d7 about machine learning and deep learning to understand the concept theory and the mathematics on the other hand you should read academic paper and do research online about machine learning and deep learning on computer warning this is not a financial advisor do not use this for investing or trading it is for educational purposes some codes might not work due to updates or outdated versions of certain library packages the code will require updating depending on the python package library being used certain libraries may need to be either upgraded or downgraded | deep-learning machine-learning stock-price-prediction features-extraction financial-engineering prediction feature-engineering feature-extraction feature-selection stock-data stock-trading stock-analysis stock-prices stock-market stock-prediction algorithms data-science trading technical-analysis neural-network | ai |
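The overfitting/underfitting symptoms listed in the entry above (high train accuracy with low test accuracy, or low accuracy on both) can be checked with a few lines of scikit-learn. The sketch below is a generic illustration on a toy dataset, not code taken from the repository.

```python
# Generic sketch of the train/test accuracy check described above
# (toy dataset; illustrative only, not part of the original project).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (1, 3, None):  # very shallow tends to underfit, unconstrained may overfit
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train={model.score(X_tr, y_tr):.2f} test={model.score(X_te, y_te):.2f}")
```

Comparing the two scores per model is the practical version of the "high train accuracy, low test accuracy" diagnosis described above.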
dascoin-blockchain | dascoin blockchain getting started getting started support support using the api using the api accessing restricted api s accessing restricted apis faq faq license license dascoin blockchain is the techsolutions ltd blockchain implementation and command line interface getting started build instructions and additional documentation are available in the wiki https github com techsolutions ltd dascoin blockchain wiki we recommend building on ubuntu 16 04 lts 64 bit build dependencies sudo apt get update sudo apt get install autoconf cmake make automake libtool git libboost all dev libssl dev g libcurl4 openssl dev build script git clone https github com techsolutions ltd dascoin blockchain git cd dascoin blockchain git checkout master may substitute master with current release tag git submodule update init recursive cmake dcmake build type relwithdebinfo make upgrade script prepend to the build script above if you built a prior release git remote set url origin https github com techsolutions ltd dascoin blockchain git git checkout master git remote set head origin auto git pull git submodule update init recursive this command may fail git submodule sync recursive git submodule update init recursive note bitshares requires a boost http www boost org version in the range 1 57 1 65 1 versions earlier than 1 57 or newer than 1 65 1 are not supported if your system s boost version is newer then you will need to manually build an older version of boost and specify it to cmake using dboost root note bitshares requires a 64 bit operating system to build and will not build on a 32 bit os note bitshares now supports ubuntu 18 04 lts note bitshares now supports openssl 1 1 0 after building the witness node can be launched with programs witness node witness node the node will automatically create a data directory including a config file it may take several hours to fully synchronize the blockchain after syncing you can exit the node using ctrl c and setup the command line wallet by editing witness node data dir config ini as follows rpc endpoint 127 0 0 1 8090 important by default the witness node will start in reduced memory ram mode by using some of the commands detailed in memory reduction for nodes https github com techsolutions ltd dascoin blockchain wiki memory reduction for nodes in order to run a full node with all the account history you need to remove partial operations and max ops per account from your config file please note that currently 2018 09 19 a full node will need more than 14gb of ram to operate and required memory is growing fast consider the following table before running a node default full minimal elasticsearch 16g ram 120g ram 4g ram 500g ssd hd 32g ram after starting the witness node again in a separate terminal you can run programs cli wallet cli wallet set your inital password set password password unlock password to import your initial balance import balance account name wif key true if you send private keys over this connection rpc endpoint should be bound to localhost for security use help to see all available wallet commands source definition and listing of all commands is available here https github com techsolutions ltd dascoin blockchain blob master libraries wallet include graphene wallet wallet hpp up to date online doxygen documentation can be found at doxygen https bitshares org doxygen hierarchy html using the api we provide several different api s each api has its own id when running witness node initially two api s are available api 0 
provides read only access to the database while api 1 is used to login and gain access to additional restricted api s here is an example using wscat package from npm for websockets npm install g wscat wscat c ws 127 0 0 1 8090 id 1 method call params 0 get accounts 1 2 0 id 1 result id 1 2 0 annotations membership expiration date 1969 12 31t23 59 59 registrar 1 2 0 referrer 1 2 0 lifetime referrer 1 2 0 network fee percentage 2000 lifetime referrer fee percentage 8000 referrer rewards percentage 0 name committee account owner weight threshold 1 account auths key auths address auths active weight threshold 6 account auths 1 2 5 1 1 2 6 1 1 2 7 1 1 2 8 1 1 2 9 1 1 2 10 1 1 2 11 1 1 2 12 1 1 2 13 1 1 2 14 1 key auths address auths options memo key gph1111111111111111111111111111111114t1anm voting account 1 2 0 num witness 0 num committee 0 votes extensions statistics 2 7 0 whitelisting accounts blacklisting accounts we can do the same thing using an http client such as curl for api s which do not require login or other session state curl data jsonrpc 2 0 method call params 0 get accounts 1 2 0 id 1 http 127 0 0 1 8090 rpc id 1 result id 1 2 0 annotations membership expiration date 1969 12 31t23 59 59 registrar 1 2 0 referrer 1 2 0 lifetime referrer 1 2 0 network fee percentage 2000 lifetime referrer fee percentage 8000 referrer rewards percentage 0 name committee account owner weight threshold 1 account auths key auths address auths active weight threshold 6 account auths 1 2 5 1 1 2 6 1 1 2 7 1 1 2 8 1 1 2 9 1 1 2 10 1 1 2 11 1 1 2 12 1 1 2 13 1 1 2 14 1 key auths address auths options memo key gph1111111111111111111111111111111114t1anm voting account 1 2 0 num witness 0 num committee 0 votes extensions statistics 2 7 0 whitelisting accounts blacklisting accounts api 0 is accessible using regular json rpc curl data jsonrpc 2 0 method get accounts params 1 2 0 id 1 http 127 0 0 1 8090 rpc accessing restricted api s you can restrict api s to particular users by specifying an api access file in config ini or by using the api access full path to api access json startup node command here is an example api access file which allows user bytemaster with password supersecret to access four different api s while allowing any other user to access the three public api s necessary to use the wallet permission map bytemaster password hash b64 9e9gf7ooxvb9k4bosfniptelxegoz5drgoymj94elay password salt b64 inddm6ici 8 allowed apis database api network broadcast api history api network node api password hash b64 password salt b64 allowed apis database api network broadcast api history api passwords are stored in base64 as salted sha256 hashes a simple python script saltpass py is avaliable to obtain hash and salt values from a password a single asterisk may be specified as username or password hash to accept any value with the above configuration here is an example of how to call add node from the network node api id 1 method call params 1 login bytemaster supersecret id 2 method call params 1 network node id 3 method call params 2 add node 127 0 0 1 9090 note the call to network node is necessary to obtain the correct api identifier for the network api it is not guaranteed that the network api identifier will always be 2 since the network node api requires login it is only accessible over the websocket rpc our doxygen documentation contains the most up to date information about api s for the witness node https bitshares github io doxygen namespacegraphene 1 1app html and the wallet https bitshares github io 
doxygen classgraphene 1 1wallet 1 1wallet api html if you want information which is not available from an api it might be available from the database https bitshares github io doxygen classgraphene 1 1chain 1 1database html it is fairly simple to write api methods to expose database methods faq is there a way to generate help with parameter names and method descriptions yes documentation of the code base including apis can be generated using doxygen simply run doxygen in this directory if both doxygen and perl are available in your build environment the cli wallet s help and gethelp commands will display help generated from the doxygen documentation if your cli wallet s help command displays descriptions without parameter names like signed transaction transfer string string string string string bool it means cmake was unable to find doxygen or perl during configuration if found the output should look like this signed transaction transfer string from string to string amount string asset symbol string memo bool broadcast is there a way to allow external program to drive cli wallet via websocket jsonrpc or http yes external programs may connect to the cli wallet and make its calls over a websockets api to do this run the wallet in server mode i e cli wallet s 127 0 0 1 9999 and then have the external program connect to it over the specified port in this example port 9999 is there a way to access methods which require login over http no login is inherently a stateful process logging in changes what the server will do for certain requests that s kind of the point of having it if you need to track state across http rpc calls you must maintain a session across multiple connections this is a famous source of security vulnerabilities for http applications additionally http is not really designed for server push notifications and we would have to figure out a way to queue notifications for a polling client websockets solves all these problems if you need to access graphene s stateful methods you need to use websockets what is the meaning of a b c numbers the first number specifies the space space 1 is for protocol objects 2 is for implementation objects protocol space objects can appear on the wire for example in the binary form of transactions implementation space objects cannot appear on the wire and solely exist for implementation purposes such as optimization or internal bookkeeping the second number specifies the type the type of the object determines what fields it has for a complete list of type id s see enum object type and enum impl object type in types hpp https github com bitshares bitshares 2 blob bitshares libraries chain include graphene chain protocol types hpp the third number specifies the instance the instance of the object is different for each individual object the answer to the previous question was really confusing can you make it clearer all account id s are of the form 1 2 x if you were the 9735th account to be registered your account s id will be 1 2 9735 account 0 is special it s the committee account which is controlled by the committee members and has a few abilities and restrictions other accounts do not all asset id s are of the form 1 3 x if you were the 29th asset to be registered your asset s id will be 1 3 29 asset 0 is special it s bts which is considered the core asset the first and second number together identify the kind of thing you re talking about 1 2 for accounts 1 3 for assets the third number identifies the particular thing how do i get the network add 
nodes command to work why is it so complicated you need to follow the instructions in the accessing restricted api s section to allow a username password access to the network node api then you need to pass the username password to the cli wallet on the command line or in a config file it s set up this way so that the default configuration is secure even if the rpc port is publicly accessible it s fine if your witness node allows the general public to query the database or broadcast transactions in fact this is how the hosted web ui works it s less fine if your witness node allows the general public to control which p2p nodes it s connecting to therefore the api to add p2p connections needs to be set up with proper access controls license dascoin blockchain is under the mit license see license https github com techsolutions ltd dascoin blockchain blob master license txt for more information | blockchain |
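The curl-based JSON-RPC query shown in the entry above can also be issued from Python. This is a minimal sketch assuming a witness_node is running locally with its HTTP-RPC endpoint on 127.0.0.1:8090, as in the example configuration.

```python
# Minimal sketch of the get_accounts JSON-RPC call from the entry above,
# assuming a local witness_node with rpc-endpoint = 127.0.0.1:8090.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "call",
    "params": [0, "get_accounts", [["1.2.0"]]],
    "id": 1,
}
resp = requests.post("http://127.0.0.1:8090/rpc", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json()["result"][0]["name"])  # the committee account, per the sample output above
```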
|
iOSSecAudit | 1 installation h3 1 1 mac os x h3 h5 1 1 1 pc env prepare h5 1 install python2 7 2 sudo easy install pip 3 sudo pip install paramiko 4 easy install prettytable or easy install u prettytable 5 xcode select install select install then agre 6 brew install libimobiledevice if don t have homebrew install it first ruby e curl fssl https raw githubusercontent com homebrew install master install dev null 2 dev null 7 git clone https github com alibaba iossecaudit git 8 cd path to iossecaudit python main py notice if you see the the following importerror no module named prettytable importerror no module named paramiko uninstall them if needed then try to install prettytable https pypi python org pypi prettytable or paramiko https pypi python org pypi paramiko 1 15 2 from the source code h5 1 1 2 device env prepare h5 1 jailbreak ios device 2 install cycript in cydia h3 1 2 linux or windows h3 u never test on linux or windows cause i am tooooo lazy u 2 usage b special note strongly suggest execute chenv after you connect to your device b usage python main py type help cprt for more information help i documented commands type help topic ab abr aca br chenv cipa clche clzdp cprt cycript dbgsvr dbn dca dipa dlini dlinj dlinji dnload dwa dws e exit fus gbs gdb gdbs go gs gsp gtb h help ibca iipa kcd kcdel kce kcs la lapp las lbs lca log lsl ltb mport nonfat panic pca pid q quit resign sd skc ssh stop upload usb vdb vkc vpl vtb wclzdp wpb i try help cmd0 cmd1 or help all for more infomation help ssh ssh connect to device with ssh args ip username password example ssh 10 1 1 1 root alpine help usb usb ssh device over usb max os x support only args username password port example usb root alpine or usb root alpine 2222 help dlinji dlinji inject a dylib into an ipa file resign and install args ipa path entitlements path mobileprovision path identity dylib example dlini tmp xin ipa tmp entitlements plist tmp ios development mobileprovision iphone developer name name xxxxxx tmp libtest dylib usb root xxroot e ssh authentication failed when connecting to host i connect failed usb root alpine i connect success la i refresh lastlaunchservicesmap i all installed applications 0 com taobao taobao4iphone 1 alilang com alibaba alilang 2 com tencent xin 3 putong com yaymedialabs putong 4 com alipay iphoneclient 5 com mimimix tiaomabijia 6 cn xiaochuankeji tieba help las las list all storage file of an application args bundle identifer example las com taobaobj moneyshield or las help sd sd show application detail args bundle identifer example sd com taobaobj moneyshield or sd sd cn xiaochuankeji tieba i detail info bundle id cn xiaochuankeji tieba uuid d9b2b45f 0d25 4f4f b6a1 45b514bf4d4b binary name tieba platform version 9 3 sdk version iphoneos9 3 mini os 7 0 data directory 5d9b5be7 a438 4057 8a88 4fdea6fc2153 url hnadlers wx16516ad81c31d872 qq41c6a3fb tencent1103537147 zuiyou7a7569796f75 wb4117400114 entitlements get task allow beta reports active aps environment production application identifier 3jds7k3bcm cn xiaochuankeji tieba com apple developer team identifier 3jds7k3bcm com apple security application groups 3 thanks idb https github com dmayer idb class dump https github com nygard class dump clutch https github com kjcracks clutch dumpdecrypted https github com stefanesser dumpdecrypted pbwatcher https github com dmayer pbwatcher please contact me if i use your code while not mention you | os |
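The tool above drives a jailbroken device over SSH via paramiko, one of its listed dependencies. A bare-bones version of the connection step behind its `ssh 10.1.1.1 root alpine` command looks roughly like this; the host and credentials are the defaults from the usage example and are placeholders.

```python
# Rough sketch of the SSH step behind the tool's `ssh` command, using paramiko.
# Host and credentials are the example defaults; adjust for your own device.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.1.1.1", username="root", password="alpine", timeout=10)
_, stdout, _ = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```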
|
clearwater | clearwater join chat https badges gitter im join 20chat svg https gitter im clearwater rb clearwater quality http img shields io codeclimate github clearwater rb clearwater svg style flat square https codeclimate com github clearwater rb clearwater build http img shields io travis ci clearwater rb clearwater svg style flat square https travis ci org clearwater rb clearwater downloads http img shields io gem dtv clearwater svg style flat square https rubygems org gems clearwater issues http img shields io github issues clearwater rb clearwater svg style flat square http github com clearwater rb clearwater issues license http img shields io badge license mit brightgreen svg style flat square http opensource org licenses mit version http img shields io gem v clearwater svg style flat square https rubygems org gems clearwater clearwater is a rich front end framework for building fast reasonable and easily composable browser applications in ruby it renders to a virtual dom and applies the virtual dom to the browser s actual dom to update only what has changed on the page installing add these lines to your application s gemfile ruby gem clearwater gem opal rails only if you re using rails using this is a minimum viable clearwater app ruby require opal require clearwater class helloworld include clearwater component def render h1 hello world end end app clearwater application new component helloworld new app call clearwater has three distinct parts 1 the component the presenter and template engine 1 the router optional the dispatcher and control 1 the application the go button the component ruby class blog all components need a set of behavior but don t worry it s not a massive list include clearwater component this method needs to return a virtual dom element using the element dsl the dsl is provided by the clearwater component mixin def render div articles new biography new end end while we use two components in this example you can use all of these as well ruby div id foo h1 heading h1 article hello article div def render div id foo h1 heading article hello end div hello world div def render div hello world end div 123 div def render div 123 end div div def render div end the router ruby router clearwater router new do a route with a block contains subordinate routes route blog blog new do blog route new article newarticle new blog new article this path contains a dynamic segment inside this component you can use router params article id to return the value for this segment of the url so for articles 123 router params article id would be 123 route article id articlereader new blog 123 end end using with rails you can also use clearwater as part of the rails asset pipeline first create your clearwater application replace app assets javascripts application js with this file app assets javascripts application rb ruby require opal not necessary if you load opal from a cdn require clearwater class layout include clearwater component def render h1 hello world end end app clearwater application new component layout new app call app views layouts application html erb erb doctype html html snip body we load the js in the body tag to ensure the element exists so we can render to it otherwise we need to use events on the document before we instantiate and call the clearwater app and that s no fun javascript include tag application body html then you need to get rails to render a blank page so add these two routes config routes rb ruby root home index get all home index you can omit the 
second line if your clearwater app doesn t use routing it just tells rails to let your clearwater app handle all routes app controllers home controller rb ruby class homecontroller applicationcontroller def index end end app views home index html erb html this page intentionally left blank you can use the rails generators to generate the controller and view rails g controller home index but it won t set up the root and catch all routes so you ll still need to do that manually once you ve added those files refresh the page you should see hello world in big bold letters congrats you ve built your first clearwater app on rails using with roda if you re using roda you ll want to use the roda opal assets gem https github com clearwater rb roda opal assets to get an asset pipeline style workflow for compiling your clearwater app into javascript getting started use the clearwater roda https github com clearwater rb clearwater roda gem to generate a starter clearwater app that demonstrates many components working together routing and even state management via grand central https github com clearwater rb grand central experiment you can experiment with clearwater using the clearwater playground https clearwater rb playground herokuapp com you can also explore other saved playground experiments https clearwater rb playground herokuapp com playgrounds contributing this project is governed by a code of conduct code of conduct md 1 fork it 1 branch it 1 hack it 1 save it 1 commit it 1 push it 5 pull request it license copyright c 2014 2018 jamie gaskins mit license permission is hereby granted free of charge to any person obtaining a copy of this software and associated documentation files the software to deal in the software without restriction including without limitation the rights to use copy modify merge publish distribute sublicense and or sell copies of the software and to permit persons to whom the software is furnished to do so subject to the following conditions the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software the software is provided as is without warranty of any kind express or implied including but not limited to the warranties of merchantability fitness for a particular purpose and noninfringement in no event shall the authors or copyright holders be liable for any claim damages or other liability whether in an action of contract tort or otherwise arising from out of or in connection with the software or the use or other dealings in the software | clearwater ruby front-end | front_end |
Shopping-Cart shopping cart a python program for a complex engineering problem assigned by ned university it has been completed using sqlite as the database storage backend | server |
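The entry above only notes that sqlite is used for storage. A minimal sketch of how a Python shopping cart might persist items in sqlite is shown below; the table and column names are illustrative assumptions, not taken from the project.

```python
# Illustrative only: a minimal sqlite-backed cart with a hypothetical schema
# (the project itself does not document its tables).
import sqlite3

conn = sqlite3.connect("cart.db")
conn.execute("CREATE TABLE IF NOT EXISTS cart (item TEXT, qty INTEGER, price REAL)")
conn.execute("INSERT INTO cart VALUES (?, ?, ?)", ("notebook", 2, 3.50))
conn.commit()
for item, qty, price in conn.execute("SELECT item, qty, price FROM cart"):
    print(item, qty, qty * price)
conn.close()
```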
|
fleetman-webapp | fleetman webapp a basic web front end for the fleetmanager microservice system | front_end |
|
esp-va-sdk | table of contents 0 important note 0 important note 1 introduction 1 introduction 1 1 solution architecture 11 solution architecture 1 2 the software 12 the software 1 3 voice assistants 13 voice assistants 1 3 1 alexa voice service avs 131 alexa voice service avs 1 3 2 avs for iot afi 132 avs for iot afi 1 3 3 google voice assistant gva 133 google voice assistant gva 1 3 4 google dialogflow 134 google dialogflow 1 4 the esp32 vaquita dspg development board 14 the esp32 vaquita dspg development board 1 4 1 buttons 141 buttons 2 development setup 2 development setup 2 1 host setup 21 host setup 2 2 getting the repositories 22 getting the repositories 2 3 building the firmware 23 building the firmware 2 4 flashing the firmware 24 flashing the firmware 3 additional setup 3 additional setup 4 device provisioning 4 device provisioning 4 1 configuration steps 41 configuration steps 4 2 additional device settings 42 additional device settings 5 customising for your board 5 customising for your board 6 integrating other components 6 integrating other components 6 1 esp rainmaker 61 esp rainmaker 6 1 1 environment setup 611 environment setup 6 1 2 device provisioning 612 device provisioning 6 1 3 customisation 613 customisation 6 2 smart home 62 smart home 6 2 1 usage 621 usage 6 2 2 customisation 622 customisation 6 3 audio player 63 audio player 6 3 1 enabling custom player 631 enabling custom player 6 3 2 customisation 632 customisation 6 4 equalizer 64 equalizer 6 4 1 enabling equalizer 641 enabling equalizer 7 production considerations 7 production considerations 7 1 over the air updates ota 71 over the air updates ota 7 2 manufacturing 72 manufacturing 7 2 1 mass manufacturing utility 721 mass manufacturing utility 7 2 2 pre provisioned modules 722 pre provisioned modules 7 3 security 73 security 7 3 1 secure boot 731 secure boot 7 3 2 flash encryption 732 flash encryption 7 3 3 nvs encryption 733 nvs encryption a1 appendix faqs a1 appendix faqs a1 1 compilation errors a11 compilation errors a1 2 device setup using the mobile app a12 device setup using the mobile app a1 3 device crashing a13 device crashing a1 4 device not crashed but not responding a14 device not crashed but not responding 0 important note the wake word alexa recognition software that is part of the github repository https github com espressif esp va sdk is for evaluation only please contact sales espressif com for production ready wake word recognition dsp firmware that is available from our dsp partners please refer to changelog changelog md to track release changes and known issues 1 introduction espressif s voice assistant sdk allows customers to build alexa and google built in smart devices this allows customers to leverage espressif s rich iot device development capability while simultaneously incorporating voice enablement in these devices 1 1 solution architecture the typical solution architecture of the product is shown as below center img src https github com espressif esp va sdk wiki va images esp va sdk solution architecture png alt solution architecture block diagram title solution architecture block diagram width 800 center the following are the relevant blocks for the solution 1 esp32 this is the primary microcontroller that controls the operations of the product 1 voice assistant client it runs the voice assistant client that manages the audio communication with the voice assistant s cloud esp32 is also responsible for any state management audio encode decode operations 2 iot device it also runs 
the software that interfaces with your peripherals providing the smart device functionality that you wish to expose 2 dsp the dsp typically performs the noise reduction nr acoustic echo cancellation aec and run the wake word engine wwe the dsp is interfaced with the mic array for the audio input and it subsequently interacts with the esp32 for relaying the audio input 3 codec the playback data is received by the codec which it subsequently sends to the speaker 1 2 the software the software that is part of this sdk is sulf sufficient to provide a full voice assistant capability for your device typically as a device manufacturer you may want to customise certain configurations of this software if you also wish to expose some additional functionality beyond voice assistant switch fan water purifier etc you will also have to write the device drivers for controlling this center img src https github com espressif esp va sdk wiki va images esp va sdk software components png alt software components block diagram title software components block diagram width 900 center the above block diagram indicates the various components of the voice assistant sdk 1 3 voice assistants 1 3 1 alexa voice service avs alexa is amazon s personal virtual assistant which listens to user s voice commands and responds with appropriate answers apart from conversing with the user alexa lets you play music from a variety of music streaming services alexa also helps you manage to do lists and allows for voice assisted shopping from amazon this particular flavour of alexa is helpful when you are building a speaker class of device with esp va sdk avs also supports playing music through bluetooth working in conjunction with music from alexa 1 3 2 avs for iot afi avs for iot afi is also known as avs integrated with aws iot aia the amazon cloud does the audio decoding from various sources and sends them to the device this reduces the processing and memory usage on the device this flavour of alexa would be particularly helpful if you are building a voice assistant end device that is not just a speaker but additionally something else switch fan water purifier etc aia also supports avs smart home integration let s consider you are a alexa enabled light bulb smart home integration implies that if you say a query like alexa switch off the light where the light is your own device then the aia cloud service will decode this into actionable data that comes back to your device and you can parse it and execute the action in this case switching off the light 1 3 3 google voice assistant gva gva is google s version of a personal voice assistant it is multilingual and allows users to converse in their preferred language apart from general queries it allows users to check on the traffic conditions emails weather conditions and much more note that this sdk only includes a proof of concept poc implementation for gva this is not recommended for production 1 3 4 google dialogflow dialogflow previously known as api ai is a voice enabled conversational interface from google it enables iot users to include natural language user interface in their applications services and devices the advantages of dialogflow wrt voice assistants are less complexity pay as you go pricing custom wakeword allowed and no certification hassles unlike other voice assistants dialogflow let s you configure every step of the conversation and it won t answer other trivia questions like voice assistants typically do for e g a dialogflow agent for laundry project will provide 
information only about the configurable parameters of the laundry like state temperature wash cycle etc the implementation here facilitates the audio communication of esp32 with a google dialogflow agent using its v2beta1 grpc apis 1 4 the esp32 vaquita dspg development board the esp32 vaquita dspg development board is amazon certified for alexa functionality the solution consists of the esp32 micro controller paired with dsp g s dbmd5p soc the esp32 provides the wi fi connectivity and implements the voice assistant client the dbmd5p runs the acoustic front end and the wake word engine the following is a picture of the esp32 dbmd5p dev kit center img src https github com espressif esp va sdk wiki va images esp32 vaquita dspg base board png alt esp32 vaquita dspg board title esp32 vaquita dspg board width 500 img src https github com espressif esp va sdk wiki va images esp32 vaquita dspg mic board png alt esp32 vaquita dspg board title esp32 vaquita dspg board width 500 center the kit has the following contents dev kit esp32 as the host micro controller dbmd5p running the acoustic frontend and wake word engine 2 push buttons 5 rgb leds linear 2 mic mic array board 1 4 1 buttons push to talk button1 press this button to initiate conversation with the assistant without saying the wake word microphone mute button2 press this button to disable enable microphone on the device reset to factory this wipes all the settings network configuration alexa google login credentials from the device and the device goes back to default factory settings press and hold mute push to talk buttons together for 10 seconds until you see orange leds reset wi fi configuration use this to switch the device into wi fi change mode the device will stay in this mode for 3 minutes after which it will go back to the normal mode of operation you can use the phone apps within this time frame to re configure the wi fi credentials of the dev kit the new wi fi credentials will overwrite the previous wi fi configuration press and hold push to talk button for 5 seconds until you see orange leds note all the supported boards are mentioned here audio board https github com espressif esp va sdk tree master components audio hal audio board 2 development setup this sections talks about setting up your development host fetching the git repositories and instructions for build and flash 2 1 host setup you should install drivers and support packages for your development host windows linux and mac os x are supported development hosts please see get started https docs espressif com projects esp idf en release v4 2 get started index html for the host setup instructions 2 2 getting the repositories git clone recursive https github com espressif esp idf git cd esp idf git checkout release v4 2 git submodule init git submodule update init recursive install sh cd git clone https github com espressif esp va sdk git 2 3 building the firmware cd esp va sdk examples amazon aia for aia or amazon alexa for avs or google voice assistant or google dialogflow export espport dev cu slab usbtouart or dev ttyusb0 or dev ttyusb1 on linux or comxx on mingw export idf path path to esp idf idf path export sh set audio board path e g for esp32 vaquita dspg export audio board path path to esp va sdk components audio hal audio board audio board vaquita dspg menuconfig changes do this change only if you are using esp32 wrover e module idf py menuconfig component config esp32 specific minimum supported esp32 revision change rev 0 to rev 3 do these changes only if 
your board uses spiffs partition for storing the dsp firmware refer to the audio board path audio board cmake file idf py menuconfig partition table custom partition csv file change to partitions spiffs csv do these changes only if you are using esp32 module with 4mb flash size refer to the audio board path audio board cmake file idf py menuconfig serial flasher config flash size change to 4mb partition table custom partition csv file change to partitions 4mb flash csv 2 4 flashing the firmware when flashing the sdk for the first time it is recommended to do idf py erase flash to wipe out entire flash and start out fresh idf py flash monitor 3 additional setup the device would need additional configuration on the cloud as well as the device firmware for it to work check the readme in the example directory for the voice assistant specific project setup 4 device provisioning for google voice assistant and google dialogflow please refer to the readmes in the respective examples instead of the description that follows below the configuration step consists of a configuring the wi fi network and b signing into your alexa account and linking the device espressif has released the following phone applications that facilitate the same ios ios app https apps apple com in app esp alexa id1464127534 br android android app https play google com store apps details id com espressif provbleavs please install the relevant application on your phone before your proceed 4 1 configuration steps here are the steps to configure the dev kit on first boot up the dev kit is in configuration mode this is indicated by orange led pattern please ensure that the led pattern is seen as described above before you proceed launch the phone app select the option add new device center img src https github com espressif esp va sdk wiki va images esp alexa app home png alt app home title app home width 300 center a list of devices that are in configuration mode is displayed note that the devices are discoverable over ble bluetooth low energy please ensure that the phone app has the appropriate permissions to access bluetooth on android the location permission is also required for enabling bluetooth center img src https github com espressif esp va sdk wiki va images esp alexa app discover devices png alt app discover devices title app discover devices width 300 center now you can sign in to your amazon alexa account if you have amazon shopping app installed on the same phone app will automatically sign in with the account the shopping app is signed in to otherwise it will open a login page on the phone s default browser it is recommended to install the amazon shopping app on your phone to avoid any other browser related errors center img src https github com espressif esp va sdk wiki va images esp alexa app sign in png alt app sign in title app sign in width 300 center you can now select the wi fi network that the dev kit should connect with and enter the credentials for this wi fi network center img src https github com espressif esp va sdk wiki va images esp alexa app wifi scan list png alt app scna list title app scan list width 300 img src https github com espressif esp va sdk wiki va images esp alexa app wifi password png alt app wi fi password title app wi fi password width 300 center on successful wi fi connection you will see a list of few of the voice queries that you can try with the dev kit center img src https github com espressif esp va sdk wiki va images esp alexa app things to try png alt app things to try title 
app things to try width 300 center you are now fully setup you can now say alexa followed by the query you wish to ask 4 2 additional device settings some device settings like volume control locale change etc can also be controlled through the phone app launch the phone app select the option manage devices center img src https github com espressif esp va sdk wiki va images esp alexa app home png alt app home title app home width 300 center make sure you are connected to the same network as the device and also that ssdp packets can be sent on your network now select your device from the list of devices for the device settings 5 customising for your board for integrating customising your own board refer to components audio hal readme md 6 integrating other components 6 1 esp rainmaker 6 1 1 environment setup additional setup that needs to be done for integrating esp rainmaker https rainmaker espressif com get the repository git clone recursive https github com espressif esp rainmaker git setting cloud agent export cloud agent path path to esp rainmaker menuconfig changes idf py menuconfig voice assistant configuration enable cloud support enable this 6 1 2 device provisioning the combined app for esp rainmaker esp alexa is still under development till then both the apps can be used separately for provisioning open the esp rainmaker app and sign in click on add device scan the qr code and complete the wi fi setup the app will verify the setup for gvs and dialogflow refer to their respective readmes for provisioning make sure you are connected to the same network as the device open the esp alexa app manage devices find your device and sign in into alexa 6 1 3 customisation to customise your own device you can edit the file examples additional components app cloud app cloud rainmaker c you can check the examples in esp rainmaker for some more device examples 6 2 smart home this is only for aia avs support will be added soon note there is a bug where the device name is being set as demo light instead of what is being set by the device default is light one way to add the smart home functionality is to use esp rainmaker 91 esp rainmaker and the other way is to use examples additional components app smart home this can be initialized in the appilication uncomment app smart home init in app main c 6 2 1 usage once provisioning is done and the device has booted up the smart home feature of the device can be used via voice commands or through the alexa app example by default the device configured is a light with power and brightness functionalities voice commands like turn on the light or change light brightness to 50 can be used in the alexa app this device will show up as light and the power and brightness can be controlled 6 2 2 customisation to customise your own device you can edit the file examples additional components app smart home app smart home c you can refer the files components voice assistant include smart home h and components voice assistant include alexa smart home h for additional apis a device can have the following types of capabilities features parameters power a device can only have a single power param toggle this can be used for params which can be toggled example turning on off the swinging of the blades in an air conditioner range this can be used for params which can have a range of values example changing the brightness of a light mode this can be used for params which need to be selected from a pre defined set of strings example selecting the modes of a washing machine 6 
3 audio player the audio player components voice assistant include audio player h can be used to play custom audio files from any source http url local spiffs etc the focus management what is currently being played is already implemented internally by the sdk for aia speech alert music from alexa has higher priority than what is played via the audio player so for example if custom music is being played via the audio player and a query is asked then the music will be paused and the response from alexa will be played once the response is over the music will be resumed unless already stopped basically all alexa audio gets priority over custom audio for avs speech alert from alexa has higher priority than what is played via the audio player so for example if custom music is being played via the audio player and a query is asked then the music will be paused and the response from alexa will be played once the response is over the music will be resumed unless already stopped another example if custom music is being played via audio player and a query is asked for playing music via the cloud then the custom music will be stopped and the music from alexa will take over if alexa music was playing and custom music is played then alexa music will stop and the custom music will take over basically music has the same priority from whichever source it is being played from all other alexa audio gets priority over music for gva and dialogflow speech alerts and music is not yet supported from google has higher priority than what is played via the audio player so for example if custom music is being played via the audio player and a query is asked then the music will be paused and the response from alexa will be played once the response is over the music will be resumed unless already stopped basically all google audio gets priority over custom audio 6 3 1 enabling custom player the examples additional components custom player is an example using the audio player the default example of the custom player can play from http url and or local spiffs and or local sdcard but can be easily extended to play from any other source easiest way to try custom player is using http url include custom player h in the application and call custom player init after the voice assistant early initialisation has been done when the application is now built and flashed on the device the custom player will play the 3 files showing the usage of the audio player 6 3 2 customisation the default custom player just has a demo code which can be used as a reference to build your own player the audio player for now just supports mp3 and aac audio formats for http urls and only mp3 audio format for local files 6 4 equalizer this is only for avs equalizer lets you control the bass mid range and treble of the audio you can use the following commands to get the values for the equalizer set treble to 3 set bass to 3 reset equalizer set movie mode the sdk will give a callback to the application in equalizer c with the respective values for the equalizer the application can then use these values and adjust the audio output 6 4 1 enabling equalizer to enable the equalizer along with alexa include alexa equalizer h in the application and call alexa equalizer init before the voice assistant initialisation has been done 7 production considerations 7 1 over the air updates ota esp idf has a component for ota from any url more information and details about implementing can be found here esp https ota https docs espressif com projects esp idf en latest 
esp32 api reference system esp https ota html esp https ota 7 2 manufacturing 7 2 1 mass manufacturing utility the devices generally require unique ids and certificates to connect to the cloud server for example in aia aws iot operations require that all devices have a unique certificate and key pair programmed on each device which is used for authentication with the aws iot cloud service these are generally programmed in factory nvs partitions that are unique per device esp idf provides a utility to create instances of factory nvs partition images on a per device basis for mass manufacturing purposes the nvs partition images are created from csv files containing user provided configurations and values details about using the mass manufacturing utility can be found here mass manufacturing https docs espressif com projects esp idf en latest api reference storage mass mfg html 7 2 2 pre provisioned modules esp32 modules can be pre flashed with the factory nvs partition during manufacturing itself and then be shipped to you for example in aia the device certificates are signed by your certificate authority ca and when you register this ca in your cloud all the devices can connect to the cloud out of the box this saves you the overhead of securely generating encrypting and then programming the nvs partition into the device at your end pre provisioning is an optional service which espressif provides please contact your espressif contact person for more information 7 3 security 7 3 1 secure boot secure boot ensures that only trusted code runs on the device esp32 supports rsa based secure boot scheme whereby the bootrom verifies the software boot loader for authenticity using the rsa algorithm the verified software boot loader then checks the partition table and verifies the active application firmware and then boots it details about implementing the secure boot can be found here secure boot https docs espressif com projects esp idf en latest security secure boot html 7 3 2 flash encryption flash encryption prevents the plain text reading of the flash contents esp32 supports aes 256 based flash encryption scheme the esp32 flash controller has an ability to access the flash contents encrypted with a key and place them in the cache after decryption it also has ability to allow to write the data to the flash by encrypting it both the read write encryption operations happen transparently details about implementing the flash encryption can be found here flash encryption https docs espressif com projects esp idf en latest security flash encryption html 7 3 3 nvs encryption for the manufacturing data that needs to be stored on the device in the nvs format esp idf provides the nvs image creation utility which allows the encryption of nvs partition on the host using a randomly generated per device unique or pre generated common for a batch nvs encryption key a separate flash partition is used for storing the nvs encryption keys this flash partition is then encrypted using flash encryption so flash encryption becomes a mandatory feature to secure the nvs encryption keys details about implementing the nvs encryption can be found here nvs encryption https docs espressif com projects esp idf en latest api reference storage nvs flash html nvs encryption a1 appendix faqs a1 1 compilation errors i cannot build the application make sure you are on the correct esp idf branch run git submodule update init recursive to make sure the submodules are at the correct heads make sure you have the correct audio board path 
selected for your board delete the build directory and also sdkconfig and sdkconfig old and then build again if you are still facing issues reproduce the issue on the default example and then contact espressif for help please make sure to share these the esp va sdk and esp idf branch you are using and the audio board path that you have set the complete build logs a1 2 device setup using the mobile app i cannot add a new device through the phone app if the device is not being shown while adding a new device make sure the required permissions are given to the app also make sure that your bluetooth is turned on android typically requires the location permission also for enabling bluetooth if you are still facing issues update the app to the latest version and try again force closing the app and rebooting the device works in most cases if either of them have gone into an unknown state if you are still facing issues reproduce the issue on the default example for the device and then contact espressif for help make sure to share these screenshots of the mobile app where it is not working mobile app version mobile phone model and the android version or any skin it is running complete device logs taken over uart the esp va sdk and esp idf branch you are using and the audio board path that you have set i cannot manage device through the phone app if the device is not being shown while managing devices make sure you are connected to the same network as the device if you are still facing issues update the app to the latest version and try again force closing the app and rebooting the device works in most cases if either of them have gone into an unknown state if you are still facing issues reproduce the issue on the default example for the device and then contact espressif for help make sure to share these screenshots of the mobile app where it is not working mobile app version mobile phone model and the android version or any skin it is running complete device logs taken over uart the esp va sdk and esp idf branch you are using and the audio board path that you have set a1 3 device crashing my device is crashing given the tight footprint requirements of the device please make sure any issues in your code have been ruled out if you believe the issue is with the alexa sdk itself please recreate the issue on the default example application without any changes and go through the following steps make sure you are on the correct esp idf branch run git submodule update init recursive to make sure the submodules are at the correct heads make sure you have the correct audio board path selected for your board delete the build directory and also sdkconfig and sdkconfig old and then build and flash again if you are still facing issues reproduce the issue on the default example for the device and then contact espressif for help make sure to share these the steps you followed to reproduce the issue complete device logs from device boot up taken over uart voice assistant elf file from the build directory if you have gdb enabled run the command backtrace and share the output of gdb too the esp va sdk and esp idf branch you are using and the audio board path that you have set a1 4 device not crashed but not responding my device is not responding to audio queries make sure your device is connected to the wi fi internet if the device is not taking the wake word make sure the mic is turned on try using the tap to talk button and then ask the query if you are still facing issues reproduce the issue on the default example 
for the device and then contact espressif for help make sure to share these the steps you followed to reproduce the issue complete device logs taken over uart voice assistant elf file from the build directory the esp va sdk and esp idf branch you are using and the audio board path that you have set also check the appendix sections in the respective voice assistant s directories | alexa voice-assistant iot | server |
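The export and idf.py build/flash sequence described earlier in this entry can be scripted for repeated runs. The sketch below is a hypothetical convenience wrapper; it assumes the IDF environment (export.sh) has already been sourced so idf.py is on PATH, and every path and the serial port are placeholders to adapt to your checkout.

```python
# Hypothetical wrapper around the export + idf.py steps from the entry above.
# Assumes `. $IDF_PATH/export.sh` was sourced; all paths/ports are placeholders.
import os
import subprocess

example_dir = os.path.expanduser("~/esp-va-sdk/examples/amazon_aia")  # placeholder path
board_dir = os.path.expanduser(
    "~/esp-va-sdk/components/audio_hal/audio_board/audio_board_vaquita_dspg")  # placeholder path

env = dict(os.environ, ESPPORT="/dev/ttyUSB0", AUDIO_BOARD_PATH=board_dir)
for args in (["idf.py", "build"], ["idf.py", "flash", "monitor"]):
    subprocess.run(args, check=True, cwd=example_dir, env=env)
```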
LLMeBench | llmebench a flexible framework for accelerating llms benchmarking this repository contains code for the llmebench framework described in a href https arxiv org abs 2308 04945 target blank this paper a the framework currently supports evaluation of a variety of nlp tasks using three model providers openai e g gpt https platform openai com docs guides gpt huggingface inference api https huggingface co docs api inference and petals e g bloomz https huggingface co bigscience bloomz it can be seamlessly customized for any nlp task llm model and dataset regardless of language https github com qcri llmebench assets 3918663 15d989e0 edc7 489a ba3b 36184a715383 p align center picture img alt the architecture of the llmebench framework src https github com qcri llmebench assets 3918663 7f7a0da8 cd73 49d5 90d6 e5c62781b5c3 width 400 height 250 picture p overview p align center picture img alt summary and examples of the 53 datasets 31 tasks 3 model providers and metrics currently implemented and validated in llmebench src https github com qcri llmebench assets 3918663 8a0ddf60 5d2f 4e8c a7d9 de37cdeac104 width 510 height 160 picture p developing llmebench is an ongoing effort and it will be continuously expanded currently the framework features the following supports 31 tasks llmebench tasks featuring 3 model providers llmebench models tested with 53 datasets llmebench datasets associated with 12 languages resulting in 200 benchmarking assets assets ready to run easily extensible to new models accessible through apis extensive caching capabilities to avoid costly api re calls for repeated experiments supports zero and few shot learning paradigms on the fly datasets download and dataset caching open source quick start 1 install https github com qcri llmebench blob main readme md installation llmebench 2 create a new folder data then download arsas dataset https llmebench qcri org data arsas zip into data and unzip it 3 evaluate for example to evaluate the performance of a random baseline llmebench models randomgpt py for sentiment analysis on arsas dataset https github com qcri llmebench blob main llmebench datasets arsas py you need to create an asset assets ar sentiment emotion others sentiment arsas random py a file that specifies the dataset model and task to evaluate then run the evaluation as follows bash python m llmebench filter sentiment arsas random assets results where arsas random is the asset name referring to the arsas dataset name and the random model and assets ar sentiment emotion others sentiment is the directory where the benchmarking asset for the sentiment analysis task on arabic arsas dataset can be found results will be saved in a directory called results installation pip package to be made available soon clone this repository bash git clone https github com qcri llmebench git cd llmebench create and activate virtual environment bash python m venv envs llmebench source envs llmebench bin activate install the dependencies and benchmarking package bash pip install e dev fewshot get the benchmark data in addition to supporting the user to implement their own llm evaluation and benchmarking experiments the framework comes equipped with benchmarking assets over a large variety of datasets and nlp tasks to benchmark models on the same datasets download the benchmarking data from here https llmebench qcri org data an example command to download all these datasets bash mkdir data cd data wget r np nh cut dirs 3 a zip r index html https llmebench qcri org data next unzip the 
downloaded files to get a directory per dataset bash for i in zip do unzip i d i zip done voil all ready to start evaluation note some datasets and associated assets are implemented in llmebench but the dataset files can t be re distributed it is the responsibility of the framework user to acquaire them from their original sources the metadata for each dataset includes a link to the primary page for the dataset which can be used to obtain the data disclaimer the datasets associated with the current version of llmebench are either existing datasets or processed versions of them we refer users to the original license accompanying each dataset as provided in the metadata for each dataset script https github com qcri llmebench tree main llmebench datasets it is our understanding that these licenses allow for datasets use and redistribution for research or non commercial purposes usage to run the benchmark bash python m llmebench filter benchmarking asset limit k n shots n ignore cache benchmark dir results dir parameters filter benchmarking asset optional this flag indicates specific tasks in the benchmark to run the framework will run a wildcard search using benchmarking asset in the assets directory specified by benchmark dir if not set the framework will run the entire benchmark limit k optional specify the number of samples from input data to run through the pipeline to allow efficient testing if not set all the samples in a dataset will be evaluated n shots n optional if defined the framework will expect a few shot asset and will run the few shot learning paradigm with n as the number of shots if not set zero shot will be assumed ignore cache optional a flag to ignore loading and saving intermediate model responses from to cache benchmark dir path of the directory where the benchmarking assets can be found results dir path of the directory where to save output results along with intermediate cached values you might need to also define environment variables like access tokens and api urls e g azure api url and azure api key depending on the benchmark you are running this can be done by either export azure api key before running the above command or prepending azure api url azure api key to the above command supplying a dotenv file using the env flag sample dotenv files are provided in the env folder each model provider s llmebench models documentation specifies what environment variables are expected at runtime outputs format results dir this folder will contain the outputs resulting from running assets it follows this structure all results json a file that presents summarized output of all assets that were run where results dir was specified as the output directory the framework will create a sub folder per benchmarking asset in this directory a sub folder will contain n json a file per dataset sample where n indicates sample order in the dataset input file this file contains input sample full prompt sent to the model full model response and the model output after post processing as defined in the asset file summary jsonl lists all input samples and for each a summarized model prediction and the post processed model prediction summary failed jsonl lists all input samples that didn t get a successful response from the model in addition to output model s reason behind failure results json contains a summary on number of processed and failed input samples and evaluation results for few shot experiments all results are stored in a sub folder named like 3 shot where the number signifies the 
number of few shots samples provided in that particular experiment jq https jqlang github io jq is a helpful command line utility to analyze the resulting json files the simplest usage is jq summary jsonl which will print a summary of all samples and model responses in a readable form caching the framework provides caching if ignore cache isn t passed to enable the following allowing users to bypass making api calls for items that have already been successfully processed enhancing the post processing of the models output as post processing can be performed repeatedly without having to call the api every time running few shot assets the framework has some preliminary support to automatically select n examples per test sample based on a maximal marginal relevance based approach using langchain s implementation https python langchain com docs modules model io prompts example selectors mmr this will be expanded in the future to have more few shot example selection mechanisms e g random class based etc to run few shot assets supply the n shots n option to the benchmarking script this is set to 0 by default and will run only zero shot assets if n shots is zero only few shot assets are run tutorial the tutorials directory docs tutorials provides tutorials on the following updating an existing asset advanced usage commands to run different benchmarking use cases and extending the framework by at least one of these components model provider task dataset asset citation please cite our paper when referring to this framework article dalvi2023llmebench title llmebench a flexible framework for accelerating llms benchmarking author fahim dalvi and maram hasanain and sabri boughorbel and basel mousi and samir abdaljalil and nizi nazar and ahmed abdelali and shammur absar chowdhury and hamdy mubarak and ahmed ali and majd hawasly and nadir durrani and firoj alam year 2023 eprint 2308 04945 journal arxiv 2308 04945 primaryclass cs cl url https arxiv org abs 2308 04945 | benchmarking large-language-models llm multilingual | ai |
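The llmebench record above documents a CLI (`python -m llmebench --filter <asset> --limit <k> <benchmark dir> <results dir>`) and the layout of its results folder (`results.json` and `summary.jsonl` per asset). Below is a minimal smoke-test sketch of that workflow driven from Python, assuming the package and the ArSAS data have been installed as described; the exact filter pattern, asset directory and any field names inside `results.json` are assumptions taken from the quick-start text rather than verified against the repository.

```python
import json
import subprocess
from pathlib import Path

# Run the sentiment/ArSAS random-baseline asset from the quick start above,
# limited to a handful of samples. The flags (--filter, --limit) and the two
# positional arguments (benchmark dir, results dir) follow the usage text.
subprocess.run(
    [
        "python", "-m", "llmebench",
        "--filter", "*ArSAS_Random*",  # wildcard search over assets (assumed pattern)
        "--limit", "5",
        "assets/",                     # benchmark dir containing the asset files
        "results/",                    # results dir created by the run
    ],
    check=True,
)

# Inspect the documented outputs: results.json summarises processed/failed
# counts plus evaluation results; summary.jsonl has one prediction per line.
for results_file in Path("results").rglob("results.json"):
    print(results_file)
    print(json.dumps(json.loads(results_file.read_text()), indent=2)[:500])

for summary_file in Path("results").rglob("summary.jsonl"):
    for line in summary_file.read_text().splitlines()[:3]:
        print(json.loads(line))
```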
spring-cloud-build | spring cloud build a practical engineering codebase for spring cloud and spring boot | spring-cloud spring-boot redis | cloud
Embedded-Systems-Design | embedded systems design a series of embedded systems design experiments covering various aspects of embedded systems microcontroller programming and hardware interfacing exp1 matrix keypad and 7 segment display input system design exp2 pwm based breathing led system design exp3 traffic signal system with independent 7 segment displays and tricolor leds exp4 digital clock system with scanning 7 segment displays exp5 electronic piano system with passive buzzer and matrix keypad equipment or software needed step mxo2 v2 fpga development board step baseboard v2 peripheral experimentation board lattice s diamond fpga integrated development tool modelsim simulation software | os |
2018_Group_14 | 2018 group 14 158 383 information technology project description openstreetmap osm is a collaborative project to create a free editable map of the world rather than the map itself the data generated by the project is considered its primary output the creation and growth of osm has been motivated by restrictions on use or availability of map information across much of the world and the advent of inexpensive portable satellite navigation devices osm is considered a prominent example of volunteered geographic information assignment 1 task one usage system required aws instance net core 2 1 with ubuntu server 18 04 version 1 0 storage 30gb our system does not require net support so although this system has prebuilt net core it still can be seen as a blank instance we just need ubuntu server 18 04 sercurity groups http tcp 80 0 0 0 0 0 postgresql tcp 5432 0 0 0 0 0 ssh tcp 22 0 0 0 0 0 http tcp 443 0 0 0 0 0 to install openstreetmap by default settings run curl https raw githubusercontent com damming mapdata master set up openstreetmap sh bash to install openstreetmap with your own password run bash curl s https raw githubusercontent com damming mapdata master set up openstreetmap with password sh your password for example if your password is 12345678 then run bash curl s https raw githubusercontent com damming mapdata master set up openstreetmap with password sh 12345678 the installation process might be stuck by very a little warnings or some steps need times those are all normal test openstreetmap server is ready when you see these lines the numbers may different renderd 12266 starting stats thread renderd 12266 using web mercator projection settings renderd 12266 using web mercator projection settings renderd 12266 using web mercator projection settings renderd 12266 using web mercator projection settings then open http actual ip osm tiles 0 0 0 png to see if tiles can be rendered you can change the values of 0 0 0 png they are scale x y respectivly or http autual ip ol html to see the relinked online map if you want to see if the password is correctly set press control c to interupt the server process then enter cd to go back to the user root directory then enter vi pgpass then you will see your password after the last colon assignment 1 task two discussion use of management and staging environments in task one we successfully set up a working openstreetmap server that will act as our management server for task two we will use our management server to create duplicate osm servers these duplicate copies of osm can be used as staging environments with the same hardware and software settings for testing and evaluating changes privately by having staging environments that are identical or as close as possible to our production environment we can minimise differences which may cause or hide issues in the deployment of changes as developers we want to minimise the risk of errors being transitioned to our live environments and ensure minimal downtime staging environments allow us to ensure there are no conflicts before making changes live new duplicates of our original production environment can be made quickly should there be issues caused by changes in another staging environment usage install ansible if you are going to rerun the playbook please delete the created keypair in aws console run prepared localhost sh this shell script will install ansible and boto in the current system then a ansible playbook used to create a new aws keypair will be created the new keypair is ssh 
keypairforansible yem parameters are aws access key id aws secret access key region please have a look at the important below first bash curl s https raw githubusercontent com damming mapdata master prepared localhost sh aws access key id aws secret access key region for example your aws access key id is aaaa and your aws secret access key is bbbb and you want to create the new instance in us west 2 then run bash curl s https raw githubusercontent com damming mapdata master prepared localhost sh aaaa bbbb us west 2 run playbook important if you did not create instance in us west 2 then you need to change the value of image line 33 and reigon line 35 before run the script a feasible way is to edit the file directly in github page and then click raw use the address replace the address below wget c https raw githubusercontent com damming mapdata master prepared new instance yml ansible playbook prepared new instance yml test http autual ip ol html assignment 2 task one decrisption in this task six aws instances are required they are 1 assistant server 1 database server 3 openstreetmap program servers including 1 backup and 1 nginx server only assistant server needs to be setup manually the other servers can be setup by ansible playbook running on the assistant server assistant server can also be used to update program server system required ubuntu server 18 04 lts hvm assistant server needs 8g storage the other servers storage will be set by the playbook sercurity groups assistant server http tcp 80 0 0 0 0 0 postgresql tcp 5432 0 0 0 0 0 ssh tcp 22 0 0 0 0 0 http tcp 443 0 0 0 0 0 the other server will be set by the playbook usage please have a look at the usage of assignment 1 task 2 first setup system setup an assistant server then run bash curl s https raw githubusercontent com damming mapdata master ansible sh aws access key id aws secret access key region the shell script you just ran has installed ansible into assistant server next step is setup all the other servers run wget c https raw githubusercontent com damming mapdata master q2 playbook yml ansible playbook q2 playbook yml this playbook will run quiet a while the system will be ready for using after the playbook is running completely update if you want to update all the program servers in one time run wget c https raw githubusercontent com damming mapdata master update yml ansible playbook update yml if you want to update servers one at a time and see the changes download the update playbook by wget c https raw githubusercontent com damming mapdata master update yml then comment the 26th line and decomment the 27th line the 0 in the 27th line is used to control which server is going to be updated the valid options are 0 1 2 run the following command to execute the playbook ansible playbook update yml test http autual ip of nginx server ol html if you only updated one program server the old version and the new version will appear alternately the log of nginx can be found in var log nginx assignment 2 task two decrisption backup and restore the database usage setup a blank backup server then run bash curl s https raw githubusercontent com damming mapdata master backup sh actual database ip now the backup server will backup the database every 15 minutes you can modify the last line of etc crontab to change the backup rate then delete the database server instance and run the following command on backup server bash curl s https raw githubusercontent com damming mapdata master restore sh retore database ip we highly recommand you just 
use localhost as the place to restore the database otherwise you need to set up a blank database server first which is a bit more complex lastly reconfigure the map program servers download the configuration playbook wget c https raw githubusercontent com damming mapdata master change database yml modify the database ip on line 35 then run ansible playbook change database yml test http actual ip of nginx server ol html | server
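The openstreetmap deployment record above verifies each server by opening `http://<actual ip>/osm_tiles/0/0/0.png` and `http://<actual ip>/ol.html` in a browser. The same check can be scripted; a small sketch using only those two documented endpoints, with the host address as a placeholder.

```python
import urllib.request

HOST = "203.0.113.10"  # placeholder: public IP of the rendering server (or the nginx server)

# The readme's manual test: a rendered tile at zoom/x/y = 0/0/0 and the
# OpenLayers page that embeds the map.
CHECKS = [
    f"http://{HOST}/osm_tiles/0/0/0.png",
    f"http://{HOST}/ol.html",
]

for url in CHECKS:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{url} -> HTTP {resp.status} ({resp.headers.get('Content-Type')})")
    except Exception as exc:  # smoke test: report the failure and keep going
        print(f"{url} -> FAILED: {exc}")
```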
ThoughtSource | thoughtsource a framework for the science of machine thinking datasets available datasets tutorial notebook notebooks tutorial ipynb installation guide installation dataset annotator annotator thoughtsource is a central open resource and community centered on data and tools for chain of thought reasoning in large language models wei 2022 https arxiv org abs 2201 11903 our long term goal is to enable trustworthy and robust reasoning in advanced ai systems for driving scientific research and medical practice p align center img alt thoughtsource overview 3 src resources images thoughtsource overview 3 svg p pre print ott et al thoughtsource a central hub for large language model reasoning data https arxiv org abs 2301 11596 arxiv 2023 pre print hebenstreit et al an automatically discovered chain of thought prompt generalizes to novel models and datasets https arxiv org abs 2305 02897 arxiv 2023 workflow p align center img alt thoughtsource overview 1 src resources images thoughtsource overview 1 svg img alt thoughtsource overview 2 src resources images thoughtsource overview 2 svg p available datasets our dataloaders libs cot allow you to access the following datasets in a standardized chain of thought format the dataloaders create objects in the hugging face datasets format https huggingface co docs datasets index we sometimes extensively post processed the source datasets in different ways to create more coherent reasoning chains p align center datasets can be a href http thought samwald info b browsed online through the dataset viewer b a p general question answering commonsense qa https www tau nlp sites tau ac il commonsenseqa multiple choice commonsense knowledge question answering dataset talmor 2018 https arxiv org abs 1811 00937 license mit reasoning chains from three different sources are included human generated reasoning chains derived from the ecqa dataset https github com dair iitd ecqa dataset aggarwal 2021 https aclanthology org 2021 acl long 238 for train and validation split used as gold standard license community data license agreements sharing license 1 0 ai generated few shot prompting reasoning chains from wei 2022 https arxiv org abs 2201 11903 only available for validation split license unknown ai generated zero shot prompting generated reasoning chains from kojima 2022 https arxiv org abs 2205 11916 only available for validation split license unknown strategy qa https allenai org data strategyqa general domain question answering data from the strategyqa dataset reasoning chains are derived from original dataset geva 2021 https direct mit edu tacl article doi 10 1162 tacl a 00370 100680 did aristotle use a laptop a question answering license mit human generated reasoning chains derived from the original dataset for train split used as gold standard license mit ai generated few shot reasoning chains from wei 2022 https arxiv org abs 2201 11903 only available for train split license unknown ai generated zero shot generated reasoning chains from kojima 2022 https arxiv org abs 2205 11916 only available for train split license unknown qed https github com google research datasets qed general domain question answering data and justifications from the qed dataset lamm 2020 https arxiv org abs 2009 06354 license cc by sa 3 0 scientific medical question answering worldtree http cognitiveai org explanationbank scientific question answering data from the worldtree v2 dataset xie 2020 https aclanthology org 2020 lrec 1 671 human generated reasoning chains derived 
from the original dataset license ai2 mercury entailment bank https allenai org data entailmentbank science exam questions with expert authored explanations from the entailmentbank dataset dalvi 2022 https arxiv org pdf 2104 08661 pdf human generated reasoning chains derived from the original dataset license cc by 4 0 note significant overlap with worldtree v2 open book qa https allenai org data open book qa scientific question answering modeled after open book exams for assessing human understanding from the openbookqa dataset mihaylov 2018 https aclanthology org d18 1260 pdf human generated reasoning chains derived from the original dataset license apache license 2 0 med qa https github com jind11 medqa usmle subset free form multiple choice openqa dataset containing questions from medical board exams in us usmle note the original medqa dataset also provides chinese language data which are currently not included jin 2020 https arxiv org abs 2009 13081v1 license mit br additionally the dataset is also available in an open answer version nair 2023 https arxiv org abs 2303 17071 license mit ai generated zero shot reasoning chains derived from li vin 2022 https arxiv org abs 2207 08143 only available for the test split only us questions license unknown medmc qa https medmcqa github io multiple choice question answering dataset containing real world medical entrance exam questions from the all india institute of medical sciences aiims pg and national eligibility cum entrance test neet pg pal 2022 https arxiv org abs 2203 14371 license mit human generated reasoning chains derived from the original dataset for 85 of train and validation split used as gold standard license mit ai generated zero shot reasoning chains derived from li vin 2022 https arxiv org abs 2207 08143 only available for 1000 samples from the validation split license cc by mmlu https github com hendrycks test massive multitask language understanding is a compendium of 57 distinct question and answer tasks included are the selected six subjects related to medicine anatomy clinical knowledge college biology college medicine medical genetics and professional medicine license mit pubmed qa https github com pubmedqa pubmedqa qa dataset containing biomedical questions extracted from pubmed abstracts that can be answered with yes no maybe jin 2019 https arxiv org abs 1909 06146 license mit human generated reasoning chains derived from the original dataset used as gold standard license mit ai generated zero shot reasoning chains derived from li vin 2022 https arxiv org abs 2207 08143 only available for the test split license cc by math word problems aqua https github com deepmind aqua math word problems from the aqua rat algebra question answering with rationales dataset ling 2017 https arxiv org pdf 1705 04146 pdf reasoning chains derived from the original dataset license apache 2 0 asdiv https github com chaochun nlu asdiv dataset math word problems from the academia sinica diverse mwp dataset miao 2020 https aclanthology org 2020 acl main 92 reasoning chains derived from the original dataset license cc by nc 4 0 gsm8k https github com openai grade school math math word problems from the gsm8k dataset cobbe 2021 https arxiv org abs 2110 14168 reasoning chains derived from the original dataset license mit mawps https github com sroy9 mawps math word problems from mawps the math word problem repository dataset koncel kedziorski 2016 https aclanthology org n16 1136 pdf reasoning chains derived from the original dataset license mit 
svamp https github com arkilpatel svamp math word problems source svamp patel 2021 https aclanthology org 2021 naacl main 168 reasoning chains derived from the original dataset license mit collections of datasets for quick and economic formative evaluation of cot reasoning we combined random examples of the above datasets to collections thoughtsource 33 hebenstreit 2023 https arxiv org abs 2305 02897 is a collection made up of 33 samples each from commonsense qa medqa usmle medmcqa openbookqa strategyqa and worldtree v2 we generated zero shot cots with ten different prompting strategies each employed by six models davinci 002 davinci 003 gpt 3 5 turbo gpt 4 flan t5 xxl and cohere s command xlarge nightly the data can easily be accessed python collection collection load thoughtsource 33 we are working on collecting and generating additional datasets and on further improving the quality of existing datasets see dataset issues https github com openbiolink thoughtsource issues q is 3aissue label 3adataset we welcome suggestions for the inclusion of other datasets we welcome dataset contributions have a look at our contribution guide contributing md annotator p align center img alt demonstration of the annotator tool src resources images annotator demo webp width 80 the annotator allows for highlighting similarities between different generated reasoning chains making it easier to spot strenghts and weaknesses and to select best results p p align center a href http thought samwald info 3000 b use the web based annotator b a br to try out the annotator simply type in your name and load this a href https github com openbiolink thoughtsource blob main notebooks worldtree 10 json target blank example file a p br installation and code structure installation execute in terminal line by line bash git clone git github com openbiolink thoughtsource git cd thoughtsource install pip and virtualenv sudo apt install python3 pip sudo apt install python3 venv create and activate virtual environment python3 m venv venv source venv bin activate install requirements and api packages pip install e libs cot api applications annotator apps annotator web based tool for annotating chain of thought data dataset viewer apps dataset viewer streamlit application for browsing thoughtsource datasets libraries cot libs cot dataloader creating and processing of thoughtsource datasets based on the hugging face datasets library generate generating reasoning chains with a wide variety of language models currently openai and models on hugging face hub evaluate evaluate the performance of predictions extracted using generated reasoning chains python 1 dataset loading and selecting a random sample collection collection worldtree verbose false collection collection select split train number samples 10 2 language model generates chains of thought and then extracts answers config instruction keys qa 01 answer the following question through step by step reasoning cot trigger keys kojima 01 answer let s think step by step answer extraction keys kojima a d therefore among a through d the answer is api service huggingface hub engine google flan t5 xl warn false verbose false collection generate config config 3 performance evaluation collection evaluate accuracy qa 01 kojima 01 kojima a d 0 6 p align center see the a href https github com openbiolink thoughtsource blob main notebooks tutorial ipynb b tutorial notebook b a for more code examples p citation bibtex misc https doi org 10 48550 arxiv 2301 11596 doi 10 48550 arxiv 2301 11596 url 
https arxiv org abs 2301 11596 author ott simon and hebenstreit konstantin and li vin valentin and hother christoffer egeberg and moradi milad and mayrhauser maximilian and praas robert and winther ole and samwald matthias keywords computation and language cs cl artificial intelligence cs ai fos computer and information sciences fos computer and information sciences title thoughtsource a central hub for large language model reasoning data publisher arxiv year 2023 copyright creative commons attribution 4 0 international versioning all updates changes to datasets are explicitly mentioned in bold details summary 1 0 0 2023 07 11 summary released thoughtsource 33 collection with 60 reasoning chains for each item collection load thoughtsource 33 added an option for creating chained commands added chat option of gpt added filtering functions for already created chain of thoughts added new datasets mmlu six medical subsets and open ended question version of medqa details details summary 0 0 5 2023 03 10 summary added a function to select which generated cots to keep after loading collection select generated cots author thoughtsource details details summary 0 0 4 2023 03 08 summary improved evaluation function added a function to load thoughtsource100 collection collection load thoughtsource 100 details details summary 0 0 3 2023 02 24 summary released thoughtsource 100 collection with reasoning chains from gpt text davinci 003 flan t5 xxl and cohere s command xl details details summary 0 0 2 2023 02 15 summary updated annotator tool for correct data schema this might result in errors loading old datasets when loading from json files pubmed qa included long answer from origin schema as cot in thoughtsource schema details details summary 0 0 1 2023 02 01 summary initial release after twitter announcement of project details | dataset machine-learning natural-language-processing question-answering reasoning | ai |
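The thoughtsource entry above contains a flattened version of the library's three-step Python flow (load a dataset, generate chains of thought, evaluate). Here is that same snippet reconstructed into runnable form; the import path and keyword names follow the flattened text and the entry's description of the `cot` package, so treat anything that differs from the installed version as an assumption.

```python
from cot import Collection

# 1. Load the WorldTree dataset and keep a small random sample.
collection = Collection(["worldtree"], verbose=False)
collection = collection.select(split="train", number_samples=10)

# 2. A language model generates chains of thought, then answers are extracted.
config = {
    "instruction_keys": ["qa-01"],             # "Answer the following question through step-by-step reasoning."
    "cot_trigger_keys": ["kojima-01"],         # "Answer: Let's think step by step."
    "answer_extraction_keys": ["kojima-A-D"],  # "Therefore, among A through D, the answer is"
    "api_service": "huggingface_hub",
    "engine": "google/flan-t5-xl",
    "warn": False,
    "verbose": False,
}
collection.generate(config=config)

# 3. Score the extracted answers (the entry reports 0.6 accuracy for this setup).
print(collection.evaluate())
```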
Smart-Mixer-Detector | smart mixer detector a benr3523 embedded system design assignment at universiti teknikal malaysia melaka supervising lecturer associate professor (profesor madya) dr soo yew guan group members (c 3benrs2) nur fazira binti hashim (b022010032) and syazani bin syamizey (b021910092) | os
Mastering-Full-Stack-React-Web-Development | mastering full stack react web development this is the code repository for mastering full stack react web development https www packtpub com web development mastering full stack react web development utm source github utm medium repository utm campaign 9781786461766 published by packt www packtpub com it contains all the supporting project files necessary to work through the book from start to finish about the book full stack web development is being redefined by the impact of reactjs if mean demonstrated just how effective combining javascript frameworks and tools could be for the modern web developer by replacing angular with react developers have an easier way to build isomorphic web applications where code can run on both the client and server this book will get you up to speed with one of the latest strategies to meet the demands of today s dynamic and data intensive web combining detailed insights and guidance with practical and actionable information that will ensure you can build a complete isomorphic web app it s an essential resource for the forward thinking developer you ll learn how to create a reliable and powerful back end platform with node js and express as well as exploring how to use mongodb as the primary database you ll see how its flexibility is a core part of any full stack developer s workflow as well as learning how to use mongoose alongside it to make data storage safer and more reliable instructions and navigations all of the code is organized into folders each folder starts with a number followed by the application name for example chapter02 the code will look like the following articleid 987654 articletitle lorem ipsum article one articlecontent here goes the content of the article articleid 123456 articletitle lorem ipsum article two articlecontent sky is the limit the content goes here related products meteor full stack web application development https www packtpub com web development meteor full stack web application development utm source github utm medium repository utm campaign 9781787287754 learning full stack react video https www packtpub com web development learning full stack react video utm source github utm medium repository utm campaign 9781787121348 mastering mean web development expert full stack javascript video https www packtpub com web development mastering mean web development expert full stack javascript video utm source github utm medium repository utm campaign 9781785882159 suggestions and feedback click here https docs google com forms d e 1faipqlse5qwunkgf6puvzpirpdtuy1du5rlzew23ubp2s p3wb gcwq viewform if you have any feedback or suggestions | front_end |
quiniela-app-backend | quiniela app backend this is the backend for developing an app for quinielas (sports betting pools) | server
ReactJS_VideoGuides | react js web development the essentials bootcamp course logo course logo udemy png official repository guide to accompany the video lessons of the react js web development the essentials bootcamp course take the course here https www udemy com react js and redux mastering web apps https www udemy com react js and redux mastering web apps commit by lecture guide https github com 15dkatz react essentials bootcamp commits https github com 15dkatz react essentials bootcamp commits this breaks down the course one commit at a time per lecture for an easier checkpoint troubleshooting experience what you ll learn updated for 2022 2023 learn how to code with react redux react hooks and more from an engineer with 5 years of industry experience modern redux in 2022 2023 modern syntax and best practices react hooks in 2023 2023 explore fundamental hooks and build hooks from scratch learn react in 2022 2023 the right way and learn best practices from an engineer with 5 years of industry experience create industry relevant projects that you can use on your portfolio and resume access 3 hours of in depth javascript material to hone your js skills learn react the right way and learn best practices from an engineer with 5 years of industry experience modern react in 2022 2023 createstore functional components etc explore the react engine and learn how it works under the hood to better understand the virtual dom state props etc learn how to build applications from scratch setting up your own react app template see how react fits in the big picture of web development with a ton of detailed overviews on what is happening in the browser and the react engine this provides the completed projects for portfolio music master react app template evens or odds starter react hooks in this course you will dive into react code right away you will gain relevant experience as soon as the first section time is precious and i want to make sure that you ll never feel like you re wasting it in this course so in a matter of minutes you will be writing react code in the first section with a fully completed app by the end of it understand how react fits in the big picture of web development in the second section you will take an important step back and examine how react fits in the big picture of web development you ll build a react project from scratch discovering all the layers that are in between the supplies that supports the react app and the browser which displays the react app create relevant and compelling react apps i m betting you ll find the apps both useful and interesting useful ones like the portfolio app will help you both learn react and be valuable as a completed project for your software engineering and web developer profile fun ones like music master will make coding lively giving you apps you want to show off to your friends and family | reactjs redux es6 tutorial | front_end |
Front-End-Projects | front end projects front end projects https socialify git ci tusharkesarwani front end projects image description 1 descriptioneditable a 20place 20for 20developers forks 1 issues 1 language 1 name 1 owner 1 pulls 1 stargazers 1 theme light p align center a href https frontendprojects netlify app img src https forthebadge com images badges check it out svg a p div align center p open source love svg1 https badges frapsoft com os v1 open source svg v 103 https github com ellerbrock open source badges prs welcome https img shields io badge prs welcome brightgreen svg style flat visitors https api visitorbadge io api visitors path tusharkesarwani 2ffront end projects 20 countcolor 23263759 style flat github forks https img shields io github forks tusharkesarwani front end projects github repo stars https img shields io github stars tusharkesarwani front end projects github contributors https img shields io github contributors tusharkesarwani front end projects github last commit https img shields io github last commit tusharkesarwani front end projects github repo size https img shields io github repo size tusharkesarwani front end projects github https img shields io github license tusharkesarwani front end projects github issues https img shields io github issues tusharkesarwani front end projects github closed issues https img shields io github issues closed raw tusharkesarwani front end projects github pull requests https img shields io github issues pr tusharkesarwani front end projects github closed pull requests https img shields io github issues pr closed tusharkesarwani front end projects p div open source program this project is a part of following open source programs div align center ssoc 2 0 https raw githubusercontent com tusharkesarwani front end projects main img ssoc png div div align center swoc 3 0 https raw githubusercontent com tusharkesarwani front end projects main img 9630b803 7d9b 4b19 ae68 cdbfc16c8254 png div h2 align center tech stacks used h2 p align center a href https www w3schools com html target blank rel noreferrer img src https raw githubusercontent com devicons devicon master icons html5 html5 original wordmark svg alt html5 width 100 height 100 a a href https www w3schools com css target blank rel noreferrer img src https upload wikimedia org wikipedia commons thumb d d5 css3 logo and wordmark svg 1200px css3 logo and wordmark svg png alt css3 width 100 height 100 a a href https dart dev target blank rel noreferrer a a href https developer mozilla org en us docs web javascript target blank rel noreferrer img src https cdn cdnlogo com logos j 69 javascript svg alt js width 80 height 80 a p h2 align center how to get started h2 you can refer to the following articles on basics of git and github and also contact the project mentors in case you are stuck forking a repo https help github com en github getting started with github fork a repo cloning a repo https docs github com en repositories creating and managing repositories cloning a repository how to create a pull request https opensource com article 19 7 create pull request github getting started with git and github https towardsdatascience com getting started with git and github 6fcd0f2d4ac6 h2 align center how to contribute h2 take a look at the existing issues or create your own issues wait for the issue to be assigned to you after which you can start working on it fork the repo and create a branch for any issue that you are working upon create a pull request which will be promptly 
reviewed and suggestions would be added to improve it add screenshots to help us know what this script is all about h2 a href https github com tusharkesarwani front end projects blob main contributing md contributing guidelines a h2 links portfolio https img shields io badge my portfolio 000 style for the badge logo ko fi logocolor white https portfolio of tushar netlify app linkedin https img shields io badge linkedin 0a66c2 style for the badge logo linkedin logocolor white https www linkedin com in tushar104 h2 align center stars chart h2 stargazers over time stargazers over time https starchart cc tusharkesarwani front end projects svg https starchart cc tusharkesarwani front end projects h2 align center project admin h2 table align center tr td align center a href https github com tusharkesarwani img src https avatars githubusercontent com u 92527686 v 4 width 100px alt br sub b tusharkesarwani b sub a td tr table h2 align center project mentors h2 table align center tr td align center a href https github com tusharkesarwani img src https avatars githubusercontent com u 92527686 v 4 width 100px alt br sub b tusharkesarwani b sub a td tr table h2 align center project contributors h2 table align center tr td a href https github com tusharkesarwani front end projects graphs contributors align center img src https contrib rocks image repo tusharkesarwani front end projects a td tr table hr h1 align center happy coding h1 show some nbsp by giving the star to this repo p align right a href https github com tusharkesarwani front end projects back to top a p | cool-projects css3 front-end-development github html5 javascript open-source open-source-project project web-development | front_end |
Prosumer-TFM | prosumer tfm proposed topic for my master's thesis (tfm) in software engineering, cloud, data and it management an analysis of iraq as both a producer and a consumer (prosumer) of energy | cloud
Tlaser | tlaser tlaser is a laser engraver designed as proof of concept of a new motion system corexy cantilever it s called tlaser because this motion system and thus this machine is shaped like a t img align right height 240 src https github com ficacador tlaser raw master images tlaser 20logo png main features corexy cantilever 5500mw blue laser 315mm x 205mm engraving area linear guide rails adjustable height at each corner 3d printed video of tlaser in action https youtu be njzcqpt6v3c p float center img width 430 src https github com ficacador tlaser raw master images tlaser freecad jpg img width 430 src https github com ficacador tlaser raw master images tlaser 0 jpg p corexy cantilever tlaser was developed to validate and demonstrate a new motion system which the author named corexy cantilever corexy cantilever has the same kinematics as corexy https corexy com theory html so it is easy to configure in existing firmware like grbl or marlin if the motors rotate to the same directions the head travels along the x axis if the motors rotate to opposite directions the head travels along the y axis this mechanism has two belts one over another each on a circuit shaped like a t both circuits have pulleys at the middle on both sides and at the end of the cantilever the upper belt is driven by the motor at the right end has a pulley on the left end and is locked to the left side of the carrier the lower is symmetric motor on the left pulley on the right locked to the right side of the carrier p float center img width 430 src https github com ficacador tlaser raw master images corexy 20cantilever 20x png img width 430 src https github com ficacador tlaser raw master images corexy 20cantilever 20y png p hardware the following parts are required for building this machine 5 5w 445nm blue laser module w power supply http s click aliexpress com e babwr1wg mgn12h 300mm and mgn15h 400mm blocks and rails http s click aliexpress com e bkj8dyfi 2 nema 17 stepper motors http s click aliexpress com e cpg92ixi arduino uno r3 w cnc shield v3 and a4988 stepper drivers http s click aliexpress com e b7e6xk5a 2 mechanical endstops http s click aliexpress com e cig3mqk timing belt gt2 6mm 5m http s click aliexpress com e belvtmgu 2 belt drive pulleys gt2 16 teeth bore 5mm https s click aliexpress com e d7v8utq gt2 pulleys 4 w 16 teeth and 4 w no teeth https s click aliexpress com e dzure9s m3x6 and m3x30 round head hex bolts http s click aliexpress com e cscwftx6 m3x8 socket head hex bolts http s click aliexpress com e b72th5nm m3x12 self tapping bolts http s click aliexpress com e bzmmlnqu m3x50 socket head bolts http s click aliexpress com e t3qlc2k m3 hex nuts http s click aliexpress com e nrg0nic the list above contains affiliate links the total cost should be below 150 165 usd 3d printed parts not included for a more detailed table with quantities check the bill of materials https github com ficacador tlaser raw master bom ods 3d printed parts all parts are already correctly oriented for ease of printing and available in stl files https github com ficacador tlaser tree master stls and amf files https github com ficacador tlaser tree master amfs dimensional accuracy is very important this is an assembly of mechanical parts designed to fit tight the slightest overextrusion will make for a hard time during the assembly due to proximity to the stepper motors and laser module laser mount https github com ficacador tlaser blob master stls laser 20mount stl motor mount left https github com ficacador tlaser blob 
master stls motor 20mount 20left stl and motor mount right https github com ficacador tlaser blob master stls motor 20mount 20right stl should be printed with a more heat resistant material like abs or petg recommended settings 0 5 mm line width 0 2 mm layer height 2 mm wall thickness over 15 infill some parts require support base left https github com ficacador tlaser blob master stls base 20left stl and base right https github com ficacador tlaser blob master stls base 20right stl are prone to warping requiring good bed adhesion total printing time around 38h total filament used approximately 600g assembly video of tlaser exploded assembly on freecad https youtu be 6z7jv q1ixa an exploded assembly cad file https github com ficacador tlaser raw master cad tlaser 20exploded 20assembly fcstd for freecad https www freecadweb org with the exploded assembly workbench https github com jmg1 explodedassembly added is available to guide the assembly process required tools are a set of hex key wrenches a philips screwdriver a small flat head screwdriver a rubber mallet and a wire cutter the controller wiring is pretty straight forward the exceptions being the laser ttl connects to the z endstop pins and the red wire is left out when connecting the x and y endstops the power supply and 12v laser pcb cables need to be cut so both can be connected to the same respective and on the cnc shield p float center img width 430 src https github com ficacador tlaser raw master images tlaser 20exploded 20assembly jpg img width 430 src https github com ficacador tlaser raw master images controller 20assembly 205 20shield jpg p firmware an arduino running grbl https github com gnea grbl is used to control tlaser it uses grbl version 1 1g with a modified config h file https github com ficacador tlaser blob master firmware grbl v1 1 grbl config h with the configuration and default values that better suit this machine you can find an hex file and the source code of grbl for tlaser here https github com ficacador tlaser tree master firmware instructions on how to flash grbl to an arduino https github com gnea grbl wiki flashing grbl to an arduino or alternatively compile and upload grbl https github com gnea grbl wiki compiling grbl can be found on the grbl wiki https github com gnea grbl wiki grbl is distributed under the gplv3 license and is developed by sungeun k jeon ph d cam software laserweb4 https github com laserweb laserweb4 wiki is simple yet powerful graphically pleasing open source laser cam software available for windows mac and linux a file containing tlaser profile https github com ficacador tlaser blob master cam 20profiles laserweb profiles tlaser json for laserweb4 v4 0 996 is available there are many valid laser cam software options use whatever one you like the most design this project was designed using freecad https www freecadweb org there are more powerful and certainly more user friendly options but none is actually free of charge of licensing of login and available for windows mac and linux designed with freecad version 0 18 3 with the added workbenches fasteners https github com shaise freecad fastenerswb a2plus https github com kbwbe a2plus and exploded assembly https github com jmg1 explodedassembly you can find the cad files for the individual parts and the assembly here https github com ficacador tlaser tree master cad all the features and sketches of the 3d printed parts are named for ease of understanding and editing tlaser is developed by filipe ca ador license tlaser is published 
under the creative commons attribution sharealike 4 0 international cc by sa 4 0 https creativecommons org licenses by sa 4 0 public license under this license anyone can share copy redistribute edit remix and develop this work for any purpose as long as credit is attributed to the original author and any further developments are distributed under the same license no warranty of any kind is provided and the author is not liable for any losses damages or injuries in any way related to this work | 3d-printed laser laser-engraver arduino grbl freecad | os |
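The tlaser record above states the corexy cantilever behaviour in words: both motors turning the same way moves the head along X, opposite ways moves it along Y. That is the standard CoreXY relationship, sketched below as a pair of helper functions; the sign convention and any belt-pitch scaling are assumptions, since the machine's grbl firmware performs this transform itself.

```python
def corexy_to_cartesian(delta_a: float, delta_b: float) -> tuple[float, float]:
    """Map belt/motor displacements (A, B) to head motion (X, Y).

    Equal displacements give pure X travel, opposite displacements give pure Y
    travel, matching the description in the record above (units: mm of belt).
    """
    x = (delta_a + delta_b) / 2.0
    y = (delta_a - delta_b) / 2.0
    return x, y


def cartesian_to_corexy(dx: float, dy: float) -> tuple[float, float]:
    """Inverse transform: belt displacements needed for a desired X/Y move."""
    return dx + dy, dx - dy


# Same-direction rotation -> X move only; opposite directions -> Y move only.
assert corexy_to_cartesian(10.0, 10.0) == (10.0, 0.0)
assert corexy_to_cartesian(10.0, -10.0) == (0.0, 10.0)
```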
DA-Mobile | da mobile a mobile application made by the app development club (the readme's introduction and credit sections are empty) | front_end
awesome-llm-datasets | awesome llm datasets this repository is a collection of useful links related to datasets for language model models llms and reinforcement learning with human feedback rlhf it includes a variety of open datasets as well as tools pre trained models and research papers that can help researchers and developers work with llms and rlhf from a data perspective follow and star for the latest and greatest links related to datasets for llms and rlhf table of contents 1 datasets datasets 1 for pre training for pre training 1 2023 2023 2 before 2023 before 2023 2 for instruction tuning for instruction tuning 3 for rlhf for rlhf 4 for evaluation for evaluation 5 for other purposes for other purposes 2 models and their datasets models and their datasets 3 tools and methods tools and methods 4 papers papers datasets for pre training 2023 redpajama data https github com togethercomputer redpajama data 1 2 trillion tokens dataset in english dataset token count commoncrawl 878 billion c4 175 billion github 59 billion books 26 billion arxiv 28 billion wikipedia 24 billion stackexchange 20 billion total 1 2 trillion also includes code for data preparation deduplication tokenization and visualization created by ontocord ai mila qu bec ai institute eth ds3lab universit de montr al stanford center for research on foundation models crfm stanford hazy research research group and laion before 2023 for instruction tuning for rlhf alignment for evaluation for other purposes models and their datasets llama overview a collection of open source foundation models ranging in size from 7b to 65b parameters released by meta ai license non commercial bespoke model gpl 3 0 code release blog post https ai facebook com blog large language model llama meta ai arxiv publication https arxiv org abs 2302 13971 model card https github com facebookresearch llama blob main model card md vicuna overview a 13b parameter open source chatbot model fine tuned on llama and 70k chatgpt conversations that maintains 92 of chatgpt s performance and outperforms llama and alpaca license non commercial bespoke license model apache 2 0 code repo https github com lm sys fastchat vicuna weights release blog post https vicuna lmsys org sharegpt dataset https sharegpt com models https huggingface co lmsys gradio demo https chat lmsys org dolly 2 0 overview a fully open source 12b parameter instruction following llm fine tuned on a human generated instruction dataset licensed for research and commercial use license cc by sa 3 0 model cc by sa 3 0 dataset apache 2 0 code repo https github com databrickslabs dolly release blog post https www databricks com blog 2023 04 12 dolly first open commercially viable instruction tuned llm models https huggingface co databricks llava overview a multi modal llm that combines a vision encoder and vicuna for general purpose visual and language understanding with capabilities similar to gpt 4 license non commercial bespoke model cc by nc 4 0 dataset apache 2 0 code repo https github com haotian liu llava project homepage https llava vl github io arxiv publication https arxiv org abs 2304 08485 dataset models https huggingface co liuhaotian gradio demo https llava hliu cc stablelm overview a suite of low parameter 3b 7b llms trained on a new dataset built on the pile with 1 5 trillion tokens of content license cc by sa 4 0 models repo https github com stability ai stablelm release blog post https stability ai blog stability ai launches the first of its stablelm suite of language models models 
https huggingface co stabilityai gradio demo https huggingface co spaces stabilityai stablelm tuned alpha chat alpaca overview a partially open source instruction following model fine tuned on llama which is smaller and cheaper and performs similarly to gpt 3 5 license non commercial bespoke model cc by nc 4 0 dataset apache 2 0 code release blog post https crfm stanford edu 2023 03 13 alpaca html dataset https huggingface co datasets tatsu lab alpaca tools and methods papers | ai |
Torpedo | torpedo torpedo is a framework to test the resiliency of the airship deployed environment it provides templates and prebuilt tools to test all components from hardware nodes to service elements of the deployed stack the report and logging module helps easy triage of design issues pre requisites label the nodes label the nodes on which argo to be run as argo enabled label the nodes on which metacontroller needs to be enabled as metacontroller enabled label the nodes on which torpedo should run as torpedo controller enabled label the nodes on which traffic and chaos jobs to run as resiliency enabled label the nodes on which log collector jobs to run as log collector enabled clone the git repository git clone https github com bm metamorph torpedo git install metacontroller kubectl create ns metacontroller cat torpedo metacontroller rbac yaml kubectl create n metacontroller f cat torpedo install metacontroller yaml kubectl create n metacontroller f install argo kubectl create ns argo cat torpedo install argo yaml kubectl create n argo f deploy torpedo controller cat torpedo torpedo crd yaml kubectl create f cat torpedo controller yaml kubectl create f apply torpedo rbac rules cat torpedo resiliency rbac yaml kubectl create f cat torpedo torpedo rbac yaml kubectl create f deploy torpedo kubectl create configmap torpedo metacontroller n metacontroller from file torpedo metacontroller torpedo metacontroller py cat torpedo torpedo controller yaml kubectl create n metacontroller f cat torpedo webhook yaml kubectl create n metacontroller f trigger the test suite cat test suite kubectl n metacontroller create f note in case ceph storage is used to create a pvc create a ceph secret in the namespace the pvc needs to created with same name as usersecretname as mentioned in the ceph storage class the ceph secret can be obtained by the following command kubectl exec it n ceph ceph mon pod ceph auth get key client admin base64 replace the key and name in torpedo secret yaml with the key generated in above command and the name mentioned in the ceph storage class respectively and execute the following command cat torpedo secret yaml kubectl create f test cases covered in torpedo 1 openstack openstack api get calls keystone service list mariadb keystone service list memcached keystone service list ingress keystone service list glance image list neutron port list nova server list cinder volume list heat stack list horizon get call on horizon landing page openstack rabbitmq glance rabbitmq post call to create and upload and delete an image neutron rabbitmq post call to create and delete a router nova rabbitmq post call to create and delete a server cinder rabbitmq post call to create a volume heat rabbitmq post call to create and delete a stack openstack api post calls glance post call to create and upload and delete an image neutron post call to create and delete a router neutron dhcp agent post call to create a virtual machine assign a floating ip to the virtual machine and initiate a ping request to the floating ip openvswitch db post call to create a virtual machine assign a floating ip to the virtual machine and initiate a ping request to the floating ip openvswitch daemon post call to create a virtual machine assign a floating ip to the virtual machine and initiate a ping request to the floating ip nova compute post call to create and delete a server nova scheduler post call to create and delete a server nova conductor post call to create and delete a server libvirt post call to create and 
delete a server cinder volume post call to create a volume cinder scheduler post call to create a volume heat post call to create and delete a stack 2 ucp ucp api get calls keystone keystone service list promenade get call to check health armada releases list drydock nodes list shipyard configdocs list barbican secrets list deckhand revisions list 3 kubernetes kubernetes proxy creates a pod and a service and initiate a ping request to the service ip kubernetes apiserver get call to the pod list kubernetes scheduler post call to create and delete a pod ingress get call to kube apiserver test suite description the test suite contains following sections 1 auth 2 job parameters 3 namespace 4 orchestrator plugin 5 chaos plugin 6 volume storage class 7 volume storage capacity 8 volume name auth auth section consists of keystone auth information in case of openstack and ucp and url and token in case of kubernetes auth auth url http keystone api openstack svc cluster local 5000 v3 username username password password user domain name default project domain name default project name admin job parameters job parameters section further consists 7 sections 1 name name for test case 2 service name of the service against which the tests to be run example nova cinder etc 3 component component of service against which the test to run example nova os api cinder scheduler 4 job duration duration for which the job needs to run both chaos and traffic jobs 5 count number of times chaos traffic should be induced on target service takes precedence only if job duration is set to 0 6 nodes used in case of node power off scenario defaults to none in normal scenarios takes a list of nodes with the following information ipmi ip ipmi ip of the target node password ipmi password of the target node user ipmi username of the target node node name node name of the target node 7 sanity checks a list of checks that needs to be performed while the traffic and chaos jobs are running defaults to none example get a list of pods nodes etc takes 3 parameters as input image image to be used to run the sanity checks name name of the sanity check command command to be executed 8 extra args a list of extra parameters which can be passed for a specific test scenario defaults to none namespace namespace in which the service to verify is running orchestrator plugin the plugin to be used to initiate traffic chaos plugin the plugin to be used to initiate chaos volume storage class storage class to be used to create a pvc used to choose the type of storage to be used to create pvc volume storage volume capacity of the pvc to be created volume name name of the volume pvc apiversion torpedo k8s att io v1 kind torpedo metadata name openstack torpedo test spec auth auth url http keystone api openstack svc cluster local 5000 v3 username admin password user domain name default project domain name default project name admin job params service nova component os api kill interval 30 kill count 4 same node true pod labels application nova component os api node labels openstack nova control enabled service mapping nova name nova os api max nodes 2 nodes ipmi ip ipmi ip node name node name user username password password sanity checks name pod list image kiriti29 torpedo traffic generator v1 command bin bash sanity checks sh pod list 2000 kubectl get pods all namespaces o wide extra args namespace openstack job duration 100 count 60 orchestrator plugin torpedo traffic orchestrator chaos plugin torpedo chaos volume storage class general volume storage 
10gi volume name openstack torpedo test node power off testcase scenario in torpedo the framework aims at creating a chaos in a nc environment and thereby measuring the downtime before the cluster starts behaving normally parallely collecting all the logs pertaining to openstack api calls pods list nodes list and so on 1 the testcase initially creates a heat stack which in turn creates a stack of 10 vms before introducing any chaos ort tests 2 once the heat stack is completely validated we record the state 3 initiate sanity checks for a checking the health of openstack services keystone get call on service list glance get call on image list neutron get call on port list nova get call on server list heat get call on stack list cinder get call on volume list b checks on kubernetes pod list kubectl get pods all namespaces o wide node list kubectl get nodes rabbitmq cluster status kubectl exec it rabbitmq pod on target node n namespace rabbitmqctl cluster status ceph cluster status kubectl exec it ceph pod on target node n namespace ceph health 4 now we shutdown the node ipmi power off 5 parallely instantiate the heat stack creation and see how much time it takes for the heat stack to finish verify heat stack is created in 15 minutes config param if not re initiate the stack creation we try this in loop the test exits with a failure after 40 minutes time limit this is a configurable parameter 6 if the heat stack creation is complete then we bring up the shutdown node and repeat the steps 1 5 on other nodes 7 logs are captured with request response times failures success messages on the test requests 8 a report is generated based on the number of testcases that have passed or failed all the logs of sanity checks apache common log format the entire pod logs in all namespaces in the cluster the heat logs authors muktevi kiriti gurpreet singh hemanth kumar nakkina | resiliency chaos chaos-engineering kubernetes airship | cloud |
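The torpedo record above walks through the sections of a test-suite resource (auth, job parameters, namespace, orchestrator and chaos plugins, volume settings) and includes a flattened example. A sketch that rebuilds a minimal version of that suite as a Python dict and dumps it with PyYAML; the apiVersion and section names come from the flattened example, but the underscored key spellings and the credential values are assumptions and placeholders, so check them against the repository before use.

```python
import yaml  # PyYAML

# Minimal test-suite resource reconstructed from the example in the record above.
torpedo_suite = {
    "apiVersion": "torpedo.k8s.att.io/v1",
    "kind": "Torpedo",
    "metadata": {"name": "openstack-torpedo-test"},
    "spec": {
        "auth": {  # Keystone credentials used by the traffic jobs (placeholders)
            "auth_url": "http://keystone-api.openstack.svc.cluster.local:5000/v3",
            "username": "admin",
            "password": "password",
            "user_domain_name": "default",
            "project_domain_name": "default",
            "project_name": "admin",
        },
        "job_params": {           # which service/component to stress, and for how long
            "service": "nova",
            "component": "os-api",
            "job_duration": 100,  # seconds of traffic + chaos
            "count": 60,          # used only when job_duration is 0
        },
        "namespace": "openstack",
        "orchestrator_plugin": "torpedo-traffic-orchestrator",
        "chaos_plugin": "torpedo-chaos",
        "volume_storage_class": "general",
        "volume_storage": "10Gi",
        "volume_name": "openstack-torpedo-test",
    },
}

print(yaml.safe_dump(torpedo_suite, sort_keys=False))
```

The printed document could then be piped into `kubectl -n metacontroller create -f -`, mirroring how the record above triggers its test suite.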
Restaurant-Review-Categoriser | restaurant review categoriser this repository contains an nlp program written in python which automatically categorises restaurant reviews as positive or negative major libraries used 1 nltk 2 re contributions are welcome built with love badge http forthebadge com images badges built with love svg | natural-language-processing python3 | ai
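The row above names the libraries (nltk and re) but not the method, so the sketch below is only one plausible way such a categoriser could work: a regex clean-up followed by NLTK's bundled VADER sentiment analyzer. The threshold and helper name are invented for illustration and are not the repository's actual code.

```python
import re

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of VADER's lexicon
_analyzer = SentimentIntensityAnalyzer()


def categorise_review(review: str) -> str:
    """Label a restaurant review as 'positive' or 'negative' (hypothetical helper)."""
    cleaned = re.sub(r"[^A-Za-z' ]+", " ", review).lower()
    compound = _analyzer.polarity_scores(cleaned)["compound"]  # score in [-1.0, 1.0]
    return "positive" if compound >= 0 else "negative"


print(categorise_review("The pasta was amazing and the staff were lovely"))  # positive
print(categorise_review("Cold food rude waiter never coming back"))          # negative
```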
sql_salaries | sql fun with salaries using provided csv files from an old database the database is rebuilt and analyzed by 1 data modeling designing an erd 2 data engineering create table schema and import csv files 3 data analysis create various queries in sql to probe whether the database is real or fabricated and visualize using pandas files data https github com l0per sql challenge tree master data database tables were provided as csv files erd schema sql database schema created from https app quickdatabasediagrams com query sql various queries probing the database db vis ipynb visualizing database using pandas data modeling and engineering the csv files were successfully imported into tables from the following erd erd https github com l0per sql challenge blob master images erd png raw true data analysis salary distributions plotting the distribution of salaries by each position title reveals an excess of salaries around 40000 regardless of position title which suggests a fabricated dataset salary distributions https github com l0per sql challenge blob master images dist title salaries png raw true average salaries average salary does not increase with seniority of title suggesting a fabricated dataset salary barplot https github com l0per sql challenge blob master images bar title salaries png raw true april fools it is indeed fabricated aprilfools https github com l0per sql challenge blob master images aprilfools png raw true | server |
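The row above summarises the analysis step: salary distributions per title and average salary per title, visualised with pandas. A sketch of that analysis run straight from the provided CSVs; the file names, column names and join key are assumptions based on the description (the repository itself reads from the rebuilt PostgreSQL database in db_vis.ipynb).

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file/column names; the real CSVs live in the repo's data/ folder.
salaries = pd.read_csv("data/salaries.csv")  # expected columns: emp_no, salary
titles = pd.read_csv("data/titles.csv")      # expected columns: emp_no, title

merged = salaries.merge(titles, on="emp_no")

# Salary histogram: the write-up notes an excess of salaries around 40,000
# regardless of title, one hint that the data is fabricated.
merged["salary"].plot.hist(bins=50, title="Salary distribution")
plt.savefig("salary_hist.png")
plt.close()

# Average salary per title: roughly flat across seniority in this dataset.
merged.groupby("title")["salary"].mean().sort_values().plot.bar(
    title="Average salary by title"
)
plt.tight_layout()
plt.savefig("avg_salary_by_title.png")
```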
|
starfyre | p align center img alt starfyre logo src https user images githubusercontent com 29942790 221331176 609e156a 3896 4c1a 9386 7bf595dfb879 png width 350 p discord https img shields io discord 1080951642070978651 label discord logo discord logocolor white style for the badge color blue https discord gg thqcpvjmz6 downloads https static pepy tech badge starfyre https pepy tech project starfyre starfyre introduction starfyre is a library that allows you to build reactive frontends using only python with starfyre you can create interactive real time applications with minimal effort simply define your frontend as a collection of observables and reactive functions and let starfyre handle the rest installation pip install starfyre a sample project is hosted on github https github com sansyrox first starfyre app sample app to create an application bash python3 m starfyre create my app my app src init fyre python from parent import parent from store import store def mocked request return fetched on the server async def handle on click e alert click rendered on client if 1 1 print hello world current value get parent signal set parent signal current value 1 a await fetch https jsonplaceholder typicode com todos 1 print await a text print handles on click style body background color red style pyxide store parent hello world span onclick handle on click mocked request for i in range 4 span parent store pyxide script this is the optional section for third party scripts and custom js script sample cli usage bash usage python m starfyre options command line interface to compile and build a starfyre project args path str path to the project directory build bool whether to start the build package create str name of the project to create serve bool whether to serve the project options path text path to the project requires build build compile and build package requires path create text create a new project requires a project name serve serve the project requires path help show this message and exit how to contribute get started please read the code of conduct and go through contributing md before contributing to starfyre feel free to open an issue for any clarifications or suggestions if you re feeling curious you can take a look at a more detailed architecture here if you still need help to get started feel free to reach out on our community discord developing locally python version 3 10 1 fork this repo https github com sparckles starfyre 2 clone this repo git clone https github com sparckles starfyre 3 go in to the starfyre directory cd starfyre 4 download poetry curl ssl https install python poetry org python3 5 install the dependencies poetry install 6 activate poetry virtual environment poetry shell 7 run the script build sh this command will run the build process in starfyre against the test application in test application directory the build sh file is a simple script that runs two commands sequentially python m starfyre build true path test application the path variable here is the path to our application the build directory is basically a python package that contains all the compiled files we use the build flag to run that package 8 you can find a small test application in the test application directory 9 navigate to cd test application dist 10 open index html in your browser to see the output feedback feel free to open an issue and let me know what you think of it | frontend hacktoberfest python python3 web | front_end |
esp32-iot-uno | esp32 iot uno hardware features esp32 wifi bluetooth le soc 240mhz module esp wroom 32 automatic select 3 power sources dc6 28v usb and battery auto download flash mode integrated sdcard slot support 1 bit mode open hardware design with kicad cc by sa license i2c oled display header lithium ion battery charger 1 reset button 1 programable button 1 power led 1 programable led 1 charger led compatible with shields for esp32 in the future gateway gsm gprs gps and lora shield connectivity can rs485 rs232 shield audio shield images esp32 iot uno images1 assets esp32 uno 0776 jpg assets esp32 uno 0776 jpg esp32 iot uno images2 assets esp32 uno 0826 jpg assets esp32 uno 0826 jpg pinout esp32 iot pinout assets esp32 pinout png assets esp32 pinout png schematic esp32 iot uno schematic assets esp32 iot uno sch png assets esp32 iot uno sch svg pcb esp32 iot uno pcb assets esp32 iot uno pcb png assets esp32 iot uno pcb svg 3d top esp32 iot uno 3d top assets esp32 iot uno 3d top png assets esp32 iot uno pcb svg down esp32 iot uno 3d down assets esp32 iot uno 3d down png assets esp32 iot uno pcb svg gerber download assets gerber zip license cc by sa this creative commons license requires you to release all derivative works under this same license you must give credit to the original author of the work state their name and the title of the original work say that you modified the work if you did and include the attribution logo website https esp32 vn | esp32 iot arduino | server |
l24es | l24es | neural-networks embedded-systems linear-algebra machine-learning | os |
CMU-15-721 | cmu 15 721 to become a better engineer i m taking the carnegie mellon university course 15 721 advanced database systems to learn olap oltp engineering i have a personal interest in high performance computing and there are many crossovers between that and the subjects taught in this course such as kernel bypass methods low latency networking parallel processing multithreading cache management and more which i will be learning prof andy pavlo teaches this course and he is incredibly knowledgeable on the subject the lectures can be found on youtube here https www youtube com watch v lws8leqauvc list plse8odhjzxjyzllmbx3cr0sxwnrm7clfn lecture 4 analytical database indexes this lecture focuses on how to optimize sequential scans on a database i learned about zone maps which essentially store metadata inside a memory block containing common aggregate data that might be queried often like min max avg sum and count this reduces the time complexity of scanning to o 1 in the case where you need the aggregate data i also learned about bit weaving which allows you to store your indexes as bit vectors in such a way that you can take advantage of bitwise operations to very efficiently check if the data you need is in the memory block exercises this course doesn t have labs or exercises so i ll be making my own exercise bloom filters one of the data structures mentioned in this course is the bloom filter the bloom filter is a probabilistic data structure this interested me because i had never heard of it before so i decided to make my own implementation you can watch me build a c implementation of this on youtube https youtu be i2p2zpir 9c si ix7ovq3f8fcdfmzu | server |
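The bloom-filter exercise mentioned in the cmu-15-721 readme above is easy to sketch. That repository's version is written in C, so the Python version below is only an illustration of the data structure itself; the bit-array size and number of hash functions are arbitrary assumptions, not values taken from the repo.

```python
# Illustrative Bloom filter: k hash positions over an m-bit array.
# False positives are possible; false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive k independent positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add("tuple-42")
print(bf.might_contain("tuple-42"))   # True
print(bf.might_contain("tuple-999"))  # False (or, rarely, a false positive)
```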
|
ExaBGPmon | exabgpmon an http frontend to the glorious exabgp features configure exabgp and auto generate a config file monitor and manage peers control routes advertised automatically re advertise when exabgp or a peer comes back online view prefixes received from peers dashboard docs dashboard png dashboard config docs config png config running exabgpmon install and start mongodb install dependencies pip install r requirements this will install exabgp supervisord initialize python manage py init config start supervisor and exabgpmon supervisord supervisorctl start exabgpmon supervisorctl start exabgp configure and restart exabgp through the gui the configuration file will automatically be updated | front_end |
|
DIY-Data-Science | diy data science diy logo https mir s3 cdn cf behance net project modules disp 9336839151265 560c985e288ea png compilation of data science resources optimized for self education the best way to learn data science is to do it yourself theory research seq2seq https github com jxieeducation diy data science blob master research seq2seq md visual qa https github com jxieeducation diy data science blob master research visual qa md frameworks libraries xgboost https github com jxieeducation diy data science blob master frameworks xgboost md chainer https github com jxieeducation diy data science blob master frameworks chainer md gensim https github com jxieeducation diy data science blob master frameworks gensim md pyldavis https github com jxieeducation diy data science blob master frameworks pyldavis md pyevolve https github com jxieeducation diy data science blob master frameworks pyevolve md deep learning paper notes papernotes https github com jxieeducation diy data science blob master papernotes contributing everything is appreciated from writing a guide to sharing a great link contributing is a great way to teach yourself and others data science | ai |
|
CoronaMaskOn | coronamaskon mask control with computer vision images corona png who world health organization https www who int health topics coronavirus tab tab 1 coronavirus disease covid 19 is an infectious disease caused by a newly discovered coronavirus most people infected with the covid 19 virus will experience mild to moderate respiratory illness and recover without requiring special treatment older people and those with underlying medical problems like cardiovascular disease diabetes chronic respiratory disease and cancer are more likely to develop serious illness demo video you can watch the demo video here https youtu be 984gw94teoe p img width 410 height 360 src images mask off png img width 410 height 360 src images mask on png align right p dataset the dataset includes masked human faces and unmasked human faces images datasetd png the pictures were downloaded from google with the chrome extension https chrome google com webstore detail download all images ifipmflagepipjokmbdecpmjbibjnakm faces in the pictures were detected and extracted with the opencv library and then converted to grayscale images prepare data png training training details are in this notebook training ipynb training ipynb images accuracy png try it yourself i share the trained model with you you can find it here https drive google com open id 1nrxpkhaljcz53kjcn51p2dhrobvlqanb to try the application with a webcam run the code below python camera mask control py note i am still trying to expand the dataset even further i will also continue to develop the model using the extended dataset and pre trained model when i complete the dataset it will be public | ai |
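The preprocessing step the coronamaskon readme above describes (detect faces with opencv, crop them, convert to grayscale) could look roughly like the sketch below; the Haar cascade choice, crop size, and image path are assumptions for illustration, not the repository's actual script.

```python
# Rough sketch of the described preprocessing: detect faces, crop, convert to grayscale.
# Cascade, crop size, and paths are placeholders, not the repo's real values.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_gray_faces(image_path: str):
    image = cv2.imread(image_path)
    if image is None:
        return []  # file missing or unreadable
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # One fixed-size grayscale crop per detected face.
    return [cv2.resize(gray[y:y + h, x:x + w], (100, 100)) for (x, y, w, h) in faces]

crops = extract_gray_faces("example.jpg")
print(f"extracted {len(crops)} face crop(s)")
```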
|
DEPRECATED-html300 | this repository is deprecated please do not clone fork or use this repository instead use this one https github com uwfront end cert html300 v2 uw front end certificate html300 this repository contains the activities and assignments required to complete the final course of the front end certificate program all assignment submissions and quizzes discussions will be done through canvas https canvas uw edu course setup optional development tools these are tools that can be installed on your computer if they look useful to you visual studio code https code visualstudio com editor ide hyper terminal https hyper is modern terminal adobe xd https www adobe com products xd html design tool initial setup the only tool you must install locally on your computer to get started is node npm visit the node homepage https nodejs org and download the latest stable version this will be the one labeled recommended for most users follow instructions to install it npm stands for node package manager we downloaded node but we won t be using it to write server side javascript we ll just be using npm to manage the javascript packages our projects will use each activity and assignment directory in this repo is a project we will be using npm from the terminal when instructions say from within a project it means that you should navigate to a project like html300 lesson03 activity before running the command npm can be used to install javascript packages in a single project this is the default behavior when npm install package name here is run this is the way we will be using it in this course it can also install packages on your computer referred to as global installation because they are then available outside of projects this is done by including the g flag when installing like npm install g package name here we will not be using this functionality in this course global installation can cause compatibility problems and is no longer recommended instead when you need to run a command to a specific node package as we will with gulp we can use the npx prefix this will execute the command with the version of that package installed in the project gulp we will be using gulp as our main build tool in this course there is no need to install it on your computer we will only be using it with the npx prefix as explained above after lesson01 each activity and assignment will be run by starting a gulp task and then visiting the url that is made available gulp will pay attention to changes in your sass and html files and update what is shown in the browser when any changes are saved it will automatically convert sass code to css each activity and assignment is already configured with a gulpfile that defines the tasks you will not need to edit these in any way your interaction with gulp will be limited to running npx gulp from within a project directory to start up a server if you re interested visit gulp s documentation https gulpjs com to learn more about how the tool works this is strictly optional course final project setup create a new repository that will hold your final course project name it something appropriate for what it will do e g weathertracker movieguide etc for assignment 01 you will create a documents folder in the root with a txt or md file within containing the course proposal as the course progresses work for the course final project will be done in the new repo the rest of these instructions are for working with this html300 repo html300 course repo setup within this html300 repo click 
the fork button the the top right to fork a copy to your personal github account you will be taken to your forked repo it should say forked fromn uwfront end cert html300 under the title clone the fork to your local machine adding the upstream to your fork navigate to your fork s root folder in terminal e g cd sites uw html300 make sure you are on your master branch git checkout master add the upstream repository as a remote connection git remote add upstream git github com uwfront end cert html300 git verify with git remote v should have both origin and upstream reminder you will only ever push to origin as that s your fork only fetching and pulling will work with upstream fetching and merging upstream to keep your fork in sync with any changes to the original we can use upstream to fetch and merge with our forks this only needs to be done once a week or when your instructor suggests to run git fetch upstream to get the latest code from all branches now make sure you re on master with git checkout master now merge the upstream version of master with your fork s by running git merge upstream master you shouldn t run into conflicts but if you get errors or conflicts you can work this guide to resolve or talk to your instructor for help https help github com en github collaborating with issues and pull requests resolving a merge conflict using the command line per lesson workflow start by checking out your local master branch and fetching merging upstream at the beginning of each lesson module so you know you re up to date from the master branch create a branch for the lesson s assignment keep these branches all named consistently like lesson1 lesson2 etc fetch and merge upstream master to your local master branch git checkout master git fetch upstream git merge upstream master from the master branch create the branch for the lesson s assignment git checkout b lesson3 each lesson will usually have two folders activity and assignment feel free to work on the activity and assignment on the same lessonx branch the activity will often have starter and solution folders this is to help illustrate the starting ending point for each the assignment folder will have the starter files and instructions required to complete once completed open a new pull request within the pr set the base to be your forked master branch and the compare branch is that lesson s assignment branch copy the direct link to the pull request page and paste that into the submission box in canvas assignments check the readme md files found in the root of each assignment folder for instructions links for tools and any required information consult the rubric within canvas for the grading scale breakdown when you open a pull request you may keep pushing commits to that assignment s branch as they will automatically update the pr no need to close and re open a new one please use the canvas discussion boards if coming across issues or problems with assignments so all folks have visibility quizzes each week there will be a quiz available to take please flag up any issues if questions aren t being assessed correctly resources libraries these are libraries that will be used in assignments and activities when you re ready use these links to documentation to learn more about them nodejs npm https nodejs org en gulp https gulpjs com sass https sass lang com documentation bootstrap 4 https getbootstrap com vue https vuejs org vue devtools https chrome google com webstore detail vuejs devtools nhdogjmejiglipccpnnnanhbledajbpd hl en nuxt https 
nuxtjs org other resources git resources https try github io learn git branching https learngitbranching js org locale en us exploring es6 https exploringjs com es6 you don t know js https github com getify you dont know js array explorer https sdras github io array explorer object explorer https sdras github io object explorer sass guide https sass lang com guide css tricks https css tricks com flexbox froggy https flexboxfroggy com grid garden https codepip com games grid garden syntax fm podcast https syntax fm shoptalk show podcast https shoptalkshow com | front_end |
|
SQL-Employee-Database-Mystery | sql challenge employee database a mystery in two parts background it is a beautiful spring day and it is two weeks since you have been hired as a new data engineer at pewlett hackard your first major task is a research project on employees of the corporation from the 1980s and 1990s all that remain of the database of employees from that period are six csv files in this assignment you will design the tables to hold data in the csvs import the csvs into a sql database and answer questions about the data in other words you will perform 1 data engineering 3 data analysis note you may hear the term data modeling in place of data engineering but they are the same terms data engineering is the more modern wording instead of data modeling before you begin 1 create a new repository for this project called sql challenge do not add this homework to an existing repository 2 clone the new repository to your computer 3 inside your local git repository create a directory for the sql challenge use a folder name to correspond to the challenge employeesql 4 add your files to this folder 5 push the above changes to github instructions data modeling inspect the csvs and sketch out an erd of the tables feel free to use a tool like http www quickdatabasediagrams com http www quickdatabasediagrams com data engineering use the information you have to create a table schema for each of the six csv files remember to specify data types primary keys foreign keys and other constraints for the primary keys check to see if the column is unique otherwise create a composite key https en wikipedia org wiki compound key which takes to primary keys in order to uniquely identify a row be sure to create tables in the correct order to handle foreign keys import each csv file into the corresponding sql table note be sure to import the data in the same order that the tables were created and account for the headers when importing to avoid errors data analysis once you have a complete database do the following 1 list the following details of each employee employee number last name first name sex and salary 2 list first name last name and hire date for employees who were hired in 1986 3 list the manager of each department with the following information department number department name the manager s employee number last name first name 4 list the department of each employee with the following information employee number last name first name and department name 5 list first name last name and sex for employees whose first name is hercules and last names begin with b 6 list all employees in the sales department including their employee number last name first name and department name 7 list all employees in the sales and development departments including their employee number last name first name and department name 8 in descending order list the frequency count of employee last names i e how many employees share each last name bonus optional as you examine the data you are overcome with a creeping suspicion that the dataset is fake you surmise that your boss handed you spurious data in order to test the data engineering skills of a new employee to confirm your hunch you decide to take the following steps to generate a visualization of the data with which you will confront your boss 1 import the sql database into pandas yes you could read the csvs directly in pandas but you are after all trying to prove your technical mettle this step may require some research feel free to use the code below to get 
started be sure to make any necessary modifications for your username password host port and database name sql from sqlalchemy import create engine engine create engine postgresql localhost 5432 your db name connection engine connect consult sqlalchemy documentation https docs sqlalchemy org en latest core engines html postgresql for more information if using a password do not upload your password to your github repository see https www youtube com watch v 2uatpmnvh0i https www youtube com watch v 2uatpmnvh0i and https help github com en github using git ignoring files https help github com en github using git ignoring files for more information 2 create a histogram to visualize the most common salary ranges for employees 3 create a bar chart of average salary by title epilogue evidence in hand you march into your boss s office and present the visualization with a sly grin your boss thanks you for your work on your way out of the office you hear the words search your id number you look down at your badge to see that your employee id number is 499942 img width 1268 alt epilogue snippet src https user images githubusercontent com 45700182 129626148 44578917 2479 4a7f 939f ab5846ae475b png submission create an image file of your erd create a sql file of your table schemata create a sql file of your queries optional create a jupyter notebook of the bonus analysis create and upload a repository with the above files to github and post a link on bootcamp spot ensure your repository has regular commits and a thorough readme md file rubric unit 9 rubric sql homework employee database a mystery in two parts https docs google com document d 1oksntynct0v0e vkhimj9 ig0 oxnwczajlkv0avmkq edit usp sharing references mockaroo llc 2021 realistic data generator https www mockaroo com https www mockaroo com | sql data-analysis database visualization data-engineering data-modeling erd schemata sql-queries | server |
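As a companion to the bonus steps in the sql-challenge readme above, a sketch of loading the salary data into pandas and plotting the histogram might look like this; the table and column names ("salaries", "salary") are assumptions based on the assignment wording, and the connection string is the same placeholder already shown there.

```python
# Sketch of the bonus analysis: read salaries via SQLAlchemy and plot a histogram.
# Table/column names are assumed from the assignment text, not verified schema.
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost:5432/your_db_name")
salaries = pd.read_sql("SELECT salary FROM salaries", engine)

salaries["salary"].plot.hist(bins=20, edgecolor="black")
plt.xlabel("salary")
plt.ylabel("number of employees")
plt.title("most common salary ranges")
plt.show()
```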
linguistics_problems | computational linguistics welcome to the main page of my project this repository stores examples of linguistics problems my name is daria i m a software engineer with skills in natural language processing my general scientific interests are knowledge bases and facts extraction there are very important analysis tools that provides semantic analysis and text mining project has next sections pre morphology pre morphology phonology phonology morphology morphology knowledge engineering knowledge engineering n grams applications n grams applications games games in the source code three languages is supported now english russian and finnish i hope that very soon next publishing problems will implement nlp algorithms for more languages source code pre morphology russian tokenizer src russian naivetokenizer py sentence boundary detection src russian naivesentenceboundarydetector py transliteration russian latin with spell checker src russian naivetransliterator py word decomposition src russian worddecompounder py camel case segmenter src english camelcasesplitter py distance to anagram src english anagrams py russian number2text converter src russian textnormalizer py phonology soundex algorithm implementation src russian soundex py syllable module word syllables count russian english finnish and word syllables list russian finnish src russian syllables py morphology russian patronymic generator src russian patronymic ya py russian diminutive names generator src russian diminutive names py russian cases generator dative src russian russian caser py russian cognate words checker src russian cognate words py english adjective comparisoner src english comparative or superlative py common english question generator src english question py finnish predicative sentences src suomi finnishpredicativequestioner py finnish pos tagger src suomi finnishpostagger py finnish case tagger src suomi finnish ending pos bilstm case tagger ipynb russian pos tagger src russian naivepostagger py syntax syntax analyzer for simple sentences src russian naivesyntaxanalyzer py knowledge engineering family tree src ontologies pedigree py abstract ontology for company src ontologies companyontology py simple timetable qa system src ontologies timetable py bookshelf src ontologies biblio n grams applications n gram dictionary for spelling for language modeling src ngrams ngramdictionarymanager py simple english word filler src ngrams wordfiller py n gram language model src ngrams languagemodel py collocations src ngrams collocations py russian diminutive names generator with rnn src ngrams diminutive rnn py russian character rnn non smoothing src ngrams charlevellanguagemodel py russian joking language model pi day src ngrams pidaylanguagemodel py simple spell checker based on n grams and damerau levenstein distance src russian spellchecker py advanced spell checker based on dictionary of words from good texts with 2 3 gram index train language model with 2 grams on good texts retrieval candidates with damerau levenstein distance find candidate with max probability of bigram max p prev word candidate candidate in candidates games russian cities src russian games cities py guess city src russian games guess city py guess number src russian games more or less py secret letter src russian games secret letter py opposites src russian games opposites game py | nlp machine-learning linguistics computational-linguistics ontologies python natural-language-processing natural-language-understanding 
language-modeling | ai |
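To make the advanced spell-checker idea in the linguistics_problems readme above concrete, here is a small sketch of its final ranking step: choosing the candidate with the highest bigram probability given the previous word. The counts and candidate list are toy assumptions; in the real project the candidates come from a damerau-levenshtein search over a dictionary built from good texts.

```python
# Toy sketch of bigram-based candidate ranking for a spell checker.
# Real candidates would come from a Damerau-Levenshtein search over a dictionary.
from collections import Counter

bigram_counts = Counter({("red", "cat"): 3, ("red", "car"): 9, ("red", "cap"): 1})
prev_counts = Counter({"red": 13})
VOCAB_SIZE = 1000  # toy value for add-one smoothing

def bigram_prob(prev: str, word: str) -> float:
    return (bigram_counts[(prev, word)] + 1) / (prev_counts[prev] + VOCAB_SIZE)

def best_correction(prev_word: str, candidates: list[str]) -> str:
    return max(candidates, key=lambda c: bigram_prob(prev_word, c))

print(best_correction("red", ["cat", "car", "cap"]))  # car
```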
Computer-Vision-with-OpenCV-3-and-Qt5 | computer vision with opencv 3 and qt5 this is the code repository for computer vision with opencv 3 and qt5 https www packtpub com application development computer vision opencv 3 and qt5 utm source github utm medium repository utm campaign 9781788472395 published by packt https www packtpub com utm source github it contains all the supporting project files necessary to work through the book from start to finish about the book developers have been using opencv library to develop computer vision applications for a long time however they now need a more effective tool to get the job done and in a much better and modern way qt is one of the major frameworks available for this task at the moment instructions and navigation all of the code is organized into folders each folder starts with a number followed by the application name for example chapter02 chapter 11 has no code file rest all chapter code files are present in their respective folder the code will look like the following include mainwindow h include int main int argc char argv qapplication a argc argv mainwindow w w show return a exec although every required tool and software the correct version and how it is installed and configured is covered in the initial chapters of the book the following is a list that can be used as a quick reference a regular computer with a more recent version of windows macos or linux such as ubuntu operating system installed on it microsoft visual studio on windows xcode on macos cmake qt framework opencv framework to get an idea of what a regular computer is these days you can search online or ask a local shop however the one you already have is most probably enough to get you started related products mastering opencv 3 second edition https www packtpub com application development mastering opencv 3 second edition utm source github utm medium repository utm campaign 9781786467171 machine learning for opencv https www packtpub com big data and business intelligence machine learning opencv utm source github utm medium repository utm campaign 9781783980284 computer vision with python 3 https www packtpub com application development computer vision python 3 utm source github utm medium repository utm campaign 9781788299763 suggestions and feedback click here https docs google com forms d e 1faipqlse5qwunkgf6puvzpirpdtuy1du5rlzew23ubp2s p3wb gcwq viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to download a free pdf copy of this book i p align center a href https packt link free ebook 9781788472395 https packt link free ebook 9781788472395 a p | ai |
|
software-engineering-exercise | software engineering exercise overview you are asked to take over development of a new system for tracking quests this system must be capable of handling any story driven quest regardless of genre fantasy historical fiction sports etc the system has several open issues that need to be addressed which can be found in the issues section of this repository instructions fork this repository and make copies of the open issues in this repository make your updates to the existing codebase in your new repository you may complete tasks from the open issues in any order but it may be helpful to work in the order of the issues 1 then 2 etc leave your commit history visible do not squash commits use a separate feature branch pull request for each issue and link it to the corresponding issue in your repository if you choose to implement additional features beyond those described in the issues in this repository please add them as issues in your new repository the user interface should use react redux development environment visual studio 2019 free community edition is available here https visualstudio microsoft com downloads net core 3 1 included in the net core workload in the visual studio 2019 installer choice of database technology is left up to you | cloud |
|
Blockchain-Developer-Bootcamp | h4 align center open source version of consensys academy s blockchain developer bootcamp h4 br p align center a href https github com consensys academy blockchain developer bootcamp pulls img src https img shields io badge prs welcome brightgreen svg longcache true alt pull requests a a href license md img src https img shields io badge license cc by nc sa 204 0 lightgrey svg longcache true alt cc license a p p align center a href https twitter com consensysacad target blank img src https img shields io twitter follow consensysacad svg logo twitter a a href https discord com invite consensys target blank img src https img shields io discord 697535391594446898 color green label discord logo discord a p div align center sub created by a href https www consensys net target blank consensys a and a href https github com consensys academy blockchain developer bootcamp graphs contributors contributors a div course details summary intro s00 summary l1 course intro docs s00 intro l1 course intro index md l2 why learn docs s00 intro l2 why learn index md l3 course tips docs s00 intro l3 course tips index md l4 technical requirements docs s00 intro l4 technical requirements index md l5 communication tools docs s00 intro l5 communication tools index md l6 keeping up docs s00 intro l6 keeping up index md l7 advanced students docs s00 intro l7 advanced students index md l8 whats exciting docs s00 intro l8 whats exciting index md details details summary fundamentals s01 summary m0 intro docs s01 fundamentals m0 intro index md m1 cryptography docs s01 fundamentals m1 cryptography index md m2 consensus docs s01 fundamentals m2 consensus index md m3 ag blockchain docs s01 fundamentals m3 ag blockchain index md m4 bitcoin l1 history and development docs s01 fundamentals m4 bitcoin l1 history and development index md m5 wallets docs s01 fundamentals m5 wallets index md details details summary ethereum s02 summary m1 background docs s02 ethereum m1 background index md m2 accounts docs s02 ethereum m2 accounts index md m3 state docs s02 ethereum m3 state index md m4 clients workshop l1 docs s02 ethereum m4 clients workshop l1 index md m5 installing geth docs s02 ethereum m5 installing geth index md m6 installing besu docs s02 ethereum m6 installing besu index md details details summary smart contracts s03 summary m1 mental model l1 mental model sc docs s03 smart contracts m1 mental model l1 mental model sc index md m2 intro to truffle docs s03 smart contracts m2 intro to truffle index md m2 solidity docs s03 smart contracts m2 solidity index md m3 python docs s03 smart contracts m3 python index md m4 design patterns docs s03 smart contracts m4 design patterns index md m5 exercises docs s03 smart contracts m5 exercises index md m6 security docs s03 smart contracts m6 security index md details details summary developer tooling s04 summary m1 intro docs s04 developer tooling m1 intro index md m2 web3 libraries docs s04 developer tooling m2 web3 libraries index md m3 infura l1 docs s04 developer tooling m3 infura l1 index md m3 infura l2 docs s04 developer tooling m3 infura l2 index md m4 truffle deep dive docs s04 developer tooling m4 truffle deep dive index md m5 other dev tools docs s04 developer tooling m5 other dev tools index md m6 exercise docs s04 developer tooling m6 exercise index md details details summary defi s05a summary m0 concepts docs s05a defi m0 concepts index md m1 intro docs s05a defi m1 intro index md m2 stablecoins l1 docs s05a defi m2 stablecoins l1 index md 
m3 nfts l1 docs s05a defi m3 nfts l1 index md m4 wrapped l1 docs s05a defi m4 wrapped l1 index md m5a dexes l1 docs s05a defi m5a dexes l1 index md m5b amms l1 docs s05a defi m5b amms l1 index md m5c rfqs l1 docs s05a defi m5c rfqs l1 index md m6 oracles l1 docs s05a defi m6 oracles l1 index md m7 defi lending l1 docs s05a defi m7 defi lending l1 index md m8 governance l1 docs s05a defi m8 governance l1 index md m9 swaps l1 docs s05a defi m9 swaps l1 index md details details summary daos s06 summary m1 understand docs s06 daos m1 understand index md m2 build docs s06 daos m2 build index md m3 manage docs s06 daos m3 manage index md details details summary additional topics s07 summary l1 ipfs docs s07 additional topics l1 ipfs index md l2 filecoin docs s07 additional topics l2 filecoin index md l3 the graph docs s07 additional topics l3 the graph index md l4 zkp docs s07 additional topics l4 zkp index md details details summary scalability s08 summary m1 intro l1 overview docs s08 scalability m1 intro l1 overview index md m2 types l1 docs s08 scalability m2 types l1 index md m3 rubric l1 docs s08 scalability m3 rubric l1 index md m4 examples docs s08 scalability m4 examples index md m5 crosschain l1 docs s08 scalability m5 crosschain l1 index md details details summary beyond code s09 summary m1 eips docs s09 beyond code m1 eips index md m2 ethics docs s09 beyond code m2 ethics index md m3 continuing ed docs s09 beyond code m3 continuing ed index md m4 spirit docs s09 beyond code m4 spirit index md details details summary eth 2 s10 summary m1 background docs s10 eth2 m1 background index md m2 key terms docs s10 eth2 m2 key terms index md m3 future considerations docs s10 eth2 m3 future considerations index md details details summary what now s11 summary s11 what now docs s11 what now index md details details summary final project s12 summary s12 final project docs s12 final project index md details contributions this course will grow over time contributions are highly encouraged and desired if you see an error please open an issue and submit a pull request we envision this course to grow over time sections on contributing to open source would be helpful should you have an idea please open an issue and discuss it with the team pull requests are highly appreciated please see contributing md contributing md for how to make a contribution an example of this process found below 1 fork it https github com consensys academy blockchain developer bootcamp 2 create your feature branch git checkout b branchname 3 commit your changes git commit m added x input 4 push to the branch git push origin branchname 5 create a new pull request built by consensys https consensys net and github contributors image https contrib rocks image repo consensys academy blockchain developer bootcamp this work is licensed under a creative commons attribution noncommercial sharealike 4 0 international https creativecommons org licenses by nc sa 4 0 | blockchain |
|
Lab1 | lab1 source code and files related to lab one of uml eece5520 microprocessor ii and embedded system design | os |
|
RLPHF | h1 align center rlphf h1 this is the official github repository for personalized soups personalized large language model alignment via post hoc parameter merging https arxiv org abs 2310 11564 citation article jang2023personalized title personalized soups personalized large language model alignment via post hoc parameter merging author jang joel and kim seungone and lin bill yuchen and wang yizhong and hessel jack and zettlemoyer luke and hajishirzi hannaneh and choi yejin and ammanabrolu prithviraj journal arxiv preprint arxiv 2310 11564 year 2023 setup install dependencies pip install r requirements txt get the data and unzip it wget https storage googleapis com personalized soups data zip step 1 generate rollouts torchrun nnodes 1 nproc per node 1 net nfs cirrascale mosaic joel personalized rlhf generate rollouts py output dir output dir base model path to tulu ckpt dataset name data alpaca gpt4 10k json prompt generate a response that can be easily understood by an elementary school student batch size 16 start per 0 end per 100 to get the tulu checkpoints refer to this repository https arxiv org abs 2302 03202 feel free to put any customized prompt from the prompt config step 2 label generated rollouts using gpt4 cd gpt4 annotate python run py open ai key open ai key input dir rollout dir saving path save path annotators yaml file of annotator config the yaml file of the gpt4 annotator configs used for our experiments are provided in the gpt4 b5 directory first clone https github com tatsu lab alpaca farm git https github com tatsu lab alpaca farm git next place the gpt4 b5 directory inside alpaca farm auto annotations annotators and refer to the target yaml file e g pref1a yaml with the annotators config please refer to the alpacafarm code repo for more details step 3 training reward model next we utilize the gpt4 annotation for reward model training an example script is provided below torchrun nnodes 1 nproc per node 4 training reward model py model name path to tulu ckpt dataset name path to rm data eval dataset name eval dataset name output dir output dir per device train batch size 2 num train epochs 1 wandb project wandb project name wandb run name wandb run name you can find the list of reward model training data in the data rm training directory you can choose to create your own custom eval dataset during rm training step 4 policy model training here are sample script rns you can use to train each models traditional rlhf torchrun nnodes 1 nproc per node 4 training rlhf py dataset name data alpaca gpt4 10k json model name path to tulu ckpt reward model name dir to rm output dir output dir adafactor false save freq 10 output max length 512 batch size 16 gradient accumulation steps 8 batched gen true ppo epochs 8 learning rate 1 4e 5 mini batch size 2 early stopping true log with wandb val dataset name data koala eval 50 json val every n steps 10 wandb project wandb project name wandb run name wandb run name dir to rm is the directory to the adapter model bin from the reward model training output directory multitask training torchrun nnodes 1 nproc per node 4 training multitask training py base model path to tulu ckpt dataset name data alpca gpt4 10k mt json streaming lr scheduler type constant learning rate 1e 5 max steps 1000 output dir output dir project name wandb project name run name wandb run name p morl torchrun nnodes 1 nproc per node 4 training pmorl py dataset name data alpaca gpt4 pmorl 8 json model name path to tulu ckpt reward model name dir to rm output dir 
output dir adafactor false save freq 10 output max length 512 batch size 16 gradient accumulation steps 8 batched gen true ppo epochs 8 learning rate 1 4e 5 mini batch size 2 early stopping true log with wandb wandb project wandb project name wandb run name wandb run name val dataset name data koala eval 50 json val every n steps 10 p soups torchrun nnodes 1 nproc per node 4 training psoups py dataset name data psoups alpaca gpt4 p1a 10k json model name path to tulu ckpt reward model name dir to rm output dir output dir adafactor false save freq 10 output max length 512 batch size 16 gradient accumulation steps 8 batched gen true ppo epochs 8 learning rate 1 4e 5 mini batch size 2 early stopping true log with wandb wandb project wandb project name wandb run name wandb run name val dataset name data koala eval 50 json val every n steps 10 you can choose the different preference training files in data psoups directory step 5 generate model outputs example of generating outputs using trained policy models e g p morl torchrun nnodes 1 nproc per node 1 eval py output dir output dir base model path to tulu ckpt dataset name data koala eval 50 json prompt generate a response that can easily be understandable by an elementary school student generate a response that is concise and to the point without being verbose generate a response that is friendly witty funny and humorous like a close friend batch size 16 start per 0 end per 100 checkpoint dir policy model dir example of generating outputs using p soups torchrun nnodes 1 nproc per node 1 eval py output dir output dir base model path to tulu ckpt dataset name data koala eval 50 json prompt generate a response that can easily be understandable by an elementary school student generate a response that is concise and to the point without being verbose generate a response that is friendly witty funny and humorous like a close friend batch size 16 start per 0 end per 100 checkpoint dirs policy model dir 1 checkpoint dirs policy model dir 2 checkpoint dirs policy model dir 3 you can append any combination for the prompt configuration that you want to evaluate step 6 gpt4 evaluation after obtaining the model outputs from the previous step you could use gpt 4 as an evaluator to judge measure the win rate across different baselines go to gpt4 evaluate and run the following command python run py input dir1 first output file input dir2 second output file annotators annotators criteria wise eval gpt4 p1a yaml saving path eval results crit 1a json the demonstrations used for gpt4 evaluation and the criteria mentioned in the paper are all stored within gpt4 evaluate alpaca farm auto annotators criteria wise eval gpt4 feel free to add additional preferences you would like to evaluate on | ai |
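Since the rlphf readme above evaluates "personalized soups" by passing several trained policy checkpoints at once, one way to picture post-hoc parameter merging is simple weight averaging. The snippet below is a generic PyTorch sketch with placeholder file names, assuming all checkpoints share identical parameter names and shapes; it is not the repository's actual merging code.

```python
# Generic sketch of post-hoc parameter merging ("soup"): average matching tensors
# across several fine-tuned checkpoints. File names are placeholders, not repo files.
import torch

checkpoint_paths = ["policy_p1a.bin", "policy_p2a.bin", "policy_p3a.bin"]
state_dicts = [torch.load(p, map_location="cpu") for p in checkpoint_paths]

merged = {}
for key in state_dicts[0]:
    # Assumes every checkpoint contains this key with the same tensor shape.
    merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)

torch.save(merged, "policy_souped.bin")  # load into the base model afterwards
```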
|
indigo | build status https travis ci org indigo astronomy indigo svg branch master https travis ci org indigo astronomy indigo github tag latest by date http img shields io github v tag indigo astronomy indigo https github com indigo astronomy indigo blob master changelog md license http img shields io badge license indigo blueviolet svg https github com indigo astronomy indigo blob master license md platform http img shields io badge platform linux 20 7c 20macos 20 7c 20windows success svg indigo is the next generation of indi based on layered architecture and software bus this is the list of requirements taken into the consideration 1 no gpl dependency to allow commercial use under application stores licenses 2 ansi c for portability and to allow simple wrapping into net java golang objective c or swift in future 3 layered architecture to allow direct linking of the drivers into applications and or to replace wire protocol 4 atomic approach to device drivers e g if camera has imaging and guiding chip driver should expose two independent simple devices instead of one complex it is much easier and transparent for client 5 drivers should support hot plug at least for usb devices if device is connected disconnected while driver is running its properties should appear disappear on the bus 6 fits xisf jpeg tiff raw avi and ser format supported directly by the framework this is already done framework 1 indigo bus framework 2 xml json protocol adapter for client and server side 3 indigo server server with loadable indigo so dylib and indi executables drivers 4 indigo server standalone server with built in drivers 5 indigo server for macos wrapper provided just as an example 6 integrated http server for blob download server control web based indi control panel 7 indigo prop tool command line tool to list and set properties drivers 1 ccd with wheel guider ao and focuser simulator 2 mount simulator 3 atik ccd titan 3xx 4xx one with built in wheel vs infinity 11000 4000 driver 4 atik efw2 filterwheel driver 5 sx ccd driver 6 sx wheel driver 7 shoestring fcusb focuser driver 8 ssag qhy5 ccd driver 9 asi wheel driver 10 iidc ccd driver 11 asi ccd driver 12 nexstar mount driver supports celestron nexstar and sky watcher mounts 13 lx200 mount driver supports meade avalon losmandy 10microns astrophysics zwo and pegasusastro mounts and eqmac 14 fli filter wheel driver 15 fli ccd driver 16 fli focuser driver testers needed 17 usb focus v3 driver 18 sbig ccd driver with guider guider ccd and external guider head 19 sbig filter wheel driver part of ccd driver 20 ascom driver for indigo camera 21 ascom driver for indigo filter wheel 22 qhy ccd driver note maybe unstable due to inherited instability in the qhy sdk 23 zwo usb st4 port 24 meade dsi camera driver 25 takahashi temma mount driver 26 ica imagecapture api driver for dslrs mac only deprecated 27 gps simulator 28 nmea 0183 gps driver 29 andor ccd driver 32 64 bit intel linux only 30 wemacro rail focuser driver platform independent usb mac only bluetooth 31 eqmac guider driver mac only deprecated 32 apogee ccd driver 33 moravian intruments ccd and filter wheel driver 34 hid joystick aux driver 35 cgusbst4 guider driver 36 brightstar quantum filter wheel driver untested 37 trutek filter wheel driver untested 38 xagyl filter wheel driver untested 39 optec filter wheel driver untested 40 pegasus dmfc focuser driver 41 rigelsys nstep focuser driver 42 rigelsys nfocus focuser driver untested 43 ioptron mount driver 44 moonlite focuser driver 45 mjkzz rail 
focuser driver untested platform independent usb mac only bluetooth 46 gphoto2 ccd driver deprecated excluded from build 47 optec focuser driver untested 48 touptek ccd driver 49 altairastro ccd driver 50 rts on com aux shutter driver untested 51 dsusb aux shutter driver 52 gpusb guider driver 53 lakesideastro focuser untested 54 sx ao driver 55 sbig ao driver part of ccd driver 56 synscan eqmod mount driver 57 ascol driver 58 asi focuser driver 59 deepsky dad af1 and af2 focuser driver 60 baader planetarium steeldrive ii focuser driver 61 unihedron sqm driver 62 artesky flat box usb driver 63 nexdome dome driver untested based on g rozema s firmware 64 usb dewpoint v1 and v2 aux driver 65 pegasus astro flatmaster driver 66 astrogadget focusdreampro ascom jolo focuser driver added 67 lacerta flat box controller aux driver 68 uvc usb video class ccd driver 69 nexdome v3 dome driver untested requires firmware v 3 0 0 or newer 70 optec flip flap driver 71 gps service daemon gpsd driver 72 lunatico astronomy limpet armadillo platypus focuser rotator powerbox gpio drivers 73 lunatico astronomy limpet armadillo platypus rotator focuser powerbox gpio driver 74 lunatico astronomy dragonfly dome relay controller gpio driver 75 rotator simulator driver 76 lunatico aag cloudwacher driver 77 baader planetarium classic rotating dome driver 78 mgbox driver 79 manual wheel driver 80 pmc8 mount controller driver 81 robofocus focuser driver 82 qhy cfw filter wheel driver 83 rainbowastro mount driver 84 nexstar aux protocol mount driver 85 lunatico astronomy aag cloudwatcher driver 86 myfocuserpro 2 focuser driver 87 interactive astronomy skyroof driver 88 astromechanics focuser driver 89 nexdome beaver dome driver 90 astromechanics light pollution meter pro driver 91 optec pyxis rotator driver 92 svbony camera driver 93 geoptik flat field generator driver 94 talon 6 dome driver 95 interactive astronomy skyalert driver 96 vixen starbook mount driver 97 playerone camera driver 98 pegasusastro prodigy microfocuser driver 99 pegasusastro indigo wheel driver 100 pegasusastro falcon rotator driver 101 omegonpro ccd driver 102 mallincam ccd driver 103 risingcam ccd driver 104 orion starshotg ccd driver 105 ogma ccd driver 106 primalucelab focuser driver 107 zwo asiair power ports this is under development 1 a box adaptive optics driver how to build indigo prerequisites ubuntu debian raspbian sudo apt get install build essential autoconf autotools dev libtool cmake libudev dev libavahi compat libdnssd dev libusb 1 0 0 dev libcurl4 gnutls dev libz dev git curl bsdmainutils patchelf it is advised to remove libraw1394 dev sudo apt get remove libraw1394 dev fedora dnf install automake autoconf cmake libtool gcc gcc c libusb devel avahi compat libdns sd devel libudev devel git curl curl devel zlib devel it is advised to remove libraw1394 devel dnf remove libraw1394 devel macos install xcode and download and build autoconf automake and libtool use tools cltools sh script get code and build it git clone https github com indigo astronomy indigo git cd indigo make all build bin indigo server v indigo ccd simulator other drivers and connect from any indigo indi client or web browser to localhost on port 7624 no pthread yield new linux distributions come with the latest glibc that does not provide pthread yield however libqhy a depends on it we do not have control over this library and it is not officially supprted by qhy any more so if you get complaints by the linker for missing pthread yield call please execute the 
following command in indigo drivers ccd qhy bin externals pthread yield compat make patchlib and rerun the build | astronomy astronomy-library macos macosx linux | os |
secure-web-dev-workshop1 | manipulate data with javascript and nodejs goal learning javascript basics by manipulating arrays objects functions etc prerequisites 1 have a github account properly setup with your local git you should use ssh authentication between your local git and github how to set it up https help github com articles connecting to github with ssh 2 fork the repository fork button img fork png 3 clone your fork locally shell cd path to workspace git clone git github com your username secure web dev workshop1 git 4 run the index js file with node shell node index js 5 check the result it should display it works what to do there are multiple todo instructions inside the index js file you have to do the task associated with each one of them commit your changes after resolving each todo shell git commit m feat feat name explain commit data structure we will work on a public dataset opendata given by french gov and the city of paris named lieux de tournage paris https opendata paris fr explore dataset lieux de tournage a paris information this dataset represents a list of locations for outdoor scene shooting in paris since 2016 it does not include data of the current year here is an element extracted from the dataset to show the data structure json datasetid lieux de tournage a paris recordid 737d97372281f1d51fe3294aae21179ca00e2e05 fields coord y 48 83566 type tournage long m u00e9trage nom producteur mandarin production date fin 2020 08 21 geo point 2d 48 83566000015182 2 348314535961912 nom tournage tout s est bien passe ardt lieu 75013 geo shape coordinates 2 348314535961912 48 83566000015182 type point id lieu 2020 404 nom realisateur francois ozon adresse lieu rue pascal 75013 paris date debut 2020 08 20 annee tournage 2020 coord x 2 34831454 geometry type point coordinates 2 348314535961912 48 83566000015182 record timestamp 2022 02 21t12 01 17 756 01 00 | front_end |
|
perfil-politico-frontend | perfil político front end for perfil político https github com okfn brasil perfil politico a platform for profiling candidates in the brazilian 2022 general election based entirely on open data prerequisites node js https nodejs org en 16 npm 8 17 you can use nvm to manage multiple installations of node js on your computer check nvm installation guides for macos and linux here https github com nvm sh nvm and for windows here https docs microsoft com en us windows nodejs setup on windows once you have it you can use the right node version for the project using these commands project setup npm install compiles and hot reloads for development npm run serve it will make your website available here localhost 8080 http localhost 8080 compiles and minifies for production npm run build then serve the contents of the dist directory run your unit tests npm run test unit lints and fixes files npm run lint | vuejs | front_end |
polkadot | dear contributors and users we would like to inform you that we have recently made significant changes to our repository structure in order to streamline our development process and foster better contributions we have merged three separate repositories cumulus substrate and polkadot into a single new repository the polkadot sdk https github com paritytech polkadot sdk go ahead and make sure to support us by giving a star to the new repo by consolidating our codebase we aim to enhance collaboration and provide a more efficient platform for future development if you currently have an open pull request in any of the merged repositories we kindly request that you resubmit your pr in the new repository this will ensure that your contributions are considered within the updated context and enable us to review and merge them more effectively we appreciate your understanding and ongoing support throughout this transition should you have any questions or require further assistance please don t hesitate to reach out to us https forum polkadot network t psa parity is currently working on merging the polkadot stack repositories into one single repository 2883 best regards parity technologies | parity polkadot blockchain rust client node | blockchain |
IIT | iit introduction to information technology | server |
|
STI-Project | sti project information technology security course projects collaborators joana brás https github com joanaa b | server |
|
Sound-Master | sound master simple database management via java and hibernate software engineering project diagram soundmasterclassdiagram bmp | database-management hibernate | server |
ShipRSImageNet | shiprsimagaenet a large scale fine grained dataset for ship detection in high resolution optical remote sensing images python https img shields io badge python 3 x ff69b4 svg https github com luyanger1799 amazing semantic segmentation git opencv https img shields io badge opencv 3 x 7c4 x orange svg https github com luyanger1799 amazing semantic segmentation git apache https img shields io badge apache 2 0 blue svg https github com luyanger1799 amazing semantic segmentation git description font color red shiprsimagenet font is a large scale fine grainted dataset for ship detection in high resolution optical remote sensing images the dataset contains font color red 3 435 images font from various sensors satellite platforms locations and seasons each image is around 930 930 pixels and contains ships with different scales orientations and aspect ratios the images are annotated by experts in satellite image interpretation categorized into font color red 50 object categories images font the fully annotated shiprsimagenet contains font color red 17 573 ship instances font there are five critical contributions of the proposed shiprsimagenet dataset compared with other existing remote sensing image datasets images are collected from various remote sensors cover ing multiple ports worldwide and have large variations in size spatial resolution image quality orientation and environment ships are hierarchically classified into four levels and 50 ship categories the number of images ship instances and ship cate gories is larger than that in other publicly available ship datasets besides the number is still increasing we simultaneously use both horizontal and oriented bounding boxes and polygons to annotate images providing detailed information about direction background sea environment and location of targets we have benchmarked several state of the art object detection algorithms on shiprsimagenet which can be used as a baseline for future ship detection methods examples of annotated images image https github com zzndream shiprsimagenet blob main imgs examples 20of 20annotated 20images jpeg image source and usage license the shiprsimagenet dataset collects images from a variety of sensor platforms and datasets in particular images of the xview dataset are collected from worldview 3 satellites with 0 3m ground resolution images in xview are pulled from a wide range of geographic locations we only extract images with ship targets from them since the image in xview is huge for training we slice them into 930 930 pixels with 150 pixels overlap to produce 532 images and relabeled them with both horizontal bounding box and oriented bounding box we also collect 1 057 images from hrsc2016 and 1 846 images from fgsd datasets corrected the mislabeled and relabeled missed small ship targets 21 images from the airbus ship detection challenge 17 images from chinese satellites suchas gaofen 2 and jilin 1 use of the google earth images must respect the google earth terms of use https www google com permissions geoguidelines html all images and their associated annotations in shiprsimagenet can be used for academic purposes only but any commercial use is prohibited object category the ship classification tree of proposed shiprsimagenet is shown in the following figure level 0 distinguish whether the object is a ship namely class level 1 further classifies the ship object category named as category level 2 further subdivides the categories based on level 1 level 3 is the specific type of ship named as 
type image https github com zzndream shiprsimagenet blob main imgs shiprsimagenet categories tree jpeg at level 3 ship objects are divided into 50 types for brevity we use the following abbreviations dd for destroyer ff for frigate ll for landing as for auxiliary ship lsd for landing ship dock lha for landing heli copter assault ship aoe for fast combat support ship epf for expeditionary fast transport ship and roro for roll on roll off ship these 50 object classes are other ship other warship submarine other aircraft carrier enterprise nimitz midway ticonderoga other destroyer atago dd arleigh burke dd hatsuyuki dd hyuga dd asagiri dd other frigate perry ff patrol other landing yuting ll yudeng ll yudao ll yuzhao ll austin ll osumi ll wasp ll lsd 41 ll lha ll commander other auxiliary ship medical ship test ship training ship aoe masyuu as sanantonio as epf other merchant container ship roro cargo barge tugboat ferry yacht sailboat fishing vessel oil tanker hovercraft motorboat and dock dataset download baidu drive extraction code h2qk shiprsimagenet https pan baidu com s 1x6zrw39aozohebo1mm0rqq google drive shiprsimagenet https drive google com file d 1wapkasoa9mxrfxqiq6lttlvrv4csc6vv view usp sharing benchmark code installation we keep all the experiment settings and hyper parameters the same as depicted in mmdetection v2 11 0 config files except for the number of categories and parameters mmde tection is an open source object detection toolbox based on pytorch it is a part of the openmmlab project developed by multimedia laboratory cuhk this project is based on mmdetection https github com open mmlab mmdetection v2 11 0 mmdetection is an open source object detection toolbox based on pytorch it is a part of the openmmlab https openmmlab com project prerequisites linux or macos windows is in experimental support python 3 6 pytorch 1 3 cuda 9 2 if you build pytorch from source cuda 9 0 is also compatible gcc 5 mmcv https mmcv readthedocs io en latest installation installation install mmdetection following the instructions https github com open mmlab mmdetection blob master docs get started md we are noting that our code is checked in mmdetection v2 11 0 and pytorch v1 7 1 create a conda virtual environment and activate it python conda create n open mmlab python 3 7 y conda activate open mmlab install pytorch and torchvision following the official instructions https pytorch org e g python conda install pytorch torchvision c pytorch note make sure that your compilation cuda version and runtime cuda version match you can check the supported cuda version for precompiled packages on the pytorch website https pytorch org install mmcv full we recommend you to install the pre build package as below python pip install mmcv full f https download openmmlab com mmcv dist cu version torch version index html please replace cu version and torch version in the url to your desired one for example to install the latest mmcv full with cuda 11 and pytorch 1 7 1 use the following command python pip install mmcv full f https download openmmlab com mmcv dist cu110 torch1 7 1 index html download this benchmark code python git clone https github com open mmlab mmdetection git cd mmdetection2 11 shiprsimagenet install build requirements and then install mmdetection python pip install r requirements build txt pip install v e or python setup py develop train with shiprsimagenet download the shiprsimagenet dataset it is recommended to symlink the shiprsimagenet dataset root to mmdetection2 11 shiprsimagenet data 
python ln s dataset shiprsimagenet mmdetection2 11 shiprsimagenet data if your folder structure is different you may need to change the corresponding paths in config files python mmdetection2 11 shiprsimagenet mmdet tools configs data shiprsimagenet coco format masks voc format annotations imagesets jpegimages prepare a config file the benchamark config file of shiprsimagenet already in the following python mmdetection2 11 shiprsimagenet configs shiprsimagenet example of train a model with shiprsimagenet python python tools train py configs shiprsimagenet faster rcnn faster rcnn r50 fpn 100e shiprsimagenet level0 py models trained on shiprsimagenet we introduce two tasks detection with horizontal bounding boxes hbb for short and segmentation with oriented bounding boxes sbb for short hbb aims at extracting bounding boxes with the same orientation of the image it is an object detection task sbb aims at semantically segmenting the image it is a semantic segmentation task the evaluation protocol follows the same map and mar of area small medium large and map iou 0 50 0 95 calculation used by ms coco level 0 model backbone style hbb map sbb map extraction code download faster rcnn with fpn r 50 pytorch 0 550 2vrm model https pan baidu com s 1bavxp26ohdzm8gtsngqzow faster rcnn with fpn r 101 pytorch 0 546 f362 model https pan baidu com s 1t0iqcfrlcooppv0k6ysl5q mask rcnn with fpn r 50 pytorch 0 566 0 440 24eq model https pan baidu com s 1se hngc0vlng61fuv htq mask rcnn with fpn r 101 pytorch 0 557 0 436 lbcb model https pan baidu com s 1tfay8 8sutwfbqghbnav8a cascade mask rcnn with fpn r 50 pytorch 0 568 0 430 et6m model https pan baidu com s 1wvob8ms2zitwj w3hzhf9a ssd vgg16 pytorch 0 464 qabf model https pan baidu com s 1yj0f20pjr9e2op0rx8vduw retinanet with fpn r 50 pytorch 0 418 7qdw model https pan baidu com s 1nzc2ukqns0hzdvp srxubq retinanet with fpn r 101 pytorch 0 419 vdiq model https pan baidu com s 1nmseodcariirueynb q4oa foveabox r 101 pytorch 0 453 urbf model https pan baidu com s 13vpp1lmoafak vr0s0nuzq fcos with fpn r 101 pytorch 0 333 94ub model https pan baidu com s 1ql 8i05og80jqrtvqqw9hq level 1 model backbone style hbb map sbb map extraction code download faster rcnn with fpn r 50 pytorch 0 366 5i5a model https pan baidu com s 1ofnmgbchakg26iao1tnjya faster rcnn with fpn r 101 pytorch u 0 461 u 6ts7 model https pan baidu com s 1ubaofgejxbavvqg5c uvca mask rcnn with fpn r 50 pytorch u 0 456 u 0 347 9gnt model https pan baidu com s 1vijgbte6z4udatzsqu7alq mask rcnn with fpn r 101 pytorch 0 472 0 371 wc62 model https pan baidu com s 18lzr9yek6tjivbns8 qgpa cascade mask rcnn with fpn r 50 pytorch 0 485 0 365 a8bl model https pan baidu com s 12rvdqciapqfc9sg0ni0tlq ssd vgg16 pytorch 0 397 uffe model https pan baidu com s 19h43hbi1gi3n9rq bczh6q retinanet with fpn r 50 pytorch 0 368 lfio model https pan baidu com s 1suhdueoeacftk8que48sbw retinanet with fpn r 101 pytorch 0 359 p1rd model https pan baidu com s 1qeu4jwh1yajaov7wbuks4w foveabox r 101 pytorch 0 389 kwiq model https pan baidu com s 12rkj3hevn qgefjabqabfg fcos with fpn r 101 pytorch 0 351 1djo model https pan baidu com s 1bwn3n9thik5 5vdgrmy6sw level 2 model backbone style hbb map sbb map extraction code download faster rcnn with fpn r 50 pytorch 0 345 924l model https pan baidu com s 1auzf2zapklenwbvqfdfpkw faster rcnn with fpn r 101 pytorch 0 479 fb1b model https pan baidu com s 1tdwonosgeudiji4huwtzpq mask rcnn with fpn r 50 pytorch 0 468 0 377 so8j model https pan baidu com s 1g35mrwqqsrmv7jogotwjqw mask rcnn with 
fpn r 101 pytorch 0 488 0 398 7q1g model https pan baidu com s 1mgu88crwzgmwjcg1wjz0mw cascade mask rcnn with fpn r 50 pytorch 0 492 0 389 t9gr model https pan baidu com s 1g4qqlwkwp4alhxpuohsg2a ssd vgg16 pytorch 0 423 t1ma model https pan baidu com s 1n7gt2emfzue54dmzhw8y9g retinanet with fpn r 50 pytorch 0 369 4h0o model https pan baidu com s 1rplxarnckn0p0ojgpq8qog retinanet with fpn r 101 pytorch 0 411 g9ca model https pan baidu com s 1uyndcvyb p9m2h7k ql1iw foveabox r 101 pytorch 0 427 8e12 model https pan baidu com s 1qztaomrqxp6l5nvrbbmb4g fcos with fpn r 101 pytorch 0 431 0hl0 model https pan baidu com s 1ik3gyzb572paocjwierdag level 3 model backbone style hbb map sbb map extraction code download faster rcnn with fpn r 50 pytorch 0 375 7qmo model https pan baidu com s 1ljwkd3 khlavvsivseod5q faster rcnn with fpn r 101 pytorch 0 543 bmla model https pan baidu com s 1sqhxti69nukywopqs1nslq mask rcnn with fpn r 50 pytorch 0 545 0 450 a73h model https pan baidu com s 1rbkbyb2bo ubb5j67puya mask rcnn with fpn r 101 pytorch 0 564 0 472 7k9i model https pan baidu com s 1hs7fckr3l9jizg22vsvzgg cascade mask rcnn with fpn r 50 pytorch 0 593 0 483 ebga model https pan baidu com s 1ejynomggsjsqw1tikktnxg ssd vgg16 pytorch 0 483 otu5 model https pan baidu com s 1fmecagajjnxtba63jw9k9w retinanet with fpn r 50 pytorch 0 326 tu5a model https pan baidu com s 11s8x7w35g7krmzqijpcnpg retinanet with fpn r 101 pytorch 0 483 ptv0 model https pan baidu com s 1kwx7g3bcsagosovmjr36ta foveabox r 101 pytorch 0 459 1acn model https pan baidu com s 1p5ebaxwajj a4s4hfqhfew fcos with fpn r 101 pytorch 0 498 40a8 model https pan baidu com s 11tnlbl2agnhp hlgy5yovg development kit the shiprsimagenet development kit https github com zzndream shiprsimagenet devkit is based on dota development kit https github com captain whu dota devkit and provides the following function load and image and show the bounding box on it covert voc format label to coco format label citation if you make use of the shiprsimagenet dataset please cite our following paper z zhang l zhang y wang p feng and r he shiprsimagenet a large scale fine grained dataset for ship detection in high resolution optical remote sensing images in ieee journal of selected topics in applied earth observations and remote sensing vol 14 pp 8458 8472 2021 doi 10 1109 jstars 2021 3104230 contact if you have any the problem or feedback in using shiprsimagenet please contact zhengning zhang at 23880666 qq com license shiprsimagenet is released under the apache 2 0 license please see the license license file for more information | ai |
|
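The ShipRSImageNet entry above says annotations ship in both COCO and VOC formats; the short sketch below shows one plausible way to browse the COCO-format labels with pycocotools. The annotation file name and relative path are assumptions, so check the COCO_Format folder of the actual download for the real train/val JSON names.

```python
# Hedged sketch: inspecting the dataset's COCO-format annotations with pycocotools.
from pycocotools.coco import COCO

# Assumed path/file name -- substitute the real JSON shipped in COCO_Format.
coco = COCO("ShipRSImageNet/COCO_Format/ship_train_level3.json")

# The 50 level-3 ship types are plain COCO categories
cats = coco.loadCats(coco.getCatIds())
print(len(cats), "categories, e.g.", [c["name"] for c in cats[:5]])

# Count annotated instances for a few categories
for cat in cats[:5]:
    print(cat["name"], "->", len(coco.getAnnIds(catIds=[cat["id"]])), "instances")

# Pull all ship annotations (bounding box + polygon) for one image
img_id = coco.getImgIds()[0]
img_info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=[img_id]))
print(img_info["file_name"], "has", len(anns), "annotated ships")
```

Training with the provided mmdetection configs does not require this step; it is only a quick way to sanity-check a download.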
Kasuri | kasuri logo png https github com akiroz kasuri workflows test badge svg https img shields io npm v akiroz kasuri https www npmjs com package akiroz kasuri an opinionated type safe reactive module state management framework designed for complex embedded systems with a huge varity of i o and stateful components inspired by modern reactive ui frameworks and memory driven computing install yarn add akiroz kasuri concept concept png the system consists of a state fabric and compute logic split into multiple modules every module has its own state that lives inside the state fabric and could only modify its own state at the same time each module can read or subscribe to changes of all state in the whole system system state is managed in 2 nested levels modules and module state subscriptions listens to changes on the module state level although each module state can be arbitrarily nested system module1 state1 123 module2 state1 foo state2 x 0 y 0 z 0 motivation this project was developed for the specific needs of complex embedded systems with dozens of hardware integrations e g sensors actuators network devices displays etc several existing solutions have been evaluated used in the past ros robot operating system a distributed computing framework written in python c designed for robots provides node discovery pub sub and rpc infrastructures no predefined execution model io handling this option was rejected because ros is tied to a specific version of ubuntu concurrency in python c is hard computation power is limited on my hardware erlang otp the language runtime for erlang elixir with built in support for csp concurrency distributed node clustering io compute scheduling external monitoring and a database a solid choice for the job however the diversity of 3rd party libraries and availability of developers loses out when compared to more popular languages nodejs redux the language runtime for javascript with built in support for event based concurrency io compute scheduling a state management framework originally designed for ui programming with external tools for system introspection nodejs is a suitable runtime for the application however redux is way too complicated and redux devtools often hang or crash when there s hundreds thousands of actions per second thus kasuri aims to be a simpler version of redux for embedded systems note on state mutability it it worth noting that careful attention must be taken when using state values that aren t just simple values e g arrays nested objects if the state is updated via mutation of the previous state object references will not be updated causing the state subscription api to return the same value for both current and previous state in order to get correct values for the subscription api a new object must be created if your new value depends on the previous the use of immutable data structures is highly recommended immer is a good choice for creating immutable objects from mutations of previous state immer https immerjs github io immer project structuring the following project layout is recommended index ts entrypoint statemap ts exports an object mapping module name to the default state of all modules module1 state ts exports an object containing the default state of module1 module ts exports a module class that implements module1 logic a sample project could be found in the test directory api class module static defaultstate static defaultstate status pending statusmessage contains common default state modules all modules state should 
include these fields foostate module defaultstate mystate 123 getstate module string state string stalems number null returns state value from the fabric given the module and state name optionally discard stale values using stalems if the state is older than stalems milliseconds getstate returns undefined getupdatetime module string state string number get state last update time as unix timestamp milliseconds subscribestate module string state string listener current previous void subscribe to state changes listener takes both the current new and previous store values both contains state value and updatetime statechange module string state string promise current previous helper function that returns a promise of the next state change with the new and previous value and updatetime setstate update partial modulestate updates the state of this module given a map of key value pairs to set the update object a subset of the module s state swapstate state string swapfn value updatetime newval updates the state of this module given a state name and a swap function this swap function takes the state s current value updatetime time and return its new value note the swap function cannot be async async init method to be overwritten by subclasses this is called during system initialization subclasses should set the module s status state in this method to online if it s successful class kasuri constructor statemap modulemap constructs a new system from a default state map and a module object map the init method of each module will be called statemap module1 module defaultstate foo 1 bar hello world module2 module defaultstate modulemap module1 new module1 module2 new module2 store global state store contains all module state has the following schema modulename statename updatetime number unix timestamp in ms value t actual value of the state setstate module string update partial modulestate system wide version of module s setstate see module setstate getstate module string state string stalems number null same as module getstate getupdatetime module string state string number same as module getupdatetime subscribestate module string state string listener current previous void same as module subscribestate statechange module string state string promise current previous same as module statechange introspection the introspection server allows you to dump subscribe and set state in the state fabric const kasuri new kasuri const server await introspection server kasuri this server is intended to be used with the cli tool see kasuri help for details screenshot png introspection extensions the introspection server supports custom extensons to read format and modify the state fabric extension handlers can be added to the introspections server await introspection server kasuri extension mycustomext kasuri reqbody buffer buffer const params json parse reqbody tostring custom logic here the extension can be invoked cli stdin is passed as the request body and the result is piped to stdout echo foo 123 kasuri call mycustomext | os |
|
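To make the Kasuri API listed in the entry above more concrete, here is a small hedged TypeScript sketch with two modules, one reacting to the other's state. The named imports and the exact listener payload shape are assumptions inferred from the signatures quoted there, so check the package's type definitions before relying on them.

```typescript
import { Kasuri, Module } from "@akiroz/kasuri"; // import names assumed

class Sensor extends Module {
    static defaultState = { ...Module.defaultState, temperature: 0 };
    async init() {
        this.setState({ status: "online" });
        // pretend a new reading arrives every second
        setInterval(() => this.setState({ temperature: Math.random() * 100 }), 1000);
    }
}

class Fan extends Module {
    static defaultState = { ...Module.defaultState, on: false };
    async init() {
        this.setState({ status: "online" });
        // modules may read or subscribe to any state, but only write their own
        this.subscribeState("sensor", "temperature", (current) => {
            this.setState({ on: current.value > 60 });
        });
    }
}

const kasuri = new Kasuri(
    { sensor: Sensor.defaultState, fan: Fan.defaultState }, // state map
    { sensor: new Sensor(), fan: new Fan() }                 // module map
);
console.log(kasuri.getState("fan", "on")); // any module's state is readable system-wide
```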
image-filter-project | udagram image filtering microservice udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into three parts 1 the simple frontend https github com udacity cloud developer tree master course 02 exercises udacity c2 frontend a basic ionic client web application which consumes the restapi backend covered in the course 2 the restapi backend https github com udacity cloud developer tree master course 02 exercises udacity c2 restapi a node express server which can be deployed to a cloud service covered in the course 3 the image filtering microservice https github com udacity cloud developer tree master course 02 project image filter starter code the final project for the course it is a node express application which runs a simple script to process images your assignment tasks setup node environment you ll need to create a new node server open a new terminal within the project directory and run 1 initialize a new project npm i 2 run the development server with npm run dev create a new endpoint in the server ts file the starter code has a task for you to complete an endpoint in src server ts which uses query parameter to download an image from a public url filter the image and return the result we ve included a few helper functions to handle some of these concepts and we re importing it for you at the top of the src server ts file typescript import filterimagefromurl deletelocalfiles from util util deploying your system follow the process described in the course to eb init a new application and eb create a new environment to deploy your image filter service don t forget you can use eb deploy to push changes stand out optional refactor the course restapi if you re feeling up to it refactor the course restapi to make a request to your newly provisioned image server authentication prevent requests without valid authentication headers note if you choose to submit this make sure to add the token to the postman collection and export the postman collection file to your submission so we can review custom domain name add your own domain name and have it point to the running services try adding a subdomain name to point to the processing server note domain names are not included in aws free tier and will incur a cost author dims | cloud |
|
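For the endpoint task described in the image-filter-project entry above, one possible shape for the code added to src/server.ts is sketched below. It assumes the starter's helpers behave as documented there, with filterImageFromURL resolving to a local file path and deleteLocalFiles removing temporary files; the port and status codes are illustrative, not taken from the starter.

```typescript
import express from "express";
import { filterImageFromURL, deleteLocalFiles } from "./util/util";

const app = express();

// GET /filteredimage?image_url=... -> download, filter, return the image, then clean up
app.get("/filteredimage", async (req, res) => {
  const imageUrl = req.query.image_url as string;
  if (!imageUrl) {
    return res.status(400).send("image_url query parameter is required");
  }
  try {
    const filteredPath = await filterImageFromURL(imageUrl);
    // sendFile's callback fires after the response, a convenient place to delete the temp file
    res.sendFile(filteredPath, () => deleteLocalFiles([filteredPath]));
  } catch (err) {
    res.status(422).send("unable to download or process the image at that URL");
  }
});

app.listen(8082, () => console.log("listening on http://localhost:8082"));
```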
iot-device-java | english readme en us md iot hub iot explorer java android sdk iot hub java android sdk internet of things hub iot hub https cloud tencent com document product 634 iot hub sdk sdk java android iot hub java sdk hub hub device java iot hub android sdk hub hub device android iot explorer java android sdk internet of things explorer iot explorer ai https cloud tencent com document product 1081 iot explorer sdk sdk java android iot explorer java sdk explorer explorer device java iot explorer android sdk explorer explorer device android iot explorer android sdk explorer explorer device face iot explorer android sdk explorer explorer device video iot explorer android sdk explorer explorer device tme sdk sdk device sdk https github com tencentyun iot device java wiki device sdk | server |
|
GoLLIE | p align center br img src assets gollie png style height 250px br h2 align center b g b uideline f b o b llowing b l b arge b l b anguage model for b i b nformation b e b xtraction h2 p align center a href https twitter com intent tweet text wow this new model is amazing url https 3a 2f 2fgithub com 2fhitz zentroa 2fgollie img alt twitter src https img shields io twitter url style social url https 3a 2f 2fgithub com 2fosainz59 2fcollie a a href https github com hitz zentroa gollie blob main license img alt github license src https img shields io github license hitz zentroa gollie a a href https huggingface co collections hitz gollie 651bf19ee315e8a224aacc4f img alt pretrained models src https img shields io badge huggingface pretrained models green a a href https hitz zentroa github io gollie img alt blog src https img shields io badge blog post blue a a href https arxiv org abs 2310 03668 img alt paper src https img shields io badge paper orange a br a href http www hitz eus img src https img shields io badge hitz basque 20center 20for 20language 20technology blueviolet a a href http www ixa eus language en img src https img shields io badge ixa 20nlp 20group ff3333 a br br p p align justify we present img src assets gollie png width 20 gollie a large language model trained to follow annotation guidelines gollie outperforms previous approaches on zero shot information extraction and allows the user to perform inferences with annotation schemas defined on the fly different from previous approaches gollie is able to follow detailed definitions and does not only rely on the knowledge already encoded in the llm code and models are publicly available blog post gollie guideline following large language model for information extraction https hitz zentroa github io gollie paper gollie annotation guidelines improve zero shot information extraction https arxiv org abs 2310 03668 img src assets gollie png width 20 gollie in the huggingface hub hitz gollie https huggingface co collections hitz gollie 651bf19ee315e8a224aacc4f example jupyter notebooks gollie notebooks notebooks p p align center img src assets zero shot results png p schema definition and inference example the labels are represented as python classes and the guidelines or instructions are introduced as docstrings the model start generating after the result line python entity definitions dataclass class launcher template refers to a vehicle designed primarily to transport payloads from the earth s surface to space launchers can carry various payloads including satellites crewed spacecraft and cargo into various orbits or even beyond earth s orbit they are usually multi stage vehicles that use rocket engines for propulsion mention str the name of the launcher vehicle such as sturn v atlas v soyuz ariane 5 space company str the company that operates the launcher such as blue origin esa boeing isro northrop grumman arianespace crew list str names of the crew members boarding the launcher such as neil armstrong michael collins buzz aldrin dataclass class mission template any planned or accomplished journey beyond earth s atmosphere with specific objectives either crewed or uncrewed it includes missions to satellites the international space station iss other celestial bodies and deep space mention str the name of the mission such as apollo 11 artemis mercury date str the start date of the mission departure str the place from which the vehicle will be launched such as florida houston french guiana destination str the place or planet 
to which the launcher will be sent such as moon low orbit saturn this is the text to analyze text the ares 3 mission to mars is scheduled for 2032 the starship rocket build by spacex will take off from boca chica carrying the astronauts max rutherford elena soto and jake martinez the annotation instances that take place in the text above are listed here result mission mention ares 3 date 2032 departure boca chica destination mars launcher mention starship space company spacex crew max rutherford elena soto jake martinez p align center img src assets snippets space transparent png p installation you will need to install the following dependencies to run the gollie codebase bash pytorch 2 0 0 https pytorch org get started we recommend that you install the 2 1 0 version or newer as it includes important bug fixes transformers 4 33 1 pip install upgrade transformers peft 0 4 0 pip install upgrade peft bitsandbytes 0 40 0 pip install upgrade bitsandbytes flash attention 2 0 pip install flash attn no build isolation pip install git https github com hazyresearch flash attention git subdirectory csrc rotary you will also need these dependencies bash pip install numpy black jinja2 tqdm rich psutil datasets ruff wandb fschat pretrained models we release three gollie models based on code llama https huggingface co codellama 7b 13b and 34b the models are available in the huggingface hub model supervised average f1 zero shot average f1 huggingface hub gollie 7b 73 0 55 3 hitz gollie 7b https huggingface co hitz gollie 7b gollie 13b 73 9 56 0 hitz gollie 13b https huggingface co hitz gollie 13b gollie 34b 75 0 57 2 hitz gollie 34b https huggingface co hitz gollie 34b how to use gollie please take a look at our example jupyter notebooks to learn how to use gollie gollie notebooks notebooks currently supported tasks this is the list of task used for training and evaluating gollie however as demonstrated in the create custom task notebook notebooks create 20custom 20task ipynb gollie can perform a wide range of unseen tasks for more info read our paper https arxiv org abs 2310 03668 p align center img src assets datasets png p we plan to continue adding more tasks to the list if you want to contribute please feel free to open a pr or contact us you can use as example the already implemented tasks in the src tasks folder generate the gollie dataset the configuration files used to generate the gollie dataset are available in the configs data configs configs data configs folder you can generate the dataset by running the following command see bash scripts generate data sh bash scripts generate data sh for more info bash config dir configs data configs output dir data processed w examples python m src generate data configs config dir ace config json config dir bc5cdr config json config dir broadtwitter config json config dir casie config json config dir conll03 config json config dir crossner ai config json config dir crossner literature config json config dir crossner music config json config dir crossner politics config json config dir crossner science config json config dir diann config json config dir e3c config json config dir europarl config json config dir fabner config json config dir harveyner config json config dir mitmovie config json config dir mitrestaurant config json config dir mitmovie config json config dir multinerd config json config dir ncbidisease config json config dir ontonotes config json config dir rams config json config dir tacred config json config dir wikievents config json config 
dir wnut17 config json output output dir overwrite output dir include examples we do not redistribute the datasets used to train and evaluate gollie not all of them are publicly available some require a license to access them for the datasets available in the huggingface datasets library the script will download them automatically for the following datasets you must provide the path to the dataset by modifying the corresponding configs data configs configs data configs file ace05 https catalog ldc upenn edu ldc2006t06 preprocessing script https github com osainz59 collie blob main src tasks ace preprocess ace py casie https github com ebiquity casie tree master data crossner https github com zliucr crossner diann http nlp uned es diann e3c https github com hltfbk e3c corpus tree main preprocessed data clinical entities english harveyner https github com brickee harveyner tree main data tweets mitmovie https groups csail mit edu sls downloads movie mitrestaurant https groups csail mit edu sls downloads restaurant rams https nlp jhu edu rams tacred https nlp stanford edu projects tacred wikievents https github com raspberryice gen arg if you encounter difficulties generating the dataset please don t hesitate to contact us how to train your own gollie first you need to generate the gollie dataset see the previous section for more info second you must create a configuration file please see the configs model configs configs model configs folder for examples finally you can train your own gollie by running the following command see bash scripts bash scripts folder for more examples bash configs folder configs model configs python3 m src run configs folder collie 7b codellama yaml how to evaluate a model first you need to generate the gollie dataset see the previous section for more info second you must create a configuration file please see the configs model configs eval configs model configs eval folder for examples finally you can evaluate your own gollie by running the following command see bash scripts eval bash scripts eval folder for more examples bash configs folder configs model configs eval python3 m src run configs folder collie 7b codellama yaml citation bibtex misc sainz2023gollie title gollie annotation guidelines improve zero shot information extraction author oscar sainz and iker garc a ferrero and rodrigo agerri and oier lopez de lacalle and german rigau and eneko agirre year 2023 eprint 2310 03668 archiveprefix arxiv primaryclass cs cl | code-llama event-extraction guidelines hugginface-hub huggingface inference information-extraction llama llama2 llm llms named-entity-recognition relation-extraction state-of-the-art text-generation training transformer gollie | ai |
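The example notebooks linked in the GoLLIE entry above are the reference way to run the model; purely as orientation, a plain transformers loading sketch might look like the following. The dtype, generation settings, and the prompt file are assumptions, and the prompt itself must follow the guideline-plus-text format shown above, ending at the result line.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "HiTZ/GoLLIE-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; pick what your hardware supports
    device_map="auto",           # requires the `accelerate` package
)

# prompt.txt: guideline classes + "this is the text to analyze" + the result line (see above)
prompt = open("prompt.txt").read()
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated annotation list
print(tokenizer.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```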
ViewUIPlus | p align center a href https www iviewui com img width 200 src https file iviewui com view ui logo new svg a p h1 view ui plus h3 an enterprise level ui component library and front end solution based on vue js 3 h3 h1 view ui plus https img shields io npm v view ui plus svg style flat square https www npmjs org package view ui plus npm downloads http img shields io npm dm view ui plus svg style flat square https npmjs org package view ui plus npm downloads https img shields io npm dt view ui plus svg style flat square https npmjs org package view ui plus js gzip size http img badgesize io https unpkg com view ui plus dist viewuiplus min js compression gzip label gzip 20size 20js style flat square css gzip size http img badgesize io https unpkg com view ui plus dist styles viewuiplus css compression gzip label gzip 20size 20css style flat square join the chat at https gitter im iview iview https img shields io badge chat on gitter 30b392 svg style flat square https gitter im iview iview utm source badge utm medium badge utm campaign pr badge utm content badge docs https www iviewui com https www iviewui com start on cloud ide https idegithub com view design viewuiplus https idegithub com view design viewuiplus features dozens of useful and beautiful components friendly api it s made for people with any skills level extensive documentations and demos it is quite awesome install we provide starter kit for you view ui plus project based on vue cli https github com view design view ui project vuecli view ui plus project based on vite https github com view design view ui project vite view ui plus project based on typescript https github com view design view ui project ts view ui plus project based on nuxt https github com view design view ui project nuxt install view ui plus using npm npm install view ui plus save using a script tag for global use html script type text javascript src viewuiplus min js script link rel stylesheet href dist styles viewuiplus css you can find more info on the website https www iviewui com view ui plus guide install usage vue template slider v model value range template script setup import ref from vue const value ref 20 50 script using css via import js import view ui plus dist styles viewuiplus css community if you want to contribute us or in case you are haiving any doubt questions find other users at the gitter chat https gitter im iview iview or post on stackoverflow using iview ui tag https stackoverflow com questions tagged iview ui bugs file a issue here https www iviewui com new issue please provide a example so we can help you better contribute contact us in gitter chat https gitter im iview iview wechat or via mail to admin aresn com prs welcome major contributors name avatar name avatar name avatar aresn https github com icarusion https avatars3 githubusercontent com u 5370542 v 3 s 60 jingsam https github com jingsam https avatars3 githubusercontent com u 1522494 v 3 s 60 rijn https github com rijn https avatars2 githubusercontent com u 6976367 v 3 s 60 lcx960324 https github com lcx960324 https avatars3 githubusercontent com u 9768245 v 3 s 60 gitleonine1989 https github com gitleonine1989 https avatars1 githubusercontent com u 7582490 v 3 s 60 huixisheng https github com huixisheng https avatars1 githubusercontent com u 1518967 v 3 s 60 sergio crisostomo https github com sergiocrisostomo https avatars3 githubusercontent com u 5614559 v 3 s 60 lison16 https github com lison16 https avatars3 githubusercontent com u 20942571 v 3 s 60 xotic750 
https github com xotic750 https avatars3 githubusercontent com u 216041 v 3 s 60 huanghong1125 https github com huanghong1125 https avatars3 githubusercontent com u 12794817 v 3 s 60 yangdan8 https github com yangdan8 https avatars2 githubusercontent com u 16515026 v 3 s 60 likuner https github com likuner https avatars3 githubusercontent com u 18632318 v 3 s 60 license mit http opensource org licenses mit copyright c 2016 present viewdesign | component-library iview vue viewdesign | front_end |
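The ViewUIPlus entry above shows per-component usage and the CSS import; the sketch below is one plausible Vue 3 entry file that registers the library globally. The default-export plugin and the app.use() call follow the usual Vue 3 plugin pattern and are assumptions here, so treat the linked install guide as authoritative.

```typescript
// main.ts -- hedged sketch of global registration
import { createApp } from "vue";
import ViewUIPlus from "view-ui-plus";
import "view-ui-plus/dist/styles/viewuiplus.css";
import App from "./App.vue";

createApp(App)
  .use(ViewUIPlus)  // assumed plugin-style default export
  .mount("#app");
```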
ws2812_i2s | ws2812 i2s library this a library to be used in firmware for the esp8266 what is this this library is a i2s interface to drive ws2811 ws2812 and sk6812 led strips the code is lifted out of the esp open rtos https github com superhouse esp open rtos project and has been modified to work with sk6812 leds and compile with the original espressif rtos sdk the communication with the leds is over i2s and uses dma to offload timing critical stuff off the cpu you will need a framebuffer 8bit per color for each of your leds and the library internally needs dma buffers which are 4 bytes per led per color the i2s pin is shared with uart0 rxd so you will not be able to send anything to the esp over serial anymore api hardware selection to select which type of led you want to connect to your esp module you may have to edit the makefile make cflags dled type led type ws2812 dled mode led mode rgb led type there are two possible values for led type led type ws2812 use this one for ws2811 or ws2812 leds led type sk6812 use this one for the sk6812 types mostly used by rgbw strips led mode this setting defines how many color components are in your leds led mode rgb this is the usual default 3 colors they are sent in grb order led mode rgbw use this if your leds have 4 color components like the rgbw strips with a dedicated white led in addition to the red green and blue ones c api c void ws2812 i2s init uint32 t pixels number call this one with the pixel count before starting to send out data this will initalize all needed buffers and set the iomux for gpio 3 from the default uart0 rxd to i2s c void ws2812 i2s update ws2812 pixel t pixels update the led strip with new pixel data the library assumes the number of pixels in this array are the same as with the init call one pixel looks like this c typedef struct uint8 t red uint8 t green uint8 t blue if led mode led mode rgbw uint8 t white endif ws2812 pixel t so as you can see the white component is only available when configured for a rgbw strip usage instructions this library is built with the esp8266 setup http github com esp8266 setup esp8266 setup tool in mind if you are already using the esp8266 setup build system just issue the following command in your project dir bash esp8266 setup add library git https github com esp8266 setup ws2812 i2s git master if you do not want to use the esp8266 setup build system just grab the files from the src and include directories and add them to your project be aware that most libraries built with this build system use the c99 standard so you may have to add std c99 to your cflags build instructions install the esp8266 toolchain download the esp8266 rtos sdk compile the library bash make xtensa tools root path to compiler bin sdk path path to esp8266 rtos sdk the finished library will be placed in the current directory under the name of libws2812 i2s a corresponding include files are in include if you installed the esp sdk and toolchain to a default location see below you may just type make to build default locations windows xtensa tools root c esp8266 xtensa lx106 elf bin sdk path c esp8266 esp8266 rtos sdk macos x we assume that your default file system is not case sensitive so you will have created a sparse bundle with a case sensitive filesystem which is named esp8266 xtensa tools root volumes esp8266 esp open sdk xtensa lx106 elf bin sdk path volumes esp8266 esp8266 rtos sdk linux xtensa tools root opt espressif crosstool ng builds xtensa lx106 elf bin sdk path opt espressif esp8266 rtos sdk | os |
|
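Putting the two C API calls from the ws2812_i2s entry above together, a minimal caller could look like this. The header name is an assumption (check the library's include directory), and any RTOS task setup around it is omitted.

```c
#include <stdint.h>
#include <string.h>
#include "ws2812_i2s.h"   /* assumed header name */

#define LED_COUNT 16

static ws2812_pixel_t framebuffer[LED_COUNT];

void leds_start(void)
{
    /* remaps GPIO3 (UART0 RXD) to I2S and sets up the internal DMA buffers */
    ws2812_i2s_init(LED_COUNT);

    memset(framebuffer, 0, sizeof(framebuffer));
    for (uint32_t i = 0; i < LED_COUNT; i++) {
        framebuffer[i].red   = 0;
        framebuffer[i].green = 32;   /* dim green on every pixel */
        framebuffer[i].blue  = 0;
        /* framebuffer[i].white = 0;  only present when built with LED_MODE_RGBW */
    }

    /* the pixel count must match the value passed to ws2812_i2s_init() */
    ws2812_i2s_update(framebuffer);
}
```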
trainee-roadmap | front end trainee roadmap img src https miro medium com max 1200 0 hiclyadnsiyt0odu jpg inspired from https github com kamranahmedse developer roadmap https roadmap sh frontend https roadmap sh angular https roadmap sh react table of content prerequisites prerequisites how does the browser render a website how does the browser render a website http http html html css css javascript javascript typescript typescript build tools build tools angular angular react react git git soft skills soft skills youtube channels to subscribe youtube channels to subscribe prerequisites english minimal level b1 comfortable level b2 img src assets common european framework of reference cefr png width 100 how does the browser render a website https github com vasanthk how web works https www youtube com watch v hjhvdblsxug ab channel academind https www youtube com watch v sme4owhztcc ab channel jsconf http https www youtube com watch v iym2zfp3zn0 t 442s ab channel traversymedia https www youtube com watch v 2jyt5f2isg4 ab channel freecodecamp org https www youtube com watch v zkeqqio7n k ab channel webdevsimplified cors https www youtube com watch v pntfsvu yti ab channel webdevsimplified html https www youtube com watch v dpnqb74smug ab channel freecodecamp org https www youtube com watch v n8yml4ezp4g ab channel codevolution https www youtube com playlist list pl41lfr 6dnoq3bebuctnmsvdojciiv en css https www youtube com watch v 0w6qz0 adam list pl0zuz27sz 6mx9fd9elt80g1bpcysmwit ab channel davegray https www youtube com watch v 1rs2nd1ryyc ab channel freecodecamp org https www youtube com watch v mnpdifwaaa4 ab channel davegray https www youtube com watch v yszonjkpgg4 list plzla0gpn vh8mpxiuhjwomaagoceinl0r responsive web design tutorials https www youtube com watch v srvurasnj0s ab channel freecodecamp org https www youtube com playlist list pl4cuxegkcc9g9vh9maa xknfjswznpzfw css positioning tutorials https www youtube com playlist list pl4cuxegkcc9hudkgi5o5uiwutagbxilth scss https www youtube com watch v kqn4hl9bgc list pl4cuxegkcc9jxjx7vojnvk o8ubdzecnb ab channel thenetninja flexbox https www youtube com watch v 3yw65k6lcia ab channel traversymedia grid https www youtube com playlist list plu8eosxdxhp5cifvt9 ze3ingcdac2xkg tailwind https www youtube com playlist list pl4cuxegkcc9gpxorlehjc5bgnii5heghw https www youtube com watch v lzp4salrffc ab channel dailytuition bulma https www youtube com playlist list pl4cuxegkcc9ixitwkbaqxcydt1u6e7a8a bootstrap https www youtube com playlist list pl4cuxegkcc9joim91nlzd qah aimmdar javascript https www youtube com watch v hdi2bqojy3c ab channel traversymedia https www youtube com watch v zbpegr48 ve list plqklakb2gjhwxv9rcarwvn06isll 9mpq ab channel coderlipi https www youtube com playlist list pldyqo7g0 nsx8 gzab8kd1ll4j4halqbj object oriented javascript https www youtube com playlist list pl4cuxegkcc9i5yvdkjgt60vnvwffpblb7 callbacks promises async await https www youtube com watch v porjizfvm7s ab channel traversymedia https www youtube com playlist list pl4cuxegkcc9jx2ttzk3igwksbtugydrlu modular javascript https www youtube com playlist list ploycgnoiygabs wdaaxchu82q xqgub4f event loop https towardsdev com event loop in javascript 672c07618dc9 https www youtube com watch v 8aghzqkofbq ab channel jsconf javascript best practices and coding conventions https www youtube com watch v rmn bkz1km0 ab channel javascriptmastery https www youtube com playlist list ply5pat 51egyo4ixvdzgzy57n0 r1qmtb https www youtube com playlist list plzla0gpn vh 
cthencpcm0dww6a5xyc7f https www youtube com playlist list plfkdytlp3abzwwlehq1whckyi8ncpy74s typescript https www youtube com playlist list pl4cuxegkcc9gugr39q yd6v bsymwkpui https www youtube com watch v d56mg7dezgs ab channel programmingwithmosh https www youtube com playlist list plqq 6pq4lttanfgsbnfzfwuhhaz3tiezu https www youtube com playlist list plyvdvjlntojf6ajswwat7kzrjvzw en8b https www youtube com watch v jbmrduvkl5w list plzla0gpn vh z2fqig50 pojrukjgbu7g typescript design patterns https www youtube com playlist list plzvrqmj9hdisk1pnrkewlklyfcdu9qjhy build tools npm https www youtube com playlist list plc3y8 rfhvwhgwwm5j3kqzx47n7dwwnrq yarn https www youtube com watch v byrxp9vlzei ab channel webstylepress webpack https www youtube com playlist list plb67cosr0 lpuxik35j8m7equbujqma0q angular big framework better to start learning from official documentation https angular io basic https www youtube com playlist list plc3y8 rfhvwhbragfinjr8khircdtkzcz angular material https www youtube com playlist list plc3y8 rfhvwileucqfgtl5gt5u6deirsu ngrx https www youtube com playlist list plw2eqosuplwjrfwgoi9gzdc3re4fke0wv rxjs https www youtube com playlist list plx7ev3jl9sfl8lrnzyzau8yn uqrgbhij https www youtube com playlist list pl55riy5tl51phpagycrn9ubnlvxf8rgvi angular forms https www youtube com playlist list plzagffnxkfubqodajk3evoekfud nankl component interaction https www youtube com playlist list plc3y8 rfhvwgkhalu8gtyf 5bb8qt wzv solid https www youtube com watch v y mrj9qycvi t 672s ab channel decodedfrontend react big framework better to start learning from official documentation https reactjs org docs getting started html render https www youtube com playlist list plc3y8 rfhvwg7czgqpqibeahn8d6l530t react router v6 https www youtube com watch v ul3y1lxxzdu ab channel webdevsimplified react typescript https www youtube com playlist list plc3y8 rfhvwi1axijgtkm0bkthzvc lsk how to style your react app https www youtube com watch v dxikbh lcf4 ab channel ericmurphy react material ui https www youtube com playlist list plc3y8 rfhvwh k9mdlrrcdywl7cevl2ro react context hooks https www youtube com playlist list pl4cuxegkcc9hnokbyjilpg5g9m2apuepi https www youtube com watch v cf2lq gzea8 list plc3y8 rfhvwisvxhz135pogtx7 oe3q3a https www youtube com watch v o6p86uwfdr0 list plzla0gpn vh8etggfgercwmy5u5hojf h https youtu be yisqpp31l0 react hook form https www youtube com playlist list pl03g4h exutppogty 45owvn79rvjikzf react query https www youtube com playlist list plc3y8 rfhvwjtelcrprczlo6bllbuspd2 react testing library https www youtube com playlist list pl4cuxegkcc9gm4 5usnmlqmosm dzuvq react solid principles https www youtube com watch v msq dcrxoxw ab channel coderone react design patterns https www youtube com playlist list plgeetuaeeds5he2ugwezjxyezwpbonhr react storybook tutorial https www youtube com watch v bysfuxgg ow list plc3y8 rfhvwhc j3x3t9la8 gqjgvidqk git https www youtube com watch v usjzcfj8yxe ab channel coltsteele https www youtube com watch v 8jj101d3kne ab channel programmingwithmosh https www youtube com watch v uszj k0dgsg ab channel freecodecamp org https www youtube com watch v wiy824wwpu4 ab channel ihatetomatoes https www youtube com watch v crlgddprdoq ab channel academind soft skills https www youtube com watch v ll7jyrxwmqy ab channel knowledgehutupgrad https www youtube com skillopedia featured reverse interview https github com viraptor reverse interview newsletters https bytes dev https react libhunt com https react statuscode com https 
javascriptweekly com https tailwindweekly com https medium com youtube channels to subscribe https www youtube com webdevsimplified https www youtube com decodedfrontend https www youtube com codeshotswithprofanis https www youtube com joshtriedcoding https www youtube com joshuamorony https www youtube com dreylikydev https www youtube com codevolution https www youtube com coderone | angular angular-learning css front-end front-end-development front-end-learning-journey html javascript react react-learning typescript typescript-learning front-end-learning-path | front_end |
tech-initials-generator | tech initials generator information technology initials generator like aws returns amazon web services or analysis wordpress solution site vercel http tech initials generator gspetillo vercel app requirements bootstrap https getbootstrap com vue js https vuejs org developed by gabriel petillo https github com gspetillo and jean jacques https github com jjeanjacques10 | initials tech generator | server |
Blockchain | nitdablockchainscholarship non officially study plan if you fail to plan you are planning to fail benjamin franklin success is no accident it is hard work perseverance learning studying sacrifice and most of all love of what you are doing or learning to do img align center alt coding src https media giphy com media btrtnpmpq8uorcrbwg giphy gif onboarding programme will start on 8 december 2022 where you will delve deeper into the programme and complete a series of assignments if you finish among the top 30 of the class you will be onboarded into the main blockchain programme img align center alt coding src https media giphy com media kfhbj8jn52ucpspcyi giphy gif core curriculum must prove your capabilities in the first 2 weeks by getting a cumulative score that puts you in the 30th percentile in all quizzes and projects in that period in order to progress in the programme if you don t meet this requirement you will no longer be part of the programme after 2 weeks pre requisites git https adamsgeeky github io blockchain docs recources tutorial basics gitgithub github https adamsgeeky github io blockchain docs recources tutorial basics github difference between web1 web2 web3 https geekink hashnode dev blockchain technologies details summary the first 2 weeks summary bitcoin theory introduction https adamsgeeky github io blockchain docs category bitcoin theory how bitcoin works https adamsgeeky github io blockchain docs recources btheory intro objective chapter 1 abstract objective x peer to peer cash x digital signatures and trusted third parties x peer to peer network x time chain and proof of work x cpu power x cooperation in the network x network structure x messaging between nodes chapter 2 introduction objective details summary commerce on the internet summary has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments while the system works well enough for most transaction it still suffers from the inherent weaknesses of te the trust based model such as hight transaction cost transaction par day limit details details summary non reversible transactions summary completely non reversible transaction are not possible since financials instaituion cannot avoid mediating disputes the cost of mediating increases transactions costs limiting the minimum practical transaction size and cutting off the possibility fr small casual transactions and there is a broader cost in the loss of ability to make non reversible payments for non reversible services details details summary privacy in commerce summary with the possibility of reversal the need for trust spreads merchants who be wary of their customers hassling them for move information than they would otherwise need this bring a huge problem for privacy of the good actor within the system as their identity details often end up being stored in large merchant database with their corresponding payment details details details summary the paradigm of fraud acceptance summary a certain percentage of fraud is accepted as unavoidable traditional payment system there cost and payment uncertainties can be avoided in person by using physical currency but no mechanism exists to make payments over a communications channel without a trusted party details bitcoin solve all the above mention problems details summary what is needed summary is an electronic payment system based on cryptographic proof instead of trust allowing any two willing parties to transact directly with each other 
without the need of trusted third party bank bitcoin achieve this by using digital signature and a simple but fully feature scripting language by using bitcoin receive can quickly and simply validate that funds were indeed controlled that is they own it by the sending party and that the transaction correctly allocate the amount to their control without additional validation by third parties details details summary protecting sellers from fraud summary details details summary proposed solution summary details details summary security and honesty summary details chapter 3 transactions objective x electronic coins is the chain of digital signature spending a coin payee verification existing solutions first seen rule broadcasting transactions achieving consensus proof of acceptance chapter 4 timestamp server objective timestamped hashes a chain of timestamped hashes timestamp server video chapter 5 proof of work objective hashcash scanning random space nonce immutable work chained effort one cpu one vote the majority decision the honest chain attacking the longest chain controlling the block discovery rate chapter 6 network objective section read through running the network the longest chain simultaneous blocks breaking the tie missed messages chapter 7 incentive objective the coinbase transaction coin distribution mining analogy transaction fees the end of inflation encouraging honesty the attacker s dilemma incentive video chapter 8 reclaiming disk space objective spent transactions the merkle tree compacting blocks block headers chapter 9 simplified payment verification objective full network nodes merkle branches transaction acceptance verification during attack situations maintaining an attack invalid block relay system businesses running nodes chapter 10 combining and splitting value objective dynamically sized coins inputs and outputs a typical example fan out chapter 11 privacy objective traditional models privacy in bitcoin public records stock exchange comparison key re use linking inputs chapter 12 calculations objective attacking the chain things the attacker cannot achieve the only thing the attacker can achieve the binomial random walk the gambler s ruin exponential odds waiting for confirmation attack via proof of work vanishing probabilities details pre requisites what is program programming programming language compiler and interpreter https geekink hashnode dev programming fundamentals javascript objectives identify interactions on web pages created with javascript articulate in general terms the importance of how javascript was developed and how that impacts the way javascript is written identify properly formed semantic html articulate major concepts in css identify properly formed css syntax write simple javascript statements in the web browser console assign and retrieve values from variables and arrays in javascript course outline 1 introduction to javascript https adamsgeeky github io blockchain docs recources js intro 2 javascript variables 3 javascript arrays 4 module summary golang course overview this course covers the fundamental elements of go data types protocols formats and writing code that incorporates rfcs and json details summary an introduction of go why go is worth learning summary golang build simple secure scalable systems x what is go go or golang is a programming language developed at google it has received a lot of acclaim from developers for its speed and straightforward syntax an open source programming language supported by google easy to learn and 
great for teams built in concurrency and a robust standard library large ecosystem of partners communities and tools x why go is worth learning golang is a compiled multi threaded programming language based on open source c and developed internally at google it is a single language that allows different processes to run simultaneously which means simultaneous programming extremely fast easy to maintain and efficient golang has all the advantages needed for distributed systems because it can handle multiple parts of the blockchain concurrently the language was developed for high performance programs running on modern distributed systems and multicore processors market participants perceived the launch of golang as an attempt to create a replacement for c c today the hyperledger fabric blockchain platform uses this programming language it is often used in nft marketplace development examples of blockchain projects that use golang gochain dero loom network ethereum hyperledger fabric details course outline overview objects concurrency installing go https adamsgeeky github io blockchain docs recources golang local setting up go locally how to compile and run go programs https adamsgeeky github io blockchain docs recources golang structure workspaces packages go tool variables https adamsgeeky github io blockchain docs recources golang varibles variable initialization https adamsgeeky github io blockchain docs recources golang varibles data types pointers https adamsgeeky github io blockchain docs recources golang datatype variable scope http localhost 3000 blockchain docs recources golang varibles deallocating memory garbage collection comments printing integers ints floats strings https adamsgeeky github io blockchain docs recources golang datatype string packages constants https adamsgeeky github io blockchain docs recources golang varibles control flow control flow scan composite data types arrays https adamsgeeky github io blockchain docs recources golang datatype slices https adamsgeeky github io blockchain docs recources golang datatype variable slices https adamsgeeky github io blockchain docs recources golang datatype hash tables maps https adamsgeeky github io blockchain docs recources golang datatype structs https adamsgeeky github io blockchain docs recources golang datatype protocols and formats rfcs json file access ioutil file access os become familiar with go code introduction of source code elements keywords and identifiers basic types and their value constants and variables also introduces untyped values and type deductions common operators also introduces more type deduction rules function declarations and calls code packages and package imports expressions statements and simple statements basic control flows goroutines deferred function calls go type system go type system overview a must read to master go programming pointers structs value parts to gain a deeper understanding into go values arrays slices and maps first class citizen container types strings functions function types and values including variadic functions channels the go way to do concurrency synchronizations methods interfaces value boxes used to do reflection and polymorphism type embedding type extension in the go way type unsafe pointers generics use and read composite types reflections the reflect standard package some special topics line break rules more about deferred function calls some panic recover use cases explain panic recover mechanism in detail also explains exiting phases of function calls code 
blocks and identifier scopes expression evaluation orders value copy costs in go bounds check elimination concurrent programming concurrency synchronization overview channel use cases how to gracefully close channels other concurrency synchronization techniques the sync standard package atomic operations the sync atomic standard package memory order guarantees in go common concurrent programming mistakes memory related memory blocks memory layouts memory leaking scenarios details details summary bitcoin blockchain for 5 weeks summary details details summary project for 5 weeks summary details img align center alt coding src https media giphy com medial4jyy0qtljtlczowm giphy gif | blockchain blockchain-technology nigeria bsv-blockchain | blockchain |
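Since the study plan above pairs the Bitcoin proof-of-work chapter with a Go curriculum, here is a small illustrative Go sketch of the hashcash idea from chapter 5: scanning nonces until a hash meets a difficulty target. It is not part of the course material, and the leading-zero difficulty rule is a simplification of Bitcoin's real target comparison.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// mine scans nonces until sha256(blockData:nonce) starts with `difficulty` zero hex digits.
func mine(blockData string, difficulty int) (uint64, string) {
	prefix := strings.Repeat("0", difficulty)
	for nonce := uint64(0); ; nonce++ {
		sum := sha256.Sum256([]byte(fmt.Sprintf("%s:%d", blockData, nonce)))
		digest := hex.EncodeToString(sum[:])
		if strings.HasPrefix(digest, prefix) {
			return nonce, digest
		}
	}
}

func main() {
	nonce, digest := mine("prev-hash|merkle-root|timestamp", 5)
	fmt.Println("nonce:", nonce)
	fmt.Println("hash :", digest)
}
```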
db-capstone-project | db capstone project database engineering capstone with mysql workbench and tableau database schema db littlelemondm png visualization using python costs costs png bar histogram png scatterplot scatterplot png pair pairplot png | server |
|
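The db-capstone-project entry above mentions a Python visualization step producing a histogram, scatter plot, and pair plot; a hedged sketch of how that could be done with pandas and seaborn follows. The connection string, table, and column names are assumptions, so substitute the actual Little Lemon schema from the MySQL model.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/littlelemondb")  # assumed DSN
orders = pd.read_sql("SELECT * FROM orders", engine)  # assumed table name

sns.histplot(orders["total_cost"]).figure.savefig("histogram.png")   # assumed column
plt.clf()
sns.scatterplot(data=orders, x="quantity", y="total_cost").figure.savefig("scatterplot.png")
plt.clf()
sns.pairplot(orders.select_dtypes("number")).savefig("pairplot.png")
```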
Using-machine-learning-to-detect-malicious-URLs | using machine learning to detect malicious urls this repo is based on https github com faizann24 using machine learning to detect malicious urls http fsecurify com using machine learning detect malicious urls | ai
|
aepp-blockchain-explorer | waffle io issues in progress https badge waffle io aeternity aepp blockchain explorer png label in 20progress title in 20progress http waffle io aeternity aepp blockchain explorer ternity blockchain explorer overview this is an explorer for the ternity blockchain here https github com aeternity aepp blockchain explorer is the github repository the underlying api is currently being developed and improved the explorer is designed to work with the latest features of ternity https github com aeternity aeternity and pp middleware https github com aeternity aepp middleware upcoming we are workin on the next version and the designs are available here https sketch cloud s jay59 we are looking for input on what data we should show and would appreciate your voice feel free to open a ticket and mark it with the proposal label features view a generation key blocks micro blocks and transactions view the list of generations from the latest all the way back to the genesis block view list of transactions for each generation micro block view an account and see its balance search for accounts by public key search for blocks generation transaction by their hash and by their position in the blockchain view ternity token market exchange rates via coingecko com api build setup the node api url can be configured from env file or environment variable vue app node url bash install dependencies npm install serve with hot reload at localhost 8080 npm run serve vue app node url https roma net mdw aepps com npm run serve build for production with minification vue app node url https roma net mdw aepps com npm run build build for production and view the bundle analyzer report npm run build report | aeternity explorer blockchain | blockchain |
FedCV | fedcv has been migrated to https github com fedml ai fedml tree master python app fedcv | computer-vision federated-learning image-segmentation medical-imaging object-detection image-classifiation | ai |
Enginous | enginous an easy to use search tool for databases of designs motivation as part of my research into design automation i feel there need to be more tools that encourage fast prototype driven use of computers in engineering in doing this one challenge we ve kept running into is the tradeoff between speed of development speed of use and reliability of the results in terms of reliability and speed of development we generally choose to make forward models that is a model that takes a design and predicts performance in a high level and slow performing language when we are designing we generally want one of two things from these computer tools either to explore a wide range of different designs in search of inspiration or a unique angle or to find a single specific design that gives us the performance we desire combining these two thoughts it s relatively fast and easy to build a database of designs offline the task is generally trivially parallelizable we want a way to find designs in this database that give us a certain performance lots of people have developed cool methods of approximating that inverse function such as neural networks and meta modelling but i wanted to make something simpler to use and more general albeit slower and probably more memory intensive how it works we require two features of the designs that the distance between two designs can be calculated and that the triangle inequality holds over that distance in our use that distance is generally calculated off of performance so we are searching for designs with a certain performance we use shapiro s algorithm with a few additions the algorithm works by precalculating several landmark points and calculating the distances from the landmarks to every other point in the database when searching for a design we calculate the distance from our target to the landmarks locate the nearest landmark then throw out any points in the db whose distance to the landmark is more than twice that of the target once we ve filtered the set of points we iterate over the remaining points in the db in order of increasing difference between the target distance and design distance for calculating the landmarks we use a bootstrapping approach to estimate the expected number of points that can be filtered given a certain number of landmarks this way the operator can decide the space time tradeoffs they want to make to do allow easy parallelization and distribution of the search activity | server |
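The landmark-based search described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the Enginous implementation: the Euclidean distance, the number of landmarks, and the toy performance vectors are all assumptions.

```python
# Minimal sketch of landmark-based filtering; distance, landmark count, and
# the toy performance vectors are illustrative assumptions.
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_index(designs, n_landmarks=3, dist=euclidean):
    """Pick landmark designs and precompute distances to every design."""
    landmarks = random.sample(designs, n_landmarks)
    table = {i: [dist(lm, d) for d in designs] for i, lm in enumerate(landmarks)}
    return landmarks, table

def candidates(target, designs, landmarks, table, dist=euclidean):
    """Filter with the triangle inequality, then rank the survivors."""
    to_landmarks = [dist(target, lm) for lm in landmarks]
    nearest = min(range(len(landmarks)), key=lambda i: to_landmarks[i])
    radius = to_landmarks[nearest]
    # Keep designs whose landmark distance is at most twice the target's.
    kept = [i for i, d in enumerate(table[nearest]) if d <= 2 * radius]
    # Scan in order of increasing |landmark distance - target distance|.
    kept.sort(key=lambda i: abs(table[nearest][i] - radius))
    return [designs[i] for i in kept]

if __name__ == "__main__":
    db = [(random.random(), random.random()) for _ in range(1000)]  # toy performance vectors
    landmarks, table = build_index(db)
    print(candidates((0.4, 0.7), db, landmarks, table)[:5])
```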
|
hotel-management | hotel management this is a project for the databases class in ntua electrical and computer engineering department the project was graded with perfect score 100 100 contributors listed alphabetically 1 elina syrri elinasyr https github com elinasyr 1 george papadoulis g papad https github com g papad 1 nick bellos nickbel7 https github com nickbel7 tools used python https img shields io badge python v3 7 red svg dependencies https img shields io badge flask v2 0 1 red pypyodbc https img shields io badge pypyodbc v1 3 4 red svg sqlserver https img shields io badge sql server v2019 yellow svg requirements https github com alexandroskyriakakis database blob master requirements txt sql server 2019 flask 2 0 1 pypyodbc 1 3 4 er diagram https github com nickbel7 hotel management blob main diagrams erd 2crelational erd jpg relational model https github com nickbel7 hotel management blob main diagrams erd 2crelational relationaldiagram png raw true https github com nickbel7 hotel management blob main diagrams erd 2crelational relationaldiagram jpg installation 1 at first make sure you have installed sql server 2019 express on your computer download page https www microsoft com en us download details aspx id 101064 2 then connect to the server through a dbms preferably microsoft sql management studio with sa system administrator credentials run the following sql queries inside the dbms in this specific order 3 create tables sql sql code create tables sql to create the database and the tables 4 create indexes sql sql code create indexes sql to create the indexes 5 create views 1 sql sql code create views 1 sql and create views 2 sql sql code create views 2 sql to create the required views insert mock data in the database 6 insert data from the excel hotelmanagement data xlsx mock data hotelmanagement v2 xlsx through the import export wizard of microsoft management studio br attention insert the data table by table with strictly the following order and by enabling the identity insert in the edit mappings option for each table reservations reservations hotelservices hotellocations doors hotelrooms reservationcustomers reservationservices reservationrooms dooraccesslog or br directly insert the backup bak file of the database with all the data located here hotelmanagement bak db backup hotelmanagement v2 bak bash databases right click restore database download and run the web app 7 run bash git clone https github com nickbel7 hotel management git cd hotel management 9 add your database credentials preferably use sa user to have all privileges at the top of the app py project app py file bash sql user sql password sql server name sql database name hotelmanagement 10 run the following script to download all required libraries bash pip install r requirements txt 11 run the following script to enter the project folder and start the web server bash cd project python m flask run 12 open your browser and type http 127 0 0 1 5000 to preview the website sql queries here we show all the queries sql code project queries sql used in the site at each page find the questions for the queries attached to the file docs pdf youtube explaining in greek language how to use our web application and what queries are used in each page br https youtu be qy2ix3ab5gi | lambda ece ntua databases | server |
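Step 9 above wires database credentials into app.py. A hedged sketch of how those variables are typically used with pypyodbc from Flask is shown below; the variable names mirror the README, but the driver name, connection string, and route are assumptions rather than the project's exact code.

```python
# Illustrative wiring of the credentials into pypyodbc from Flask; the driver
# name, connection string, and route are assumptions, not the project's code.
import pypyodbc
from flask import Flask

SQL_USER = "sa"
SQL_PASSWORD = "your_password"
SQL_SERVER_NAME = "localhost\\SQLEXPRESS"
SQL_DATABASE_NAME = "HotelManagement"

app = Flask(__name__)

def get_connection():
    return pypyodbc.connect(
        "Driver={SQL Server};"
        f"Server={SQL_SERVER_NAME};"
        f"Database={SQL_DATABASE_NAME};"
        f"uid={SQL_USER};pwd={SQL_PASSWORD};"
    )

@app.route("/room-count")
def room_count():
    conn = get_connection()
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT COUNT(*) FROM HotelRooms")  # table from the schema above
        return {"rooms": cursor.fetchone()[0]}
    finally:
        conn.close()
```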
npcgpt | npcgpt writeup https medium com sean skimmer npc gpt 3f4cb5272773 leveraging large language models for video games and npcs this proof of concept is built out for the game stardew valley https www stardewvalley net the project is made up of 3 parts described below part 1 character generation to generate dynamic and unique npc s the first model generates a short character bio the model is based on openai s open source gpt2lmheadmodel https huggingface co docs transformers model doc gpt2 transformers gpt2lmheadmodel the model was fine tuned using sample character bios that were generated using similar formatting and description styles as the stardew valley villager wiki pages https stardewvalleywiki com villagers the sample bios used for training data can be found at data character bios csv the generated character bio is then cleaned and processed to be used as input for the dialogue model part 2 dialogue model the cleaned personalities generated by the model above are fed into the dialogue model to create conversations that are both relevant to the game and personality for this step the knowledge base of objects in the game was created to ensure the dialogue surrounded relevant items mobs and locations training data was created with the chatgpt api https openai com blog introducing chatgpt and whisper apis created by openai using prompts that returned item definitions quests and general dialogue this replaced the original method which used the original game dialogue scraped with videogamedialoguecorpuspublic https github com seannyd videogamedialoguecorpuspublic as the original dialogue was not comprehensive enough to achieve substantial results multiple models were trained with guidance from this medium article https medium com huggingface how to build a state of the art conversational ai with transfer learning 2d818ac26313 using dialogpt https huggingface co microsoft dialogpt medium text hey my name is mariama 21 how are you 3f and gpt2 https huggingface co gpt2 as base models the outputs of this model are run through part 3 to determine if there is intent related to a conversation part 3 rule based named entity recognition for intent recognition to capture the transactional intention of dialogue with the npc two spacy matchers https spacy io api matcher are used to identify and pull the relevant information from the dialogue for example if the user asks the npc for a quest the npc might give the user an item quest or a mob related quest the matcher rules can identify that a quest has been given to the user and extract the target items or mobs the quantity requested and the potential reward offered to the user upon completion of the request this information is used to dynamically implement the transactions in the game allowing for a more robust and complete user experience | ai |
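Part 3 relies on spaCy Matcher rules to pull quest details out of generated dialogue. The snippet below is an illustrative toy rule in that spirit, not the repository's actual patterns; it assumes the en_core_web_sm model is installed.

```python
# Toy rule-based quest extraction in the spirit of part 3; the pattern and
# example sentence are illustrative, not the repository's actual rules.
# Requires: python -m spacy download en_core_web_sm
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# Matches phrases like "bring me 5 parsnips" or "slay 10 slimes".
quest_pattern = [
    {"LEMMA": {"IN": ["bring", "collect", "slay"]}},
    {"LOWER": "me", "OP": "?"},
    {"LIKE_NUM": True},
    {"POS": "NOUN"},
]
matcher.add("QUEST", [quest_pattern])

def extract_intent(dialogue):
    doc = nlp(dialogue)
    for _, start, end in matcher(doc):
        span = doc[start:end]
        quantity = next(token.text for token in span if token.like_num)
        target = span[-1].text
        return {"intent": "quest", "quantity": quantity, "target": target}
    return {"intent": "chat"}

print(extract_intent("Could you bring me 5 parsnips before sundown?"))
```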
|
MobileAppDevelopment | bzu https bzu edu pk images logo1 png https www bzu edu pk mobileappdevelopment for students to understand mobile app development java download jdk from https www oracle com pk java technologies downloads set path in windows environment variable helloworld program https www baeldung com java could not find load main class 10 best java ides and editors in 2023 https www turing com blog best java ides and editors android developer guide https developer android com guide https apilevels com course outline chapter 3 android architecture chapter 4 application components chapter 5 hello world chapter 7 activities chapter 8 services https codeunplug com android service sample code for playing default ringtone chapter 9 broadcast receivers https www geeksforgeeks org broadcast receiver in android with example chapter 68 sqlite database https www tutorialspoint com android android sqlite database htm chapter 11 fragments https www javatpoint com android fragments chapter 12 intents filters chapter 13 ui layouts chapter 14 ui controls radio buttons https github com tahirabbas876 quiz using radiobutton chapter 15 event handling chapter 19 notifications https www tutorialspoint com how to create a notification with notificationcompat builder in android chapter 25 alert dialogs https www tutorialspoint com android android alert dialoges htm chapter 45 calling api and json parsing https www geeksforgeeks org json parsing in android using volley library listview and arrayadapter https guides codepath com android using an arrayadapter with listview reading qr code https github com journeyapps zxing android embedded web api bzuattendance backend server https github com qadir0108 bzuattendancebackend | android android-application android-development mobile-development | front_end |
mock-ApiRest-Flask | english version here mock api rest with flask vers o em portugu s aqui mock api rest com flask mock api rest with flask the main purpose of this project is to create a mock api rest with flask to be used in the development of the frontend of the mobile project a react native repository is available here https github com eduardo da silva reactnative layout navigation installation the project is managed with pdm so you need to install the dependencies with bash pdm install run to run the project you need to execute the following command bash pdm run python main py mock api rest com flask este projeto tem como principal objetivo criar um mock de api rest com flask para ser utilizado no desenvolvimento do frontend do projeto mobile um reposit rio react native est dispon vel aqui https github com eduardo da silva reactnative layout navigation instala o o projeto gerenciado com pdm ent o voc precisa instalar as depend ncias com bash pdm install execu o para executar o projeto voc precisa executar o seguinte comando bash pdm run python main py | api-rest backend flask mock pdm python | front_end |
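A minimal sketch of what such a Flask mock can look like is shown below; the routes and payloads are placeholders invented for illustration, not the project's actual main.py.

```python
# Hypothetical mock endpoints; route names and payloads are illustrative,
# not the project's actual main.py.
from flask import Flask, jsonify

app = Flask(__name__)

USERS = [
    {"id": 1, "name": "Ana"},
    {"id": 2, "name": "Bruno"},
]

@app.route("/api/users")
def list_users():
    return jsonify(USERS)

@app.route("/api/users/<int:user_id>")
def get_user(user_id):
    user = next((u for u in USERS if u["id"] == user_id), None)
    if user is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(user)

if __name__ == "__main__":
    app.run(debug=True)
```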
bolero | bolero build status https github com camshaft bolero workflows ci badge svg https github com camshaft bolero actions workflow ci latest version https img shields io crates v bolero svg https crates io crates bolero documentation https docs rs bolero badge svg https docs rs bolero license https img shields io crates l bolero svg https github com camshaft bolero blob master license fuzz and property testing front end for rust book a copy of the bolero book can be found here http camshaft github io bolero quick start 1 install subcommand and add a dependency console cargo add dev bolero cargo install f cargo bolero 2 write a test using bolero check https docs rs bolero latest bolero macro check html macro rust pub fn buggy add x u32 y u32 u32 if x 12976 y 14867 return x wrapping sub y return x wrapping add y test fn fuzz add bolero check with type cloned for each a b buggy add a b a wrapping add b 3 run the test with cargo bolero console cargo bolero test fuzz add some moments later test failure input 12976 14867 error test returned false linux installation cargo bolero needs a couple of libraries installed to compile if these libraries aren t available the requirement can be relaxed by executing cargo install cargo bolero no default features f debian ubuntu bash sudo apt install binutils dev libunwind dev | fuzz fuzz-testing property-testing libfuzzer honggfuzz afl | front_end |
chessbot_python | chessbot why create another chessbot the explanation is simple i did not find a free bot i liked online all the bots i saw on the internet are parsing the html of the different websites to find the positions but it creates a big limitation if there is a new website or a new html organisation nothing will work on the other hand my bot just looks at the screen and works with it to find the chessboard and the pieces it is much more robust this chess bot can play automatically as white or black on lichess com chess com chess24 com and theoretically any website using drag and drop to move pieces it uses stockfish engine to process moves mss to do fast screenshots pyautogui to move the mouse chess to store and test the moves and opencv to detect the chessboard it has been written only with python about the bot level it beats easily chess com computer on level 8 10 around 2000 elo when taking 1 2 second per move and crushes every human opponent on any time format longer than 1 minute this bot has been developed on macos but all the libraries it is using are compatible on linux and windows too getting started prerequisites stockfish this bot uses stockfish to calculate the next best move here is the procedure to make it work download stockfish for your os https stockfishchess org download the macos stockfish i used is already committed add it to your path with export path path pwd test that stockfish is working well by running the command stockfish in your terminal it should output something like this stockfish 120218 64 by t romstad m costalba j kiiski g linscott python this bot requires python 3 to run using the bot the bot runs very easily go in the folder that contains the source code run the command python3 main py limitations this project is far from perfect yet it has a few limitations because of the computer vision algorithm used to detect the chessboard the square colors should be plain colors without weird textures the gui is still quite basic one small deviation during a game the board moved or the user touched the mouse and the bot will not work at all it is not possible to stop the chessbot without closing the window this project has been tested only on a mac please feel free to help me improve it author stanislas heili initial work mygit https github com stanou01260 | python bot chessbot pyautogui opencv-python | ai |
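The move-calculation half of such a bot reduces to python-chess driving Stockfish plus pyautogui doing the drag and drop. The sketch below is a simplified illustration: the board-to-screen mapping, timings, and coordinates are assumptions, not the repository's code (which also detects the board with OpenCV and mss).

```python
# Simplified sketch: ask Stockfish for a move, then drag the piece with the
# mouse. The board-to-screen mapping is an assumption, not the repo's logic.
import chess
import chess.engine
import pyautogui

BOARD_X, BOARD_Y, SQUARE = 400, 200, 80  # assumed screen position of the board

def square_center(square, playing_white=True):
    file, rank = chess.square_file(square), chess.square_rank(square)
    col = file if playing_white else 7 - file
    row = 7 - rank if playing_white else rank
    return (BOARD_X + col * SQUARE + SQUARE // 2,
            BOARD_Y + row * SQUARE + SQUARE // 2)

def play_one_move(board, engine):
    result = engine.play(board, chess.engine.Limit(time=0.5))  # ~0.5 s per move
    pyautogui.moveTo(*square_center(result.move.from_square))
    pyautogui.dragTo(*square_center(result.move.to_square), duration=0.3)
    board.push(result.move)

if __name__ == "__main__":
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # stockfish on PATH
    board = chess.Board()
    play_one_move(board, engine)
    engine.quit()
```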
ComposeDesignSystem | composedesignsystem custom space design system for article https szadorozhnyi medium com custom design system using jetpack compose 17a59b1ae38d | os |
|
TAC | tac absorbed in creating the global biggest traceability public chain we hope basing on blockchain technology and adopting the unique tamper proof distributed ledger characteristics of the blockchain to build a traceability cloud platform that solves enterprise difficulties in information traceability anti counterfeit verification and mobile marketing during the commodity production circulation distribution and terminal consumption process through the sub chain of the landing project and the corresponding dapp application and to provide a fast and efficient cluster of cloud services for technical developers so as to solve the problem of trust for brand enterprises and consumers and then build a new blockchain ecosystem traceability chain is intended as a selectable internet value transmission protocol for the future world pushing forward the practicability and usability of the whole blockchain industry as the most promising blockchain ecosystem it perfectly combines the advantages of ethereum and bitshares traceability chain will also constantly and gradually form the blockchain economy through the construction of the foundation platform the design and development of software and hardware products the development of various products and the development and iteration of the commercial landing project improving industry efficiency and promoting the effective and collaborative development of society white paper to learn more about tac please read the white paper http dl tacblock com traceability chain whitepaper eng pdf http dl tacblock com traceability chain whitepaper pdf official links official website eng https tacchain io https tacchain cn company news eng https blog tacchain io https blog tacchain cn | server |
|
Restaurant-chain | restaurant chain english project for the database course of the bachelor of science in computer engineering university of pisa the project document is written in italian italiano progetto per il corso di basi di dati laurea triennale in ingegneria informatica universit di pisa contributors luigi leonardi https github com luigix25 | server |
|
LLM-T2T | llm t2t data and code for paper large language models are effective generators evaluators and feedback providers for faithful table to text generation | ai |
|
Data-Science-Cheatsheet | data science cheatsheet 2 0 a helpful 5 page data science cheatsheet to assist with exam reviews interview prep and anything in between it covers over a semester of introductory machine learning and is based on mit s machine learning courses 6 867 and 15 072 the reader should have at least a basic understanding of statistics and linear algebra though beginners may find this resource helpful as well inspired by maverick s data science cheatsheet hence the 2 0 in the name located here https github com ml874 data science cheatsheet topics covered linear and logistic regression decision trees and random forest svm k nearest neighbors clustering boosting dimension reduction pca lda factor analysis natural language processing neural networks recommender systems reinforcement learning anomaly detection time series a b testing this cheatsheet will be occasionally updated with new improved info so consider a follow or star to stay up to date future additions ideas welcome time series added statistics and probability added data imputation generative adversarial networks graph neural networks links data science cheatsheet 2 0 pdf https github com aaronwangy data science cheatsheet blob main data science cheatsheet pdf screenshots here are screenshots of a couple pages the link to the full cheatsheet is above images page1 1 png raw true images page2 1 png raw true why is python sql not covered in this cheatsheet i planned for this resource to cover mainly algorithms models and concepts as these rarely change and are common throughout industries technical languages and data structures often vary by job function and refreshing these skills may make more sense on keyboard than on paper license feel free to share this resource in classes review sessions or to anyone who might find it helpful this work is licensed under the a rel license href http creativecommons org licenses by nc sa 4 0 creative commons attribution noncommercial sharealike 4 0 international license a a rel license href http creativecommons org licenses by nc sa 4 0 img alt creative commons license style border width 0 src https i creativecommons org l by nc sa 4 0 88x31 png a br images are used for educational purposes created by me or borrowed from my colleagues here https stanford edu shervine teaching cs 229 contact feel free to suggest comments updates and potential improvements author aaron wang https www linkedin com in axw if you d like to support this cheatsheet you can buy me a coffee here https www paypal me aaxw i also do resume application and tech consulting send me a message if interested | cheatsheet data-science machine-learning | ai |
front-end-system-design | front end system design resources materials strong front end system design fundamentals strong nbsp nbsp introduction to system design https www youtube com watch v sv 4pogosnu list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 1 nbsp nbsp client server architecture https www youtube com watch v jwgsvn 4atk list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 2 nbsp nbsp front end system design components https www youtube com watch v 44monnt5pic list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 3 br strong system design hld lld overview strong nbsp nbsp cracking system design hld lld interviews https www youtube com watch v qemifzceemm list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 4 nbsp nbsp cracking lld interviews https www youtube com watch v yun65nk8 vq list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 8 br strong system design of popular application strong nbsp nbsp design netflix youtube hotstar https www youtube com watch v sn48gezruk list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 5 nbsp nbsp design snake ladder game https www youtube com watch v vgtd8of6yuw list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 14 t 29s br strong front end system design mock interviews strong nbsp nbsp design autocomplete typeahead suggestions https www youtube com watch v ikrbwt6lqiy list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 10 nbsp nbsp design configurable ui dynamic ui https www youtube com watch v 6z7zxb4ntbe list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 13 nbsp nbsp design codesandbox part 1 https www youtube com watch v hnyduovy470 list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 6 nbsp nbsp design codesandbox part 2 https www youtube com watch v o5aojlcs8rc list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 7 nbsp nbsp design whiteboard excalidraw figma draw io https www youtube com watch v 1lnjvdfstso list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 15 br strong performance optimization strong nbsp nbsp network optimization techniques https www youtube com watch v xsvkwiw t4k list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 11 nbsp nbsp assets optimization techniques https www youtube com watch v 9jdlzxr8gvw list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 12 nbsp nbsp mdn performance https developer mozilla org en us docs learn performance nbsp nbsp web dev performance https web dev learn performance nbsp nbsp google dev performance https developers google com web fundamentals performance get started br strong front end interview strong nbsp nbsp popular interview questions https www youtube com watch v c kvh gqti list pl4cfloq4ggwice0tz6ixkfn3xwkxrlbou index 9 br | front_end |
|
ml-workshop-3-of-4 | advanced machine learning with scikit learn pipelines and evaluation metrics part 3 of 4 other parts part 1 https github com amueller ml workshop 1 of 4 part 2 https github com amueller ml workshop 2 of 4 part 4 https github com amueller ml workshop 4 of 4 content review of scikit learn api https amueller github io ml workshop 3 of 4 slides 01 reminder sklearn api html gradient boosted trees https amueller github io ml workshop 3 of 4 slides 02 gradient boosting building workflows with pipelines https amueller github io ml workshop 3 of 4 slides 03 pipelines html evaluation metrics for classification https amueller github io ml workshop 3 of 4 slides 04 model evaluation html instructor andreas mueller http amuller github io amuellerml https twitter com amuellerml columbia university book introduction to machine learning with python http shop oreilly com product 0636920030515 do this repository will contain the teaching material and other info associated with the workshop advanced advanced machine learning with scikit learn part i ii please download the large movie review dataset from http ai stanford edu amaas data sentiment before coming to the workshop about the workshop scikit learn is a machine learning library in python that has become a valuable tool for many data science practitioners this training will cover some of the more advanced aspects of scikit learn such as building complex machine learning pipelines and advanced model evaluation model evaluation is an underappreciated aspect of machine learning but using the right metric to measure success is critical practitioners are often faced with imbalanced classification tasks where accuracy can be uninformative or misleading we will discuss other metrics when to use them and how to compute them with scikit learn we will also look into how to build processing pipelines using scikit learn to chain multiple preprocessing techniques together with supervised models and how to tune complex pipelines prerequisites this workshop assumes familiarity with jupyter notebooks and basics of pandas matplotlib and numpy it also assumes some familiarity with the api of scikit learn and how to do cross validations and grid search with scikit learn obtaining the tutorial material if you are familiar with git it is most convenient if you clone the github repository this is highly encouraged as it allows you to easily synchronize any changes to the material git clone https github com amueller ml workshop 3 of 4 if you are not familiar with git you can download the repository as a zip file by heading over to the github repository https github com amueller ml workshop 3 of 4 in your browser and click the green download button in the upper right images download repo png please note that i may add and improve the material until shortly before the tutorial session and we recommend you to update your copy of the materials one day before the tutorials if you have an github account and forked cloned the repository via github you can sync your existing fork with via the following commands git pull origin master installation notes this tutorial will require recent installations of numpy http www numpy org scipy http www scipy org matplotlib http matplotlib org pillow https python pillow org pandas http pandas pydata org scikit learn http scikit learn org stable 0 22 1 ipython http ipython readthedocs org en stable jupyter notebook http jupyter org mlxtend imbalanced learn the last one is important you should be able to type jupyter notebook 
in your terminal window and see the notebook panel load in your web browser try opening and running a notebook from the material to see check that it works for users who do not yet have these packages installed a relatively painless way to install all the requirements is to use a python distribution such as anaconda https www continuum io downloads which includes the most relevant python packages for science math engineering and data analysis anaconda can be downloaded and installed for free including commercial use and redistribution the code examples in this tutorial requires python 3 5 or later after obtaining the material we strongly recommend you to open and execute a jupyter notebook jupter notebook check env ipynb that is located at the top level of this repository inside the repository you can open the notebook by executing bash jupyter notebook check env ipynb inside this repository inside the notebook you can run the code cell by clicking on the run cells button as illustrated in the figure below images check env 1 png finally if your environment satisfies the requirements for the tutorials the executed code cell will produce an output message as shown below images check env 2 png | ai |
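A rough, illustrative equivalent of what such an environment check does is shown below; the actual check_env.ipynb may differ, and the module list simply mirrors the dependencies named above.

```python
# Rough stand-in for check_env.ipynb: verify that the required packages
# import and report their versions; the list mirrors the README above.
from importlib import import_module

REQUIRED = ["numpy", "scipy", "matplotlib", "PIL", "pandas", "sklearn",
            "IPython", "notebook", "mlxtend", "imblearn"]

for name in REQUIRED:
    try:
        module = import_module(name)
        version = getattr(module, "__version__", "unknown")
        print(f"{name:12s} OK (version {version})")
    except ImportError:
        print(f"{name:12s} MISSING - install it before the workshop")
```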
|
releasing-research-code | tips for publishing research code img src https upload wikimedia org wikipedia en thumb 0 08 logo for conference on neural information processing systems svg 1200px logo for conference on neural information processing systems svg png width 200 collated best practices from most popular ml research repositories now official guidelines at neurips 2021 based on analysis of more than 200 machine learning repositories these recommendations facilitate reproducibility and correlate with github stars for more details see our our blog post https medium com paperswithcode ml code completeness checklist e9127b168501 for neurips 2021 code submissions it is recommended but not mandatory to use the readme md template templates readme md and check as many items on the ml code completeness checklist described below as possible readme md template we provide a readme md template templates readme md that you can use for releasing ml research repositories the sections in the template were derived by looking at existing repositories seeing which had the best reception in the community and then looking at common components that correlate with popularity ml code completeness checklist we compiled this checklist by looking at what s common to the most popular ml research repositories in addition we prioritized items that facilitate reproducibility and make it easier for others build upon research code the ml code completeness checklist consists of five items 1 specification of dependencies 2 training code 3 evaluation code 4 pre trained models 5 readme file including table of results accompanied by precise commands to run produce those results we verified that repositories that check more items on the checklist also tend to have a higher number of github stars this was verified by analysing official neurips 2019 repositories more details in the blog post https medium com paperswithcode ml code completeness checklist e9127b168501 we also provide the data notebooks code checklist neurips2019 csv and notebook notebooks code checklist analysis pdf to reproduce this analysis from the post neurips 2019 repositories that had all five of these components had the highest number of github stars median of 196 and mean of 2 664 stars we explain each item on the checklist in detail blow 1 specification of dependencies if you are using python this means providing a requirements txt file if using pip and virtualenv providing environment yml file if using anaconda or a setup py if your code is a library it is good practice to provide a section in your readme md that explains how to install these dependencies assume minimal background knowledge and be clear and comprehensive if users cannot set up your dependencies they are likely to give up on the rest of your code as well if you wish to provide whole reproducible environments you might want to consider using docker and upload a docker image of your environment into dockerhub 2 training code your code should have a training script that can be used to obtain the principal results stated in the paper this means you should include hyperparameters and any tricks that were used in the process of getting your results to maximize usefulness ideally this code should be written with extensibility in mind what if your user wants to use the same training script on their own dataset you can provide a documented command line wrapper such as train py to serve as a useful entry point for your users 3 evaluation code model evaluation and experiments often depend on 
subtle details that are not always possible to explain in the paper this is why including the exact code you used to evaluate or run experiments is helpful to give a complete description of the procedure in turn this helps the user to trust understand and build on your research you can provide a documented command line wrapper such as eval py to serve as a useful entry point for your users 4 pre trained models training a model from scratch can be time consuming and expensive one way to increase trust in your results is to provide a pre trained model that the community can evaluate to obtain the end results this means users can see the results are credible without having to train afresh another common use case is fine tuning for downstream task where it s useful to release a pretrained model so others can build on it for application to their own datasets lastly some users might want to try out your model to see if it works on some example data providing pre trained models allows your users to play around with your work and aids understanding of the paper s achievements 5 readme file includes table of results accompanied by precise command to run to produce those results adding a table of results into readme md lets your users quickly understand what to expect from the repository see the readme md template templates readme md for an example instructions on how to reproduce those results with links to any relevant scripts pretrained models etc can provide another entry point for the user and directly facilitate reproducibility in some cases the main result of a paper is a figure but that might be more difficult for users to understand without reading the paper you can further help the user understand and contextualize your results by linking back to the full leaderboard that has up to date results from other papers there are multiple leaderboard services results leaderboards where this information is stored additional awesome resources for releasing research code hosting pretrained models files 1 zenodo https zenodo org versioning 50gb free bandwidth doi provides long term preservation 2 github releases https help github com en github administering a repository managing releases in a repository versioning 2gb file limit free bandwidth 3 onedrive https www onedrive com versioning 2gb free 1tb with office 365 free bandwidth 4 google drive https drive google com versioning 15gb free bandwidth 5 dropbox https dropbox com versioning 2gb paid unlimited free bandwidth 6 aws s3 https aws amazon com s3 versioning paid only paid bandwidth 7 huggingface hub https github com huggingface huggingface hub versioning no size limitations free bandwidth 8 dagshub https dagshub com versioning no size limitations free bandwith 9 codalab worksheets https worksheets codalab org 10gb free bandwith managing model files 1 rclone https rclone org provides unified access to many different cloud storage providers standardized model interfaces 1 pytorch hub https pytorch org hub 2 tensorflow hub https www tensorflow org hub 3 hugging face nlp models https huggingface co models 4 onnx https onnx ai results leaderboards 1 papers with code leaderboards https paperswithcode com sota with 4000 leaderboards 2 codalab competitions https competitions codalab org with 450 leaderboards 3 evalai https eval ai with 100 leaderboards 4 nlp progress https nlpprogress com with 90 leaderboards 5 collective knowledge https cknowledge io reproduced results with 40 leaderboards 6 weights biases benchmarks https www wandb com benchmarks with 
9 leaderboards making project pages 1 github pages https pages github com 2 fastpages https github com fastai fastpages making demos tutorials executable papers 1 google colab https colab research google com 2 binder https mybinder org 3 streamlit https github com streamlit streamlit 4 codalab worksheets https worksheets codalab org contributing if you d like to contribute or have any suggestions for these guidelines you can contact us at hello paperswithcode com or open an issue on this github repository all contributions welcome all content in this repository is licensed under the mit license | machine-learning awesome-list neurips neurips-2020 | ai |
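An illustrative skeleton of the kind of documented command-line wrapper recommended in items 2 and 3 is shown below; the argument names, defaults, and stubbed training loop are placeholders, not part of the guidelines themselves.

```python
# train.py -- illustrative command-line entry point of the kind recommended
# above; argument names, defaults, and the train() body are placeholders.
import argparse

def train(data_dir, epochs, lr, output):
    # Replace with the actual training loop; kept as a stub here.
    print(f"training on {data_dir} for {epochs} epochs (lr={lr}) -> {output}")

def main():
    parser = argparse.ArgumentParser(
        description="Train the model described in the paper and save a checkpoint."
    )
    parser.add_argument("--data-dir", required=True, help="path to the training data")
    parser.add_argument("--epochs", type=int, default=10, help="number of training epochs")
    parser.add_argument("--lr", type=float, default=1e-3, help="learning rate used in the paper")
    parser.add_argument("--output", default="model.ckpt", help="where to write the trained model")
    args = parser.parse_args()
    train(args.data_dir, args.epochs, args.lr, args.output)

if __name__ == "__main__":
    main()
```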
DenseNet-NLP | very deep convolutional networks for natural language processing in tensorflow this is the densenet implementation of the paper do convolutional networks need to be deep for text classification in tensorflow we study in the paper the importance of depth in convolutional models for text classification either when character or word inputs are considered we show on 5 standard text classification and sentiment analysis tasks that deep models indeed give better performances than shallow networks when the text input is represented as a sequence of characters however a simple shallow and wide network outperforms deep models such as densenet with word inputs our shallow word model further establishes new state of the art performances on two datasets yelp binary 95 9 and yelp full 64 9 paper hoa t le christophe cerisara alexandre denis do convolutional networks need to be deep for text classification association for the advancement of artificial intelligence 2018 aaai 18 workshop on affective content analysis https arxiv org abs 1707 04108 article dblp journals corr lecd17 author hoa t le and christophe cerisara and alexandre denis title do convolutional networks need to be deep for text classification journal corr year 2017 p align center img src https github com lethienhoa very deep convolutional networks for natural language processing blob master selection 134 png raw true p results p align center img src https github com lethienhoa very deep convolutional networks for natural language processing blob master selection 135 png raw true p p align center img src https github com lethienhoa very deep convolutional networks for natural language processing blob master selection 136 png raw true p reference source codes https github com dennybritz cnn text classification tf | ai |
|
SciSharp-Stack-Examples | scisharp stack examples this repo contains many practical examples written in scisharp s machine learning libraries if you still don t know how to use net for deep learning getting started from these examples is your best choice join the chat at https gitter im publiclab publiclab https badges gitter im join 20chat svg https gitter im sci sharp community requirements net core 5 0 https dotnet microsoft com download dotnet core 5 0 visual studio 2019 https visualstudio microsoft com vs or visual studio code https code visualstudio com run specific example in shell c bat run all examples from source code dotnet run project src tensorflownet examples run specific example dotnet run project src tensorflownet examples ex linear regression graph run in compiled library dotnet tensorflownet examples dll ex mnist cnn eager f bat run all examples from source code dotnet run project src tensorflownet examples fsharp run specific example dotnet run project src tensorflownet examples fsharp ex linear regression eager run in compiled library dotnet tensorflownet examples fsharp dll ex mnist cnn eager example runner will download all the required files like training data and model pb files basic model hello world c src tensorflownet examples helloworld cs f src tensorflownet examples fsharp helloworld fs basic operations c src tensorflownet examples basicoperations cs f src tensorflownet examples fsharp basicoperations fs linear regression in graph mode c src tensorflownet examples basicmodels linearregression cs f src tensorflownet examples fsharp basicmodels linearregression fs linear regression in eager mode c src tensorflownet examples basicmodels linearregressioneager cs f src tensorflownet examples fsharp basicmodels linearregressioneager fs linear regression in keras c src tensorflownet examples basicmodels linearregressionkeras cs logistic regression in graph mode c src tensorflownet examples basicmodels logisticregression cs f src tensorflownet examples fsharp basicmodels logisticregression fs logistic regression in eager mode c src tensorflownet examples basicmodels logisticregressioneager cs f src tensorflownet examples fsharp basicmodels logisticregressioneager fs nearest neighbor c src tensorflownet examples basicmodels nearestneighbor cs f src tensorflownet examples fsharp basicmodels nearestneighbor fs naive bayes classification c src tensorflownet examples basicmodels naivebayesclassifier cs f src tensorflownet examples fsharp basicmodels naivebayesclassifier fs k means clustering c src tensorflownet examples basicmodels kmeansclustering cs neural network full connected neural network in eager mode c src tensorflownet examples neuralnetworks fullyconnectedeager cs f src tensorflownet examples fsharp neuralnetworks fullyconnectedeager fs full connected neural network keras c src tensorflownet examples neuralnetworks fullyconnectedkeras cs f src tensorflownet examples fsharp neuralnetworks fullyconnectedkeras fs nn xor c src tensorflownet examples neuralnetworks neuralnetxor cs object detection in mobilenet c src tensorflownet examples objectdetection detectinmobilenet cs mnist fnn in keras functional api c src tensorflownet examples imageprocessing mnistfnnkerasfunctional cs f src tensorflownet examples fsharp imageprocessing mnistfnnkerasfunctional fs mnist cnn in graph mode c src tensorflownet examples imageprocessing digitrecognitioncnn cs f src tensorflownet examples fsharp imageprocessing digitrecognitioncnn fs mnist cnn in eager mode c src tensorflownet 
examples imageprocessing digitrecognitioncnneager cs f src tensorflownet examples fsharp imageprocessing digitrecognitioncnneager fs mnist cnn in keras subclass c src tensorflownet examples imageprocessing mnistcnnkerassubclass cs f src tensorflownet examples fsharp imageprocessing mnistcnnkerassubclass fs mnist rnn c src tensorflownet examples imageprocessing digitrecognitionrnn cs mnist lstm c src tensorflownet examples imageprocessing digitrecognitionlstm cs image classification in keras sequential api c src tensorflownet examples imageprocessing imageclassificationkeras cs f src tensorflownet examples fsharp imageprocessing imageclassificationkeras fs image recognition inception c src tensorflownet examples imageprocessing imagerecognitioninception cs f src tensorflownet examples fsharp imageprocessing imagerecognitioninception fs toy resnet in keras functional api c src tensorflownet examples imageprocessing toyresnet cs f src tensorflownet examples fsharp imageprocessing toyresnet fs transfer learning for image classification in inceptionv3 c src tensorflownet examples imageprocessing transferlearningwithinceptionv3 cs cnn in your own dataset c src tensorflownet examples imageprocessing cnninyourowndata cs f src tensorflownet examples fsharp imageprocessing cnninyourowndata fs natural language processing binary text classification c src tensorflownet examples textprocessing binarytextclassification cs cnn text classification c src tensorflownet examples textprocessing cnn models vdcnn cs named entity recognition c src tensorflownet examples textprocessing ner time series weather prediction cnn rnn c src tensorflownet examples timeseries weatherprediction cs welcome to pr your example to us your contribution will make net community better than ever br a href http scisharpstack org img src https github com scisharp scisharp blob master art scisharp stack png width 391 height 100 a | tensorflow tensorflow-examples scisharp csharp | ai |