Column types: names — string (length 1–98); readmes — string (length 8–608k); topics — string (length 0–442); labels — 6 classes.

| names | readmes | topics | labels |
|---|---|---|---|
chmdc-todo-app | React Native Todo App: a React Native and Expo todo app built for the challenges of the Coderhouse Mobile Development course. Design: https://user-images.githubusercontent.com/42822912/223900474-92ec1d71-1145-4402-bb5d-7d7ac142f86f.jpg. Features: 1. Add tasks/reminders: users can add tasks and reminders. 2. Edit tasks/reminders: users can edit existing tasks and reminders. 3. Delete tasks/reminders: users can delete tasks and reminders. 4. Mark tasks as done: users can mark a task as done. 5. Turn reminder notifications on/off: users can turn reminder notifications on or off. App demo: https://user-images.githubusercontent.com/42822912/223900496-52e90ae3-8402-4f7b-811f-30cfd33fbf5e.mov. Technical information: chmdc-todoapp is developed with React Native and Expo. Global state management: global application state is managed with Redux, which is responsible for managing tasks and reminders. Persistence: tasks and reminders are stored in a SQLite database on the device. Personal data: visit my GitHub profile (https://github.com/mathiramilo) to see more amazing projects; if you are interested, contact me on LinkedIn (https://www.linkedin.com/in/mathias-ramilo). | expo react-native mobile todoapp | front_end |
Dagsverket | Dagsverket: a Software Engineering 1 (with databases) course project. | server |
|
Automated_data_pipeline | Automated Data Pipeline (data engineering): an automated data pipeline on a MySQL cloud database. The objective of this project is to create a data pipeline that gathers information from the internet, processes it, and stores it in a database, transitioning from a local setup to a cloud-based infrastructure. The project is divided into two main phases: a local pipeline and a cloud pipeline. Phase 1, local pipeline: in this phase the focus is on setting up a local environment for data collection and storage. 1.1 Scrape data from the web: learn how to access and extract information from websites by downloading and parsing their HTML code using the Beautiful Soup library in Python. 1.2 Collect data with APIs: acquire data from various internet data providers using APIs; learn how to authenticate, assemble requests, and interact with APIs using Python's requests library. 1.3 Create a database model: define the logical structure of a relational database to store the collected data; determine the required tables and their relationships, paving the way for efficient data storage and retrieval. 1.4 Store data on a local MySQL instance: set up a local MySQL database on your computer and store the collected data from both web scraping and APIs; ensure that the connection between Python and MySQL is functional. Phase 2, cloud pipeline: in this phase the project transitions to a cloud-based infrastructure for improved scalability and automation. 2.1 Set up a cloud database: use the Amazon Web Services (AWS) Relational Database Service (RDS) to create a cloud-hosted MySQL database; this step improves the scalability and accessibility of the database. 2.2 Move scripts to Lambda: migrate the data-collection scripts from Jupyter notebooks to AWS Lambda functions; Lambda is a cloud service that allows code execution without managing server infrastructure. 2.3 Automate the pipeline: leverage AWS CloudWatch Events (or EventBridge) to schedule and automate the execution of the data-collection scripts; this automation simplifies the process and ensures data collection occurs at specific intervals or based on triggers. Overall, this project aims to create an efficient and automated data pipeline, initially in a local environment and later transitioning to a cloud-based setup utilizing AWS services to enhance scalability and maintainability. Architecture: collect data with web scraping; collect data with APIs (image: https://github.com/fabiano2415/Automated_data_pipeline/assets/101226686/c7ba0363-9314-41cc-9302-90c649bc69b6). | cloud |
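The phase-1 flow (scrape, parse, model, store) can be sketched end to end. This is a minimal illustration rather than the project's actual code: it uses Python's built-in `html.parser` and `sqlite3` as stand-ins for Beautiful Soup and the local MySQL instance so it runs without extra setup, and the HTML snippet, table, and column names are invented.

```python
import sqlite3
from html.parser import HTMLParser

# 1.1 "Scrape": extract city names from (here, hard-coded) HTML.
# A real pipeline would download the page and parse it with Beautiful Soup.
class CityParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_city = False
        self.cities = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "city") in attrs:
            self.in_city = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_city = False

    def handle_data(self, data):
        if self.in_city:
            self.cities.append(data.strip())

html = '<ul><li><span class="city">Berlin</span></li><li><span class="city">Hamburg</span></li></ul>'
parser = CityParser()
parser.feed(html)

# 1.3 / 1.4: define a simple relational model and store the scraped rows.
# sqlite3 stands in for the local MySQL instance; with MySQL you would use a
# connector library, but the CREATE TABLE / INSERT statements look the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.executemany("INSERT INTO cities (name) VALUES (?)", [(c,) for c in parser.cities])
conn.commit()

stored = [row[0] for row in conn.execute("SELECT name FROM cities ORDER BY id")]
print(stored)  # → ['Berlin', 'Hamburg']
```

In phase 2 the same parse-and-insert functions would move into a Lambda handler, with the connection pointed at the RDS endpoint instead of a local database.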
|
mana-do | Welcome to the Manabie coding challenge. Hello! We're excited that you're interested in joining Manabie. Below are the requirements and explanations for the challenge. Notes: our challenge codebase is bootstrapped by create-react-app with TypeScript, and all provided code is in this repository; please fork it, complete your challenge, and create a PR to this repository. We judge your code by: ease of reading and understanding; organization and consistency; test cases; and how you approach new technologies. Don't worry if you can't complete the challenge in time, just do your best in a mindful way; if you can't fully complete the challenge, please note the completed features. We'd like to see some description in your PR. TypeScript is a plus point, so we hope you can spend your time on this. Requirements, common (required for both positions): our codebase has some strange bugs and anti-patterns; please help us find and fix these, and please comment your reasons and solutions. We Manabians believe that engineers themselves should take care of the quality of their products, so please somehow convince us that your changes are correct; we'd prefer to have a few tests (unit or integration) for important changes that you added or fixed. Front-end engineer: you can use localStorage instead of calling remote APIs. We provided a simple UI for a todo app; please enhance it with your creative mind, and please help us add some features to the application: the persistence feature (after refreshing, our todos disappear, which is annoying for our users; let's use localStorage, or API calls for full-stack engineers, to keep them); the edit feature (currently users cannot edit the todos; please help them: the user double-clicks the todo to edit, presses Enter to apply the changes, or clicks outside to discard); the active/complete todo feature (allow users to click checkboxes on items they have completed). Full-stack engineer: you have to make sure your code satisfies the back-end requirements in https://github.com/manabie-com/togo; keep the existing features in sync with the backend (create, toggle status, toggle all, delete). We do not require you to enhance the UI, but it is preferable; small but meaningful changes are great. Do the common requirements above. How to run this code: run `yarn` or `npm install` if this is the first time you cloned this repo (master branch); run `yarn start fullstack` in case you are doing the full-stack test, else run `yarn start frontend` to start this project in development mode; sign in using username firstuser, password example. Last updated 2022-01-13. | front_end |
|
cvt | CVT: Computer Vision Tools (logo: data/logos/cvt.png). Building CVT: `git clone git@github.com:tum-uav/cvt.git && cd cvt && mkdir build && cd build && cmake .. && make`. | ai |
|
project-walkthroughs | Overview: this repository contains files, notebooks, and data used for live project walkthroughs on Dataquest; you can watch the project walkthroughs on YouTube (https www youtube com channel uc lepy0lm0e2 ikyuwpi5a). These walkthroughs help you build complete end-to-end projects that can go into your portfolio. Prerequisites: to complete these projects, you'll need a good understanding of Python syntax (including functions, if statements, and data structures), data cleaning, pandas syntax, using Jupyter notebook, and the basics of machine learning. Please make sure you've completed these Dataquest courses, or know the material, before trying these projects: Python Introduction (https://www.dataquest.io/course/introduction-to-python), For Loops and If Statements (https://www.dataquest.io/course/for-loops-and-conditional-statements-in-python), Dictionaries in Python (https://www.dataquest.io/course/dictionaries-frequency-tables-and-functions-in-python), Functions and Jupyter Notebook (https://www.dataquest.io/course/python-functions-and-jupyter-notebook), Python Intermediate (https://www.dataquest.io/course/python-for-data-science-intermediate), Pandas and NumPy Fundamentals (https://www.dataquest.io/course/pandas-fundamentals), Data Cleaning (https://www.dataquest.io/course/python-datacleaning), Machine Learning Fundamentals (https://www.dataquest.io/course/machine-learning-fundamentals). | data-science machine-learning pandas python | front_end |
AWS-Serverless-Data-Engineering-Pipeline | AWS data engineering pipeline: this is a repository for the Duke University cloud computing course project on a serverless data engineering pipeline. For this project I recreated the pipeline below in AWS Cloud9 (reference: https://github.com/noahgift/awslambda; diagram: images/pipeline.png). Below are the steps to build this pipeline in AWS. 1. Create a new Cloud9 environment dedicated to this project (need a refresher? please check https://github.com/noahgift/awslambda, the beginner's guide to AWS Lambda notebook); make sure to use "name" as the unique id for your items in the fang table. 2. Create a fang table in DynamoDB and an SQS queue (you can see how here: https://www.youtube.com/watch?v=zxxdbtamoa4). 3. Build the producer Lambda function: (1) in Cloud9, initialize a serverless application with the SAM template (https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-init.html): run `sam init` (inputs: 1, 2, 4, "producer"); (2) set up a virtual environment and source it (I called my virtual environment comprehendproducer): `python3 -m venv comprehendproducer` then `source comprehendproducer/bin/activate`; (3) add the code for your application to app.py; (4) add the relevant packages used in your app to the requirements.txt file; (5) install the requirements: `cd hello_world`, `pip install -r requirements.txt`, `cd ..`; (6) create a repository "producer" in Elastic Container Registry (ECR) and copy its URI; (7) build and deploy your serverless application: `sam build` then `sam deploy --guided` (when prompted for a URI, paste the URI of the "producer" repository you just created); (8) create an IAM role granting administrator access to the producer Lambda function (not sure how to create an IAM role? check the video above, at 17 min); (9) add the execution role that you created to the producer Lambda function (in case you forgot how: in the AWS console, Lambda, click on the producer function, Configuration, Permissions, Edit, select the role under "existing role"); (10) you are all set with the producer function; now deactivate the virtual environment: `deactivate`, `cd ..`. 4. Create an S3 bucket and note its name. 5. Build the consumer Lambda function: repeat the steps in 3; when you add the code for the consumer app to app.py, make sure to replace the bucket name "fangsentiment" with the name of your S3 bucket. 6. Add triggers to the Lambda functions (not sure how? check the video above; start times: producer Lambda function, CloudWatch Event, 30 min; consumer Lambda function, SQS, 42 min). 7. If all goes well, you will see sentiment results in your S3 bucket (screenshot: images/s3.png). Tip: if you've already deployed your Lambda function but need to edit your application, you can make the necessary edits and then build and deploy again: `sam build`, `sam deploy`. Tip: if you don't have space left on disk, you may want to remove a few Docker images that you don't use: list images with `docker image ls`; remove one with `docker image rm <id>`. | cloud |
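The producer-to-consumer data flow described above can be sketched without any AWS resources. This is a hedged, AWS-free illustration of the shape of the pipeline only: a deque stands in for the SQS queue, a dict for the S3 bucket, and a toy rule for the Comprehend sentiment call; the function and field names are invented, not the repository's actual handlers.

```python
from collections import deque

# Stand-in for the SQS queue sitting between the two Lambda functions.
queue = deque()

def producer(fang_table):
    """In the real pipeline, this Lambda reads the DynamoDB 'fang' table and
    sends one SQS message per item, keyed by the unique 'name' attribute."""
    for item in fang_table:
        queue.append({"name": item["name"]})

def consumer(bucket):
    """In the real pipeline, this Lambda is triggered by SQS, calls AWS
    Comprehend for sentiment, and writes the result to the S3 bucket.
    Here a trivial keyword rule stands in for Comprehend."""
    while queue:
        msg = queue.popleft()
        headline = f"{msg['name']} releases strong quarterly results"
        sentiment = "POSITIVE" if "strong" in headline else "NEUTRAL"
        bucket[msg["name"]] = sentiment

bucket = {}
producer([{"name": "Facebook"}, {"name": "Amazon"}])
consumer(bucket)
print(bucket)  # → {'Facebook': 'POSITIVE', 'Amazon': 'POSITIVE'}
```

Swapping the stand-ins for `boto3` clients (SQS send/receive, Comprehend `detect_sentiment`, S3 put) turns this sketch into the deployed pair of functions.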
|
fwd | Front-end web development class materials: this is a repository of class materials I have created for the lectures and labs of my Front-End Web Development class (https://www.noisebridge.net/wiki/Front_end_web_development). How to use this repository: often in class I will ask that you download this repository and use some files located in specific folders to be able to follow along with my lecture. The best way to use this repository is to clone it: `git clone https://github.com/jeffreyatw/fwd.git`. Once you clone it, I would recommend not making any changes to the repository; this includes forking it and maintaining a parallel copy. If you want to do your own work, I recommend keeping a folder entirely outside of this repository and creating one folder per class there. This is because it will be harder to reconcile further updates to this repository if your changes get in the way of mine. To update this repository to the latest version, `cd` into the fwd folder and run `git pull`. If this command somehow fails or says that there are conflicts, I would recommend deleting the fwd folder and starting over by cloning the repository again. | front_end |
|
DiLu | DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models. Paper (arXiv): https://arxiv.org/abs/2309.16292; project page: https://pjlab-adg.github.io/DiLu. DiLu is an innovative closed-loop and self-evolving framework for autonomous driving that integrates common-sense knowledge and memory components, empowered by large language models. Highlights: 2023-10-12, our project page is now online, check it out; 2023-09-28, our paper is available on arXiv. Knowledge-driven autonomous driving (figure: assets/paradigm.png): we suggest that the next-generation knowledge-driven paradigm for autonomous driving systems includes three components: (1) an environment with which an agent can interact; (2) a driver agent with recall, reasoning, and reflection abilities; (3) a memory component to persist experiences. In continuous evolution, the driver agent observes the environment, queries and updates experiences from the memory component, and performs decision making. Framework overview (figure: assets/framework.png): based on the knowledge-driven paradigm for autonomous driving systems introduced above, we propose a practical framework called DiLu, which consists of four core modules: Environment, Reasoning, Reflection, and Memory. Citation: @misc{wen2023dilu, title={DiLu: A Knowledge-Driven Approach to Autonomous Driving with Large Language Models}, author={Licheng Wen and Daocheng Fu and Xin Li and Xinyu Cai and Tao Ma and Pinlong Cai and Min Dou and Botian Shi and Liang He and Yu Qiao}, year={2023}, eprint={2309.16292}, archivePrefix={arXiv}, primaryClass={cs.RO}} | ai |
|
Doric | Doric: design system foundation in Swift (logo: https://raw.githubusercontent.com/jayeshk/Doric/master/doric.png; Travis CI, CocoaPods, Carthage, platform, and documentation badges). Doric is a design system foundation written in Swift: a protocol-oriented, type-safe, scalable framework for iOS. Contents: Features; Requirements; Contribute; Installation; Usage (documentation/usage.md); FAQ; Credits; License. Features: typography; iconography; colour palette; dynamic scalable font support; Auto Layout; gradients, shadows, borders, and other scales; layout spacing; themes; UI debugging helpers; API documentation (https://jayeshk.github.io/Doric/usage.html). Sketch plugin: see DoricSnippet (beta, https://github.com/jayeshk/DoricSnippet), which generates Swift code snippets for the Doric framework. Roadmap (not in any specific order): add debugging tools (ruler etc., in progress); expand the framework to create more styles; colour-processing utilities; trait-based layouts (UITraitCollection, in progress); accessibility for colour palettes; Sketch plugin to generate palettes (try it at https://github.com/jayeshk/DoricSnippet). Usage guide: https://jayeshk.github.io/Doric/usage.html (see documentation). Requirements: iOS 11.0, Xcode 10.2, Swift 5. Demo: preview screenshots (demo-screenshot.png, screens/preview.gif). Installation. CocoaPods: CocoaPods (https://cocoapods.org) is an application-level dependency manager for Objective-C, Swift, and any other languages that run on the Objective-C runtime, providing a standard format for managing external libraries; for usage and installation instructions, visit the site. To integrate Doric using CocoaPods, specify it in your Podfile: `pod 'Doric', '1.0.0'`. Carthage: Carthage (https://github.com/Carthage/Carthage) builds your dependencies and provides you with binary frameworks, but you retain full control over your project structure and setup; Carthage does not automatically modify your project files or your build settings. To integrate Doric into your Xcode project using Carthage, specify it in your Cartfile: `github "jayeshk/Doric" "1.0.0"`. Manually: you can integrate the project manually, as below, using a git submodule (embedded framework). Open up Terminal, `cd` into your top-level project directory, and run `git init` if your project is not initialised as a git repository. Add Doric as a git submodule by running `git submodule add https://github.com/jayeshk/Doric.git`. Open the new Doric folder and drag Doric.xcodeproj into the Project Navigator; it should appear nested underneath your application's blue project icon (whether it is above or below all the other Xcode groups does not matter). Select Doric.xcodeproj in the Project Navigator and verify the deployment target matches that of your application target. Next, select your application project in the Project Navigator to navigate to the target-configuration window, and select the application target under the "Targets" heading in the sidebar. In the tab bar at the top of that window, open the "General" panel. Click on the "+" button under the "Embedded Binaries" section; you will see the Doric framework nested inside a Products folder. Select the Doric framework for iOS (you can verify which one you selected by inspecting the build log for your project; the build target for Doric will be listed as "Doric"). Doric.framework is automagically added as a target dependency, linked framework, and embedded framework in a Copy Files build phase, which is all you need to build on the simulator and a device. Contribute: Doric is open to contributions, see the contribution notes. If you want to contribute, submit a pull request; if you found a bug, open an issue; if you need help with a feature or need to discuss best practices, please see the usage document. Still anything to discuss? Contact me: doricdesignsystem@gmail.com. FAQ. About the Doric name: the Doric order was one of the three orders of ancient Greek and later Roman architecture; Doric is named after the Doric order (https://en.wikipedia.org/wiki/Doric_order), hence it provides pillars for your digital products. Why is a design system required? As the number of devices, screen variants, and environments increases, there is a need to create scalable interface design systems. Doric's primary goal is to create a system which allows you to manage design at scale for iOS; consistency, scalability, and efficiency across the app are its focus areas (see Awesome Design Systems: https://github.com/alexpate/awesome-design-systems). Is it required to implement all parts of Doric? Doric provides various building blocks to implement an interface; all blocks can be used independently or composed as needed. For example, your app can use typography only, or colour palettes; better practice would be to compose them all into a single design system. It also allows you to use any of these blocks with other third-party frameworks; for example, spacing and layout can be used with any other third-party framework. Since it is a protocol-oriented framework, you can further extend any section and customise it; Doric comes with a few default implementations (see the usage guide, documentation/usage.md, for more). Credits: Doric is influenced by various design-system guidelines and many Stack Overflow posts. A major source of inspiration is Atomic Design (http://atomicdesign.bradfrost.com) by Brad Frost, author of Atomic Design, a comprehensive collection of the bits and pieces that make up your interface; also the OpenSans fonts (demo catalogue). License: Doric is released under the MIT license; see LICENSE for details. | design-system design swift design-language-framework design-language autolayouts autolayout-framework autolayout-constraints color color-theme typography textstyle adaptive-layouts theme-framework theme ios apple xcode dynamic-font designsystem | os |
Team-3-TCSS-450 | Current: a chat client, social network, and weather forecaster. Everything you need to stay current. Project showcase: https://youtu.be/drnv2us5fi | front_end |
|
TinyTechTown | Tiny Tech Town: the microprocessor-based embedded system design project. | os |
|
factored-segmenter | factoredsegmenter factoredsegmenter is the unsupervised text tokenizer for machine translation that aims at factoring shared properties of words such as casing or spacing and underlies microsoft translator it encodes tokens in the form wordpiece factor1 factor2 factorn this encoding syntax is directly understood by the marian neural machine translation toolkit https github com marian nmt marian to use factoredsegmenter with other toolkits one must implement a parser for this format modify the embedding lookup and to use factors on the target side the beam decoder the term factoredsegmenter refers to both a segmentation library and an encoding of text factoredsegmenter segments words into subwords or word pieces using the popular sentencepiece https github com google sentencepiece library under the hood however unlike sentencepiece in its common usage spaces and capitalization are not encoded in the sub word tokens themselves instead spacing and capitalization are encoded in factors that are attached to each token the purpose of this is to allow the sharing of model parameters across all occurences of a word be it in the middle of a sentence capitalized at the start of a sentence at the start of a sentence enclosed in parentheses or quotation marks or in all caps in a social media rant in sentencepiece these are all distinct tokens which is less robust for example this distinction leads to poor translation accuracy for all caps sentences which is problematic when translating social media posts features of factoredsegmenter represents words and tokens as tuples of factors to allow for parameter sharing e g spacing and capitalization are separate factors on word pieces an nmt training tool would form token embeddings by summing or concatenating embeddings of all factors in the factor tuple infrequent words are represented by subwords aka word pieces using the sentencepiece library robust treatment of numerals each digit is always split as its own 
token in any writing system we have observed that this reliably fixes a large class of translation errors for numerals especially when translating between different numeric systems such as arabic numbers to chinese support for phrase fixing where specific phrases are required to be translated in a very specific way such constrained translation is achieved with factoredsegmenter by either replacing such phrases by a fixed token where a factor is used to distinguish multiple such phrase fixes in a single sentence or by inserting the desired target translation directly into the encoded source where factors are used to distinguish the source from the target translation unknown character handling characters not covered by the word piece vocabulary for example rare emojis are encoded by their unicode character code in a form that a translation system can learn to copy through round trippable allows to fully reconstruct the source sentence from the factored sub word representation with minor exceptions support of continuous scripts which have different rules for spacing and combining marks factors let s randomly pick a word of recent prominence say hydroxychloroquine first observe that whether it occurs at the beginning of the word where it would normally be capitalized or within the sentence or whether it appears after a quotation mark where it is lower case but there is no space before it it is still the same word and it seems desirable to share embedding parameters across all four cases to some degree secondly note that since hydroxychloroquine is a word rarely seen until recently it may not have been seen frequently enough after a quotation mark to get its own token hence in that situation it would not only not share its embedding but it also may be segmented differently altogether from the other cases factoredsegmenter attempts to remedy this problem by representing each sub word as a tuple for example hydroxychloroquine at sentence start would be represented by a 
tuple that might be written in pseudo code as lemma hydroxychloroquine capitalization cap initial iswordbeginning wordbeg yes iswordend wordend yes each tuple member is called a factor the subword identity itself hydroxychloroquine is also represented by a factor which we call the lemma meaning that it is the base form that may be modified by factors this is inspired by the linguistic term lemma https simple wikipedia org wiki lemma linguistics which is a base form that gets modified by inflections in machine translation the embedding of the tuple would be formed by composing embedding vectors for each individual factor in the tuple e g by summing or concatenating them a factor has a type and a value while the lemma is a string the capitalization factor above is an enumeration with three values representing three kinds of capitalization capitalized first letter beginning of a capitalized word using the symbol cap initial all caps cap all and no capitalized letters at all a regular all lowercase word cap none to represent mixed case words e g rupaul we break them into subwords iswordbeginning is conceptually a boolean but for simplicity we give each factor a unique data type so iswordbeginning is an enum with two values wordbeg yes and wordbeg no likewise for iswordend different lemmas can have different factor sets for example digits and punctuation cannot be capitalized hence those lemmas not have a capitalization factor however for a given lemma the set of factors is always the same the specific set of factors of a lemma is determined from heuristics represented in the factoredsegmenter code with some configurability via options for infrequent words or morphological variants factoredsegmenter supports subword units a subword unit is used when a word is unseen in the training or not seen often enough factoredsegmenter relies on the excellent sentencepiece library for determining suitable subword units for example hydroxychloroquine might be rare enough to be 
represented by subwords such as hydro xy chloroquine it would be represented as a sequence of three tuples lemma hydro capitalization cap initial iswordbeginning wordbeg yes iswordend wordend no lemma xy capitalization cap none iswordbeginning wordbeg no iswordend wordend no lemma chloroquine capitalization cap none iswordbeginning wordbeg no iswordend wordend yes the subword nature of the tuples is represented by the iswordbeginning and iswordend factors factor syntax when written to a text file or when communicated to an nmt training toolkit factor tuples are represented as strings following a specific syntax the factor values are concatenated separated by vertical bars a direct concatenation of the above example would give hydroxychloroquine cap initial wordbeg yes wordend yes however to avoid to dramatically increase data file sizes factors use short hand notations when serialized also to make those files a little more readable to us humans lemmas are written in all caps while factors use lowercase this also avoids name conflicts between factor names and real words if hydroxychloroquine is a single word piece the actual form as written to file of the above is hydroxychloriquine ci wb we the example above where it is represented by multiple subword units has the following serialized form hydro ci wb wen xy cn wbn wen chloroquine cn wbn we any character that may be used as part of this syntax is escaped as a hex code for example if the vertical bar character itself was the lemma it would be serialized as x7c representation of space between tokens if you are familiar with sentencepiece you will notice that the tuples above do not directly encode whether there is a space before or after the word instead it is encoded as factors whether a token is at the boundary beginning end of a word for single word tokens both flags are true most of the time a word boundary implies a spaces but not always for example a word in quotation marks would not be enclosed in spaces 
rather the quotation marks would for example the sequence hydroxychloroquine works would be encoded as hydro ci wb wen xy cn wbn wen chloroquine cn wbn we works cn wb we without explicit factors for spaces rather the space between hydroxychloroquine and works is implied by the word boundary factors hence words do not carry factors determining space directly rather spacing related factors are carried by punctuation marks by default there is always a space at word boundaries but punctuation carries factors stating whether a space surrounding the punctuation should rather be elided whether the punctuation should be glued to the surrounding token s for example in the sentence hydroxychloroquine works the sentence final exclamation point is glued to the word to the left and would be represented by the following factor tuple lemma glueleft glue left yes glueright glue right no the glueleft factor indicates that the default space after works should be elided the short hand form that is used when writing to file is gl and gl and likewise gr and gr the full sequence would be encoded as hydro ci wb wen xy cn wbn wen chloroquine cn wbn we works cn wb we gl gr note that the short hands for boolean like factors are a little inconsistent for historical reasons note also that this documentation makes no claims regarding the veracity of its example sentences round trippability an important property of the factor representation is that it allows to fully reconstruct the original input text it is fully round trippable if we encode a text as factor tuples and then decode it the result will be the original input string factoredsegmenter is used in machine translation by training the translation system to translate text in factor representation to text in the target language that is likewise in factor representation the final surface form is then recreated by decoding factor representation in the target language there are few exception to round trippability to support specifying 
specific translations for words phrase fixing factoredsegmenter can replace token ranges by special placeholders that get translated as such alternatively it can include the given target translation in the source string using special factors or marker tags the identity of such a token would get lost in the factored representation instead the translation system would remember its identity as side information the c api also allows replacing arbitrary character ranges on the fly the original characters get lost lastly it should be noted that the specific factor sets depend on configuration variables for example empirically we found no practical benefit in the iswordend factor so this is typically disabled by a configuration setting factoredsegmenter in code factoredsegmenter is manifested in code in two different ways first in the form of a c library which allows to execute all functions that is training encoding and decoding for example each time a user invokes microsoft translator e g via http translate bing com factoredsegmenter is invoked via the c interface twice once to encode the source sentence and once to decode the translation secondly a linux command line tool gives access to most of the library functions this is used for training factoredsegmenter models subword representations and it allows to build offline systems using the factored segmenter tool and marian alone training and factor configuration the factoredsegmenter representation is rule based except for the subword units which are based on sentencepiece hence before one can tokenize text with factoredsegmenter a factoredsegmenter model must be trained the training process first pre tokenizes the input into units of consistent letter type and then execute sentencepiece training on the resulting tokens the result of the training process are two files an fsm file for factored segmenter model an fsm file contains everything needed to encode and decode it holds all configuration options the factor 
specification (which lemma has what factors), subword inventories, and it also embeds the binary SentencePiece model for subword splitting.
* An .fsv file ("factored-segmenter vocabulary"). The .fsv file holds the subset of the .fsm model that is needed by the translation software (Marian) to interpret the factor representation.

At training time, the user must specify all options regarding which factors are used.

TODO: to be continued, e.g. need to document continuous-script handling, combining marks, some more on numerals; also all model options and command-line arguments.

Prerequisites

To build FactoredSegmenter, you will need to install the following dependencies.

Linux:

```
sudo apt-get install dotnet-sdk-3.1
sudo apt-get install dotnet-runtime-3.1
```

and you need to install SentencePiece from source (https://github.com/google/sentencepiece#c-from-source). SentencePiece is accessed both via executing a binary and via direct invocation of the C++ library.

Windows: the .NET Core SDK 3.1.101 (https://dotnet.microsoft.com/download/dotnet-core/thank-you/sdk-3.1.101-windows-x64-installer) and SentencePiece. In the Windows version, SentencePiece is presently only invoked via the SentencePiece command-line tools; it has not been tested whether the vcpkg installation (https://github.com/google/sentencepiece#installation) works.

How to build

Linux:

```
cd REPO/src
dotnet publish -c Release -r linux-x64 -f netcoreapp3.1 /p:PublishSingleFile=true /p:PublishTrimmed=true factored-segmenter.csproj
```

Now you can run the binary at `REPO/src/bin/Release/netcoreapp3.1/linux-x64/publish/factored-segmenter`.

Windows: open the `src` folder in Visual Studio 2017 or later. (With 2017, it will complain that it cannot build the 3.1 SDK; F5 debugging still works using 2.1, but you may need to hit F5 twice.)

Example command lines

Encoding:

pigz -d -c /data1/SpeechTrans/ENU-DEU_Student.speech/normalize_src_training_sentences/sentenceonly.src.normalized.enu.snt.gz | time parallelized env LC_ALL=en_US.UTF-8 factored-segmenter/src/bin/Release/netcoreapp3.1/linux-x64/publish/factored-segmenter encode --model
factored segmenter enu deu generalnn joint segmenter fsm pigz c best data1 speechtrans data 2019 12 enu deu student tn trainsinglesent normalized enu snt fs gz training time env lc all en us utf 8 factored segmenter src bin release netcoreapp3 1 linux x64 publish factored segmenter train model factored segmenter out enu deu generalnn joint segmenter fsm distinguish initial and internal pieces single letter case factors serialize indices and unrepresentables inline fixes min piece count 38 min char count 2 vocab size 32000 data1 speechtrans enu deu student speech train segmenter enu deu generalnn joint corpus sampled contributing this project welcomes contributions and suggestions most contributions require you to agree to a contributor license agreement cla declaring that you have the right to and actually do grant us the rights to use your contribution for details visit https cla opensource microsoft com when you submit a pull request a cla bot will automatically determine whether you need to provide a cla and decorate the pr appropriately e g status check comment simply follow the instructions provided by the bot you will only need to do this once across all repos using our cla this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct for more information see the code of conduct faq https opensource microsoft com codeofconduct faq or contact opencode microsoft com mailto opencode microsoft com with any additional questions or comments | ai |
|
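A toy sketch may make the FactoredSegmenter factor annotation described above concrete: capitalization and word-boundary factors attached to subword pieces, joined with `|`. The `encode_word`/`decode_word` helpers below are illustrative only and not the library's API; the subword split is given by hand here, whereas the real library derives it from a trained SentencePiece model.

```python
def encode_word(pieces):
    """Attach illustrative cap/word-boundary factors to the subword pieces of one word."""
    out = []
    for i, p in enumerate(pieces):
        cap = "ci" if p[0].isupper() else "cn"        # capitalized-initial vs. all-lowercase
        wb = "wb" if i == 0 else "wbn"                # word-begin vs. not
        we = "we" if i == len(pieces) - 1 else "wen"  # word-end vs. not
        out.append("|".join([p.lower(), cap, wb, we]))
    return out

def decode_word(tokens):
    """Invert encode_word: rebuild the surface word from factored pieces (round-trippable)."""
    word = ""
    for tok in tokens:
        lemma, cap, wb, we = tok.split("|")
        word += lemma.capitalize() if cap == "ci" else lemma
    return word

tokens = encode_word(["Hydro", "xy", "chloroquine"]) + encode_word(["works"])
print(" ".join(tokens))
# hydro|ci|wb|wen xy|cn|wbn|wen chloroquine|cn|wbn|we works|cn|wb|we
print(decode_word(tokens[:3]))
# Hydroxychloroquine
```

Decoding the encoded tokens recovers the original surface string, which is the round-trippability property the documentation describes.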
Structure-from-motion-python | Structure-from-motion-python

A structure-from-motion implementation based on SfMedu (Princeton COS429 Computer Vision, http://vision.princeton.edu/courses/SfMedu), but in Python/NumPy. The objective of this project was to understand the structure-from-motion problem, so I took the MATLAB code from http://vision.princeton.edu/courses/SfMedu and translated it into Python/NumPy. The initial version is just a literal translation from the MATLAB code to Python, so expect higher run times. If you want fast and easy-to-use software, see http://ccwu.me/vsfm.

Requirements: numpy, cv2, plyfile (https://github.com/dranjan/python-plyfile).

For an example, just run main.py without any changes; it will generate a point cloud of a giraffe from 5 images included in the examples folder. | python-numpy computer-vision | ai |
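The core geometric step that any SfM pipeline like this one performs — triangulating a 3D point from its projections in two cameras — can be sketched in pure NumPy. This DLT-style helper is an illustration under synthetic cameras, not code taken from the repository.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices P1, P2 and its 2D projections x1, x2 (normalized image coords)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic cameras: identity intrinsics, second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers approximately [0.5, 0.2, 5.0]
```

With noisy real correspondences, the same linear system is solved in a least-squares sense, which is exactly why SVD is used rather than an exact solve.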
zpy | div align center a href https www zumolabs ai utm source github com utm medium referral utm campaign zpy img src https github com zumolabs zpy raw main docs assets zl tile logo png width 100px a zpy synthetic data in blender p align center a href https discord gg nxvxwehtg8 img alt discord title discord src https img shields io badge zpy devs grey style for the badge logo discord logocolor white a a href https twitter com zumolabs img alt twitter title twitter src https img shields io badge zumolabs 1da1f2 style for the badge logo twitter logocolor white a a href https www youtube com channel uccu2z8arljfdzfq7soz ytq img alt youtube title youtube src https img shields io badge zumolabs red style for the badge logo youtube logocolor white a a href https pypi org project zpy zumo img alt pypi title pypi src https img shields io badge pypi yellow style for the badge logo pypi logocolor white a a href https zumolabs github io zpy img alt docs title docs src https img shields io badge docs black style for the badge logo read 20the 20docs logocolor white a p div synthetic raspberry pi https github com zumolabs zpy raw main docs assets promo image png abstract collecting labeling and cleaning data for computer vision is a pain jump into the future and create your own data instead synthetic data is faster to develop with effectively infinite and gives you full control to prevent bias and privacy issues from creeping in we created zpy to make synthetic data easy by simplifying the simulation sim creation process and providing an easy way to generate synthetic data at scale check out our full documentation bookmark tabs https zumolabs github io zpy read the zpy paper https paperswithcode com paper zpy open source synthetic data for computer check out our new script writing guide https zumolabs github io zpy zpy tutorials script writing guide install thinking https zumolabs github io zpy zpy install pip you can install zpy with pip pip install zpy zumo more installation 
instructions can be found in the docs install using pip windows mac linux https zumolabs github io zpy zpy install pip install blender addon from zip windows mac linux https zumolabs github io zpy addon install install from script mac linux https zumolabs github io zpy zpy install script developer mode linux https zumolabs github io zpy zpy install linux developer mode windows https zumolabs github io zpy zpy install windows os status linux heavy check mark macos heavy check mark windows zpy 126 https github com zumolabs zpy issues 126 contribute busts in silhouette https zumolabs github io zpy overview contribute we welcome community contributions search through the current issues https github com zumolabs zpy issues or open your own license page facing up https zumolabs github io zpy overview license this release of zpy is under the gplv3 license a free copyleft license used by blender tldr its free use it citation writing hand https zumolabs github io zpy overview citation if you use zpy in your research we would appreciate the citation bibtex misc zpy title zpy synthetic data for blender author ponte h and ponte n and crowder s journal github note https github com zumolabs zpy volume 1 year 2021 | ml ai data synthetic blender python synthetic-data blender-addon deep-learning computer-vision | ai |
BilMuhTelegramBot | BilMuhTelegramBot :robot:

Some of us are too lazy to check our department's website to see the important announcements. To overcome this problem, we created the BilMuh Telegram bot. Basically, the Telegram bot runs on the ESP8266 development board and fetches the pure HTML of our department's announcement scrollbar using ThingSpeak's ThingHTTP. We preferred the ESP8266 (NodeMCU) because it is a cost-effective and highly integrated Wi-Fi MCU. After getting the data, our C algorithm makes an HTTP GET request to ThingHTTP and fetches all available data, then compares the new data with the old data using the Firebase Realtime Database. If the new data differs from the old one, it makes an HTTP PUT request to Firebase and replaces the current data with the new data. At the end of this process, we put the ESP8266's MCU into deep-sleep mode for power efficiency.

Goal :dart:
We don't need to check our department website every day anymore; we push a Telegram notification to more than 200 students in a single cycle. This automation makes our life carefree, as we don't miss any important notification.

For electricity: the ESP8266 sleeps for 99% of the day, so don't worry about the electricity bill.

Reliability :gem:
Our current system has been working for 3 months without any issue.

(screenshot: https://user-images.githubusercontent.com/30238276/83168267-e6ffce80-a119-11ea-9330-3a7d18797741.png)

Getting started :book:
These instructions will get you a copy of the project up and running on your development board.

Step 1: Download the Arduino IDE.
Step 2: Download the ESP8266 core for the IDE.
Step 3: Create an account on Firebase.
Step 4: Create an account on ThingSpeak.
Step 5: Download the Universal Telegram Bot library.
Step 6: Compile the given code.
Step 7: Connect a jumper cable between the D0 and RST pins.
Step 8: Plug a high-quality 5V adaptor into the ESP8266 (a Raspberry Pi adaptor works great).

That's all! While you are sleeping, the ESP8266 will work at 1-hour intervals.

(photo: https://user-images.githubusercontent.com/30238276/83973381-b7597f00-a8ee-11ea-8b77-40ee1828e2be.jpg)

Prerequisites :pencil:
1 x breadboard
1 x jumper cable
1 x 5V 2A adaptor
1 x ESP8266
1 x USB cable to upload the code
2 x heatsink (optional)

Built with:
* Arduino IDE (https://www.arduino.cc)
* ThingSpeak (https://thingspeak.com)
* Firebase (https://firebase.google.com)
* Universal Telegram Bot library (https://github.com/witnessmenow/universal-arduino-telegram-bot)
* Arduino ESP8266 core (https://github.com/esp8266/arduino)
* Arduino Firebase library (https://github.com/firebaseextended/firebase-arduino) | server |
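The bot's core cycle described above — wake, fetch the announcement, compare it with the stored copy, notify only on change, then sleep — is language-agnostic. A minimal sketch of the compare-and-update logic follows; the function name is ours for illustration, not from the project's Arduino code.

```python
def check_for_update(stored, fetched):
    """Return (should_notify, new_stored): notify only when the freshly
    scraped announcement differs from the copy kept in the database."""
    if fetched and fetched != stored:
        return True, fetched   # changed: push the message, PUT new value
    return False, stored       # unchanged (or fetch failed): keep old value

# Simulated wake cycles of the ESP8266 loop:
stored = "Exam schedule posted"
notify, stored = check_for_update(stored, "Exam schedule posted")
assert notify is False                        # nothing new: back to deep sleep
notify, stored = check_for_update(stored, "New internship announcement")
assert notify is True                         # changed: send Telegram notification
assert stored == "New internship announcement"
```

Guarding on `fetched` being non-empty also keeps a failed scrape from overwriting the stored announcement with an empty string.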
|
AdvancedDataModeling | AdvancedDataModeling

Welcome to my take on data modelling. In this journey I will:

* design a conceptual model using draw.io
* use MySQL Workbench to build ER diagrams
* use the forward-engineering method to create databases
* design a relational database model by using a systematic approach and applying the correct normalization levels
* build a dimensional data model
* do data visualization: connecting and preparing data in Tableau and creating interactive dashboards in Tableau

Mission 1: design a database model for M&G

Objective: design a simplified logical relational database model for Mangata and Gallo. Mangata and Gallo (M&G) has built an ad-hoc database system to store data about their customers, products, orders and delivery status in one large table with the columns listed below: client name, client address, order date, delivery status, delivery date, contact number, email, item name, item price, total cost.

This database is difficult to manage and includes loads of redundant data. Create a proper database model for a simplified and logical relational database. The database modelling tool I used was draw.io.

Task 1: create a conceptual model. Create a conceptual model to support M&G's online ordering system. The model should consider the entities listed in the M&G big table.

(Conceptual diagram for the M&G database: https://user-images.githubusercontent.com/106580846/217201258-6a9e028a-c930-43d8-b2ab-9659046a2297.png)

Task 2: create a logical ER diagram. Based on the conceptual model developed, create a logical ER diagram as follows:

* translate each entity in the conceptual diagram into a table with relevant attributes
* specify the primary key of each table
* create a multiplicity relationship between the tables
* define relevant constraints, such as NOT NULL and foreign keys
* review the logical ER diagram and make sure that your data model conforms to the first normal form by applying the data-atomicity rule

Some clients could have multiple delivery addresses; to apply data atomicity,
create an address table and relate it to the delivery table logical er diagram for m g drawio https user images githubusercontent com 106580846 217202101 edff0f68 e871 4dbb bd5c 4dfee69d5969 png mission 2 design a database model in mysql workbench mangata gallo m g jewelry store wants to make use of the logical database model outlined in the diagram below develop this model using mysql workbench and implementing it in your mysql server image https user images githubusercontent com 106580846 217250134 d76a42cd 541d 49ab a980 58af37b7dbd5 png task 1 create an er diagram using the visual data modeling tool in mysql workbench to create the proposed er diagram for m g er diagram mysql bench https user images githubusercontent com 106580846 217263905 83092f58 253b 4d71 8a8c 30316ae74759 png note do not use or any symblol when naming a database like i did running a query with it is dramaaa task 2 implement the internal schema use mysql workbench s forward engineer feature to implement the internal schema in your mysql server forward engineering https user images githubusercontent com 106580846 218455874 3377a870 b5fa 4cf3 b6ac 319191cde778 png it then should be able to appear the schema list in the navigator section schema list https user images githubusercontent com 106580846 217271843 d44ec408 23b0 4a20 9375 d18f65a65bdf png task 3 populate the m g database create view populate the m g database with data provided using the sql workbench editor use the insert statements insert code https user images githubusercontent com 106580846 217471138 fa70aa70 9c91 4bd9 9015 a19190644880 png create a virtual table to easily find information on orders this information must contain data from all tables including clients orders products delivery address create view https user images githubusercontent com 106580846 217477479 d096934d 11d1 44ae 81db 7b343857ecff png the output will be view https user images githubusercontent com 106580846 217477708 2949cda4 2dfe 43a1 b830 6b4d9841b4d7 
png mission 3 create a dimensional data model create a dimensional model for global super store to help them make sense of their sales and profits global super store have experienced a decline in their profits in the last few years there are several factors that impacted their profits including global instability around shipping and product costs new competitors appearing in different markets around the world out of date products emerging new technologies the development of new products global super store needs to understand how these factors are affecting their sales and profits they need to compare data amongst different customers products times and locations to understand the problem task 1 identify key information identifying the business process you want to deal with in this case it is the sales process identify the grain the dimensions and the measures to be used to build the dimensional model levels of granularity region country and city year quarter month day or event levels category subcategory and items the dimensions location time product customers the facts the buy and sale prices of all products the quantity sold of each product the shipping cost of each product task 2 create a star schema create a star schema based on the dimensions and facts identified in task 1 create the dimensions and the fact tables including relevant attributes and data types in each table define the primary and the foreign keys in the data model stars chema https user images githubusercontent com 106580846 217501000 7ff21e75 0704 422a ab2a b376c2f098c2 png task 3 create a snowflake schema extend the star schema developed in task 2 by creating a suitable snowflake schema with a particular focus on the products dimension snow flake schema https user images githubusercontent com 106580846 217504694 a4e1db66 88c8 4568 a1dc f67e8cf83fab png mission 4 data analysis in tableau the global super store dataset includes more than 51000 records of data about customers orders and products 
in ms excel file we now need to analyze this data to understand their business activities and maximize their profits the tasks are completed in tableau task 1 prepare the data set for analysis connect to the global super store data set b4 https user images githubusercontent com 106580846 217525879 d88cacbd ce09 41dc a4f3 69d2735828cf png prepare it for data analysis by making sure that all fields like the order date and the ship date contain the correct data types create a new calculated field called warranty based on 90 days from the order date after https user images githubusercontent com 106580846 217526124 05f70306 a208 47ce a3b5 6714f6763e4d png task 2 create a map chart global super store want to investigate their business performance in africa create a map chart that shows sales in different countries in africa the map should show the sales in proportional sizes if you rollover a country you should be able to see country name quantity sold sales figures tableau 1 https user images githubusercontent com 106580846 217529081 09bd7c33 2753 42c4 b1bf ef7acbc549a3 png from the visualization we can easily determine the countries with the highest and lowest sales and their quantities here is a link to the worksheet on tableau public https public tableau com views globalsuperstoresalesinafrica sheet1 language en us publish yes display count n origin viz share link task 3 create a bar chart global super store want to check the profits made in each country in africa however they are only interested in data from countries where they have made at least 500 in profit create a bar chart in tableau called profits in africa when you rollover a bar you should be able to see name of the country profits shipping cost bar chart https user images githubusercontent com 106580846 217535706 9dff422d 7952 46e5 a367 9e294c68d842 png from the visualisation we can easily determine the country with the highest and lowest profits estimate the median though we can t determine if there is a 
relationship between profit and shipping cost we would need a different chart a scatter plot for that here is a link to the worksheet on tableau public https public tableau com views globalsuperstoreprofitsinafrica profitsinafrica language en us publish yes display count n origin viz share link step 4 create a dashboard develop a dashboard that includes the two visualizations created sales in africa map and the profits in africa bar chart dash 2 https user images githubusercontent com 106580846 217538087 82480140 7673 48a6 bdf3 8457ae6b0da6 png the dashboard can be made interactive so that when you click on a specific country in the map the information related to that country will be displayed in the bar chart dash 1 https user images githubusercontent com 106580846 217538140 fbe0bca0 852d 4e86 a39c 7e1eae5e40c0 png here is a link to the dashboard on tableau public https public tableau com views globalsuperstoresalesprofitsdashboard africasalesprofitdashboard language en us publish yes display count n origin viz share link | datamodeling drawio erdiagram mysql mysqlworkbench tableau dimensional-modeling | server |
FlyBirdYoYo | flybirdyoyo flybirdyoyo web asp net core 2 2 asp net core 2 2 web net core asp net core asp net core webapi ioc net core automapper dto memorycache redis dapper enityframework lite sql lambda sql dao t4 t4 db first curd curd tdd mysql sqlserver log4net log4net orm dapper lambda sql 1 asp net core 2 t4 3 visual studio 2017 t4 flybirdyoyo flybirdprint dbmanage codegentempletes net core 2 2 sdk https dotnet microsoft com download visual studio 2017 f5 http localhost 8003 congratulations 1021776019 qq com br br img src https images2018 cnblogs com blog 371989 201805 371989 20180514183954632 2054296110 jpg alt | front_end |
|
mushroomobser-dataset | mushroomobser-dataset

Mushroom images dataset collected from http://mushroomobserver.org.

About the dataset

http://mushroomobserver.org exists since 2006. Since then, mushroom enthusiasts contribute on a daily basis to the collection of mushroom observations on the website. The site has approximately 10,000 users that contributed in total approximately 250,000 observations of mushrooms. Each observation counts 1-5 images.

Structure of the dataset

The dataset consists of pictures of mushrooms. The pictures are sorted by year and by label. A label can be a species label, but also a more general label in the taxonomy of the mushroom, like kingdom, phylum, class, family, order or genus. In addition to the full dataset, a "clean" dataset has been created; the clean dataset contains only the thumbnail image of every observation. The most recent year is always used as test dataset, all the other years as training dataset. The collected dataset can be downloaded here: https://www.dropbox.com/sh/m1o91dwd1nto6w0/aabudqvjwtq04ll-yaf-g2mfa?dl=0

Additionally, JSON files containing additional information about each image can be downloaded from the above link as well. The JSON files contain a list of dictionaries; each dictionary contains the following information about the image (if "thumbnail" == 1, the image belongs to the clean dataset):

```python
{'date': '2006-05-21 07:17:22',
 'gbif_info': {'canonicalName': 'Xerocomellus dryophilus',
               'class': 'Agaricomycetes',
               'classKey': 186,
               'confidence': 98,
               'family': 'Boletaceae',
               'familyKey': 8789,
               'genus': 'Xerocomellus',
               'genusKey': 8184844,
               'kingdom': 'Fungi',
               'kingdomKey': 5,
               'matchType': 'EXACT',
               'order': 'Boletales',
               'orderKey': 1063,
               'phylum': 'Basidiomycota',
               'phylumKey': 34,
               'rank': 'SPECIES',
               'scientificName': 'Xerocomellus dryophilus (Thiers) N. Siegel, C.F. Schwarz & J.L. Frank, 2014',
               'species': 'Xerocomellus dryophilus',
               'speciesKey': 7574003,
               'status': 'ACCEPTED',
               'synonym': False,
               'usageKey': 7574003},
 'image_id': 11,
 'image_url': 'http://mushroomobserver.org/images/320/11',
 'label': 'Xerocomellus dryophilus',
 'location': 38,
 'observation': 10,
 'thumbnail': 1,
 'user': 1}
```

The file mushroom_taxonomy.pdf shows an overview of the taxonomy of the mushroom dataset.

Scrape newest year for test

To scrape the images from the most recent year from http://mushroomobserver.org, you can run scrape_images_of_year.py:

```bash
python download_images_of_year.py <year> <destination_folder>
```

You may stop the script with Ctrl-C as soon as it starts scraping exclusively observations that are older than the desired year. The script creates a JSON file with the image information.

Create species dataset

To create a dataset containing n classes of only mushroom species from the training set, the script create_data_set.py can be used. Arguments are: number of classes wanted, path to training dataset, path to validation dataset.

```bash
python create_data_set.py 10 /volumes/mo/trainingset /volumes/mo/validationset
```

Performance evaluation

The TensorFlow-Slim image classification library was used; for installation instructions see here: https://github.com/tensorflow/models/tree/master/slim. The code for performance evaluation is stored in the slim folder of this repository.

Create TensorFlow dataset

To create the TensorFlow dataset:

```bash
TRAIN_DIR=/volumes/mo_data/train_10
TEST_DIR=/volumes/mo_data/validate_10
python download_and_convert_data.py \
  --dataset_name=mushrooms \
  --train_dir=${TRAIN_DIR} \
  --test_dir=${TEST_DIR}
```

The TensorFlow dataset is stored in slim/tf_data.

Train network

A pre-trained Inception V3 network (pre-trained on ImageNet) was used; the network is stored in slim/inception_v3. To finetune on the mushroom dataset:

```bash
DATASET_DIR=tf_data
TRAIN_DIR=train_models
CHECKPOINT_PATH=inception_v3/inception_v3.ckpt
python train_image_classifier.py \
  --train_dir=${TRAIN_DIR} \
  --dataset_dir=${DATASET_DIR} \
  --dataset_name=mushrooms \
  --dataset_split_name=train \
  --model_name=inception_v3 \
  --checkpoint_path=${CHECKPOINT_PATH} \
  --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
  --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
  --max_number_of_steps=100000 \
  --batch_size=32 \
  --learning_rate=0.001 \
  --learning_rate_decay_type=exponential \
  --save_interval_secs=3600 \
  --save_summaries_secs=3600 \
  --log_every_n_steps=1000 \
  --optimizer=rmsprop \
  --weight_decay=0.00004
```

Evaluate network

To evaluate the network performance:

```bash
DATASET_DIR=tf_data
CHECKPOINT_FILE=train_models_kopie/model.ckpt
python eval_image_classifier.py \
  --alsologtostderr \
  --checkpoint_path=${CHECKPOINT_FILE} \
  --dataset_dir=${DATASET_DIR} \
  --dataset_name=mushrooms \
  --dataset_split_name=validation \
  --model_name=inception_v3
```
| ai |
|
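Per the JSON metadata described for the mushroom dataset above, selecting the "clean" subset is just a filter on the thumbnail flag, and the train/test split is a filter on the year in the date field. A small sketch follows; the field names match the JSON shown above, but the two sample records are made up for illustration.

```python
import json

# Two made-up records in the shape of the dataset's per-image metadata.
records = json.loads("""[
  {"image_id": 11, "label": "Xerocomellus dryophilus", "thumbnail": 1,
   "date": "2006-05-21 07:17:22"},
  {"image_id": 12, "label": "Xerocomellus dryophilus", "thumbnail": 0,
   "date": "2006-05-21 07:17:30"}
]""")

# The clean dataset keeps only the thumbnail image of every observation.
clean = [r for r in records if r["thumbnail"] == 1]

# The most recent year is used as the test set, all other years for training.
test_split = [r for r in clean if r["date"].startswith("2006")]

print([r["image_id"] for r in clean])  # [11]
```

The same two filters generalize to any year boundary by comparing the leading year component of the date string.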
GameZone | div style display none align center h1 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis activities video 20game png alt video game width 50 height 50 font size 10 gamezone font h1 repo intro div div align center h3 font size 4 this open source repository contains a collection of games built on basic tech stacks in web development use your creativity build your own game and contribute to the repository by making a pr br make sure you star the repository and show your love to us br also join the discord server for gamezone and start collaborating with others font br br p why to open source contributing in open source increases your opportunities to work with different projects and mentors getting to know various insights and ideas it is a platform where contributors grow together with a construvtive and a positive attitude this repository also provides one such platforms where contributers come over and put their ideas of new games and make our website as interactive as much they can discord https img shields io badge discord 235865f2 svg style for the badge logo discord logocolor white https discord gg fgwk4xzfxg github issues https img shields io github issues kunjgit gamezone github forks https img shields io github forks kunjgit gamezone github pull requests https img shields io github issues pr kunjgit gamezone github repo stars https img shields io github stars kunjgit gamezone style social github contributors https img shields io github contributors kunjgit gamezone website https img shields io website down color red down message offline up color blue up message online url https 3a 2f 2fkunjgit github io 2fgamezone 2f p div br tech stacks div div align center h2 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis travel 20and 20places high 20voltage png alt high voltage width 40 height 40 font size 6 tech stack font h2 br div center p div align center a href 
https developer mozilla org en us docs glossary html5 img src https img shields io badge html5 e34f26 svg style for the badge logo html5 logocolor white a a href https developer mozilla org en us docs web javascript img src https img shields io badge javascript f7df1e svg style for the badge logo javascript logocolor black a a href https getbootstrap com img src https img shields io badge bootstrap 7952b3 svg style for the badge logo bootstrap logocolor black a a href https developer mozilla org en us docs web css img src https img shields io badge css3 1572b6 svg style for the badge logo css3 logocolor black a a href https v2 tailwindcss com docs img src https img shields io badge tailwind 20css 06b6d4 svg style for the badge logo tailwind css logocolor black a div p center br br lets get started div align center h2 font size 6 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis travel 20and 20places rocket png alt rocket width 40 height 40 let s get started font h2 div contribution steps fork the repository clone this repository git clone url of the repo raise and issue to add new game or to enhancement for a game have a look at few things you have to take care during raising issue select appropriate issue template make sure your idea is unique and interesting don t alter the issue title you are supposed to write your issue name after that only issue title your issue make sure you just add your issue name ex new game super mario make sure you select the program in which you are participating wait till you have been assigned the issue after you have been assigned the issue start working on the code create your new branch using git checkout b name of your branch having your code into the repository make your game folder into games folder by the naming convention mentioned in contributing guideline github contributing guideline md add your code files index html style css script js in your game folder create readme md file 
in your folder and add all the functionalities and how you can play that game in that readme file also include screenshots of working game video of a game explaining if required to create your folder readme md checkout the template game readme template games folder readme template md now take one good screenshot of your game that you want to display it on our website and add into assets images follow the naming convention your folder name png or jpeg or jpg add your folders link and name in main readme md the one you are reading currently push your changes to github using git push origin name your branch submit your changes for review by creating pr and you are done i will review your code and i will merge your code to the main branch of this repository and you will notified for the same if you having queries in basic flow of github learn it from contributing guideline github contributing guideline md div align center h2 font size 6 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis smilies robot png alt robot width 40 height 40 games font h2 div list of the games center no name of the game 1 master typing https github com kunjgit gamezone tree main games master typing 2 tilting maze https github com kunjgit gamezone tree main games tilting maze 3 simon game challenge https github com kunjgit gamezone tree main games simon game challenge 4 snake game https github com kunjgit gamezone tree main games snake game 5 dino runner game https github com kunjgit gamezone tree main games dino runner game 6 whack a mole https github com kunjgit gamezone tree main games whack a mole 7 doraemon jump https github com kunjgit gamezone tree main games doraemon jump 8 black jack https github com kunjgit gamezone tree main games black jack 9 memory game https github com kunjgit gamezone tree main games memory game 10 word guessing game https github com kunjgit gamezone tree main games word guessing game 11 ludo game https github com 
kunjgit gamezone tree main games ludo game 12 piano game https github com kunjgit gamezone tree main games piano 13 atari breakout https github com kunjgit gamezone tree main games atari breakout 14 dinosaur game https github com kunjgit gamezone tree main games chrome dinosaur game 15 guess the colour by rgb game https github com kunjgit gamezone tree main games colour guessing game 16 guess the number https github com kunjgit gamezone tree main games guess the number 17 race car game https github com kunjgit gamezone tree main games race car 18 aim training https github com dp nothing gamezone tree contri games aim training 19 alien shooter https github com kunjgit gamezone tree main games alien shooters 20 fruit ninja https github com kunjgit gamezone tree main games fruit ninja 21 doodle jump https github com kunjgit gamezone tree main games doodle jump 22 alphabet game https github com kunjgit gamezone tree main games alphabet 23 candy crush https github com kunjgit gamezone tree main games candy crush 24 word association game https github com kunjgit gamezone tree main games word association game 25 tic tac toe https github com kunjgit gamezone tree main games tic tac toe 26 flappy bird game https github com kunjgit gamezone tree main games flappy bird 27 trivia it https hithub com kunjgit gamezone tree main games trivia it 28 minesweeper https github com kunjgit gamezone tree main games minesweeper 29 dice showdown game https github com himanshu07 debug gamezone tree main games dice showdown game 30 pac man game https github com kunjgit gamezone tree main games pac man game 31 brick breaker game https github com kunjgit gamezone tree main games brick breaker 32 magic square game https github com kunjgit gamezone tree main games magic square 33 fight game https github com kunjgit gamezone tree main games fight game 34 lighthouse game https github com kunjgit gamezone tree main games lighthouse 35 lights out game https github com kunjgit gamezone tree main 
games lights out 36 word scramble game https github com kunjgit gamezone tree main games word scramble game 37 tetris https github com kunjgit gamezone tree main games tetris 38 interactive quizzing application https github com kunjgit gamezone tree main games interactive quizzing 39 planet defense game https github com kunjgit gamezone tree main games planet defense 40 rabbit rush game https github com kunjgit gamezone tree main games rabbit rush 41 wordle https github com kunjgit gamezone tree main games wordle 42 roll race game https github com kunjgit gamezone tree main games roll race 43 menja game https github com kunjgit gamezone tree main games menja 44 typing speed test game https github com kunjgit gamezone tree main games typing speed test game 45 tile game https github com kunjgit gamezone tree main games tile game 46 stick hero game https github com kunjgit gamezone tree main games stick hero game 47 starwars character game https github com kunjgit gamezone tree main games starwars character game 48 traffic run https github com kunjgit gamezone tree main games traffic run 49 love result predictor https github com kunjgit gamezone tree main games love result predictor 51 tower defense https github com kunjgit gamezone tree main games tower defense 52 bird game https github com kunjgit gamezone tree main games bird game 53 bubble blast game https github com kunjgit gamezone tree main games bubble blast game 54 emoji charades https github com kunjgit gamezone tree main games emoji charades 55 drum and kit https github com kunjgit gamezone tree main games drum kit game 56 rock paper scissors https github com kunjgit gamezone tree main games rock paper scissors 57 frogger https github com kunjgit gamezone tree main games frogger 58 morethan5 https github com kunjgit gamezone tree main games not morethan5 59 unruly tower https github com kunjgit gamezone tree main games unruly tower 60 maze game https github com kunjgit gamezone tree main games mazegame 61 
connect4 https github com kunjgit gamezone tree main games connect4 62 spelling bee https github com kunjgit gamezone tree main games spelling bee 63 2048 https github com kunjgit gamezone tree main games 2048 64 spin the wheel https github com kunjgit gamezone tree main games spin the wheel 65 breakout https github com kunjgit gamezone tree main games breakout 66 tower blocks https github com kunjgit gamezone tree main games tower blocks 67 platform game https github com kunjgit gamezone tree main games platform game 68 red light green light https github com kunjgit gamezone tree main games red light green light 69 squash your enemy https github com kunjgit gamezone tree main games squashing your enemy 70 avax gods https github com kunjgit gamezone tree main games avax gods 71 flip card game https github com kunjgit gamezone tree main games flip card game 72 bingo game https github com kunjgit gamezone tree main games bingo game 73 fifteen puzzle game https github com kunjgit gamezone tree main games fifteen puzzle game 74 stack game https github com kunjgit gamezone tree main games stack game 75 block io game https github com kunjgit gamezone tree main games block io 76 country guesser game https github com kunjgit gamezone tree main games country guesser game 77 touch the ball game https github com kunjgit gamezone tree main games touch the ball 78 sudoku https github com kunjgit gamezone tree main games sudoku 79 mini golf https github com kunjgit gamezone tree main games mini golf 80 rubik s solver https github com kunjgit gamezone tree main games rubik s solver 81 shoot the balloon https github com kunjgit gamezone tree main games shoot the balloon 82 dont die to ghosts https github com kunjgit gamezone tree main games dont die to ghosts 83 scifi alchemy https github com kunjgit gamezone tree main games scifi alchemy 84 packabunchas https github com kunjgit gamezone tree main games packabunchas 85 cast and catch https github com sheetal 05 gamezone tree main 
games cast and catch 86 track not found https github com kunjgit gamezone tree main games track not found 87 love calculator game https github com kunjgit gamezone tree main games love calci 88 planet game https github com kunjgit gamezone tree main games planet game 89 snake ladder https github com kunjgit gamezone tree main games snake ladder 90 among us game https github com kunjgit gamezone tree main games among us game 91 pokedex game https github com kunjgit gamezone tree main games pokedex 92 pacific air battle https github com kunjgit gamezone tree main games pacific air battle 93 dante https github com kunjgit gamezone tree main games dante 94 ping pong multiplayer https github com kunjgit gamezone tree main games ping pong multiplayer 95 sonic the hedgehog https github com kunjgit gamezone tree main games sonic the hedgehog 96 world of emojis https github com kunjgit gamezone tree main games world of emojis 97 ball fall game https github com kunjgit gamezone tree main games ball fall game 98 pinball https github com kunjgit gamezone tree main games pinball 99 duck hunting game https github com kunjgit gamezone tree main games duck hunting game 100 color turner https github com kunjgit gamezone tree main games color turner 101 catch the bunny https github com kunjgit gamezone tree main games catch the bunny 102 catch me game https github com kunjgit gamezone tree main games catch me game 103 blank detective https github com kunjgit gamezone tree main games blanks detective 104 falling blocks https github com kunjgit gamezone tree main games falling blocks 105 movie guessing game https github com kunjgit gamezone tree main games movie guessing game 106 wildcard bonanza https github com kunjgit gamezone tree main games wildcard bonanza 107 the last spartan https github com kunjgit gamezone tree main games the last spartan 108 space exploration https github com kunjgit gamezone tree main games space exploration 109 bow arrow game https github com kunjgit 
gamezone tree main games bow arrow 110 i want to google the game https github com kunjgit gamezone tree main games i want to google the game 111 space gun https github com kunjgit gamezone tree main games space gun 112 space huggers https github com kunjgit gamezone tree main games space huggers 113 spaceship escort https github com kunjgit gamezone tree main games spaceship escort 114 space defence https github com kunjgit gamezone tree main games space defence 115 glitch buster https github com kunjgit gamezone tree main games glitch buster 116 3d box game https github com kunjgit gamezone tree main games 3d box game 117 escape https github com kunjgit gamezone tree main games escape 118 retro dungeon puzzle https github com kunjgit gamezone tree main games retro dungeon puzzle 119 immunity collapse https github com kunjgit gamezone tree main games immunity collapse 120 hunt your card https github com kunjgit gamezone tree main games hunt your card 121 tenacity https github com kunjgit gamezone tree main games tenacity 122 emoji puzzle game https github com kunjgit gamezone tree main games emoji puzzle game 123 back to space https github com kunjgit gamezone tree main games back to space 124 snooze https github com kunjgit gamezone tree main games snooze 125 galaxy rider https github com kunjgit gamezone tree main games galaxy rider 126 squared lines https github com kunjgit gamezone tree main games squared lines 127 space war https github com kunjgit gamezone tree main games space war 128 sciara of colors https github com kunjgit gamezone tree main games sciara of colors 129 junojs https github com kunjgit gamezone tree main games junojs 130 fall down https github com kunjgit gamezone tree main games fall down 131 cat goric https github com kunjgit gamezone tree main games cat goric 132 cable maze https github com kunjgit gamezone tree main games cable maze 133 spaceducts https github com kunjgit gamezone tree main games spaceducts 134 zurbo https github com 
kunjgit gamezone tree main games zurbo 135 blast zone https github com kunjgit gamezone tree main games blastzone 136 free bird https github com kunjgit gamezone tree main games free bird 137 maximise boxes https github com kunjgit gamezone tree main games maximiseboxes 138 slide puzzle https github com kunjgit gamezone tree main games slide puzzle 139 diamond run https github com kunjgit gamezone tree main games diamond run 140 everyones sky https github com kunjgit gamezone tree main games everyones sky 141 line of fire https github com kunjgit gamezone tree main games line of fire 142 1024 moves https github com kunjgit gamezone tree main games 1024 moves 143 save the forest https github com kunjgit gamezone tree main games save the forest 144 dragon world game https github com kunjgit gamezone tree main games dragon world game 145 duckhunt https github com kunjgit gamezone tree main games duckhunt 146 plankman https github com kunjgit gamezone tree main games plankman 147 hold the cloud https github com kunjgit gamezone tree main games hold the cloud 148 labyrinth https github com kunjgit gamezone tree main games labyrinth 149 rip https github com kunjgit gamezone tree main games rip 150 risky nav https github com kunjgit gamezone tree main games risky nav 151 pixels from space https github com kunjgit gamezone tree main games pixels from space 152 poker dice https github com kunjgit gamezone tree main games poker dice 153 unlock the lock https github com kunjgit gamezone tree main games unlock the lock 154 gnomedom https github com kunjgit gamezone tree main games gnomedom 155 lost in the maze 3d https github com kunjgit gamezone tree main games lost in the maze 3d 156 pong ball https github com kunjgit gamezone tree main games pong ball 157 projectile motion game https github com kunjgit gamezone tree main games projectile motion game 158 swift https github com kunjgit gamezone tree main games swift 159 spacepi https github com kunjgit gamezone tree main 
games spacepi 160 destroyer https github com kunjgit gamezone tree main games destroyer 161 terror seventy https github com kunjgit gamezone tree main games terror seventy 162 humming https github com kunjgit gamezone tree main games humming 163 word search puzzle https github com kunjgit gamezone tree main games word search puzzle 164 ballarena https github com kunjgit gamezone tree main games ballarena 165 beyonder https github com kunjgit gamezone tree main games beyonder 166 shpere https github com kunjgit gamezone tree main games shpere 167 short circuit https github com kunjgit gamezone tree main games short circuit 168 johnny smiter https github com kunjgit gamezone tree main games johnny smiter 169 rectangular https github com kunjgit gamezone tree main games rectangular 170 canon defense https github com kunjgit gamezone tree main games canon defense 171 trashem https github com kunjgit gamezone tree main games trashem 172 chess https github com soarinskysagar gamezone gssoc23 tree main games chess 173 get the pigeon https github com kunjgit gamezone tree main games get the pigeon 174 uxu https github com kunjgit gamezone tree main games uxu 175 soul jumper https github com kunjgit gamezone tree main games soul jumper 176 infernal throne https github com kunjgit gamezone tree main games infernal throne 177 dead again https github com kunjgit gamezone tree main games dead again 178 norman the necromancer https github com kunjgit gamezone tree main games norman the necromancer 179 shape blocks https github com kunjgit gamezone tree main games shape blocks 180 goal rush https github com kunjgit gamezone tree main games goal rush 181 charon jr https github com kunjgit gamezone tree main games charon jr 182 color shifter https github com kunjgit gamezone tree main games color shifter 183 oh flip https github com kunjgit gamezone tree main games oh flip 184 snake feeder game https github com kunjgit gamezone tree main games snake feeder game 185 lossst https 
github com kunjgit gamezone tree main games lossst 186 hangman https github com kunjgit gamezone tree main games hangman 187 bad luck brian https github com kunjgit gamezone tree main games bad luck brian 188 bad depot https github com kunjgit gamezone tree main games bad depot 189 achluophobia https github com kunjgit gamezone tree main games achluophobia 190 timber terry https github com kunjgit gamezone tree main games timber terry 191 earth destroyer https github com kunjgit gamezone tree main games earth destroyer 192 lonely phantom https github com kunjgit gamezone tree main games lonely phantom 193 ghost surf https github com kunjgit gamezone tree main games ghost surf 194 sucker https github com kunjgit gamezone tree main games sucker 195 sorades https github com kunjgit gamezone tree main games sorades 196 thirteen https github com kunjgit gamezone tree main games thirteen 197 the raising fighting spirits https github com kunjgit gamezone tree main games the raising fighting spirits 198 green mahjong https github com kunjgit gamezone tree main games green mahjong 199 drag and drop puzzle game https github com kunjgit gamezone tree main games drag and drop puzzle 200 music guess game https github com kunjgit gamezone tree main games music guess game 201 tower of hanoi https github com kunjgit gamezone tree main games tower of hanoi 202 mastermind mania https github com kunjgit gamezone tree main games mastermind mania 203 ludo 4 player https github com kunjgit gamezone tree main games ludo 4 player 204 airballoon https github com kunjgit gamezone tree main games airballoon 205 space invaders https github com kunjgit gamezone tree main games space invaders 206 cut the rope https github com kunjgit gamezone tree main games cut the rope 207 caesar cipher https github com kunjgit gamezone tree main games caesar cipher 208 monster maker https github com kunjgit gamezone tree main games monster maker 209 stolen sword https github com kunjgit gamezone tree main 
games stolen sword 210 mastermind https github com kunjgit gamezone tree main games mastermind 211 highway 404 https github com kunjgit gamezone tree main games highway 404 212 bullseyegame https github com kunjgit gamezone tree main games bullseyegame 213 crossword game https github com kunjgit gamezone tree main games crossword game 214 guess the correct logo https github com shruti 2412 gamezone tree main games guess the correct logo 215 painting game https github com kunjgit gamezone tree main games painting game 216 platform game engine https github com kunjgit gamezone tree main games platform game engine 217 doppelkopf https github com kunjgit gamezone tree main games doppelkopf 218 quiz game https github com kunjgit gamezone tree main games quiz game 219 island survival https github com kunjgit gamezone tree main games island survival 220 linkup game https github com kunjgit gamezone tree main games linkup 221 trivia card https github com kunjgit gamezone tree main games trivia card 222 insect catch game https github com kunjgit gamezone tree main games insect catch game 223 carnival game https github com kunjgit gamezone tree main games carnival game 224 make me laugh https github com kunjgit gamezone tree main games make me laugh 225 avoider game https github com kunjgit gamezone tree main games avoider game 226 dungeon crawler https github com kunjgit gamezone tree main games dungeon crawler 227 snake water gun https github com kunjgit gamezone tree main games snake water gun 228 run and jump https github com kunjgit gamezone tree main games run and jump 229 ai chess game https github com kunjgit gamezone tree main games ai chess game 230 fruit catching https github com kunjgit gamezone tree main games fruit catching 231 bulls eye https github com kunjgit gamezone tree main games bulls eye 232 crystals collecter https github com kunjgit gamezone tree main games crystals collecter 233 dots and boxes game https github com kunjgit gamezone tree main games 
dots and boxes game 234 infinite runner game https github com kunjgit gamezone tree main games infinite runner game 235 mario matching https github com kunjgit gamezone tree main games mario matching game 236 hand cricket https github com kunjgit gamezone tree main games hand cricket 237 crossword puzzle https github com kunjgit gamezone tree main games crossword puzzle 238 pixel painter https github com kunjgit gamezone tree main games pixel painter 239 riddle room https github com kunjgit gamezone tree main games riddle room 240 armoralley https github com kunjgit gamezone tree main games armoralley 241 color switcher https github com kunjgit gamezone tree main games color switcher 242 maze of cables https github com vsatwika gamezonefork tree maze of cables games maze of cables 243 escape room https github com kunjgit gamezone tree main games escape room 244 super mario run https github com kunjgit gamezone tree main games super mario run 245 doodle draw https github com kunjgit gamezone tree main games doodle draw 246 arcade game https github com kunjgit gamezone tree main games arcade game 247 slice storm https github com vsatwika gamezonefork tree slice storm games slice storm 248 codepen simulator https github com kunjgit gamezone tree main games codepen simulator 249 piano tiles https github com kunjgit gamezone tree main games pianotiles game 250 caretaker https github com kunjgit gamezone tree main games caretaker 251 uno https github com kunjgit gamezone tree main games uno 252 remember the color https github com kunjgit gamezone tree main games remember the color 253 guess the random shape https github com kunjgit gamezone tree main games guess the random shape 254 save doraemon https github com kunjgit gamezone tree main games save doraemon 255 animal match game https github com kunjgit gamezone tree main games animal match game 256 hextris https github com kunjgit gamezone tree main games hextris 257 mrfakegame https github com kunjgit gamezone tree
main games mrfakegame 258 checkers https github com kunjgit gamezone tree main games checkers 259 roulette https github com kunjgit gamezone tree main games roulette 260 aero acrobat https github com kunjgit gamezone tree main games aero acrobat 261 adventure game https github com kunjgit gamezone tree main games adventure game 262 pumpkin pursuit https github com kunjgit gamezone tree main games pumpkin pursuit 263 corona shooter https github com kunjgit gamezone tree main games corona shooter 264 pokemon ball finder https github com kunjgit gamezone tree main games pokemon ball finder 265 basketball https github com kunjgit gamezone tree main games basketball 266 wault master https github com kunjgit gamezone tree main games wault master 267 reaction time https github com kunjgit gamezone tree main games reaction time 268 flag guess game https github com kunjgit gamezone tree main games flag guess game 269 cross the road https github com kunjgit gamezone tree main games cross the road 270 highway race barrel dodge https github com kunjgit gamezone tree main games highway race 271 bit maze platformer maze https github com kunjgit gamezone tree main games bit maze platformer maze 272 math game https github com kunjgit gamezone tree main games math game 273 space drifter https github com kunjgit gamezone tree main games space drifter 274 observe the cloud https github com kunjgit gamezone tree main games observe 20the 20cloud 275 cosmic coin blaster https github com kunjgit gamezone tree main games cosmic coin blaster 276 circus charly https github com kunjgit gamezone tree main games circus charly 277 pikachu volleyball https github com kunjgit gamezone tree main games pikachu volleyball 278 trex run https github com akankshachanana1 gamezone tree added games trex run 279 crack the code https github com kunjgit gamezone tree main games crack the code 280 skeleathon https github com kunjgit gamezone tree main games skeleathon 281 shadow pokeguess https github com 
kunjgit gamezone tree main games shadow pokeguess 282 brain color mastermind https github com kunjgit gamezone tree main games brain color mastermind 283 lizard spock game https github com kunjgit gamezone tree main games lizard spock game 284 angry boars https github com kunjgit gamezone tree main games angry boars 285 alphabet learning game https github com kunjgit gamezone tree main games alphabet learning game 286 country guesser game https github com kunjgit gamezone tree main games country guesser game 287 poke guess blitz https github com kunjgit gamezone tree main games poke guess blitz 288 spider man go https github com kunjgit gamezone tree main games spider man go 289 foosball https github com kunjgit gamezone tree main games foosball 290 triangle back to home https github com kunjgit gamezone tree main games triangle back to home 291 alphabet learning game https github com kunjgit gamezone tree lizard game games alphabet learning game 292 poke guess blitz https github com kunjgit gamezone tree main games poke guess blitz 293 spider man go https github com kunjgit gamezone tree lizard game games spider man go 294 foosball https github com kunjgit gamezone tree main games foosball 295 triangle back to home https github com kunjgit gamezone tree main games triangle back to home 296 death by hamster https github com kunjgit gamezone tree main games death by hamster 297 tenzies https github com kunjgit gamezone tree main games tenzies 298 target torrent https github com kunjgit gamezone tree main games target torrent 299 reversi https github com kunjgit gamezone tree main games reversi 300 reaction teaser https github com kunjgit gamezone pull 2134 files 301 scribble https github com kunjgit gamezone tree main games scribble 302 brain burst game https github com kunjgit gamezone tree main games brain burst game 303 stickthesticker https github com kunjgit gamezone tree main games stickthesticker 304 meme battle game https github com sahaycodes gamezone tree 
meme games meme battle game 305 match color game https github com kunjgit gamezone tree main games match color game 306 bow and arrow https github com kunjgit gamezone tree main games bow and arrow 307 beyblade https github com kunjgit gamezone tree main games beyblade 308 the labyrinth of death https github com sahaycodes gamezone tree meme games the labyrinth of death 309 2d breakout https github com kunjgit gamezone tree main games 2d breakout 310 battleship https github com kunjgit gamezone tree main games battleship 311 baseball https github com kunjgit gamezone tree main games baseball 312 save princess https github com kunjgit gamezone tree main games save princess 313 roadfighter https github com kunjgit gamezone tree main games roadfighter 314 guitar game https github com kunjgit gamezone tree main games guitar game 315 solitaire https github com kunjgit gamezone tree main games solitaire 316 lady tiger hunter https github com kunjgit gamezone tree main games lady tiger hunter 317 stone paper scissor https github com kunjgit gamezone tree main games stone paper scissor 318 flashlight pointer game https github com kunjgit gamezone tree main games flashlight pointer game 319 pig game https github com kanchanbora gamezone tree main games pig game 320 asteroids 3d https github com kunjgit gamezone tree main games asteroids 3d 321 lamb lane https github com sahaycodes gamezone tree meme games lamb lane 322 dinoffline https github com kunjgit gamezone tree main games dinoffline 323 maths sprint game https github com kunjgit gamezone tree main games maths sprint game 324 etch a sketch https github com kunjgit gamezone tree main games etch a sketch 325 quizzapp https github com kunjgit gamezone tree main games quizzapp 326 chess game https github com kunjgit gamezone tree main games chess game 327 which color https github com sahaycodes gamezone tree main games which color 328 snail game https github com sahaycodes gamezone tree meme games snail game 329 solitaire 
https github com kunjgit gamezone tree main games solitaire up 330 slime attack https github com apu52 gamezone tree slime attack game games slime attack game 331 star trek trivia https github com kunjgit gamezone tree startrek trivia games star trek trivia 332 pokemon card game https github com kunjgit gamezone tree main games pokemon card game 333 digit dilemma https github com kunjgit gamezone tree main games digit dilemma 334 tennis https github com kunjgit gamezone tree main games tennis 335 illusion https github com kunjgit gamezone tree main games illusion 336 block buster https github com sahaycodes gamezone tree meme games block buster 337 guess the ball https github com kunjgit gamezone tree main games guess the ball 338 doremon puzzle https github com kunjgit gamezone tree main games doremon puzzle 339 guess the celebrity https github com kunjgit gamezone tree main games guess the celeb 340 rock paper scissors lizard spock https github com kunjgit gamezone tree main rock paper scissors lizard spock 341 elemental riddles https github com kunjgit gamezone tree main elemental riddles 342 falling ball https github com kunjgit gamezone tree main games falling ball 343 hit target https github com kunjgit gamezone tree main games hit target 344 archery https github com kunjgit gamezone tree main games archery 345 click circle https github com kunjgit gamezone tree main click circle 346 color switch challenger https github com kunjgit gamezone tree main color switch challenger 347 puzzle game https github com kunjgit gamezone tree main games puzzle game 348 quizify https github com kunjgit gamezone tree main quizify 349 word blitz https github com kunjgit gamezone tree main word blitz 350 click circle https github com kunjgit gamezone tree main click circle 351 color switch challenger https github com kunjgit gamezone tree main color switch challenger 352 puzzle game https github com kunjgit gamezone tree main games puzzle game 353 quizify https github com 
kunjgit gamezone tree main quizify 354 word blitz https github com kunjgit gamezone tree main word blitz 355 code cracker https github com kunjgit gamezone tree main code cracker 356 know your country https github com kunjgit gamezone tree main games know your country 357 musical floor https github com kunjgit gamezone tree main games musical floor 358 sky dodge https github com kunjgit gamezone tree main sky dodge 359 swap card game https github com kunjgit gamezone tree main games swap card game 360 memorization card https github com kunjgit gamezone tree main games memorization card 361 smashing blocks https github com kunjgit gamezone tree main games smashing blocks 362 response reaction https github com kunjgit gamezone tree main games response reaction 363 truth and dare https github com kunjgit gamezone tree main games truth and dare 364 rotating elements https github com tanujbordikar gamezone tree rotating elements 365 chopsticks https github com kunjgit gamezone tree main games chopsticks 366 anime clicker https github com kunjgit gamezone tree main games anime clicker 367 3d snake https github com kunjgit gamezone tree main games 3d snake 368 rocket showdown https github com tanujbordikar gamezone tree rocket showdown 369 find extra cube https github com kunjgit gamezone tree main games find extra cube 370 pathplex https github com kunjgit gamezone tree main games pathplex 371 css select https github com kunjgit gamezone tree main games css select 372 squid https github com kunjgit gamezone tree main games squid game 373 css crossword https github com kunjgit gamezone tree main games css crossword 374 css select https github com kunjgit gamezone tree main games css select 375 squid https github com kunjgit gamezone tree main games squid game 376 flip coin https github com kunjgit gamezone tree main games flip coin 377 witty word quest https github com kunjgit gamezone tree main games witty word quest 378 typing game https github com ishan 77 gamezone 
tree main games typing game 379 numeral whiz https github com ishan 77 gamezone tree main games numeral whiz 380 candy match https github com kunjgit gamezone tree main games candy match saga 381 crossy road https github com tanujbordikar gamezone tree crossy road 382 huehero https github com kunjgit gamezone tree main games huehero 383 puzzel winner https github com kunjgit gamezone tree main games puzzel winner 384 emoji intruder https github com kunjgit gamezone tree main games emoji intruder 385 guess the weapon https github com kunjgit gamezone tree main games guess the weapon center br br div align center h2 font size 6 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis objects page 20with 20curl png alt page with curl width 40 height 40 contributing guideline font h2 div br contributing guideline detail read our contributing guideline github contributing guideline md to get all details about contributing to gamezone learn all about development process and all information you need to contribute to our project if you are having the basic queries make sure you checkout resources there br code of conduct div align center h2 font size 6 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis hand 20gestures handshake png alt handshake width 40 height 40 code of conduct font h2 div br please note that this project is released with code of conduct github code of conduct md by participating in this project you agree to abide by its terms br license license https img shields io badge license apache 2 0 blue svg https opensource org licenses apache 2 0 terms and conditions for use reproduction and distribution are under the apache 2 0 license https opensource org license apache 2 0 mentors br br div align center div br br a big thanks to all the contributors div align center h2 font size 6 img src https raw githubusercontent com tarikul islam anik animated fluent emojis master emojis 
smilies red 20heart png alt red heart width 40 height 40 contributors font h2 div br this project thanking all the contributors for having your valuable contribution to our project make sure you show some love by giving to our repository br center a href https github com kunjgit gamezone graphs contributors img src https contrib rocks image repo kunjgit gamezone a center br p align right a href top back to top a p | bootstrap css html open-source collaboration contributions games girlscript-foundation github javascript open-source-project collaborate css3 game hacktoberfest learning-by-doing | front_end |
dcos-iot-demo | dc os iot demo img src docs 0 overview architecture jpg this project demonstrates how to configure a full stack geo enabled internet of things iot solution using a href https mesosphere com mesosphere s a open sourced a href https dcos io data center operating system dc os a using a href https www docker com docker a containerization and a href http mesos apache org mesos a frameworks including a href https mesosphere github io marathon marathon a a href http kafka apache org kafka a a href http spark apache org spark a and a href http elasticsearch mesosframeworks com elasticsearch a br to see the dc os iot demo in action click on the video link below br center table tr td width 50 img src docs 9 visual 02 gif br geohash aggregation replay of taxi movement in new york city td td width 50 img src docs 9 visual 03 gif br heatmap replay of taxi movement in new york city br br td tr table center center a href https youtu be topmpihuv o img src docs 0 overview dcos iot demo screenshot jpg height 75 width 75 a center to create your own dc os iot demo environment 0 a href docs 0 overview readme md review the application architecture overview a br 1 provision compute resources on a href docs 1 azure readme md microsoft azure a a href docs 1 amazon readme md amazon web services a a href docs 1 amazon c2s readme md amazon c2s a or a href docs 1 on premise readme md on premise a br 2 a href docs 2 install readme md install dc os a and then a href docs 3 explore readme md explore the dc os mesos dashboards a br 3 a href docs 4 kafka readme md install kafka schedule brokers a br 4 a href docs 5 elasticsearch readme md install elastic schedule an elasticsearch cluster a br 5 a href docs 6 webapp readme md install map web application a br running the demo 6 a href docs 7 stream readme md schedule a spark streaming job a br 7 a href docs 8 source readme md schedule a kafka producer application a br 8 a href docs 9 visual readme md visualize iot movement behavior a 
br 9 a href docs 10 cleanup readme md applying cleanup procedures a between demo runs | server |
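The Kafka producer application scheduled in step 7 feeds simulated taxi positions into the pipeline; a minimal sketch of assembling one such geo event payload. The field names (`taxiId`, `lat`, `lon`, `ts`) are illustrative assumptions, not the demo's actual message schema.

```python
import json
import time


def make_taxi_event(taxi_id, lat, lon, ts_ms=None):
    """Build one JSON geo event of the kind a Kafka producer could publish.

    Field names here are hypothetical; the real dcos-iot-demo producer
    defines its own schema for the taxi movement stream.
    """
    return json.dumps({
        "taxiId": taxi_id,
        "lat": lat,
        "lon": lon,
        # default to the current wall-clock time in milliseconds
        "ts": int(time.time() * 1000) if ts_ms is None else ts_ms,
    })
```

A producer loop would then hand each payload to a Kafka client, e.g. `producer.send("taxi-topic", make_taxi_event(...).encode())`.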
|
cloud-data-warehouse | div id top div thanks for checking out the best readme template if you have a suggestion that would make this better please fork the repo and create a pull request or simply open an issue with the tag enhancement don t forget to give the project a star thanks again now go create something amazing d project shields i m using markdown reference style links for readability reference links are enclosed in brackets instead of parentheses see the bottom of this document for the declaration of the reference variables for contributors url forks url etc this is an optional concise syntax you may use https www markdownguide org basic syntax reference style links div align center linkedin linkedin shield linkedin url div h3 align center cloud data warehouse h3 p align center this project is my solution to the udacity data engineering nanodegree cloud data warehouse project br a cloud data warehouse built with aws redshift that automatically spins up aws services and inserts data into the tables p div table of contents details summary table of contents summary ol li a href about the project about the project a ul li a href built with built with a li ul li li a href getting started getting started a ul li a href prerequisites prerequisites a li ul li li a href contact contact a li ol details about the project about the project br this project is an etl pipeline that builds a redshift cluster in aws and then inserts data into it that is taken from public s3 buckets hosted by udacity br this project was quite fun to make and taught me a lot about infrastructure as code aws etl and database design in general why this db design why cloud this design schema star schema was chosen because it made the most sense with the analytical needs and the way we ingest data the data analytics and business intelligence people wanted to have fact dimension table schema that they can then work with and are just the easiest to answer our questions with the cloud was chosen to
give easy expandability in the future as well as uptime and low overhead in terms of not needing on prem hardware why this etl design the etl is designed to work with the current infrastructure and be easily usable by anybody with just a little bit of tech knowledge all that is needed is to clone the repo and execute two commands and enter some configs staging the log and song files made sense because it made it much easier for us to insert the wanted data into the right tables but also leaves us with the possibility of including more data or expanding our dimension or fact tables p align right a href top back to top a p built with python https www python org aws https aws amazon com p align right a href top back to top a p getting started getting started this is an example of how you can get the project working from your machine be mindful that this will make a redshift cluster with your aws account prerequisites python www python org aws account https aws amazon com python virtualenv sh pip install virtualenv installation 1 clone the repo of the branch you want sh git clone https github com maximiliansoerenpollak cloud data warehouse 2 open a terminal and navigate to the folder where you cloned the repo and make a virtual environment sh cd place you cloned repo cloud data warehouse activate and install all requirements sh python3 m venv name of virtualenv source name of virtualenv bin activate pip install r requirements txt now you should have all requirements installed that are needed for the project 3 you first have to open up the dwh empty cfg and fill out all the input there make sure you create a new iam role in your aws account since you do not want to enter your admin accounts information i explained underneath what to fill out where cluster dwh cluster identifier name you give your cluster db name name of the database db user name of the iam user db password db port 5439 dwh role role you want to create for this user dwh dwh cluster type multi node dwh num
nodes 4 dwh node type dc2 large iam s3 log data s3 udacity dend log data log jsonpath s3 udacity dend log json path json song data s3 udacity dend song data keys access key access key for the iam user secret key secret key for the iam user once you have filled this out with the correct information save it as dwh cfg 4 after you have saved the config as dwh cfg and filled it all in you can start the process all you have to do is to go into the folder where you cloned the project and run the start script sh make the shell script executable and start it chmod x start sh start sh this should then start up the aws cluster create all needed tables and move the data into them 5 if you want to shut down all created aws resources just run python aws shutdown py this will delete the redshift cluster and the role and policies you created p align right a href top back to top a p contact contact maximilian soeren pollak pollakmaximilian gmail com project link https github com maximiliansoerenpollak portfolio api https github com maximiliansoerenpollak portfolio api p align right a href top back to top a p markdown links images https www markdownguide org basic syntax reference style links license shield https img shields io github license maximiliansoerenpollak portfolio api license url https github com github username repo name blob master license txt linkedin shield https img shields io badge linkedin black svg style for the badge logo linkedin colorb 555 linkedin url https linkedin com in msoerenpollak | cloud
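A minimal sketch of how scripts like the above might consume `dwh.cfg` with Python's `configparser`. The section and key names follow the sample config in this readme, but the embedded sample text, the helper name, and the libpq-style connection string are my own assumptions, not code from the repository.

```python
import configparser

# Hypothetical in-memory stand-in for the dwh.cfg described above.
SAMPLE_CFG = """
[CLUSTER]
DB_NAME = dwh
DB_USER = dwhuser
DB_PASSWORD = secret
DB_PORT = 5439
"""


def build_conn_string(host, cfg_text=SAMPLE_CFG):
    """Read the CLUSTER section and assemble a connection string.

    `host` would come from the Redshift cluster endpoint once it is up.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(cfg_text)
    c = cfg["CLUSTER"]
    return "host={} dbname={} user={} password={} port={}".format(
        host, c["DB_NAME"], c["DB_USER"], c["DB_PASSWORD"], c["DB_PORT"]
    )
```

In the real pipeline, `cfg.read("dwh.cfg")` would replace the in-memory sample, and the resulting string could be passed to a PostgreSQL driver to run the table-creation and insert steps.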
|
ANLP | open in colab https colab research google com assets colab badge svg https colab research google com github gymk anlp binder https mybinder org badge logo svg https mybinder org v2 gh gymk anlp master jupyter notebook viewer nbviewer link to browse the notebooks https nbviewer jupyter org github gymk anlp tree master 1 applied natural language processing course by nptel toc 1 applied natural language processing course by nptel 1 applied natural language processing course by nptel 1 1 course layout 11 course layout 1 1 1 course type 111 course type 1 1 2 course level 112 course level 1 2 books and references 12 books and references 1 2 1 books 121 books 1 2 2 articles 122 articles 1 2 3 proceedings 123 proceedings 1 3 code 13 code 1 3 1 week 1 131 week 1 1 3 2 week 2 132 week 2 toc https hackersandslackers com applied natural language processing course code s anlp course by mr ramaseshan r chennai mathematical institute natural language processing nlp is an important area of artificial intelligence concerned with the processing and understanding nlu of a human language the goal of nlp and nlu is to process and harness information from a large corpus of text with very little manual intervention this course will introduce various techniques to find similar words using the context of surrounding words build a language model to predict the next word and generate sentences encode every word in the vocabulary of the corpus into a vector form that represents its context and similar words and encode a sentence for machine translation and conversation purposes the course will help learners to gather sufficient knowledge and proficiency in probabilistic artificial neural network ann and deep learning techniques for nlp intended audience any interested learners prerequisites essential algorithms python proficiency elementary probability and statistics linear algebra basic understanding of machine learning note only english corpus is considered throughout this course 1
1 course layout week 1 introduction terminologies empirical rules week 2 word to vectors week 3 probability and language model week 4 neural networks for nlp week 5 distributed word vectors word embeddings week 6 recurrent neural network language model week 7 statistical machine translation week 8 statistical machine translation neural machine translation week 9 neural machine translation week 10 conversation modeling chat bots dialog agents question processing week 11 information retrieval tasks using neural networks learn to rank understanding phrases analogies week 12 spelling correction using traditional and neural networks end notes 1 1 1 course type elective 1 1 2 course level postgraduate youtube video link https www youtube com playlist list plyqspqzte6m ecngdz2qottze7yi4eedb video playlist 1 2 books and references 1 2 1 books 1 niladri sekhar dash and s arulmozi features of a corpus singapore springer singapore 2018 pp 17 34 isbn 978 981 10 7458 5 doi 10 1007 978 981 10 7458 5 2 url https doi org 10 1007 978981 10 7458 5 2 2 ian goodfellow yoshua bengio and aaron courville deep learning http www deeplearningbook org mit press 2016 3 nitin indurkhya and fred j damerau handbook of natural language processing chapman and hall crc 2010 4 daniel jurafsky and james h martin speech and language processing an introduction to natural language processing computational linguistics and speech recognition 1st upper saddle river nj usa prentice hall ptr 2000 isbn 0130950696 5 c d manning et al foundations of statistical natural language processing mit press mit press 1999 isbn 9780262133609 url https books google co in books id yifdxbex3suc 6 christopher d manning prabhakar raghavan and hinrich schutze an introduction to information retrieval cambridge up 2009 chap 6 pp 109 133 7 jacob perkins python 3 text processing with nltk 3 cookbook packt publishing ltd 2014 8 noah a smith linguistic structure prediction synthesis lectures on human language technologies morgan and 
claypool may 2011 1 2 2 articles 1 dzmitry bahdanau kyunghyun cho and yoshua bengio neural machine translation by jointly learning to align and translate english us in arxiv 2014 2 yoshua bengio et al a neural probabilistic language model in journal of machine learning research 3 mar 2003 pp 1137 1155 issn 1532 4435 3 peter f brown et al class based n gram models of natural language in comput linguist 18 4 dec 1992 pp 467 479 issn 0891 2017 4 peter f brown et al the mathematics of statistical machine translation parameter estimation in comput linguist 19 2 june 1993 pp 263 311 issn 0891 2017 5 kyunghyun cho et al on the properties of neural machine translation encoder decoder approaches in corr abs 1409 1259 2014 arxiv 1409 1259 6 scott deerwester et al indexing by latent semantic analysis in journal of the american society for information science 41 6 1990 pp 391 407 7 chris dyer notes on noise contrastive estimation and negative sampling in corr abs 1410 8251 2014 arxiv 1410 8251 8 yoav goldberg a primer on neural network models for natural language processing in corr abs 1510 00726 2015 arxiv 1510 00726 9 nils hadziselimovic et al forgetting is regulated via musashi mediated translational control of the arp2 3 complex in cell 156 6 mar 2014 pp 1153 1166 issn 1097 4172 10 sepp hochreiter and jürgen schmidhuber long short term memory in neural comput 9 8 nov 1997 pp 1735 1780 issn 0899 7667 11 chiori hori and takaaki hori end to end conversation modeling track in dstc6 in corr abs 1706 07440 2017 arxiv 1706 07440 12 andrej karpathy justin johnson and fei fei li visualizing and understanding recurrent networks in corr abs 1506 02078 2015 13 minh thang luong hieu pham and christopher d manning effective approaches to attention based neural machine translation in corr abs 1508 04025 2015 arxiv 1508 04025 14 tomas mikolov et al efficient estimation of word representations in vector space in corr abs 1301 3781 2013 15 franz josef och and hermann ney the alignment
template approach to statistical machine translation in computational linguistics 30 4 dec 2004 pp 417 449 issn 0891 2017 16 f pedregosa et al scikit learn machine learning in python in journal of machine learning research 12 2011 pp 2825 2830 17 xin rong word2vec parameter learning explained in corr abs 1411 2738 2014 arxiv 1411 2738 url http arxiv org abs 1411 2738 18 fraser w smith and lars muckli nonstimulated early visual areas carry information about surrounding context in proceedings of the national academy of sciences 107 46 2010 pp 20099 20103 1 2 3 proceedings 1 kyunghyun cho et al learning phrase representations using rnn encoder decoder for statistical machine translation in proceedings of the 2014 conference on empirical methods in natural language processing emnlp doha qatar association for computational linguistics oct 2014 pp 1724 1734 2 rafal jozefowicz wojciech zaremba and ilya sutskever an empirical exploration of recurrent network architectures in proceedings of the 32nd international conference on international conference on machine learning volume 37 icml 15 lille france jmlr org 2015 pp 2342 2350 3 quoc le and tomas mikolov distributed representations of sentences and documents in international conference on machine learning 2014 pp 1188 1196 4 edward loper and steven bird nltk the natural language toolkit in proceedings of the acl 02 workshop on effective tools and methodologies for teaching natural language processing and computational linguistics volume 1 etmtnlp 02 philadelphia pennsylvania association for computational linguistics 2002 pp 63 70 5 tomas mikolov et al distributed representations of words and phrases and their compositionality in proceedings of the 26th international conference on neural information processing systems volume 2 nips 13 lake tahoe nevada curran associates inc 2013 pp 3111 3119 6 andriy mnih and geoffrey hinton a scalable hierarchical distributed language model in proceedings of the 21st international
conference on neural information processing systems nips 08 vancouver british columbia canada curran associates inc 2008 pp 1081 1088 isbn 978 1 6056 0 949 2 7 frederic morin and yoshua bengio hierarchical probabilistic neural network language model in aistats vol 5 citeseer 2005 pp 246 252 8 kishore papineni et al bleu a method for automatic evaluation of machine translation in proceedings of 40th annual meeting of the association for computational linguistics philadelphia pennsylvania usa association for computational linguistics july 2002 pp 311 318 1 3 code 1 3 1 week 1 s i file details 1 week1 assignment ipynb week1 assignment pdf week 1 self assignments 1 3 2 week 2 s i file details 1 week2 exercise shakespear play ipynb week 2 lecture 2 exercise | ai |
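The week 2 material (word to vectors) rests on comparing words by the angle between their context vectors; a minimal plain-Python illustration of that cosine-similarity idea. This is my own sketch, not code from the course notebooks.

```python
import math


def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length word vectors.

    Returns 1.0 for vectors pointing the same way, 0.0 for orthogonal
    (unrelated) vectors, and -1.0 for opposite directions.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

With real embeddings (e.g. word2vec vectors from the week 2 exercises), words appearing in similar contexts score close to 1.0 under this measure.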
|
Modern-Web-Development-with-ASP.NET-Core-3-Second-Edition | mastering asp net core 3 0 second edition a href https www packtpub com programming mastering asp net core 3 0 second edition utm source github utm medium repository utm campaign 9781789619768 img src https www packtpub com media catalog product cache bf3310292d6e1b4ca15aeea773aca35e 9 7 9781789619768 original 53 png alt mastering asp net core 3 0 second edition height 256px align right a this is the code repository for mastering asp net core 3 0 second edition https www packtpub com programming mastering asp net core 3 0 second edition utm source github utm medium repository utm campaign 9781789619768 published by packt an end to end guide covering the latest features of visual studio 2019 blazor and entity framework what is this book about asp net has been the de facto choice of web developers for a long time with asp net core 3 microsoft has made internal changes to the framework along with introducing new additions that will change the way you approach web development this second edition of mastering asp net core 3 has been thoroughly updated to help you make the most of the latest features in the framework right from grpc and conventions to a new chapter on blazor this book covers the following exciting features understand the new capabilities of asp net core 3 0 become well versed with how to configure asp net core to use it to its full potential create controllers and action methods and understand how to maintain state implement and validate forms and retrieve information from them improve productivity by enforcing reuse process forms and effective security measures delve into the new blazor development model deploy asp net core applications to new environments such as microsoft azure aws and docker if you feel this book is for you get your copy https www amazon com dp 1789619769 today a href https www packtpub com utm source github utm medium banner utm campaign githubbanner img src https raw 
githubusercontent com packtpublishing github master github png alt https www packtpub com border 5 a instructions and navigations all of the code is organized into folders for example chapter02 the code will look like the following iisexpress applicationurl http localhost 5000 following is what you need for this book if you are a developer with basic knowledge of asp net mvc and want to build powerful applications then this book is for you developers who want to explore the latest changes in asp net core 3 1 to build professional level applications will also find this book useful familiarity with c asp net core html and css is expected to get the most out of this book with the following software and hardware list you can run all code files present in the book chapter 1 20 software and hardware list no software required os required 1 visual studio 2019 windows mac os x and linux any 2 visual studio 2019 community edition windows mac os x and linux any we also provide a pdf file that has color images of the screenshots diagrams used in this book click here to download it https static packt cdn com downloads 9781789619768 colorimages pdf related products c 8 and net core 3 projects using azure second edition packt https www packtpub com in web development c 8 and net core 3 0 projects second edition utm source github utm medium repository utm campaign 9781789612080 amazon https www amazon com dp 178961208x hands on restful web services with asp net core 3 packt https www packtpub com in application development hands restful web services aspnet core utm source github utm medium repository utm campaign 9781789537611 amazon https www amazon com dp b07mxlqr34 get to know the author ricardo peres is a portuguese developer blogger and book author and is currently a team leader at dixons carphone he has over 20 years of experience in software development and his interests include distributed systems architectures design patterns and net development he won the microsoft mvp 
award in 2015 and has held this title up to 2020 he also authored entity framework core cookbook second edition and mastering asp net core 2 0 and was a technical reviewer for learning nhibernate 4 for packt he also contributed to syncfusion s succinctly collection with titles on net development ricardo maintains a blog development with a dot where he writes about technical issues you can catch up with him on twitter at rjperes75 suggestions and feedback click here https docs google com forms d e 1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib ow viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781789619768 https packt link free ebook 9781789619768 a p | front_end |
|
blockchain | blockchain discussion of blockchains for the blockchain community group and workshops | blockchain |
|
mws-restaurant-stage-2 | local development api server usage get restaurants curl http localhost 1337 restaurants get restaurants by id curl http localhost 1337 restaurants 3 architecture local server node js sails js contributors brandy lee camacho technical project manager mailto brandy camacho udacity com david harris web services lead mailto david harris udacity com omar albeik frontend engineer mailto omaralbeik gmail com getting started development local api server location of server server server depends on node js lts version v6 11 2 https nodejs org en download npm https www npmjs com get npm and sails js http sailsjs com please make sure you have these installed before proceeding forward great you are ready to proceed forward awesome let s start with running commands in your terminal known as command line interface cli install project dependancies install project dependancies npm i install sails js globally install sails global npm i sails g start the server start server node server you should now have access to your api server environment debug environment development debug port 1337 if you find a bug in the source code or a mistake in the documentation you can help us by submitting an issue to our waffle dashboard https waffle io udacity mwnd issues even better you can submit a pull request with a fix archival note this repository is deprecated therefore we are going to archive it however learners will be able to fork it to their personal github account but cannot submit prs to this repository if you have any issues or suggestions to make feel free to utilize the https knowledge udacity com forum to seek help on content specific issues submit a support ticket along with the link to your forked repository if learners are blocked for other reasons here are the links for the retail consumers https udacity zendesk com hc en us requests new and enterprise learners https udacityenterprise zendesk com hc en us requests new ticket form id 360000279131 | 
front_end |
|
IST | ist information security technologies | server |
|
altschool-holiday-challenge | altschool holiday challenge my 3rd semester cloud engineering altschool holiday challenge | cloud |
|
Awesome-Embodied-Agent-with-LLMs | awesome embodied agent with llms awesome https cdn rawgit com sindresorhus awesome d7305f38d29fed78fa85652e3a63e154dd8e8829 media badge svg https github com sindresorhus awesome this is a curated list of embodied ai or robot with large language models research which is maintained by haonan https github com zchoi watch this repository for the latest updates and feel free to raise pull requests if you find some interesting papers p align left a href https github com zchoi awesome embodied agent with llms stargazers img src https reporoster com stars zchoi awesome embodied agent with llms width 50 height 60 a p table of contents survey survey llms with rl llms with rl planning and manipulation or pretraining planning and manipulation or pretraining multi agent learning and coordination multi agent learning and coordination vision and language navigation vision and language navigation detection detection 3d grounding 3d grounding interactive embodied learning interactive embodied learning rearrangement rearrangement benchmark benchmark simulator simulator others others trend and imagination of llm based embodied agent p align center img src trend png width 54 img src genshin jpg width 43 span b figure 1 trend of embodied agent with llms sup 1 sup b span nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp nbsp span b figure 2 an envisioned agent society sup 2 sup b span p methods survey the rise and potential of large language model based agents a survey https arxiv org pdf 2309 07864 pdf arxiv 2023 br fudan nlp group mihoyo inc a survey on llm based autonomous agents https arxiv org pdf 2308 11432 pdf arxiv 2023 br gaoling school of artificial intelligence renmin university of china llms with rl language reward modulation for pretraining reinforcement learning https arxiv org pdf 2308 12270 pdf arxiv 2023 github https github com ademiadeniji lamp br ademi adeniji amber xie carmelo 
sferrazza younggyo seo stephen james pieter abbeel br sup 1 sup uc berkeley guiding pretraining in reinforcement learning with large language models https openreview net attachment id 63704lh4v5 name pdf icml 2023 br yuqing du sup 1 sup olivia watkins sup 1 sup zihan wang sup 2 sup cedric colas sup 3 4 sup trevor darrell sup 1 sup pieter abbeel sup 1 sup abhishek gupta sup 2 sup jacob andreas sup 3 sup br sup 1 sup department of electrical engineering and computer science university of california berkeley usa sup 2 sup university of washington seattle sup 3 sup massachusetts institute of technology computer science and artificial intelligence laboratory sup 4 sup inria flowers laboratory planning and manipulation or pretraining agent instructs large language models to be general zero shot reasoners https arxiv org pdf 2310 03710 pdf iclr 2024 submit br nicholas crispino sup 1 sup kyle montgomery sup 1 sup fankun zeng sup 1 sup dawn song sup 2 sup chenguang wang sup 1 sup br sup 1 sup washington university in st louis sup 2 sup uc berkeley camel communicative agents for mind exploration of large scale language model society https arxiv org pdf 2303 17760 pdf neurips 2023 github https link zhihu com target https 3a github com camel ai camel project page https www camel ai org br guohao li hasan abed al kader hammoud hani itani dmitrii khizbullin bernard ghanem br sup 1 sup king abdullah university of science and technology kaust language models as zero shot planners extracting actionable knowledge for embodied agents https arxiv org pdf 2201 07207 pdf arxiv 2022 github https github com huangwl18 language planner project page https wenlong page language planner br wenlong huang sup 1 sup pieter abbeel sup 1 sup deepak pathak sup 2 sup igor mordatch sup 3 sup br sup 1 sup uc berkeley sup 2 sup carnegie mellon university sup 3 sup google film following instructions in language with modular methods https openreview net pdf id qi4542y2s1d iclr 2022 github https github com 
soyeonm film project page https gary3410 github io tapa br so yeon min sup 1 sup devendra singh chaplot sup 2 sup pradeep ravikumar sup 1 sup yonatan bisk sup 1 sup ruslan salakhutdinov sup 1 sup br sup 1 sup carnegie mellon university sup 2 sup facebook ai research embodied task planning with large language models https arxiv org pdf 2307 01848 pdf arxiv 2023 github https github com gary3410 tapa project page https gary3410 github io tapa demo https huggingface co spaces xuxw98 tapa huggingface model https huggingface co gary3410 pretrain lit llama br zhenyu wu sup 1 sup ziwei wang sup 2 3 sup xiuwei xu sup 2 3 sup jiwen lu sup 2 3 sup haibin yan sup 1 sup br sup 1 sup school of automation beijing university of posts and telecommunications sup 2 sup department of automation tsinghua university sup 3 sup beijing national research center for information science and technology spring gpt 4 out performs rl algorithms by studying papers and reasoning https arxiv org pdf 2305 15486 pdf arxiv 2023 br yue wu sup 1 4 sup shrimai prabhumoye sup 2 sup so yeon min sup 1 sup yonatan bisk sup 1 sup ruslan salakhutdinov sup 1 sup amos azaria sup 3 sup tom mitchell sup 1 sup yuanzhi li sup 1 4 sup br sup 1 sup carnegie mellon university sup 2 sup nvidia sup 3 sup ariel university sup 4 sup microsoft research poni potential functions for objectgoal navigation with interaction free learning https openaccess thecvf com content cvpr2022 papers ramakrishnan poni potential functions for objectgoal navigation with interaction free learning cvpr 2022 paper pdf cvpr 2022 oral project page https vision cs utexas edu projects poni github https github com srama2512 poni br santhosh kumar ramakrishnan sup 1 2 sup devendra singh chaplot sup 1 sup ziad al halah sup 2 sup jitendra malik sup 1 3 sup kristen grauman sup 1 2 sup br sup 1 sup facebook ai research sup 2 sup ut austin sup 3 sup uc berkeley moving forward by moving backward embedding action impact over action semantics https openreview 
net pdf id vmjctnuswi iclr 2023 project page https prior allenai org projects action adaptive policy github https github com kuohaozeng aap br kuo hao zeng sup 1 sup luca weihs sup 2 sup roozbeh mottaghi sup 1 sup ali farhadi sup 1 sup br sup 1 sup paul g allen school of computer science engineering university of washington sup 2 sup prior allen institute for ai modeling dynamic environments with scene graph memory https openreview net attachment id niuxs1cai4 name pdf icml 2023 br andrey kurenkov sup 1 sup michael lingelbach sup 1 sup tanmay agarwal sup 1 sup emily jin sup 1 sup chengshu li sup 1 sup ruohan zhang sup 1 sup li fei fei sup 1 sup jiajun wu sup 1 sup silvio savarese sup 2 sup roberto mart n mart n sup 3 sup br sup 1 sup department of computer science stanford university sup 2 sup salesforce ai research sup 3 sup department of computer science university of texas at austin reasoning with language model is planning with world model https arxiv org pdf 2305 14992 pdf arxiv 2023 br shibo hao sup sup yi gu sup sup haodi ma sup sup joshua jiahua hong sup sup zhen wang sup sup daisy zhe wang sup sup zhiting hu sup sup br sup sup uc san diego sup sup university of florida sup sup mohamed bin zayed university of artificial intelligence do as i can not as i say grounding language in robotic affordances https arxiv org pdf 2204 01691 pdf arxiv 2022 br robotics at google everyday robots do embodied agents dream of pixelated sheep embodied decision making using language guided world modelling https openreview net attachment id rm5qi57c5i name pdf icml 2023 br kolby nottingham sup 1 sup prithviraj ammanabrolu sup 2 sup alane suhr sup 2 sup yejin choi sup 3 2 sup hannaneh hajishirzi sup 3 2 sup sameer singh sup 1 2 sup roy fox sup 1 sup br sup 1 sup department of computer science university of california irvine sup 2 sup allen institute for artificial intelligence sup 3 sup paul g allen school of computer science context aware planning and environment aware memory 
for instruction following embodied agents https arxiv org pdf 2308 07241v2 pdf iccv 2023 br byeonghwi kim jinyeon kim yuyeong kim sup 1 sup cheolhong min jonghyun choi sup sup br yonsei university sup 1 sup gwangju institute of science and technology inner monologue embodied reasoning through planning with language models https openreview net pdf id 3r3pz5i0tye corl 2022 project page https innermonologue github io br robotics at google language models meet world models embodied experiences enhance language models https arxiv org pdf 2305 10626 pdf arxiv 2023 https img shields io github stars szxiangjn world model for language model style social label code stars https github com szxiangjn world model for language model twitter https twitter com szxiangjn status 1659399771126370304 br jiannan xiang sup sup tianhua tao sup sup yi gu sup sup tianmin shu sup sup zirui wang sup sup zichao yang sup sup zhiting hu sup sup br sup sup uc san diego sup sup uiuc sup sup mit sup sup carnegie mellon university alphablock embodied finetuning for vision language reasoning in robot manipulation https arxiv org pdf 2305 18898 pdf arxiv 2023 video https www youtube com watch v ayazid1 qqk br chuhao jin sup 1 sup wenhui tan sup 1 sup jiange yang sup 2 sup bei liu3 sup sup ruihua song sup 1 sup limin wang sup 2 sup jianlong fu sup 3 sup br sup 1 sup renmin university of china sup 2 sup nanjing university sup 3 sup microsoft research a persistent spatial semantic representation for high level natural language instruction execution https openreview net pdf id negdzeyjcka corl 2021 https img shields io github stars valtsblukis hlsm style social label code stars https github com valtsblukis hlsm project page https hlsm alfred github io poster https openreview net attachment id negdzeyjcka name poster br valts blukis sup 1 2 sup chris paxton sup 1 sup dieter fox sup 1 3 sup animesh garg sup 1 4 sup yoav artzi sup 2 sup br sup 1 sup nvidia sup 2 sup cornell university sup 3 sup university of 
washington sup 4 sup university of toronto vector institute llm planner few shot grounded planning for embodied agents with large language models https arxiv org pdf 2212 04088 pdf iccv 2023 project page https dki lab github io llm planner github https github com osu nlp group llm planner br chan hee song sup 1 sup jiaman wu sup 1 sup clayton washington sup 1 sup brian m sadler sup 2 sup wei lun chao sup 1 sup yu su sup 1 sup br sup 1 sup the ohio state university sup 2 sup devcom arl code as policies language model programs for embodied control https arxiv org pdf 2209 07753 arxiv 2023 project page https code as policies github io github https code as policies github io blog https ai googleblog com 2022 11 robots that write their own code html colab https colab research google com drive 124te4tsgyyrvduzedclufyvwcc2qbbre br jacky liang wenlong huang fei xia peng xu karol hausman brian ichter pete florence andy zeng br robotics at google 3d llm injecting the 3d world into large language models https arxiv org abs 2307 12981 arxiv 2023 https img shields io github stars umass foundation model 3d llm style social label code stars https github com umass foundation model 3d llm br sup 1 sup yining hong sup 2 sup haoyu zhen sup 3 sup peihao chen sup 4 sup shuhong zheng sup 5 sup yilun du sup 6 sup zhenfang chen sup 6 7 sup chuang gan br sup 1 sup ucla sup 2 sup sjtu sup 3 sup scut sup 4 sup uiuc sup 5 sup mit sup 6 sup mit ibm watson ai lab sup 7 sup umass amherst voxposer composable 3d value maps for robotic manipulation with language models https arxiv org abs 2307 05973 arxiv 2023 project page https voxposer github io online demo https www youtube com watch v yvn4er05a3m br wenlong huang sup 1 sup chen wang sup 1 sup ruohan zhang sup 1 sup yunzhu li sup 1 2 sup jiajun wu sup 1 sup li fei fei sup 1 sup br sup 1 sup stanford university sup 2 sup university of illinois urbana champaign palm e an embodied multimodal language model https arxiv org pdf 2303 03378 pdf icml
2023 project page https palm e github io br sup 1 sup robotics at google sup 2 sup tu berlin 3google research large language models as commonsense knowledge for large scale task planning https arxiv org pdf 2305 14078 pdf arxiv 2023 br zirui zhao wee sun lee david hsu br school of computing national university of singapore multi agent learning and coordination demonstration free autonomous reinforcement learning via implicit and bidirectional curriculum https openreview net attachment id bmo1vlkq7d name pdf icml 2023 br jigang kim sup 1 2 sup daesol cho sup 1 2 sup h jin kim sup 1 3 sup br sup 1 sup seoul national university sup 2 sup artificial intelligence institute of seoul national university aiis sup 3 sup automation and systems research institute asri br note this paper mainly focuses on reinforcement learning for embodied ai adaptive coordination in social embodied rearrangement https openreview net attachment id byesw113sz name pdf icml 2023 br andrew szot sup 1 2 sup unnat jain sup 1 sup dhruv batra sup 1 2 sup zsolt kira sup 2 sup ruta desai sup 1 sup akshara rai sup 1 sup br sup 1 sup meta ai sup 2 sup georgia institute of technology vision and language navigation esc exploration with soft commonsense constraints for zero shot object navigation https openreview net attachment id gydfm0zexy name pdf icml 2023 br kaiwen zhou sup 1 sup kaizhi zheng sup 1 sup connor pryor sup 1 sup yilin shen sup 2 sup hongxia jin sup 2 sup lise getoor sup 1 sup xin eric wang sup 1 sup sup 1 sup university of california santa cruz sup 2 sup samsung research america navgpt explicit reasoning in vision and language navigation with large language models https arxiv org pdf 2305 16986 pdf arxiv 2023 br gengze zhou sup 1 sup yicong hong sup 2 sup qi wu sup 1 sup br sup 1 sup the university of adelaide sup 2 sup the australian national university instruct2act mapping multi modality instructions to robotic actions with large language model https arxiv org pdf 2305 11176 pdf arxiv 
2023 github https github com opengvlab instruct2act siyuan huang sup 1 2 sup zhengkai jiang sup 4 sup hao dong sup 3 sup yu qiao sup 2 sup peng gao sup 2 sup hongsheng li sup 5 sup br sup 1 sup shanghai jiao tong university sup 2 sup shanghai ai laboratory sup 3 sup cfcs school of cs pku sup 4 sup university of chinese academy of sciences sup 5 sup the chinese university of hong kong detection detgpt detect what you need via reasoning https arxiv org pdf 2305 14167 pdf arxiv 2023 br renjie pi sup 1 sup jiahui gao sup 2 sup shizhe diao sup 1 sup rui pan sup 1 sup hanze dong sup 1 sup jipeng zhang sup 1 sup lewei yao sup 1 sup jianhua han sup 3 sup hang xu sup 2 sup lingpeng kong sup 2 sup tong zhang sup 1 sup br sup 1 sup the hong kong university of science and technology sup 2 sup the university of hong kong 3shanghai jiao tong university 3d grounding llm grounder open vocabulary 3d visual grounding with large language model as an agent https arxiv org pdf 2309 12311 pdf arxiv 2023 br jianing yang sup 1 sup xuweiyi chen sup 1 sup shengyi qian sup 1 sup nikhil madaan madhavan iyengar sup 1 sup david f fouhey sup 1 2 sup joyce chai sup 1 sup br sup 1 sup university of michigan sup 2 sup new york university interactive embodied learning grounding large language models in interactive environments with online reinforcement learning https openreview net attachment id fexm8gbxwu name pdf icml 2023 br thomas carta sup 1 sup clement romac sup 1 2 sup thomas wolf sup 2 sup sylvain lamprier sup 3 sup olivier sigaud sup 4 sup pierre yves oudeyer sup 1 sup br sup 1 sup inria flowers university of bordeaux sup 2 sup hugging face sup 3 sup univ angers leria sfr mathstic f 49000 sup 4 sup sorbonne university isir learning affordance landscapes for interaction exploration in 3d environments https arxiv org pdf 2008 09241 pdf neurips 2020 https img shields io github stars facebookresearch interaction exploration style social label code stars https github com facebookresearch 
interaction exploration project page https vision cs utexas edu projects interaction exploration br tushar nagarajan kristen grauman br ut austin and facebook ai research embodied question answering in photorealistic environments with point cloud perception https arxiv org abs 1904 03461 cvpr 2019 oral slides https embodiedqa org slides eqa matterport slides pdf br erik wijmans sup 1 sup samyak datta sup 1 sup oleksandr maksymets sup 2 sup abhishek das sup 1 sup georgia gkioxari sup 2 sup stefan lee sup 1 sup irfan essa sup 1 sup devi parikh sup 1 2 sup dhruv batra sup 1 2 sup br sup 1 sup georgia institute of technology sup 2 sup facebook ai research multi target embodied question answering https openaccess thecvf com content cvpr 2019 papers yu multi target embodied question answering cvpr 2019 paper pdf cvpr 2019 br licheng yu sup 1 sup xinlei chen sup 3 sup georgia gkioxari sup 3 sup mohit bansal sup 1 sup tamara l berg sup 1 3 sup dhruv batra sup 2 3 sup br sup 1 sup university of north carolina at chapel hill sup 2 sup georgia tech sup 3 sup facebook ai neural modular control for embodied question answering https arxiv org abs 1810 11181 corl 2018 spotlight project page https embodiedqa org github https github com facebookresearch embodiedqa br abhishek das sup 1 sup georgia gkioxari sup 2 sup stefan lee sup 1 sup devi parikh sup 1 2 sup dhruv batra sup 1 2 sup br sup 1 sup georgia institute of technology sup 2 sup facebook ai research embodied question answering https embodiedqa org paper pdf cvpr 2018 oral project page https embodiedqa org github https github com facebookresearch embodiedqa br abhishek das sup 1 sup samyak datta sup 1 sup georgia gkioxari sup 2 sup stefan lee sup 1 sup devi parikh sup 2 1 sup dhruv batra sup 2 sup br sup 1 sup georgia institute of technology sup 2 sup facebook ai research rearrangement a simple approach for visual room rearrangement 3d mapping and semantic search https openreview net pdf id
fgg6vhp3w9w iclr 2023 br sup 1 sup brandon trabucco sup 2 sup gunnar a sigurdsson sup 2 sup robinson piramuthu sup 2 3 sup gaurav s sukhatme sup 1 sup ruslan salakhutdinov br sup 1 sup cmu sup 2 sup amazon alexa ai sup 3 sup university of southern california benchmark alfworld aligning text and embodied environments for interactive learning https openreview net pdf id 0iox0yccdtn iclr 2021 project page https alfworld github io github https github com alfworld alfworld br mohit shridhar sup sup xingdi yuan sup sup marc alexandre c t sup sup yonatan bisk sup sup adam trischler sup sup matthew hausknecht sup sup br sup sup university of washington sup sup microsoft research montr al sup sup carnegie mellon university sup sup microsoft research alfred a benchmark for interpreting grounded instructions for everyday tasks https arxiv org pdf 1912 01734 pdf cvpr 2020 project page https askforalfred com github https github com askforalfred alfred br mohit shridhar sup 1 sup jesse thomason sup 1 sup daniel gordon sup 1 sup yonatan bisk sup 1 2 3 sup winson han sup 3 sup roozbeh mottaghi sup 1 3 sup luke zettlemoyer sup 1 sup dieter fox sup 1 4 sup br sup 1 sup paul g allen school of computer sci eng univ of washington sup 2 sup language technologies institute carnegie mellon university sup 3 sup allen institute for ai sup 4 sup nvidia br vima robot manipulation with multimodal prompts https vimalabs github io assets vima paper pdf icml 2023 project page https vimalabs github io github https github com vimalabs vima vima bench https github com vimalabs vimabench br yunfan jiang sup 1 sup agrim gupta sup 1 sup zichen zhang sup 2 sup guanzhi wang sup 3 4 sup yongqiang dou sup 5 sup yanjun chen sup 1 sup li fei fei sup 1 sup anima anandkumar sup 3 4 sup yuke zhu sup 3 6 sup linxi fan sup 3 sup br sqa3d situated question answering in 3d scenes https arxiv org pdf 2210 07474 pdf iclr 2023 project page https sqa3d github io slides http web cs ucla edu xm file sqa3d iclr23 slides 
pdf github https github com silongyong sqa3d br xiaojian ma sup 2 sup silong yong sup 1 3 sup zilong zheng sup 1 sup qing li sup 1 sup yitao liang sup 1 4 sup song chun zhu sup 1 2 3 4 sup siyuan huang sup 1 sup br sup 1 sup beijing institute for general artificial intelligence bigai sup 2 sup ucla sup 3 sup tsinghua university sup 4 sup peking university iqa visual question answering in interactive environments https openaccess thecvf com content cvpr 2018 papers gordon iqa visual question cvpr 2018 paper pdf cvpr 2018 github https github com danielgordon10 thor iqa cvpr 2018 demo video youtube https www youtube com watch v pxd3c 1jr98 feature youtu be br daniel gordon sup 1 sup aniruddha kembhavi sup 2 sup mohammad rastegari sup 2 4 sup joseph redmon sup 1 sup dieter fox sup 1 3 sup ali farhadi sup 1 2 sup br sup 1 sup paul g allen school of computer science university of washington sup 2 sup allen institute for artificial intelligence sup 3 sup nvidia sup 4 sup xnor ai env qa a video question answering benchmark for comprehensive understanding of dynamic environments https openaccess thecvf com content iccv2021 papers gao env qa a video question answering benchmark for comprehensive understanding of iccv 2021 paper pdf iccv 2021 project page https envqa github io overview github https github com maybelu9 env qa br difei gao sup 1 2 sup ruiping wang sup 1 2 3 sup ziyi bai sup 1 2 sup xilin chen sup 1 sup br sup 1 sup key laboratory of intelligent information processing of chinese academy of sciences cas institute of computing technology cas sup 2 sup university of chinese academy of sciences sup 3 sup beijing academy of artificial intelligence simulator ai2 thor an interactive 3d environment for visual ai https arxiv org abs 1712 05474 arxiv 2022 project page http ai2thor allenai org github https github com allenai ai2thor br allen institute for ai university of washington stanford university carnegie mellon university br igibson a simulation environment for
interactive tasks in large realistic scenes https ieeexplore ieee org document 9636667 iros 2021 project page https svl stanford edu igibson github https link zhihu com target https 3a github com stanfordvl igibson releases tag 1 0 0 br bokui shen fei xia et al br habitat a platform for embodied ai research https openaccess thecvf com content iccv 2019 papers savva habitat a platform for embodied ai research iccv 2019 paper pdf iccv 2019 project page https aihabitat org habitat sim https github com facebookresearch habitat sim habitat lab https github com facebookresearch habitat lab habitat challenge https github com facebookresearch habitat challenge br facebook ai research facebook reality labs georgia institute of technology simon fraser university intel labs uc berkeley br habitat 2 0 training home assistants to rearrange their habitat https scontent fhkg4 2 fna fbcdn net v t39 8562 6 10000000 254710466627524 1145871437139214759 n pdf nc cat 106 ccb 1 7 nc sid ad8a9d nc ohc ui4k7s8ek sax8dltw0 nc ht scontent fhkg4 2 fna oh 00 afcxugrrxo 0g2trcuecpu jeif0zwkxggpippuhhk3xcw oe 64f38ad0 neurips 2021 project page https research facebook com publications habitat 2 0 training home assistants to rearrange their habitat text habitat 202 0 3a 20training 20home 20assistants 20to 20rearrange 20their ai 20stack 20 e2 80 93 20data 2c 20simulation 2c 20and 20benchmark 20tasks br facebook ai research georgia tech intel research simon fraser university uc berkeley others least to most prompting enables complex reasoning in large language models https arxiv org pdf 2205 10625 iclr 2023 br google research brain team react synergizing reasoning and acting in language models https arxiv org pdf 2210 03629 pdf iclr 2023 https img shields io github stars ysymyth react style social label code stars https github com ysymyth react br shunyu yao sup 1 sup jeffrey zhao sup 2 sup dian yu sup 2 sup nan du sup 2 sup izhak shafran sup 2 sup karthik narasimhan sup 1 sup yuan cao sup 2 sup br 
sup 1 sup department of computer science princeton university sup 2 sup google research brain team algorithm of thoughts enhancing exploration of ideas in large language models https arxiv org pdf 2308 10379 pdf arxiv 2023 br virginia tech microsoft graph of thoughts solving elaborate problems with large language models https arxiv org abs 2308 09687 pdf arxiv 2023 br eth zurich cledar warsaw university of technology tree of thoughts deliberate problem solving with large language models https arxiv org pdf 2305 10601 pdf arxiv 2023 br shunyu yao sup 1 sup dian yu sup 2 sup jeffrey zhao sup 2 sup izhak shafran sup 2 sup thomas l griffiths sup 1 sup yuan cao sup 2 sup karthik narasimhan sup 1 sup br sup 1 sup princeton university sup 2 sup google deepmind chain of thought prompting elicits reasoning in large language models https arxiv org pdf 2201 11903 pdf neurips 2022 br jason wei xuezhi wang dale schuurmans maarten bosma brian ichter fei xia ed h chi quoc v le denny zhou br google research brain team minedojo building open ended embodied agents with internet scale knowledge https proceedings neurips cc paper files paper 2022 file 74a67268c5cc5910f64938cac4526a90 paper datasets and benchmarks pdf neurips 2022 github https github com minedojo minedojo https img shields io github stars minedojo minedojo style social label code stars https github com minedojo minedojo project page https minedojo org knowledge base https minedojo org knowledge base html br linxi fan sup 1 sup guanzhi wang sup 2 sup yunfan jiang sup 3 sup ajay mandlekar sup 1 sup yuncong yang sup 4 sup haoyi zhu sup 5 sup andrew tang sup 4 sup de an huang sup 1 sup yuke zhu sup 1 6 sup anima anandkumar sup 1 2 sup br sup 1 sup nvidia sup 2 sup caltech sup 3 sup stanford sup 4 sup columbia sup 5 sup sjtu sup 6 sup ut austin distilling internet scale vision language models into embodied agents https openreview net pdf id 6vvkgnepp7 icml 2023 br theodore sumers sup 1 sup kenneth marino sup 2 sup arun 
ahuja sup 2 sup rob fergus sup 2 sup ishita dasgupta sup 2 sup br lisa reasoning segmentation via large language model https arxiv org pdf 2308 00692 pdf arxiv 2023 github https github com dvlab research lisa huggingface models https huggingface co xinlai dataset https drive google com drive folders 125mewyg5ao6tz3zdj 1 e3n04lgvelqy usp sharing online demo http 103 170 5 190 7860 br xin lai sup 1 sup zhuotao tian sup 2 sup yukang chen sup 1 sup yanwei li sup 1 sup yuhui yuan sup 3 sup shu liu sup 2 sup jiaya jia sup 1 2 sup br sup 1 sup the chinese university of hong kong sup 2 sup smartmore sup 3 sup msra br acknowledge 1 trend pic from this repo https github com paitesanshi llm agent survey tree main br 2 figure from this paper the rise and potential of large language model based agents a survey https arxiv org pdf 2309 07864 pdf
HeliOS | picture source media prefers color scheme dark srcset extras helios og logo dark png source media prefers color scheme light srcset extras helios og logo light png img alt helios logo src extras helios og logo light png picture license gpl version 2 https img shields io badge license gplv2 blue svg https github com heliosproj helios blob master license md github last commit https img shields io github last commit heliosproj helios github release latest by date https img shields io github v release heliosproj helios platformio registry https badges registry platformio org packages heliosproj library helios svg https registry platformio org libraries heliosproj helios arduino library badge https www ardu badge com badge helios svg https www ardu badge com helios github stars https img shields io github stars heliosproj helios style social github watchers https img shields io github watchers heliosproj helios style social rocket overview helios is an open source embedded operating system that is free for everyone to use while called an operating system helios is a multitasking kernel for use in embedded applications its rich fully documented api allows the user to control every aspect of the system and access kernel services syscalls for task process management scheduler management inter process communication memory management device management i e device drivers and more while maintaining a tiny footprint for a broad range of low power embedded devices helios is also easily customized to fit the user s specific needs through a single header file src config h helios supports two multitasking models that can be leveraged concurrently within the same application the first multitasking model is event driven when a task is placed in the waiting state the task will only respond to task events helios supports two types of task events the first is direct to task notifications which allow one task to send a notification to another task in this scenario the helios 
scheduler will wake the recipient task and schedule it for execution after the recipient task clears the direct to task notification the recipient task will return to waiting until another notification is received the second type of task event is timer based task timers can be configured to tell helios to schedule the task to run every so many ticks typically milliseconds though task timers should not be confused with application timers or simply timers as helios supports both the second model for multitasking is a conventional cooperative model in this model cooperative tasks are always scheduled to run unless suspended additionally the cooperative model in helios contains a unique scheduler feature that builds on the traditional cooperative model in most cooperatively scheduled multitasking models a simple round robin approach is used i e each task is executed consecutively however the helios scheduler uses a runtime balanced algorithm for scheduling cooperative tasks in other words tasks that consume more runtime are deprioritized i e executed less frequently in favor of tasks that consume less runtime this design prevents long running tasks from monopolizing the system s execution time event driven and cooperatively scheduled tasks run together seamlessly although event driven tasks always receive execution priority over cooperatively scheduled tasks one important aspect of multitasking in helios is that it does not rely on context switching this reduces the need for the user to manage access to shared resources in a thread safe way using mutexes and semaphores this also eliminates the need for the port or portability code required to save the context during a context switch as a result the user can focus his or her development effort on their specific application without having to contend with concurrent access to shared resources like everything in life there are drawbacks while a conventional cooperative model spares the user from contending with concurrent
access to shared resources if a task does not relinquish control to the helios scheduler it will monopolize all available runtime this also means that the helios scheduler does not enforce hard timing i e real time the helios scheduler enforces soft timing so if a waiting task timer has elapsed the scheduler will prioritize the task but may miss the deadline helios also provides services for three inter process communication models the first as discussed previously is direct to task notifications direct to task notifications are an efficient communication channel between tasks that prevent a task from consuming runtime when there is nothing for the task to process the second model is message queues message queues can be created at runtime and can be shared among any number of tasks queues are highly flexible fifo communication channels that require very little code to implement the third model is stream buffers stream buffers are very much like message queues with one important difference while queues operate on multi byte messages stream buffers operate similarly on single byte streams finally while technically not one of helios s models for inter process communication helios supports task parameters that can be leveraged for rudimentary inter process communication if so desired the helios kernel includes built in memory management that improves the safety margin of dynamically allocated memory while helios s dynamic memory allocation allocates heap memory the heap in helios is not a true heap helios uses a private heap that is implemented as static memory allocated at compile time helios does not use the standard library malloc and free functions and it is recommended that the user also avoid those functions in favor of helios s memory management syscalls helios also maintains a separate memory region for kernel objects which reduces the risk that memory access in the user s application would corrupt critical kernel objects as of kernel 0 4 0 helios also supports 
sophisticated memory defragmentation and consistency checking to ensure memory is utilized efficiently and with a high degree of integrity helios also supports a kernel mode device driver model device drivers for virtually any feature or peripheral can be easily developed using the provided device driver template while device drivers are not needed in most applications when the microcontroller s mmu or mpu is enabled it may not be possible to access memory mapped registers and i o from the user s code while implementation of the arm mmu and mpu in helios is forthcoming device driver support had to be added to helios first information about the device driver system calls can be found in the helios developer s guide doc helios developers guide pdf a device driver template can be found here drivers template driver c and here drivers template driver h helios is built to be robust helios 0 3 0 and later has undergone static analysis testing using a commercially licensed static analysis tool as well as misra c 2012 checks while helios is not certified for nor should be used in full or in part in any safety critical application where a risk to life exists user s can be confident they are building their embedded application on a robust embedded operating system lastly for platformio and arduino users helios is easily added to their embedded application the latest release of helios is available directly through the platformio registry https registry platformio org libraries heliosproj helios and the arduino library manager https www arduino cc reference en libraries helios for users of other embedded platforms and or tool chains simply download the latest release https github com heliosproj helios releases of helios from github and add the sources to your project loudspeaker what s new the helios 0 4 x series kernel was recently released which supersedes all prior kernel versions the syscall api and internals have undergone significant development rendering applications 
built on earlier kernels incompatible with 0 4 x the key change that will impact compatibility is the introduction of a consistent return type for all syscalls this provides a better mechanism for error propagation and a consistent interface for handling errors for example prior to kernel 0 4 0 a task would be created as follows

```c
xTask task = xTaskCreate("TASKMAIN", task_main, NULL);

if (task) {
  /* use the task here */
}
```

in this example the user application would only know if an error or exception occurred by checking if task was null in kernel 0 4 0 all syscalls have a standard return type xreturn that can either be returnok or returnerror see the helios developer s guide doc helios developers guide pdf for more information about xreturn thus in kernel 0 4 0 the same process of creating a task is done as follows

```c
xTask task;

if (ERROR(xTaskCreate(&task, (const xByte *) "TASKMAIN", task_main, null))) {
  xSystemHalt();
}

/* use the task here */
```

in this manner the application can check all syscalls for success or failure even when a syscall does not modify or set the arguments it is passed for the very latest on what development is occurring please check out the helios trello board https trello com b xnkdpugr helios anyone wanting to contribute to helios should refer to the contributing section computer mouse helios around the web helios is a tiny embedded os designed for arduino boards https www cnx software com 2020 08 14 helios is a tiny embedded os designed for arduino boards helios for arduino https linuxhint com linux on arduino newly launched embedded os helios brings simple multitasking to arduino microcontrollers https www hackster io news newly launched embedded os helios brings simple multitasking to arduino microcontrollers 11f6b137b75c new helios an embedded os for arduino boards https iot industrial devices com new helios an embedded os for arduino boards helios is a small and simple embedded operating system for arduino https twitter com arduino status 1293910675312357376 arduino operating system best
options of 2021 https all3dp com 2 best arduino operating system helios is a tiny embedded os designed for arduino boards https news knowledia com us en articles helios is a tiny embedded os designed for arduino boards f35f44fe6c88759fa13d8781ce09ac985b2fdd3a dart getting started documentation the helios syscall api is documented in the helios developer s guide doc helios developers guide pdf if you are in need of support please refer to the contributing section on how to submit an issue arduino ide using the helios embedded operating system in your arduino sketch could not be easier open the arduino ide and use the library manager to search for and install helios the folks at arduino have documented the steps to install a library here https docs arduino cc software ide v1 tutorials installing libraries once installed you can experiment with the example sketches that are included with helios and can be found under file examples helios in the arduino ide platformio ide helios is also available directly through the platformio registry and can be added to your project either from the platformio gui or cli the steps for which are described in the platformio documentation here https docs platformio org en latest librarymanager index html like the arduino ide several examples are included with helios for you to experiment with arm cortex m if more advanced features are desired helios also has built in support for cmsis on arm cortex m microcontrollers and can be easily integrated into your keil uvision or vendor ide project by 1 downloading the current release here https github com heliosproj helios releases and unpacking the zip file into your project s source directory 2 downloading the cmsis headers and vendor s hal bsp headers and placing them into your project s include directory 3 adding the vendor s hal bsp header to the helios src port h header directly below the elif defined cmsis arch cortexm statement i e line 52 4 setting system core clock frequency and 
system core clock prescaler in helios s src config h header to match the cortex m s core clock frequency and your desired prescaler 5 add the dcmsis arch cortexm compiler directive to your project s build configuration espressif esp32 please note that helios is not supported on the espressif esp32 microcontroller when using the esp32 arduino core this is because the esp32 arduino core is built on freertos and helios and freertos cannot coexist in the same application to target esp32 helios must be built using espressif s sdk without the esp32 arduino core the files src port h and src port c will also need to be updated with the necessary code to control interrupts and access the microcontroller s tick timer espressif s sdk can be found here https idf espressif com man teacher example many embedded applications implement what is called a super loop a super loop is a loop that never exits i e while 1 and contains most of the code executed by the microcontroller the problem with super loops is they can grow out of control and become difficult to manage this becomes especially challenging given the relatively few options for controlling timing e g delay unfortunately the use of delay to control timing also means the microcontroller is unable to perform other operations at least without the help of an isr until delay returns below is an example of how easy it is to leverage the event driven multitasking capabilities within helios to implement the arduino blink example arduino blink example below is the blink example sketch included with the arduino platform

```c
/* the setup function runs once when you press reset or power the board */
void setup() {
  /* initialize digital pin LED_BUILTIN as an output */
  pinMode(LED_BUILTIN, OUTPUT);
}

/* the loop function runs over and over again forever */
void loop() {
  digitalWrite(LED_BUILTIN, HIGH); /* turn the LED on (HIGH is the voltage level) */
  delay(1000);                     /* wait for a second */
  digitalWrite(LED_BUILTIN, LOW);  /* turn the LED off by making the voltage LOW */
  delay(1000);                     /* wait for a second */
}
```
helios blink example below is the arduino blink example sketch implemented using helios in this example a helios task which alternates the microcontroller s gpio pin state between high and low is added in a wait state and a task timer is set instructing helios s scheduler to execute the task every 1 000 ticks milliseconds on many microcontrollers c include helios h define the task s main function this function is the entry point for the task when executed by the scheduler the task parameter contains the task itself and may be used to perform operations against the task such as suspending it with xtasksuspend task the parm parameter points to memory containing the task parameter s this memory can be allocated by xmemalloc if needed the task parameter must be dereferenced inside the task s main function a convenient c macro deref taskparm is available to simplify the task of dereferencing the task parameter void blinktask main xtask task xtaskparm parm dereference the task parameter and store its value in the local integer ledstate this integer contains the state of the led i e 1 on or 0 off global variables are discouraged in favor of task parameters when sharing or persisting a value is required int ledstate deref taskparm int parm once inside the task s main function do not call functions like arduino s delay helios tasks should implement a state machine model like the one used here to ensure control is returned to the scheduler as quickly as possible so other tasks may run if ledstate digitalwrite led builtin high ledstate 0 else digitalwrite led builtin low ledstate 1 because the value of ledstate has changed the task parameter must be dereferenced again so that it may be updated the task s main function will receive the same value the next time the task is executed by the scheduler task parameters are also the preferred method for sharing message queues stream buffers etc between tasks deref taskparm int parm ledstate return void setup int ledstate 0 pinmode 
led builtin output call xsysteminit to initialize memory and call initialization functions in the port layer the xsysteminit syscall must be made prior to making any other syscall the error and ok c macros are a concise method for checking the return value of the xsysteminit syscall a consistent return type xreturn was introduced in kernel 0 4 0 if the syscall fails xsystemhalt will be called to halt the system if error xsysteminit xsystemhalt declare the task which will be used inside of the arduino setup function to configure the task prior to handing over control to the helios scheduler xtask blink call the xtaskcreate syscall to create the task the xtaskcreate syscall prototype and parameters are as follows prototype xreturn xtaskcreate xtask task const xbyte name void callback xtask task xtaskparm parm xtaskparm taskparameter parameters task a reference to the task to pass the task by reference the address of operator must be used e g blink name a reference to the first byte of a byte array containing the ascii name of the task the task name is not a null terminated c char array sometimes called a string the length of the byte array must be precisely config task name bytes default is 8 bytes if the task name is shorter then it must be padded to meet the precise length requirement to avoid compiler warnings when using a literal e g blinktsk the argument must be cast as const xbyte callback a reference to the task s main function the task s main function s prototype must be as follows the name of the task s main function does not need to match the name given to the task through the name parameter void taskname xtask task xtaskparm parm if the syscall fails xsystemhalt will be called to halt the system if error xtaskcreate blink const xbyte blinktsk blinktask main ledstate xsystemhalt because the blink task will be an event driven task i e scheduled for execution only when a task event occurs the task must be placed in the waiting state by xtaskwait there are two 
types of task events direct to task notifications and task timers in this example we will be using a task timer if the syscall fails xsystemhalt will be called to halt the system if error xtaskwait blink xsystemhalt in order to use the task timer the task timer period must be set to a positive non zero value in this example we are setting the task timer to 1 000 ticks this way the helios scheduler will schedule the blink task for execution every 1 000 ticks the length of a tick is platform and or architecture dependent though on most platforms a tick will occur every one millisecond if the syscall fails xsystemhalt will be called to halt the system if error xtaskchangeperiod blink 1000 xsystemhalt now that the tasks are created and configured the way we want control must be passed to the helios scheduler once this is done the only way to return control back to the arduino setup function is by calling xtasksuspendall which will cause the scheduler to quit if the syscall fails xsystemhalt will be called to halt the system if error xtaskstartscheduler xsystemhalt while not required it is advised to call xsystemhalt at the end of the arduino setup function in this way if the scheduler is forced to quit the application will halt and no further code will be executed xsystemhalt void loop the arduino loop function is not used in a helios application and must remain empty package releases all releases including the latest release can be found here https github com heliosproj helios releases 0 4 1 fixed platformio library json file and updated readme 0 4 0 consistent return type for all syscalls additional memory consistency checking new helios developer s guide new code documentation and many more changes and improvements 0 3 5 several new features including device drivers stream buffers task watchdog timer improved memory defragmentation and many more including improvements to source code and documentation 0 3 4 corrected blink example in readme and in examples fixed
esp8266 support added queue locking and other improvements 0 3 3 multi region memory support memory defragmentation cmsis support new portability layer and other code improvements 0 3 2 some fixes to the memory management system calls and related functions 0 3 1 a lot of refactoring code clean up from the 0 3 0 release and code documentation readability improvements 0 3 0 first release of the new 0 3 x series kernel many new features most of the kernel rewritten new example code and new documentation 0 2 7 added a contributed example privatized the list pointers for scheduler and added support for teensy 3 4 0 2 6 added built in support for esp8266 and minor internal updates 0 2 5 numerous internal enhancements including improved time precision and scheduler now gracefully handles overflow of run time timer 0 2 4 additional example arduino sketches and other code improvements 0 2 3 improved protection of system state new examples improved code documentation and some maintainability enhancements 0 2 2 additional function calls minor fixes and documentation enhancements 0 2 1 the first official release construction contributing see the contributing contributing md guidelines on how to contribute to helios if you are going to make a source code or documentation contribution please do not fork the master branch only pull requests forked from the develop branch will be accepted scroll copyright license helios embedded operating system copyright c 2020 2023 helios project license heliosproj org helios is copyrighted open source software licensed under the free software foundation s gnu general public license gpl version 2 the full text of the license can be found here license md skull and crossbones important notice helios is not certified for use in safety critical applications the helios source code whether in full or in part must never be used in applications where a risk to life exists in other words do not use helios in your project if there is even a remote chance 
someone might get hurt speech balloon other notice this project is not affiliated in any way past or present with the discontinued unix like operating system helios developed by dr tim king of perihelion software ltd or axel muhr s work on helios ng https github com axelmuhr helios ng any resemblance is purely coincidental | rtos arduino arm avr freertos multitasking operating-system os real-time sam teensy zephyr embedded | os |
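The event-driven scheduling walked through above (a task placed in the waiting state and run by the scheduler whenever its task timer period, here 1,000 ticks, has elapsed) can be modeled in a short Python simulation. This is an illustrative sketch only, not the HeliOS C API; all names below (`TimerTask`, `run_scheduler`) are invented for the example.

```python
# Illustrative simulation of an event-driven task timer, loosely modeled on
# the HeliOS walkthrough above: a task in the "waiting" state is scheduled
# for execution only when its timer period (in ticks) has elapsed.
# This is NOT the HeliOS C API -- all names here are invented for the sketch.

class TimerTask:
    def __init__(self, name, period_ticks, callback):
        self.name = name
        self.period_ticks = period_ticks  # e.g. 1000 ticks ~ 1 second on most ports
        self.callback = callback
        self.last_run_tick = 0
        self.run_count = 0

def run_scheduler(tasks, total_ticks):
    """Cooperative scheduler: on every tick, run each task whose period has elapsed."""
    for tick in range(1, total_ticks + 1):
        for task in tasks:
            if tick - task.last_run_tick >= task.period_ticks:
                task.callback(task)
                task.last_run_tick = tick
                task.run_count += 1

led_state = {"on": False}

def blink(task):
    led_state["on"] = not led_state["on"]  # toggle the (simulated) LED

blink_task = TimerTask("BLINKTSK", period_ticks=1000, callback=blink)
run_scheduler([blink_task], total_ticks=5000)
# blink_task.run_count == 5: the task ran once every 1000 ticks
```

The key property the sketch shows is that the task body never busy-waits; the scheduler decides when it runs, which is why the Arduino `loop` function stays empty in a HeliOS application.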
LED_Lamp_2 | led lamp utilized ti s tiva c series mcu arm cortex m4f to control an led lamp that used an i2c lux sensor to auto adjust the brightness almost all the code was developed from scratch to learn as much as possible programming on embedded systems project description a while back i bought a cheap led lamp off amazon unlike most lamps that only output one color temperature this lamp consisted of two led groups one led group that emitted a warm light and another led group that emitted a cool light by adjusting the relative brightness of each led different temperature profiles could be made the led lamp had four built in temperature profiles that the user could switch between i wanted to know how the lamp operated so i hacked it apart used an oscilloscope to probe a few signals and discovered that an onboard microcontroller was providing a pwm signal with variable duty cycles to set each different temperature profile i thought that this would be a perfect opportunity to learn how to use my own microcontroller to control the brightness of the leds originally i developed the code on the chipkit uc32 mcu microchip s version of the arduino but i soon got my hands on a tiva c series mcu by texas instruments and ported the project over this led lamp project incorporates several new features that the original product did not have first the leds do not change brightness instantly instead the brightness is changed in a gradual fashion using timers to create a smooth fading in out effect second the led lamp could be controlled by a pc via uart the command facade was used to create a module that received commands from the pc and executed the corresponding command such as setting a specific brightness or the amount of time it took for the led to fade to a new temperature profile third the brightness of the leds can be automatically set by an external i2c lux sensor to support this sensor a robust i2c driver was developed that allowed the system to detect an i2c failure
and continue to function properly without the lux sensor the source code utilizes ti s peripheral driver library which provides a very thin layer of abstraction between the user and direct register read writes in order to use the library effectively the user still needs to properly understand how the specific mcu operates by reading the datasheet this project could have been written without the use of ti s peripheral driver but this was avoided for a couple of reasons first the functions provided by the peripheral driver makes it obvious to the reader of the code as to what registers the function is accessing usually when writing directly to the register it is not so obvious and additional comments are needed to support each code statement second it saves valuable time there is no need to look up what each bit in each register does in the datasheet this detail is encapsulated by library although this project seems relatively simple a lot of important embedded systems concepts were learned through the development process during this project the book making embedded systems by elecia white was used as a complementary learning aid and several key concepts from the book were incorporated into the project for instance a better understanding of how to effectively use interrupts was gained and the command and facade design patterns were implemented | os |
|
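The two core ideas in the LED lamp description above — a temperature profile as a pair of PWM duty cycles for the warm and cool LED groups, and brightness that steps gradually toward a target instead of jumping — can be sketched as follows. Python is used for illustration only (the real project is C on a TI Tiva C MCU driving PWM hardware), and the duty-cycle values are made up, not the lamp's actual calibration.

```python
# Sketch of the LED lamp behaviour described above:
#  1. a temperature profile = (warm-group duty cycle, cool-group duty cycle)
#  2. brightness changes step gradually toward the target (smooth fade)
# Hypothetical duty-cycle pairs (percent) -- not the lamp's real values.
PROFILES = {
    "warm":    (100, 0),
    "neutral": (60, 60),
    "cool":    (0, 100),
}

def fade(current, target, steps):
    """Yield intermediate duty-cycle values from current to target in equal steps."""
    for i in range(1, steps + 1):
        yield current + (target - current) * i / steps

def fade_profile(current_profile, target_profile, steps):
    """Fade both LED groups in lockstep; returns the list of intermediate pairs."""
    warm = list(fade(current_profile[0], target_profile[0], steps))
    cool = list(fade(current_profile[1], target_profile[1], steps))
    return list(zip(warm, cool))

ramp = fade_profile(PROFILES["warm"], PROFILES["cool"], steps=4)
# ramp == [(75.0, 25.0), (50.0, 50.0), (25.0, 75.0), (0.0, 100.0)]
```

On the actual hardware each intermediate pair would be written to the PWM peripheral on a timer interrupt, which is what produces the fading in/out effect the project describes.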
Grokking-the-System-Design | grokking the system design grokking the system design interview course materials | os |
|
zbasic | zbasic this project provides a very basic version of a working zipcpu https github com zipcpu zbasic system it is designed so that others you perhaps can then build off of it and design with it zbasic has three primary goals to provide a usable beginning system to allow users to get something up and running quickly to provide a very basic system that can then be matched with an emulator and used to test library and compiler functionality apart from actual hardware to demonstrate the utility of autofpga https github com zipcpu autofpga and its ability to quickly easily and seemlessly add components to a design if you d like to give this a spin consider the instructions in this article http zipcpu com zipcpu 2018 02 12 zbasic intro html describing how to do so status the zbasic system can now be made using autofpga https github com zipcpu autofpga all the way from zero to hello world successfully in verilator testing license gisselquist technology llc is pleased to provide you access to this entire project under the gplv3 license if this license will not work for you please contact me | zipcpu verilator verilog fpga system-on-chip | os |
zephyr-rtos.nix | nix environment for zephyr rtos developers i using this config for a year usage 1 install nix https nixos org download html 2 clone this repo 3 change dir to repo dir and run nix shell config you can select required toolchains and add extra dependencies by creating custom derived shell nix see shell nix sdl add sdl2 to inputs if you need emulated graphic devices | os |
|
mobile-dispatch-server | mobile dispatch server addis clinic mobile dispatch server development | front_end |
|
lansuite | lansuite middot prs welcome https img shields io badge prs welcome brightgreen svg contributing md pull requests lansuite is a content management system designed primarily for the needs of lan parties german version of this readme can be found at readme de md readme de md lansuite has features like registration for parties announce a new lan party and enable people to sign up with a direct payment flow and several follow up actions like clan creation and more seat plans define a seating plan with several rooms and areas and let lan party attendees choose their seat in advance this enables clans to sit together to facilitate better team play organisation of tournaments manage tournaments for multiple games with different modes like single and double elimination league or group games with ko strategy projector support show the latest content like news messages the current state of a tournament or a timetable at a wall via the projector mode during the party to inform attendees cash and money management manage the cash flow of your organization team and don t lose the overview news system announce updates and inform all party guests about the latest news with a simple to use news system and many more other features like a picture gallery a hall of fame and more are included give lansuite a try install and test it getting started see our documentation on lansuite github io lansuite https lansuite github io lansuite docs installation html there you will find information on how to install it what the requirements are how to configure the system and more if you still struggle with getting started feel free to open an issue https github com lansuite lansuite issues new and tell us your challenge with such feedback we can help you and improve the documentation call out for users are you using lansuite if yes let us know in who is using lansuite 312 https github com lansuite lansuite issues 312 contributing every helping hand is welcomed you don t need to be able 
to write source code actions like improve the documentation fixing typos translating texts into another language welcome newcomers helping out with support in the issue tracker talking about lansuite at events like meetups or lanparties and similar activities are also highly valuable so feel free get started and help us to build a better community and system contributing guide read our contributing guide https github com lansuite lansuite blob master contributing md to learn about our development process how to propose bugfixes and improvements and how to build and test your changes to lansuite beginner friendly bugs to help you get your feet wet and get you familiar with our contribution process we have a list of beginner friendly bugs https github com lansuite lansuite labels good 20first 20issue that contain bugs which are fairly easy to fix this is a great place to get started contact the best way to get in contact with us is via github issues https github com lansuite lansuite issues over this way it is transparent to the community and all team members and contributors are informed and have the chance to respond similar projects awesome lan party software https github com lanparties awesome lanparty software license lansuite is gpl v2 licensed license | lanparty cms gaming esports php hacktoberfest | os |
Progress-ToDoWebapp | progress todowebapp consolidation of all the todo web apps that were created while i was learning web development contains simplest with php only a more complex version with the use of meteorjs framework and a current working project using ajax and diving into backend management | server |
|
shape-from-shading | shape from shading computer vision 1st project shape from shading depth reconstruction from 2d image using shading information | ai |
|
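Shape from shading, as described in the row above, recovers depth by inverting a reflectance model; a common formulation (an assumption on my part, since the row does not name the model) is Lambertian reflectance, where pixel intensity is the dot product of the unit surface normal and the unit light direction. A minimal sketch of the forward model — the inverse problem, recovering depth from these intensities, is the hard part such a project tackles:

```python
# Forward shading model commonly assumed in shape-from-shading: Lambertian
# reflectance, intensity = max(0, n . l) for unit surface normal n and unit
# light direction l. Depth reconstruction means inverting this per pixel.
# This sketch shows only the forward direction; values are illustrative.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambertian_intensity(normal, light):
    n, l = normalize(normal), normalize(light)
    dot = sum(a * b for a, b in zip(n, l))
    return max(0.0, dot)  # surfaces facing away from the light render black

# A patch facing straight at the light is brightest ...
head_on = lambertian_intensity((0, 0, 1), (0, 0, 1))      # 1.0
# ... while a patch tilted 60 degrees away receives half the light (cos 60 = 0.5)
tilted = lambertian_intensity(
    (0, math.sin(math.radians(60)), math.cos(math.radians(60))), (0, 0, 1)
)
```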
ml-workspace | h1 align center a href https github com ml tooling ml workspace title ml workspace home img width 50 alt src https github com ml tooling ml workspace raw main docs images ml workspace logo png a br h1 p align center strong all in one web based development environment for machine learning strong p p align center a href https hub docker com r mltooling ml workspace title docker image version img src https img shields io docker v mltooling ml workspace color blue sort semver a a href https hub docker com r mltooling ml workspace title docker pulls img src https img shields io docker pulls mltooling ml workspace svg color blue a a href https hub docker com r mltooling ml workspace title docker image size img src https img shields io docker image size mltooling ml workspace color blue sort semver a a href https gitter im ml tooling ml workspace title chat on gitter img src https badges gitter im ml tooling ml workspace svg a a href https mltooling substack com subscribe title subscribe to newsletter img src http bit ly 2md9rxm a a href https twitter com mltooling title follow on twitter img src https img shields io twitter follow mltooling svg style social label follow a p p align center a href getting started getting started a a href features features screenshots a a href support support a a href https github com ml tooling ml workspace issues new labels bug template 01 bug report md report a bug a a href faq faq a a href known issues known issues a a href contribution contribution a p the ml workspace is an all in one web based ide specialized for machine learning and data science it is simple to deploy and gets you started within minutes to productively built ml solutions on your own machines this workspace is the ultimate tool for developers preloaded with a variety of popular data science libraries e g tensorflow pytorch keras sklearn and dev tools e g jupyter vs code tensorboard perfectly configured optimized and integrated highlights nbsp jupyter 
jupyterlab and visual studio code web based ides nbsp pre installed with many popular data science libraries tools nbsp full linux desktop gui accessible via web browser nbsp seamless git integration optimized for notebooks nbsp integrated hardware training monitoring via tensorboard netdata nbsp access from anywhere via web ssh or vnc under a single port nbsp usable as remote kernel jupyter or remote machine vs code via ssh nbsp easy to deploy on mac linux and windows via docker br getting started p a href https labs play with docker com stack https raw githubusercontent com ml tooling ml workspace main deployment play with docker docker compose yml title docker image metadata target blank img src https cdn rawgit com play with docker stacks cff22438 assets images button png alt try in pwd width 100px a p prerequisites the workspace requires docker to be installed on your machine installation guide https docs docker com install supported platforms start single instance deploying a single workspace instance is as simple as bash docker run p 8080 8080 mltooling ml workspace 0 13 2 voil that was easy now docker will pull the latest workspace image to your machine this may take a few minutes depending on your internet speed once the workspace is started you can access it via http localhost 8080 if started on another machine or with a different port make sure to use the machine s ip dns and or the exposed port to deploy a single instance for productive usage we recommend to apply at least the following options bash docker run d p 8080 8080 name ml workspace v pwd workspace env authenticate via jupyter mytoken shm size 512m restart always mltooling ml workspace 0 13 2 this command runs the container in background d mounts your current working directory into the workspace folder v secures the workspace via a provided token env authenticate via jupyter provides 512mb of shared memory shm size to prevent unexpected crashes see known issues section known issues and keeps 
the container running even on system restarts restart always you can find additional options for docker run here https docs docker com engine reference commandline run and workspace configuration options in the section below configuration configuration options the workspace provides a variety of configuration options that can be used by setting environment variables via docker run option env details summary configuration options click to expand summary table tr th variable th th description th th default th tr tr td workspace base url td td the base url under which jupyter and all other tools will be reachable from td td td tr tr td workspace ssl enabled td td enable or disable ssl when set to true either certificate cert crt must be mounted to code resources ssl code or if not the container generates self signed certificate td td false td tr tr td workspace auth user td td basic auth user name to enable basic auth both the user and password need to be set we recommend to use the code authenticate via jupyter code for securing the workspace td td td tr tr td workspace auth password td td basic auth user password to enable basic auth both the user and password need to be set we recommend to use the code authenticate via jupyter code for securing the workspace td td td tr tr td workspace port td td configures the main container internal port of the workspace proxy for most scenarios this configuration should not be changed and the port configuration via docker should be used instead of the workspace should be accessible from a different port td td 8080 td tr tr td config backup enabled td td automatically backup and restore user configuration to the persisted code workspace code folder such as the ssh jupyter or gitconfig from the users home directory td td true td tr tr td shared links enabled td td enable or disable the capability to share resources via external links this is used to enable file sharing access to workspace internal ports and easy command based ssh 
setup all shared links are protected via a token however there are certain risks since the token cannot be easily invalidated after sharing and does not expire td td true td tr tr td include tutorials td td if code true code a selection of tutorial and introduction notebooks are added to the code workspace code folder at container startup but only if the folder is empty td td true td tr tr td max num threads td td the number of threads used for computations when using various common libraries mkl openblas omp numba you can also use code auto code to let the workspace dynamically determine the number of threads based on available cpu resources this configuration can be overwritten by the user from within the workspace generally it is good to set it at or below the number of cpus available to the workspace td td auto td tr tr td colspan 3 b jupyter configuration b td tr tr td shutdown inactive kernels td td automatically shutdown inactive kernels after a given timeout to clean up memory or gpu resources value can be either a timeout in seconds or set to code true code with a default value of 48h td td false td tr tr td authenticate via jupyter td td if code true code all http requests will be authenticated against the jupyter server meaning that the authentication method configured with jupyter will be used for all other tools as well this can be deactivated with code false code any other value will activate this authentication and are applied as token via notebookapp token configuration of jupyter td td false td tr tr td notebook args td td add and overwrite jupyter configuration options via command line args refer to a href https jupyter notebook readthedocs io en stable config html this overview a for all options td td td tr table details persist data to persist the data you need to mount a volume into workspace via docker run option v details summary details click to expand summary the default work directory within the container is workspace which is also the 
root directory of the jupyter instance the workspace directory is intended to be used for all the important work artifacts data within other directories of the server e g root might get lost at container restarts details enable authentication we strongly recommend enabling authentication via one of the following two options for both options the user will be required to authenticate for accessing any of the pre installed tools the authentication only works for all tools accessed through the main workspace port default 8080 this works for all preinstalled tools and the access ports access ports feature if you expose another port of the container please make sure to secure it with authentication as well details summary details click to expand summary token based authentication via jupyter recommended activate the token based authentication based on the authentication implementation of jupyter via the authenticate via jupyter variable bash docker run p 8080 8080 env authenticate via jupyter mytoken mltooling ml workspace 0 13 2 you can also use generated to let jupyter generate a random token that is printed out on the container logs a value of true will not set any token but activate that every request to any tool in the workspace will be checked with the jupyter instance if the user is authenticated this is used for tools like jupyterhub which configures its own way of authentication basic authentication via nginx activate the basic authentication via the workspace auth user and workspace auth password variable bash docker run p 8080 8080 env workspace auth user user env workspace auth password pwd mltooling ml workspace 0 13 2 the basic authentication is configured via the nginx proxy and might be more performant compared to the other option since with authenticate via jupyter every request to any tool in the workspace will check via the jupyter instance if the user based on the request cookies is authenticated details enable ssl https we recommend enabling ssl so 
that the workspace is accessible via https encrypted communication ssl encryption can be activated via the workspace ssl enabled variable details summary details click to expand summary when set to true either the cert crt and cert key file must be mounted to resources ssl or if the certificate files do not exist the container generates self signed certificates for example if the path with certificate files on the local system contains a valid certificate for the host domain cert crt and cert key file it can be used from the workspace as shown below bash docker run p 8080 8080 env workspace ssl enabled true v path with certificate files resources ssl ro mltooling ml workspace 0 13 2 if you want to host the workspace on a public domain we recommend to use let s encrypt https letsencrypt org getting started to get a trusted certificate for your domain to use the generated certificate e g via certbot https certbot eff org tool for the workspace the privkey pem corresponds to the cert key file and the fullchain pem to the cert crt file when you enable ssl support you must access the workspace over https not over plain http details limit memory cpu by default the workspace container has no resource constraints and can use as much of a given resource as the host s kernel scheduler allows docker provides ways to control how much memory or cpu a container can use by setting runtime configuration flags of the docker run command the workspace requires atleast 2 cpus and 500mb to run stable and be usable details summary details click to expand summary for example the following command restricts the workspace to only use a maximum of 8 cpus 16 gb of memory and 1 gb of shared memory see known issues known issues bash docker run p 8080 8080 cpus 8 memory 16g shm size 1g mltooling ml workspace 0 13 2 for more options and documentation on resource constraints please refer to the official docker guide https docs docker com config containers resource constraints details enable proxy 
if a proxy is required you can pass the proxy configuration via the http proxy https proxy and no proxy environment variables workspace flavors in addition to the main workspace image mltooling ml workspace we provide other image flavors that extend the features or minimize the image size to support a variety of use cases minimal flavor p a href https hub docker com r mltooling ml workspace title docker image version img src https img shields io docker v mltooling ml workspace color blue sort semver a a href https hub docker com r mltooling ml workspace minimal title docker image size img src https img shields io docker image size mltooling ml workspace minimal color blue sort semver a a href https hub docker com r mltooling ml workspace minimal title docker pulls img src https img shields io docker pulls mltooling ml workspace minimal svg a p details summary details click to expand summary the minimal flavor mltooling ml workspace minimal is our smallest image that contains most of the tools and features described in the features section features without most of the python libraries that are pre installed in our main image any python library or excluded tool can be installed manually during runtime by the user bash docker run p 8080 8080 mltooling ml workspace minimal 0 13 2 details r flavor p a href https hub docker com r mltooling ml workspace r title docker image version img src https img shields io docker v mltooling ml workspace r color blue sort semver a a href https hub docker com r mltooling ml workspace r title docker image size img src https img shields io docker image size mltooling ml workspace r color blue sort semver a a href https hub docker com r mltooling ml workspace r title docker pulls img src https img shields io docker pulls mltooling ml workspace r svg a p details summary details click to expand summary the r flavor mltooling ml workspace r is based on our default workspace image and extends it with the r interpreter r jupyter kernel rstudio 
server access via open tool rstudio and a variety of popular packages from the r ecosystem bash docker run p 8080 8080 mltooling ml workspace r 0 12 1 details spark flavor p a href https hub docker com r mltooling ml workspace spark title docker image version img src https img shields io docker v mltooling ml workspace spark color blue sort semver a a href https hub docker com r mltooling ml workspace spark title docker image size img src https img shields io docker image size mltooling ml workspace spark color blue sort semver a a href https hub docker com r mltooling ml workspace spark title docker pulls img src https img shields io docker pulls mltooling ml workspace spark svg a p details summary details click to expand summary the spark flavor mltooling ml workspace spark is based on our r flavor workspace image and extends it with the spark runtime spark jupyter kernel zeppelin notebook access via open tool zeppelin pyspark hadoop java kernel and a few additional libraries jupyter extensions bash docker run p 8080 8080 mltooling ml workspace spark 0 12 1 details gpu flavor p a href https hub docker com r mltooling ml workspace gpu title docker image version img src https img shields io docker v mltooling ml workspace gpu color blue sort semver a a href https hub docker com r mltooling ml workspace gpu ttitle docker image size img src https img shields io docker image size mltooling ml workspace gpu color blue sort semver a a href https hub docker com r mltooling ml workspace gpu title docker pulls img src https img shields io docker pulls mltooling ml workspace gpu svg a p details summary details click to expand summary currently the gpu flavor only supports cuda 11 2 support for other cuda versions might be added in the future the gpu flavor mltooling ml workspace gpu is based on our default workspace image and extends it with cuda 10 1 and gpu ready versions of various machine learning libraries e g tensorflow pytorch cntk jax this gpu image has the 
following additional requirements for the system nvidia drivers for the gpus drivers need to be cuda 11 2 compatible version 460 32 03 instructions https github com nvidia nvidia docker wiki frequently asked questions how do i install the nvidia driver docker 19 03 nvidia container toolkit instructions https github com nvidia nvidia docker wiki installation native gpu support bash docker run p 8080 8080 gpus all mltooling ml workspace gpu 0 13 2 docker 19 03 nvidia docker 2 0 instructions https github com nvidia nvidia docker wiki installation version 2 0 bash docker run p 8080 8080 runtime nvidia env nvidia visible devices all mltooling ml workspace gpu 0 13 2 the gpu flavor also comes with a few additional configuration options as explained below table tr th variable th th description th th default th tr tr td nvidia visible devices td td controls which gpus will be accessible inside the workspace by default all gpus from the host are accessible within the workspace you can either use code all code code none code or specify a comma separated list of device ids e g code 0 1 code you can find out the list of available device ids by running code nvidia smi code on the host machine td td all td tr tr td cuda visible devices td td controls which gpus cuda applications running inside the workspace will see by default all gpus that the workspace has access to will be visible to restrict applications provide a comma separated list of internal device ids e g code 0 2 code based on the available devices within the workspace run code nvidia smi code in comparison to code nvidia visible devices code the workspace user will be still able to access other gpus by overwriting this configuration from within the workspace td td td tr tr td tf force gpu allow growth td td by default the majority of gpu memory will be allocated by the first execution of a tensorflow graph while this behavior can be desirable for production pipelines it is less desirable for interactive use use code 
true code to enable dynamic gpu memory allocation or code false code to instruct tensorflow to allocate all memory at execution td td true td tr table details multi user setup the workspace is designed as a single user development environment for a multi user setup we recommend deploying ml hub https github com ml tooling ml hub ml hub is based on jupyterhub with the task to spawn manage and proxy workspace instances for multiple users details summary deployment click to expand summary ml hub makes it easy to set up a multi user environment on a single server via docker or a cluster via kubernetes and supports a variety of usage scenarios authentication providers you can try out ml hub via bash docker run p 8080 8080 v var run docker sock var run docker sock mltooling ml hub latest for more information and documentation about ml hub please take a look at the github site https github com ml tooling ml hub details br support this project is maintained by benjamin r thlein https twitter com raethlein lukas masuch https twitter com lukasmasuch and jan kalkan https www linkedin com in jan kalkan b5390284 please understand that we won t be able to provide individual support via email we also believe that help is much more valuable if it s shared publicly so that more people can benefit from it type channel nbsp bug reports a href https github com ml tooling ml workspace issues utf8 e2 9c 93 q is 3aopen is 3aissue label 3abug sort 3areactions 2b1 desc title open bug report img src https img shields io github issues ml tooling ml workspace bug svg a nbsp feature requests a href https github com ml tooling ml workspace issues q is 3aopen is 3aissue label 3afeature sort 3areactions 2b1 desc title open feature request img src https img shields io github issues ml tooling ml workspace feature svg label feature 20request a nbsp usage questions a href https github com ml tooling ml workspace issues q is 3aopen is 3aissue label 3asupport sort 3areactions 2b1 desc title open 
support request img src https img shields io github issues ml tooling ml workspace support svg label support 20request a a href https stackoverflow com questions tagged ml tooling title open question on stackoverflow img src https img shields io badge stackoverflow ml tooling orange svg a a href https gitter im ml tooling ml workspace title chat on gitter img src https badges gitter im ml tooling ml workspace svg a nbsp announcements a href https gitter im ml tooling ml workspace title chat on gitter img src https badges gitter im ml tooling ml workspace svg a a href https mltooling substack com subscribe title subscribe for updates img src http bit ly 2md9rxm a a href https twitter com mltooling title ml tooling on twitter img src https img shields io twitter follow mltooling svg style social label follow nbsp other requests a href mailto team mltooling org title email ml tooling team img src https img shields io badge email ml tooling green logo mail ru logocolor white a br features p align center a href jupyter jupyter a a href desktop gui desktop gui a a href visual studio code vs code a a href jupyterlab jupyterlab a a href git integration git integration a a href file sharing file sharing a a href access ports access ports a a href tensorboard tensorboard a a href extensibility extensibility a a href hardware monitoring hardware monitoring a a href ssh access ssh access a a href remote development remote development a a href run as a job job execution a p the workspace is equipped with a selection of best in class open source development tools to help with the machine learning workflow many of these tools can be started from the open tool menu from jupyter the main application of the workspace img style width 100 src https github com ml tooling ml workspace raw main docs images features open tools png within your workspace you have full root sudo privileges to install any library or tool you need via terminal e g pip apt get conda or npm you can find more 
ways to extend the workspace within the extensibility extensibility section jupyter jupyter notebook https jupyter org is a web based interactive environment for writing and running code the main building blocks of jupyter are the file browser the notebook editor and kernels the file browser provides an interactive file manager for all notebooks files and folders in the workspace directory img style width 100 src https github com ml tooling ml workspace raw main docs images features jupyter tree png a new notebook can be created by clicking on the new drop down button at the top of the list and selecting the desired language kernel you can spawn interactive terminal instances as well by selecting new terminal in the file browser img style width 100 src https github com ml tooling ml workspace raw main docs images features jupyter notebook png the notebook editor enables users to author documents that include live code markdown text shell commands latex equations interactive widgets plots and images these notebook documents provide a complete and self contained record of a computation that can be converted to various formats and shared with others this workspace has a variety of third party jupyter extensions activated you can configure these extensions in the nbextensions configurator nbextensions tab on the file browser the notebook allows code to be run in a range of different programming languages for each notebook document that a user opens the web application starts a kernel that runs the code for that notebook and returns output this workspace has a python 3 kernel pre installed additional kernels can be installed to get access to other languages e g r scala go or additional computing resources e g gpus cpus memory python 2 is deprecated and we do not recommend using it however you can still install a python 2 7 kernel via this command bin bash resources tools python 27 sh desktop gui this workspace provides http based vnc access to the workspace via novnc
https github com novnc novnc thereby you can access and work within the workspace with a fully featured desktop gui to access this desktop gui go to open tool select vnc and click the connect button in the case you are asked for a password use vncpassword img style width 100 src https github com ml tooling ml workspace raw main docs images features desktop vnc png once you are connected you will see a desktop gui that allows you to install and use full fledged web browsers or any other tool that is available for ubuntu within the tools folder on the desktop you will find a collection of install scripts that makes it straightforward to install some of the most commonly used development tools such as atom pycharm r runtime r studio or postman just double click on the script clipboard if you want to share the clipboard between your machine and the workspace you can use the copy paste functionality as described below img style width 100 src https github com ml tooling ml workspace raw main docs images features desktop vnc clipboard png long running tasks use the desktop gui for long running jupyter executions by running notebooks from the browser of your workspace desktop gui all output will be synchronized to the notebook even if you have disconnected your browser from the notebook visual studio code visual studio code https github com microsoft vscode open tool vs code is an open source lightweight but powerful code editor with built in support for a variety of languages and a rich ecosystem of extensions it combines the simplicity of a source code editor with powerful developer tooling like intellisense code completion and debugging the workspace integrates vs code as a web based application accessible through the browser based on the awesome code server https github com cdr code server project it allows you to customize every feature to your liking and install any number of third party extensions p align center img src https github com ml tooling ml workspace raw 
main docs images features vs code png p the workspace also provides a vs code integration into jupyter allowing you to open a vs code instance for any selected folder as shown below p align center img src https github com ml tooling ml workspace raw main docs images features vs code open png p jupyterlab jupyterlab https github com jupyterlab jupyterlab open tool jupyterlab is the next generation user interface for project jupyter it offers all the familiar building blocks of the classic jupyter notebook notebook terminal text editor file browser rich outputs etc in a flexible and powerful user interface this jupyterlab instance comes pre installed with a few helpful extensions such as a the jupyterlab toc https github com jupyterlab jupyterlab toc jupyterlab git https github com jupyterlab jupyterlab git and juptyterlab tensorboard https github com chaoleili jupyterlab tensorboard img style width 100 src https github com ml tooling ml workspace raw main docs images features jupyterlab png git integration version control is a crucial aspect of productive collaboration to make this process as smooth as possible we have integrated a custom made jupyter extension specialized on pushing single notebooks a full fledged web based git client ungit https github com fredriknoren ungit a tool to open and edit plain text documents e g py md as notebooks jupytext https github com mwouts jupytext as well as a notebook merging tool nbdime https github com jupyter nbdime additionally jupyterlab and vs code also provide gui based git clients clone repository for cloning repositories via https we recommend to navigate to the desired root folder and to click on the git button as shown below img style width 100 src https github com ml tooling ml workspace raw main docs images features git open png this might ask for some required settings and subsequently opens ungit https github com fredriknoren ungit a web based git client with a clean and intuitive ui that makes it convenient to 
sync your code artifacts within ungit you can clone any repository if authentication is required you will get asked for your credentials img style width 100 src https github com ml tooling ml workspace raw main docs images features git ungit credentials png push pull merge and other git actions to commit and push a single notebook to a remote git repository we recommend to use the git plugin integrated into jupyter as shown below img style width 100 src https github com ml tooling ml workspace raw main docs images features git push notebook png for more advanced git operations we recommend to use ungit https github com fredriknoren ungit with ungit you can do most of the common git actions such as push pull merge branch tag checkout and many more diffing and merging notebooks jupyter notebooks are great but they often are huge files with a very specific json file format to enable seamless diffing and merging via git this workspace is pre installed with nbdime https github com jupyter nbdime nbdime understands the structure of notebook documents and therefore automatically makes intelligent decisions when diffing and merging notebooks in the case you have merge conflicts nbdime will make sure that the notebook is still readable by jupyter as shown below img style width 100 src https github com ml tooling ml workspace raw main docs images features git nbdime merging png furthermore the workspace comes pre installed with jupytext https github com mwouts jupytext a jupyter plugin that reads and writes notebooks as plain text files this allows you to open edit and run scripts or markdown files e g py md as notebooks within jupyter in the following screenshot we have opened a markdown file via jupyter img style width 100 src https github com ml tooling ml workspace raw main docs images features git jupytext png in combination with git jupytext enables a clear diff history and easy merging of version conflicts with both of those tools collaborating on jupyter notebooks 
with git becomes straightforward file sharing the workspace has a feature to share any file or folder with anyone via a token protected link to share data via a link select any file or folder from the jupyter directory tree and click on the share button as shown in the following screenshot img style width 100 src https github com ml tooling ml workspace raw main docs images features file sharing open png this will generate a unique link protected via a token that gives anyone with the link access to view and download the selected data via the filebrowser https github com filebrowser filebrowser ui img style width 100 src https github com ml tooling ml workspace raw main docs images features file sharing filebrowser png to deactivate or manage e g provide edit permissions shared links open the filebrowser via open tool filebrowser and select settings user management access ports it is possible to securely access any workspace internal port by selecting open tool access port with this feature you are able to access a rest api or web application running inside the workspace directly with your browser the feature enables developers to build run test and debug rest apis or web applications directly from the workspace img style width 100 src https github com ml tooling ml workspace raw main docs images features access port png if you want to use an http client or share access to a given port you can select the get shareable link option this generates a token secured link that anyone with access to the link can use to access the specified port the http app requires to be resolved from a relative url path or configure a base path tools port tools made accessible this way are secured by the workspace s authentication system if you decide to publish any other port of the container yourself instead of using this feature to make a tool accessible please make sure to secure it via an authentication mechanism details summary example click to expand summary 1 start an http server 
on port 1234 by running this command in a terminal within the workspace python m http server 1234 2 select open tool access port input port 1234 and select the get shareable link option 3 click access and you will see the content provided by python s http server 4 the opened link can also be shared to other people or called from external applications e g try with incognito mode in chrome details ssh access ssh provides a powerful set of features that enables you to be more productive with your development tasks you can easily set up a secure and passwordless ssh connection to a workspace by selecting open tool ssh this will generate a secure setup command that can be run on any linux or mac machine to configure a passwordless secure ssh connection to the workspace alternatively you can also download the setup script and run it instead of using the command img style width 100 src https github com ml tooling ml workspace raw main docs images features ssh access png the setup script only runs on mac and linux windows is currently not supported just run the setup command or script on the machine from where you want to setup a connection to the workspace and input a name for the connection e g my workspace you might also get asked for some additional input during the process e g to install a remote kernel if remote ikernel is installed once the passwordless ssh connection is successfully setup and tested you can securely connect to the workspace by simply executing ssh my workspace besides the ability to execute commands on a remote machine ssh also provides a variety of other features that can improve your development workflow as described in the following sections details summary b tunnel ports b click to expand summary an ssh connection can be used for tunneling application ports from the remote machine to the local machine or vice versa for example you can expose the workspace internal port 5901 vnc server to the local machine on port 5000 by executing bash ssh nnt 
l 5000 localhost 5901 my workspace to expose an application port from your local machine to a workspace use the r option instead of l after the tunnel is established you can use your favorite vnc viewer on your local machine and connect to vnc localhost 5000 default password vncpassword to make the tunnel connection more resistant and reliable we recommend to use autossh https www harding motd ca autossh to automatically restart ssh tunnels in the case that the connection dies bash autossh m 0 f nnt l 5000 localhost 5901 my workspace port tunneling is quite useful when you have started any server based tool within the workspace that you like to make accessible for another machine in its default setting the workspace has a variety of tools already running on different ports such as 8080 main workspace port with access to all integrated tools 8090 jupyter server 8054 vs code server 5901 vnc server 22 ssh server you can find port information on all the tools in the supervisor configuration https github com ml tooling ml workspace blob main resources supervisor supervisord conf for more information about port tunneling forwarding we recommend this guide https www everythingcli org ssh tunnelling for fun and profit local vs remote details details summary b copy data via scp b click to expand summary scp https linux die net man 1 scp allows files and directories to be securely copied to from or between different machines via ssh connections for example to copy a local file local file txt into the workspace folder inside the workspace execute bash scp local file txt my workspace workspace to copy the workspace directory from my workspace to the working directory of the local machine execute bash scp r my workspace workspace for more information about scp we recommend this guide https www garron me en articles scp html details details summary b sync data via rsync b click to expand summary rsync https linux die net man 1 rsync is a utility for efficiently transferring and 
synchronizing files between different machines e g via ssh connections by comparing the modification times and sizes of files the rsync command will determine which files need to be updated each time it is run which is far more efficient and convenient than using something like scp or sftp for example to sync all content of a local folder local project folder into the workspace remote project folder folder inside the workspace execute bash rsync rlptzvp delete exclude git local project folder my workspace workspace remote project folder if you have some changes inside the folder on the workspace you can sync those changes back to the local folder by changing the source and destination arguments bash rsync rlptzvp delete exclude git my workspace workspace remote project folder local project folder you can rerun these commands each time you want to synchronize the latest copy of your files rsync will make sure that only updates will be transferred you can find more information about rsync on this man page https linux die net man 1 rsync details details summary b mount folders via sshfs b click to expand summary besides copying and syncing data an ssh connection can also be used to mount directories from a remote machine into the local filesystem via sshfs https github com libfuse sshfs for example to mount the workspace directory of my workspace into a local path e g local folder path execute bash sshfs o reconnect my workspace workspace local folder path once the remote directory is mounted you can interact with the remote file system the same way as with any local directory and file for more information about sshfs we recommend this guide https www digitalocean com community tutorials how to use sshfs to mount remote file systems over ssh details remote development the workspace can be integrated and used as a remote runtime also known as remote kernel machine interpreter for a variety of popular development tools and ides such as jupyter vs code pycharm colab or 
atom hydrogen thereby you can connect your favorite development tool running on your local machine to a remote machine for code execution this enables a local quality development experience with remote hosted compute resources these integrations usually require a passwordless ssh connection from the local machine to the workspace to set up an ssh connection please follow the steps explained in the ssh access ssh access section details summary b jupyter remote kernel b click to expand summary the workspace can be added to a jupyter instance as a remote kernel by using the remote ikernel https bitbucket org tdaff remote ikernel tool if you have installed remote ikernel pip install remote ikernel on your local machine the ssh setup script of the workspace will automatically offer you the option to setup a remote kernel connection when running kernels on remote machines the notebooks themselves will be saved onto the local filesystem but the kernel will only have access to the filesystem of the remote machine running the kernel if you need to sync data you can make use of rsync scp or sshfs as explained in the ssh access ssh access section in case you want to manually setup and manage remote kernels use the remote ikernel https bitbucket org tdaff remote ikernel src default readme rst command line tool as shown below bash change my workspace with the name of a workspace ssh connection remote ikernel manage add interface ssh kernel cmd ipython kernel f connection file name ml server python host my workspace you can use the remote ikernel command line functionality to list remote ikernel manage show or delete remote ikernel manage delete remote kernel name remote kernel connections img style width 100 src https github com ml tooling ml workspace raw main docs images features remote dev jupyter kernel png details details summary b vs code remote machine b click to expand summary the visual studio code remote ssh https marketplace visualstudio com items itemname ms vscode 
remote remote ssh extension allows you to open a remote folder on any remote machine with ssh access and work with it just as you would if the folder were on your own machine once connected to a remote machine you can interact with files and folders anywhere on the remote filesystem and take full advantage of vs code s feature set intellisense debugging and extension support the extension discovers and works out of the box with passwordless ssh connections as configured by the workspace ssh setup script to enable your local vs code application to connect to a workspace 1 install remote ssh https marketplace visualstudio com items itemname ms vscode remote remote ssh extension inside your local vs code 2 run the ssh setup script of a selected workspace as explained in the ssh access ssh access section 3 open the remote ssh panel in your local vs code all configured ssh connections should be automatically discovered just select any configured workspace connection you like to connect to as shown below img style width 100 src https github com ml tooling ml workspace raw main docs images features remote dev vscode gif you can find additional features and information about the remote ssh extension in this guide https code visualstudio com docs remote ssh details tensorboard tensorboard https www tensorflow org tensorboard provides a suite of visualization tools to make it easier to understand debug and optimize your experiment runs it includes logging features for scalar histogram model structure embeddings and text image visualization the workspace comes pre installed with jupyter tensorboard extension https github com lspvic jupyter tensorboard that integrates tensorboard into the jupyter interface with functionalities to start manage and stop instances you can open a new instance for a valid logs directory as shown below img style width 100 src https github com ml tooling ml workspace raw main docs images features tensorboard open png if you have opened a tensorboard instance
in a valid log directory you will see the visualizations of your logged data img style width 100 src https github com ml tooling ml workspace raw main docs images features tensorboard dashboard png tensorboard can be used in combination with many other ml frameworks besides tensorflow by using the tensorboardx https github com lanpa tensorboardx library you can log basically from any python based library also pytorch has a direct tensorboard integration as described here https pytorch org docs stable tensorboard html if you prefer to see the tensorboard directly within your notebook you can make use of following jupyter magic load ext tensorboard tensorboard logdir workspace path to logs hardware monitoring the workspace provides two pre installed web based tools to help developers during model training and other experimentation tasks to get insights into everything happening on the system and figure out performance bottlenecks netdata https github com netdata netdata open tool netdata is a real time hardware and performance monitoring dashboard that visualize the processes and services on your linux systems it monitors metrics about cpu gpu memory disks networks processes and more img style width 100 src https github com ml tooling ml workspace raw main docs images features hardware monitoring netdata png glances https github com nicolargo glances open tool glances is a web based hardware monitoring dashboard as well and can be used as an alternative to netdata img style width 100 src https github com ml tooling ml workspace raw main docs images features hardware monitoring glances png netdata and glances will show you the hardware statistics for the entire machine on which the workspace container is running run as a job a job is defined as any computational task that runs for a certain time to completion such as a model training or a data pipeline the workspace image can also be used to execute arbitrary python code without starting any of the pre installed tools 
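the same workspace image can thus act as a runtime for batch workloads as a minimal sketch a hypothetical job entry point here a main py that reads its configuration from an EPOCHS environment variable both the file name conventions and the variable name are illustrative assumptions for this sketch not something mandated by the image could look like

```python
# Hypothetical main.py job entry point (illustrative only): reads its
# configuration from an environment variable and runs a toy training task.
import os

def train(epochs: int) -> float:
    """Dummy training loop: pretend each epoch halves the loss."""
    loss = 1.0
    for _ in range(epochs):
        loss *= 0.5
    return loss

if __name__ == "__main__":
    epochs = int(os.environ.get("EPOCHS", "3"))
    print(f"finished {epochs} epochs, final loss: {train(epochs)}")
```

any script structured like this can be developed interactively inside the workspace and later executed unattended in the same environment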
this provides a seamless way to productize your ml projects since the code that has been developed interactively within the workspace will have the same environment and configuration when run as a job via the same workspace image details summary b run python code as a job via the workspace image b click to expand summary to run python code as a job you need to provide a path or url to a code directory or script via execute code the code can be either already mounted into the workspace container or downloaded from a version control system e g git or svn as described in the following sections the selected code path needs to be python executable in case the selected code is a directory e g whenever you download the code from a vcs you need to put a main py file at the root of this directory the main py needs to contain the code that starts your job run code from version control system you can execute code directly from git mercurial subversion or bazaar by using the pip vcs format as described in this guide https pip pypa io en stable reference pip install vcs support for example to execute code from a subdirectory https github com ml tooling ml workspace tree main resources tests ml job of a git repository just run bash docker run env execute code git https github com ml tooling ml workspace git subdirectory resources tests ml job mltooling ml workspace 0 13 2 for additional information on how to specify branches commits or tags please refer to this guide https pip pypa io en stable reference pip install vcs support run code mounted into the workspace in the following example we mount and execute the current working directory expected to contain our code into the workspace ml job directory of the workspace bash docker run v pwd workspace ml job env execute code workspace ml job mltooling ml workspace 0 13 2 install dependencies in the case that the pre installed workspace libraries are not compatible with your code you can install or change dependencies by just 
adding one or multiple of the following files to your code directory requirements txt pip requirements format https pip pypa io en stable user guide requirements files for pip installable dependencies environment yml conda environment file https docs conda io projects conda en latest user guide tasks manage environments html highlight environment yml creating an environment file manually to create a separate python environment setup sh a shell script executed via bin bash the execution order is 1 environment yml 2 setup sh 3 requirements txt test job in interactive mode you can test your job code within the workspace started normally with interactive tools by executing the following python script bash python resources scripts execute code py path to your job build a custom job image it is also possible to embed your code directly into a custom job image as shown below dockerfile from mltooling ml workspace 0 13 2 add job code to image copy ml job workspace ml job env execute code workspace ml job install requirements only run python resources scripts execute code py requirements only execute only the code at container startup cmd python resources docker entrypoint py code only details pre installed libraries and interpreters the workspace is pre installed with many popular interpreters data science libraries and ubuntu packages interpreter python 3 8 miniconda 3 nodejs 14 scala perl 5 python libraries tensorflow keras pytorch sklearn xgboost mxnet theano and many more https github com ml tooling ml workspace tree main resources libraries package manager conda pip apt get npm yarn sdk poetry gdebi the full list of installed tools can be found within the dockerfile https github com ml tooling ml workspace blob main dockerfile for every minor version release we run vulnerability virus and security checks within the workspace using safety https pyup io safety clamav https www clamav net trivy https github com aquasecurity trivy and snyk via docker scan https docs 
docker com engine scan to make sure that the workspace environment is as secure as possible we are committed to fix and prevent all high or critical severity vulnerabilities you can find some up to date reports here https github com ml tooling ml workspace tree main resources reports extensibility the workspace provides a high degree of extensibility within the workspace you have full root sudo privileges to install any library or tool you need via terminal e g pip apt get conda or npm you can open a terminal by one of the following ways jupyter new terminal desktop vnc applications terminal emulator jupyterlab file new terminal vs code terminal new terminal additionally pre installed tools such as jupyter jupyterlab and visual studio code each provide their own rich ecosystem of extensions the workspace also contains a collection of installer scripts https github com ml tooling ml workspace tree main resources tools for many commonly used development tools or libraries e g pycharm zeppelin rstudio starspace you can find and execute all tool installers via open tool install tool those scripts can be also executed from the desktop vnc double click on the script within the tools folder on the desktop vnc details summary example click to expand summary for example to install the apache zeppelin https zeppelin apache org notebook server simply execute bash resources tools zeppelin sh port 1234 after installation refresh the jupyter website and the zeppelin tool will be available under open tool zeppelin other tools might only be available within the desktop vnc e g atom or pycharm or do not provide any ui e g starspace docker client details as an alternative to extending the workspace at runtime you can also customize the workspace docker image to create your own flavor as explained in the faq faq section br faq details summary b how to customize the workspace image create your own flavor b click to expand summary the workspace can be extended in many ways at runtime 
as explained here extensibility however if you like to customize the workspace image with your own software or configuration you can do that via a dockerfile as shown below dockerfile extend from any of the workspace versions flavors from mltooling ml workspace 0 13 2 run your customizations e g run install r runtime r kernel and r studio web server from provided install scripts bin bash resources path tools r runtime sh install bin bash resources path tools r studio server sh install cleanup layer removes unnecessary cache files clean layer sh finally use docker build https docs docker com engine reference commandline build to build your customized docker image for a more comprehensive dockerfile example take a look at the dockerfile of the r flavor https github com ml tooling ml workspace blob main r flavor dockerfile details details summary b how to update a running workspace container b click to expand summary to update a running workspace instance to a more recent version the running docker container needs to be replaced with a new container based on the updated workspace image all data within the workspace that is not persisted to a mounted volume will be lost during this update process as mentioned in the persist data persist data section a volume is expected to be mounted into the workspace folder all tools within the workspace are configured to make use of the workspace folder as the root directory for all source code and data artifacts during an update data within other directories will be removed including installed updated libraries or certain machine configurations we have integrated a backup and restore feature config backup enabled for various selected configuration files folders such as the user s jupyter vs code configuration gitconfig and ssh details summary update example click to expand summary if the workspace is deployed via docker kubernetes will have a different update process you need to remove the existing container via docker rm and start
a new one via docker run with the newer workspace image make sure to use the same configuration volume name and port for example a workspace image version 0 8 7 was started with this command docker run d p 8080 8080 name ml workspace v path on host workspace env authenticate via jupyter mytoken restart always mltooling ml workspace 0 8 7 and needs to be updated to version 0 9 1 you need to 1 stop and remove the running workspace container docker stop ml workspace docker rm ml workspace 2 start a new workspace container with the newer image and same configuration docker run d p 8080 8080 name ml workspace v path on host workspace env authenticate via jupyter mytoken restart always mltooling ml workspace 0 9 1 details details details summary b how to configure the vnc server b click to expand summary if you want to directly connect to the workspace via a vnc client not using the novnc webapp desktop gui you might be interested in changing certain vnc server configurations to configure the vnc server you can provide overwrite the following environment variables at container start via docker run option env table tr th variable th th description th th default th tr tr td vnc pw td td password of vnc connection this password only needs to be secure if the vnc server is directly exposed if it is used via novnc it is already protected based on the configured authentication mechanism td td vncpassword td tr tr td vnc resolution td td default desktop resolution of vnc connection when using novnc the resolution will be dynamically adapted to the window size td td 1600x900 td tr tr td vnc col depth td td default color depth of vnc connection td td 24 td tr table details details summary b how to use a non root user within the workspace b click to expand summary unfortunately we currently do not support using a non root user within the workspace we plan to provide this capability and already started with some refactoring to allow this configuration however this still requires a 
lot more work refactoring and testing from our side using root user or users with sudo permission within containers is generally not recommended since in case of system kernel vulnerabilities a user might be able to break out of the container and be able to access the host system since it is not very common to have such problematic kernel vulnerabilities the risk of a severe attack is quite minimal as explained in the official docker documentation https docs docker com engine security security linux kernel capabilities containers even with root users are generally quite secure in preventing a breakout to the host and compared to many other container use cases we actually want to provide the flexibility to the user to have control and system level installation permissions within the workspace container details details summary b how to create and use a virtual environment b click to expand summary the workspace comes preinstalled with various common tools to create isolated python environments virtual environments the following sections provide a quick intro on how to use these tools within the workspace you can find information on when to use which tool here https stackoverflow com a 41573588 please refer to the documentation of the given tool for additional usage information venv recommended to create a virtual environment via venv https docs python org 3 tutorial venv html execute the following commands

```bash
# create environment in the working directory
python -m venv my-venv
# activate environment in shell
source my-venv/bin/activate
# optional: create jupyter kernel for this environment
pip install ipykernel
python -m ipykernel install --user --name=my-venv --display-name="my-venv ($(python --version))"
# optional: close environment session
deactivate
```

pipenv recommended to create a virtual environment via pipenv https pipenv pypa io en latest execute the following commands

```bash
# create environment in the working directory
pipenv install
# activate environment session in shell
pipenv shell
# optional: create jupyter kernel for this environment
pipenv install ipykernel
python -m ipykernel install --user --name=my-pipenv --display-name="my-pipenv ($(python --version))"
# optional: close environment session
exit
```

virtualenv to create a virtual environment via virtualenv https virtualenv pypa io en latest execute the following commands

```bash
# create environment in the working directory
virtualenv my-virtualenv
# activate environment session in shell
source my-virtualenv/bin/activate
# optional: create jupyter kernel for this environment
pip install ipykernel
python -m ipykernel install --user --name=my-virtualenv --display-name="my-virtualenv ($(python --version))"
# optional: close environment session
deactivate
```

conda to create a virtual environment via conda https docs conda io projects conda en latest user guide tasks manage environments html execute the following commands

```bash
# create environment globally
conda create -n my-conda-env
# activate environment session in shell
conda activate my-conda-env
# optional: create jupyter kernel for this environment
python -m ipykernel install --user --name=my-conda-env --display-name="my-conda-env ($(python --version))"
# optional: close environment session
conda deactivate
```

tip shell commands in jupyter notebooks if you install and use a virtual environment via a dedicated jupyter kernel and use shell commands within jupyter e g pip install matplotlib the wrong python pip version will be used to use the python pip version of the selected kernel do the following instead

```python
import sys
!{sys.executable} -m pip install matplotlib
```

details details summary b how to install a different python version b click to expand summary the workspace provides three easy options to install different python versions alongside the main python instance pyenv https github com pyenv pyenv pipenv https pipenv pypa io en latest cli recommended conda https docs conda io pipenv recommended to install a different python version e g 3 7 8 within the workspace via pipenv https pipenv pypa io en latest cli execute the following commands

```bash
# install python version
pipenv install --python 3.7.8
# activate environment session in shell
pipenv shell
# check python installation
python --version
# optional: create jupyter kernel for this environment
pipenv install ipykernel
python -m ipykernel install --user --name=my-pipenv --display-name="my-pipenv ($(python --version))"
# optional: close environment session
exit
```

pyenv to install a different python version e g 3 7 8 within the workspace via pyenv https github com pyenv pyenv execute the following commands

```bash
# install python version
pyenv install 3.7.8
# make globally accessible
pyenv global 3.7.8
# activate python version in shell
pyenv shell 3.7.8
# check python installation
python3.7 --version
# optional: create jupyter kernel for this python version
python3.7 -m pip install ipykernel
python3.7 -m ipykernel install --user --name=my-pyenv-3.7.8 --display-name="my-pyenv (Python 3.7.8)"
```

conda to install a different python version e g 3 7 8 within the workspace via conda https docs conda io execute the following commands

```bash
# create environment with python version
conda create -n my-conda-3.7 python=3.7.8
# activate environment session in shell
conda activate my-conda-3.7
# check python installation
python --version
# optional: create jupyter kernel for this python version
pip install ipykernel
python -m ipykernel install --user --name=my-conda-3.7 --display-name="my-conda ($(python --version))"
# optional: close environment session
conda deactivate
```

tip shell commands in jupyter notebooks if you install and use another python version via a dedicated jupyter kernel and use shell commands within jupyter e g pip install matplotlib the wrong python pip version will be used to use the python pip version of the selected kernel do the following instead

```python
import sys
!{sys.executable} -m pip install matplotlib
```

details details summary b can i publish any other than the default port to access a tool inside the container b click to expand summary you can do this but please be aware that this port is b not b
protected by the workspace s authentication mechanism for security reasons we therefore highly recommend using the a href access ports access ports a functionality of the workspace details details summary b system and tool translations b click to expand summary if you want to configure another language than english in your workspace and some tools are not translated properly have a look at this issue https github com ml tooling ml workspace issues 70 issuecomment 841863145 try to comment out the exclude translations line in /etc/dpkg/dpkg.cfg.d/excludes and re-install or re-configure the package details br known issues details summary b too small shared memory might crash tools or scripts b click to expand summary certain desktop tools e g recent versions of firefox https github com jlesage docker firefox increasing shared memory size or libraries e g pytorch see issues 1 https github com pytorch pytorch issues 2244 2 https github com pytorch pytorch issues 1355 might crash if the shared memory size /dev/shm is too small the default shared memory size of docker is 64mb which might not be enough for a few tools you can provide a higher shared memory size via the --shm-size docker run option

```bash
docker run --shm-size=2g mltooling/ml-workspace:0.13.2
```

details details summary b multiprocessing code is unexpectedly slow b click to expand summary in general the performance of running code within docker is nearly identical https stackoverflow com questions 21889053 what is the runtime performance cost of a docker container compared to running it directly on the machine however in case you have limited the container s cpu quota as explained in this section limit memory cpu the container can still see the full count of cpu cores available on the machine and there is no technical way to prevent this many libraries and tools will use the full cpu count e g via os.cpu_count() to set the number of threads used for multiprocessing threading this might cause the program to start
more threads processes than it can efficiently handle with the available cpu quota which can tremendously slow down the overall performance therefore it is important to set the available cpu count or the maximum number of threads explicitly to the configured cpu quota the workspace provides capabilities to detect the number of available cpus automatically which are used to configure a variety of common libraries via environment variables such as omp num threads or mkl num threads it is also possible to explicitly set the number of available cpus at container startup via the max num threads environment variable see configuration section https github com ml tooling ml workspace configuration options the same environment variable can also be used to get the number of available cpus at runtime even though the automatic configuration capabilities of the workspace will fix a variety of inefficiencies we still recommend configuring the number of available cpus with all libraries explicitly for example

```python
import os

max_num_threads = int(os.getenv("MAX_NUM_THREADS"))

# set in pytorch
import torch
torch.set_num_threads(max_num_threads)

# set in tensorflow
import tensorflow as tf
config = tf.ConfigProto(
    device_count={"CPU": max_num_threads},
    inter_op_parallelism_threads=max_num_threads,
    intra_op_parallelism_threads=max_num_threads,
)
tf_session = tf.Session(config=config)

# set session for keras
import keras.backend as K
K.set_session(tf_session)

# set in sklearn estimator
from sklearn.linear_model import LogisticRegression
LogisticRegression(n_jobs=max_num_threads).fit(X, y)

# set for multiprocessing pool
from multiprocessing import Pool
with Pool(max_num_threads) as pool:
    results = pool.map(..., lst)
```

details details summary b nginx terminates with sigill core dumped error b click to expand summary if you encounter the following error within the container logs when starting the workspace it will most likely not be possible to run the workspace on your hardware exited nginx terminated by sigill core dumped not
expected the openresty nginx binary package used within the workspace requires a cpu with sse4 2 support see this issue https github com openresty openresty issues 267 issuecomment 309296900 unfortunately some older cpus do not have support for sse4 2 and therefore will not be able to run the workspace container on linux you can check if your cpu supports sse4 2 by looking at the flags section of cat /proc/cpuinfo if you encounter this problem feel free to notify us by commenting on the following issue 30 https github com ml tooling ml workspace issues 30 details br contribution pull requests are encouraged and always welcome read our contribution guidelines https github com ml tooling ml workspace tree main contributing md and check out help wanted https github com ml tooling ml workspace issues utf8 e2 9c 93 q is 3aopen is 3aissue label 3a help wanted sort 3areactions 2b1 desc issues submit github issues for any feature request and enhancement https github com ml tooling ml workspace issues new assignees labels feature template 02 feature request md title bugs https github com ml tooling ml workspace issues new assignees labels bug template 01 bug report md title or documentation https github com ml tooling ml workspace issues new assignees labels documentation template 03 documentation md title problems by participating in this project you agree to abide by its code of conduct https github com ml tooling ml workspace blob main github code of conduct md the development section development below contains information on how to build and test the project after you have implemented some changes development requirements docker https docs docker com get docker and act https github com nektos act installation are required to be installed on your machine to execute the build process to simplify the process of building this project from scratch we provide build scripts based on universal build https github com ml tooling universal build that run all necessary
steps build test and release within a containerized environment to build and test your changes execute the following command in the project root folder

```bash
act -b -j build
```

under the hood it uses the build py files in this repo based on the universal build library https github com ml tooling universal build so if you want to build it locally you can also execute this command in the project root folder to build the docker container

```bash
python build.py --make
```

for additional script options

```bash
python build.py --help
```

refer to our contribution guides https github com ml tooling ml workspace blob main contributing md development instructions for more detailed information on our build scripts and development process licensed apache 2 0 created and maintained with ❤️ by developers from berlin
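to make the cpu quota advice from the known issues section above concrete here is a minimal self contained sketch it caps a thread pool at the MAX_NUM_THREADS hint the workspace sets at container startup the fallback to os.cpu_count() when the variable is unset is an assumption made here so the snippet also runs outside the workspace

```python
import os
from multiprocessing.pool import ThreadPool

# MAX_NUM_THREADS is the hint the workspace sets at container startup;
# the fallback to os.cpu_count() is an assumption for running outside it.
max_num_threads = int(os.getenv("MAX_NUM_THREADS", os.cpu_count() or 1))

def work(x):
    return x * x

# cap worker threads at the configured cpu quota instead of the host core count
with ThreadPool(max_num_threads) as pool:
    results = pool.map(work, range(8))

print(results)
```

when a library exposes its own thread knob e g torch.set_num_threads the same max_num_threads value can be passed there as shown in the readme section above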
Learning-Path-Get-Started-with-Natural-Language-Processing-Using-Python-Spark-and-Scala | learning path get started with natural language processing using python spark and scala examples processing natural language text with python by jonathan mugan to download example files from this video course click here http examples oreilly com 0636920061007 text mining natural language understanding at scale by david talby and claudiu branzan to download example files from this video course click here https github com atigeo nlp demo building pipelines for natural language understanding with spark by alex thomas and david talby to download example files from this video course click here https github com alexander n thomas nlp spark annotate | ai |
react-nlp | react nlp visualization of natural language processing for react installation

```sh
npm install react-nlp
```

usage

```jsx
import React from 'react';
import { render } from 'react-dom';
import { View } from 'react-nlp';

render(
  <View types={types} data={data} colors={colors} />,
  document.getElementById('root')
);
```

properties of view component property type optional default description types array no annotation type list data array no text and annotation data relations array yes relation list colors object yes color map for annotation labels linum boolean yes true if true show line numbers linebreak boolean yes true if true enable line break keepwhitespaces boolean yes false if true show multiple consecutive whitespace theme object yes override the defaulttheme src theme js l1 l13 types the types is an array of annotation type to show the annotations are shown in the order of types example wiki ne pos data the data is an array of objects each consisting of a text and annotation data property type optional description data text string no text data anno array no annotation data for the text data anno 0 string no annotation type data anno 1 number no start index of the text to annotate data anno 2 number no end index of the text to annotate data anno 3 string no annotation label example text darth vador also known as anakin skywalker is a fictional character anno wiki 0 10 darth vador wiki 27 41 darth vador ne 0 10 person ne 27 41 person pos 0 4 nnp pos 6 10 nnp pos 11 11 text he is originally a good person but anno pos 0 1 prp pos 3 4 vbz pos 6 15 rb pos 17 17 dt relations the relations is a list of relations between annotations property type optional description relations 0 string no type of relation br tt tail tail relation br ht head tail relation br th tail head relation br hh head head relation relations 1 number no first sentence index relations 2 number no first span index relations 3 number no second sentence index relations 4 number no second span index relations 5 string no relation label example hh 0
0 0 1 rel label1 ht 0 0 0 2 rel label2 tt 0 0 1 0 rel label3 hh 0 0 1 7 rel label4 colors the colors is a map for colors for annotation labels example wiki darth vador gray ne person yellow pos 84b62b theme property type default description fontsize string number 14 text font size borderstyle number 1 0 none 1 full 2 simple border string solid 1px 9a9a9a css border format color string black text font color linumcolor string 9a9a9a linum color stripe boolean true enable disable stripe stripecolor array ffffff f2f2f2 stripe color linepadding string number 15px 5px line padding annotationlinepadding string number 2px 3px annotation line padding labelfontsize string number 0 6em label font size labelcolor string black label color labelpadding string 2px 3px label padding labelborder string solid 1px gray label border characterpadding string number 0 character left right padding relationcolor string black relation color relationlabelfontsize string number 12px relation label font size relationlabelpadding string number 2px 3px relation label padding relationlabelborder string solid 1px gray relation label border relationlabelcolor string black relation label color relationlabelbgcolor string lightblue relation label background color relationlabelborderradius string number 0px relation label border radius run example

```sh
npm install
npm install react react-dom
npm run build
npm run example
```

then access http localhost 8081 http localhost 8081 in your browser if server mode is enabled access http localhost 8081 server yes http localhost 8080 server yes run demo app

```sh
npm install
npm install react react-dom
npm run build
npm run demo
```

then access http localhost 8081 http localhost 8081 in your browser dependencies react react dom css element queries color license mit | react natural-language-processing visualization annotations | ai |
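the anno tuples described above are type start end label with inclusive character indices which are easy to get wrong by hand below is a small plain javascript sketch of assembling one entry of the data prop for the view component the annotate helper and the sample sentence are illustrative not part of react nlp

```javascript
// Build one entry of the `data` prop for react-nlp's <View/>.
// An anno tuple is [type, startIndex, endIndex, label] with INCLUSIVE indices.
// `annotate` is an illustrative helper, not part of the library.
function annotate(text, type, phrase, label) {
  const start = text.indexOf(phrase);
  if (start === -1) throw new Error('"' + phrase + '" not found in text');
  // inclusive end index, hence the -1
  return [type, start, start + phrase.length - 1, label];
}

const text = 'Darth Vador is a fictional character';
const data = [
  {
    text: text,
    anno: [
      annotate(text, 'wiki', 'Darth Vador', 'Darth Vador'),
      annotate(text, 'ne', 'Darth Vador', 'person'),
    ],
  },
];

console.log(JSON.stringify(data[0].anno));
```

the resulting array can then be passed to the component together with a matching types list e g wiki and ne as in the usage snippet above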
beginner-friendly-haskell-for-web-development | beginner friendly real world haskell web development why haskell learning haskell changed the way i think about programming object oriented programming taught me to think in terms of objects with mutable fields and methods for their interactions with each other in haskell i learned to think in terms of data and functions for data transformation i found it more straightforward and it allows me to write less code haskell is a statically typed pure functional language while functional programming allows you to reuse a lot of code for data transformation the haskell compiler also checks the types you defined for the data and the data transformation functions to catch mistakes before the code runs why this book i think the best way to learn something is to use it in practice there are many books about haskell but none that i found explained how to use it for real world web development even after learning the language basics it took a lot of effort to figure out how to solve the unique challenges that arise in web development for this book i ve designed a comprehensive real world scenario to teach haskell and how to build a web app with it haskell provides many paths to solve different challenges but for this book i ve chosen the most beginner friendly methods choosing to go beginner friendly is a tradeoff sometimes the most efficient solution the one that involves the least code is not the easiest one but a beginner friendly approach is important for building things in a collaborative team setting most haskell books teach sophisticated features like monad transformers and advanced typeclasses which are both great abstractions that allow you to reuse lots of code however they re intimidating for beginners to understand and master in practice and aren t required to build a real world web app so i decided to build a web app without using these advanced abstractions only basic language building blocks to keep the code
accessible to beginners i think beginners will better understand the point of those advanced abstractions by first building something without them and then refactoring with them later on about you this book is for people with basic knowledge of web programming you should know how http works and how sql query works you don t need any knowledge of haskell this book teaches the language basics and uses that foundation to build a real world web app book structure my goal is to show how to build a restful api for getting creating updating and deleting users from a postgresql database chapter 1 getting started the first chapter shows how to set up the development environment and start a haskell project chapter 2 functions chapters 2 through 4 cover the language basics and provide data structures with examples for modeling the user data chapter 3 recursion pattern matching higher order functions chapter 4 typeclass chapter 5 configure chapter 5 introduces io and how to parse an app s configuration chapter 6 json chapter 6 introduces how to encode data type into json and how to decode json into our type chapter 7 strings chapter 7 introduces the difference between the various string types and their use cases chapter 8 http client chapter 8 demonstrates how to build an http client for sending http calls and parsing the response chapter 9 database chapter 9 explains how to connect to a postgresql database how to migrate a database how to make queries to create retrieve and delete a user chapter 10 error handling chapter 10 introduces exceptions how to handle exceptions from the database queries and how to turn them into specific errors chapter 11 http service chapter 11 shows how to set up an http service and how to parse and validate the inputs we will build endpoints for making database queries chapter 12 logging chapter 12 introduces loggings how to log http requests and responses and how to log exceptions chapter 13 middlware chapter 13 explains how to create a 
middleware for adding a unique id to each request and tie all the logs about a request and its response with that id chapter 14 testing chapter 14 introduces how to write tests for the functions for database queries and how to write integration tests for http endpoints chapter 15 deployment chapter 15 shows how to build a docker container for the web app and deploy it to heroku how to get this book feel free to contact me if you are interested in this book zhangchiqing beginnerfriendlyhaskell gmail com | haskell web-development real-world-project beginner-friendly book | front_end |
FreeRTOS-STM32F4-Tutorial | stm32f4 freertos cubemx a demo project of freertos with cubemx running on a stm32f4 discovery board in this project stm32f4 interfacing with gyroscope and usb structure of this project f4 gyro gyroscope with usb vcom usb cdc f4 gyro rtos converting gyroscope with usb vcom example to rtos based steps to run this example prerequisite 1 a pc running windows 2 a stm32f4discovery board 3 keil uvision5 4 usb cable other tools install the toolchain the keil uvision5 for arm can be downloaded from its website http www2 keil com mdk5 it s available for only windows for personal and education purpose you can use mdk lite version with code size restricted to 32 kbyte install st link utility windows grab the official utility from st website http www st com web catalog tools fm146 cl1984 sc724 ss1677 pf251168 note that you should install the usb driver before install the st util install stm32 virtual com port driver windows grab the official driver from st website https www st com en development tools stsw stm32102 html compile this example open uvprojx on mdk arm folder and press f7 button debug connect your stm32f4discovery with a usb cable press ctrl f5 set breakpoint triggered at main function and enjoy | os |
Web-Dev-Workshop | personal site made at the codefy web development workshop | front_end |
Node.js-Web-Development-Fourth-Edition | node js web development fourth edition this is the code repository for node js web development fourth edition https www packtpub com web development nodejs web development fourth edition utm source github utm medium repository utm campaign 9781788626859 published by packt https www packtpub com utm source github it contains all the supporting project files necessary to work through the book from start to finish about the book node js is a server side javascript platform using an event driven non blocking i o model allowing users to build fast and scalable transaction intensive applications running in real time it plays a significant role in the software development world and liberates javascript from the web browser with node js we can reuse our javascript skills for general software development on a large range of systems it runs atop the ultra fast javascript engine at the heart of google s chrome browser v8 and adds a fast and robust library of asynchronous network i o modules the primary focus of node js is developing high performance highly scalable web applications and it also sees widespread use in other areas electron the node js based wrapper around the chrome engine is the basis for popular desktop applications such as the atom and visual studio code editors gitkraken postman etcher and the desktop slack client node js is popular for developing internet of things devices and sees tremendous adoption in microservice development and for building tools for frontend web developers and more node js as a lightweight high performance platform fits microservice development like a glove instructions and navigation all of the code is organized into folders each folder starts with a number followed by the application name for example chapter02 the code will look like the following

```js
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(8124, '127.0.0.1');
console.log('Server running at http://127.0.0.1:8124/');
```

we assume that you have some knowledge of javascript and possibly have experience with server side code development and that you are looking for a different way of developing server side code the basic requirement is to install node js and have a programmer oriented text editor the editor need not be anything fancy vi vim will even do in a pinch we will show you how to install everything that s needed it s all open source software that can be easily downloaded from websites the most important tool is the one between your ears some chapters require database engines such as mysql and mongodb although node js is a cross platform software development platform some third party modules are written in c c++ and must be compiled during installation to do so native code development tools such as c c++ compilers are required and python is required to run the tool chain the details are covered in chapter 2 setting up node js microsoft is involved with the node js project and helps ensure developer productivity with node js on windows related products node js web development fourth edition https amzn to 2yp9npd restful web api design with node js 10 third edition https amzn to 30cb7uh hands on application development with node js video https www packtpub com web development hands application development nodejs video utm source github utm medium repository utm campaign 9781789135244 | front_end |
pythainlp | pythainlp thai natural language processing in python pythainlp is a python package for text processing and linguistic analysis similar to nltk https www nltk org with a focus on the thai language a thai version of this readme is available at readme th md https github com pythainlp pythainlp blob dev readme th md quick start guide on google colab https colab research google com github pythainlp tutorials blob master source notebooks pythainlp get started ipynb news you can now contact or ask any questions of the pythainlp team on matrix https matrix to thainlp matrix org

| version | description | status |
|---|---|---|
| 4 0 https github com pythainlp pythainlp releases | stable | change log https github com pythainlp pythainlp issues 714 |
| dev https github com pythainlp pythainlp tree dev | release candidate for 4 1 | change log https github com pythainlp pythainlp issues 788 |

getting started pythainlp 2 requires python 3 6 python 2 7 users can use pythainlp 1 6 see 2 0 change log https github com pythainlp pythainlp issues 118 upgrading from 1 7 https pythainlp github io docs 2 0 notes pythainlp 1 7 2 0 html upgrading thainer from 1 7 https github com pythainlp pythainlp wiki upgrade thainer from pythainlp 1 7 to pythainlp 2 0 pythainlp get started notebook https pythainlp github io tutorials notebooks pythainlp get started html api document https pythainlp github io docs tutorials https pythainlp github io tutorials official website https pythainlp github io pypi https pypi org project pythainlp facebook page https www facebook com pythainlp who uses pythainlp https github com pythainlp pythainlp blob dev inthewild md model cards https github com pythainlp pythainlp wiki model cards for technical details caveats and ethical considerations of the models developed and used in pythainlp capabilities pythainlp provides standard nlp functions for thai for example part of speech tagging linguistic unit segmentation syllable word or sentence some of these functions are also available via the command line interface details summary list of features summary convenient character and word classes like thai consonants pythainlp thai consonants vowels pythainlp thai vowels digits pythainlp thai digits and stop words pythainlp corpus thai stopwords comparable to constants like string letters string digits and string punctuation thai linguistic unit segmentation tokenization including sentence sent tokenize word word tokenize
and subword segmentations based on thai character cluster subword tokenize thai part of speech tagging pos tag thai spelling suggestion and correction spell and correct thai transliteration transliterate thai soundex soundex with three engines lk82 udom83 metasound thai collation sorted by dictionary order collate read out number to thai words bahttext num to thaiword thai datetime formatting thai strftime thai english keyboard misswitched fix eng to thai thai to eng command line interface for basic functions like tokenization and pos tagging run thainlp in your shell details installation

```sh
pip install --upgrade pythainlp
```

this will install the latest stable release of pythainlp install different releases

```sh
# stable release
pip install --upgrade pythainlp
# pre-release (nearly ready)
pip install --upgrade --pre pythainlp
# development (likely to break things)
pip install https://github.com/PyThaiNLP/pythainlp/archive/dev.zip
```

installation options some functionalities like thai wordnet may require extra packages to install those requirements specify a set of extras immediately after pythainlp

```sh
pip install pythainlp[extra1,extra2]
```

details summary list of possible code extras code summary full install everything attacut to support attacut a fast and accurate tokenizer benchmarks for word tokenization benchmarking tokenization benchmark md icu for icu international components for unicode support in transliteration and tokenization ipa for ipa international phonetic alphabet support in transliteration ml to support ulmfit models for classification thai2fit for thai word vector thai2rom for machine learnt romanization wordnet for thai wordnet api details for dependency details look at the extras variable in setup py https github com pythainlp pythainlp blob dev setup py data directory some additional data like word lists and language models may be automatically downloaded during runtime pythainlp caches these data under the directory ~/pythainlp-data by default the data directory can be changed
by specifying the environment variable pythainlp data dir see the data catalog db json at https github com pythainlp pythainlp corpus command line interface some of pythainlp functionalities can be used via command line with the thainlp command for example to display a catalog of datasets sh thainlp data catalog to show how to use sh thainlp help licenses license pythainlp source codes and notebooks apache software license 2 0 https github com pythainlp pythainlp blob dev license corpora datasets and documentations created by pythainlp creative commons zero 1 0 universal public domain dedication license cc0 https creativecommons org publicdomain zero 1 0 language models created by pythainlp creative commons attribution 4 0 international public license cc by https creativecommons org licenses by 4 0 other corpora and models that may be included in pythainlp see corpus license https github com pythainlp pythainlp blob dev pythainlp corpus corpus license md contribute to pythainlp please fork and create a pull request for style guides and other information including references to algorithms we use please refer to our contributing https github com pythainlp pythainlp blob dev contributing md page who uses pythainlp you can read inthewild md https github com pythainlp pythainlp blob dev inthewild md citations if you use pythainlp in your project or publication please cite the library as follows wannaphong phatthiyaphaibun korakot chaovavanich charin polpanumas arthit suriyawongkul lalita lowphansirikul pattarawat chormai 2016 jun 27 pythainlp thai natural language processing in python zenodo http doi org 10 5281 zenodo 3519354 or by bibtex entry bib misc pythainlp author wannaphong phatthiyaphaibun and korakot chaovavanich and charin polpanumas and arthit suriyawongkul and lalita lowphansirikul and pattarawat chormai title pythainlp thai natural language processing in python month jun year 2016 doi 10 5281 zenodo 3519354 publisher zenodo url http doi org 10 5281 zenodo 
3519354 sponsors logo description vistec depa thailand artificial intelligence research institute https airesearch in th assets img logo airesearch logo svg https airesearch in th since 2019 our contributors korakot chaovavanich and lalita lowphansirikul have been supported by vistec depa thailand artificial intelligence research institute https airesearch in th macstadium https i imgur com rky1djx png https www macstadium com we get support of free mac mini m1 from macstadium https www macstadium com for running ci builds div align center made with pythainlp team we build thai nlp div div align center strong we have only one official repository at https github com pythainlp pythainlp and another mirror at https gitlab com pythainlp pythainlp strong div div align center strong beware of malware if you use codes from mirrors other than the official two on github and gitlab strong div | python thai-nlp nlp-library thai-language natural-language-processing thai-nlp-library thai-soundex soundex word-segmentation thai hacktoberfest hacktoberfest-accepted hacktoberfest2023 | ai |
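the dictionary-driven segmentation that functions like word tokenize perform can be illustrated with a toy greedy longest-match tokenizer; this is a simplified sketch, not pythainlp's actual engine, and the lexicon and input string below are invented stand-ins for a real thai dictionary:

```python
def longest_match_tokenize(text, lexicon):
    """Greedy longest-match segmentation of unspaced text."""
    tokens, i = [], 0
    while i < len(text):
        match = None
        # try the longest candidate substring first
        for j in range(len(text), i, -1):
            if text[i:j] in lexicon:
                match = text[i:j]
                break
        if match is None:       # unknown character: emit it on its own
            match = text[i]
        tokens.append(match)
        i += len(match)
    return tokens

lexicon = {"the", "cat", "sat", "on", "mat"}
print(longest_match_tokenize("thecatsatonthemat", lexicon))
# → ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

real engines add dictionaries, machine-learned models, and ambiguity resolution on top of this basic idea, which is why pythainlp exposes several interchangeable tokenization engines.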
strawberries | strawberries computer vision on strawberries slides keynote computer vision slides key powerpoint computer vision slides pptx or pdf computer vision slides pdf jupyter notebooks working strawberry working ipynb and final with examples strawberry complete with examples ipynb demos hsv video demo hsv video py and strawberry video demo strawberry video py strawberries found jpg | computer-vision computer-vision-slides strawberries video-processing jupyter-notebook | ai |
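the hsv-based detection used in the demos can be sketched with python's standard colorsys module; the hue, saturation, and value thresholds below are illustrative guesses for "strawberry red", not the values used in the demo scripts:

```python
import colorsys

def is_strawberry_red(r, g, b):
    # colorsys takes and returns floats in [0, 1]
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    # red hues wrap around 0, so accept hues near either end of the circle,
    # restricted to saturated, reasonably bright pixels
    return (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.3

print(is_strawberry_red(200, 30, 40))   # ripe-strawberry red → True
print(is_strawberry_red(60, 180, 75))   # leaf green → False
```

a video pipeline would apply a mask like this per pixel (typically via vectorized opencv calls rather than a python loop) and then find connected regions of matching pixels.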
Uber-end-to-end-data-engineering | uber data analytics end to end data engineering project this project focuses on performing end to end data engineering for uber data analytics the goal is to extract insights and visualize key metrics from the data using various technologies the project involves data preparation modeling transformation storage analytics and dashboard creation architecture architecture https github com swaraj patil 18 uber end to end data engineering assets 114085839 7746c2f5 084f 4856 b7de dca3c3631151 technologies used lucidchart used for data modeling and designing the data model gcp cloud storage created a bucket and uploaded the dataset files gcp compute engine created an instance to execute the project mage open source data pipeline tool used for etl processes gcp bigquery used for data analytics and querying looker studio created interactive dashboards for data visualization project steps data preparation combined and cleaned the uber dataset files to create a unified dataset datasets used 1 https d37ci6vzurychx cloudfront net trip data yellow tripdata 2023 02 parquet 2 https d37ci6vzurychx cloudfront net misc taxi zone lookup csv data modeling designed a comprehensive data model using lucidchart including a fact table and dimension tables uber data model https github com swaraj patil 18 uber end to end data engineering assets 114085839 d3c2c909 183c 4228 8036 53f1baa33c64 data transformation developed python scripts for data transformation and generating the necessary fact and dimension tables cloud setup created a new project on google cloud platform gcp and utilized gcp cloud storage to store the dataset securely compute engine provisioning created a gcp compute engine instance and installed the required software pip python mage pandas google cloud bigquery data pipeline utilized mage to create an end to end data pipeline including data loading transformation and exporting to gcp bigquery mage pipeline https github com swaraj patil 18 
uber end to end data engineering assets 114085839 fd690857 b944 4f31 9fcb b880e27937d3 data analytics ran various queries in gcp bigquery to extract insights such as average fare amount by hour of the day tip amount by payment type top pickup locations by the number of trips and total trips by passenger count analytics table creation utilized sql in gcp bigquery to join the fact and dimension tables and create an analytics table dashboard creation connected gcp bigquery with looker studio to create an interactive and insightful dashboard dashboard features designed a dashboard with dynamic filters summary metrics total revenue record count and a bubble map visualization of pickup locations dashboard https github com swaraj patil 18 uber end to end data engineering assets 114085839 aa5f3951 edba 46c8 8ad4 b8add480e692 total revenue the total revenue generated from uber taxi rides in new york in february 2023 is 2 9 million indicating the financial performance for that specific month record count the dataset contains 112 700 records representing the total number of uber taxi rides recorded in new york during february 2023 average trip distance the average trip distance for uber rides in new york during february 2023 is 3 1 units giving an idea of the typical distance covered in rides during that period average fare amount the average fare amount per trip for uber rides in new york during february 2023 is 17 3 providing insights into the average cost of rides during that month average tip amount the average tip amount per trip for uber rides in new york during february 2023 is 3 3 indicating the average level of tipping by passengers in that specific month bubble map pickup locations the map visualizes the pickup locations of uber rides in new york during february 2023 highlighting areas with higher pickup activity tooltip the tooltip provides the vendor id giving additional information about the specific vendor associated with each pickup location color dimension the 
color represents the rate code name allowing for differentiation of different rate codes observed at each pickup location size dimension the size of the bubbles represents the tip amount indicating the range of tip amounts received at each pickup location filtering the summary and map on page 1 can be filtered by vendor id payment type rate code and trip distance to analyze specific segments or criteria within the dataset dashboard https github com swaraj patil 18 uber end to end data engineering assets 114085839 5e130e96 acf9 4f00 9296 ff8584eccba7 bar chart average total amount by rate code compares the average total amount for different rate codes providing insights into the pricing structure and revenue distribution across rate codes bar chart total amount and passenger count by payment type presents a comparison of the total amount and passenger count for different payment types helping understand the revenue and customer distribution by payment method bar chart tip amount by pickup zone displays the tip amount for different pickup zones highlighting zones where higher or lower tips are observed line chart passenger count by pickup zone illustrates the passenger count for each pickup zone providing insights into passenger demand trends across different zones these insights offer a focused view of the uber taxi data specifically for february 2023 in new york allowing for analysis and understanding of key metrics trends and patterns during that particular month conclusion this project demonstrates proficiency in data engineering utilizing a variety of technologies and tools the end to end process includes data preparation modeling transformation storage in gcp cloud storage analytics in bigquery and visualization using looker studio the project successfully delivers an interactive dashboard that allows users to explore and gain insights from the uber dataset | cloud |
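one of the bigquery analyses above, average fare amount by hour of the day, can be prototyped in plain python before writing the sql; the record layout and the numbers below are hypothetical stand-ins for the fact table's columns:

```python
from collections import defaultdict

# hypothetical rows: (pickup_hour, fare_amount)
trips = [(8, 12.5), (8, 17.5), (17, 25.0), (17, 15.0), (23, 9.0)]

totals = defaultdict(lambda: [0.0, 0])   # hour -> [fare sum, trip count]
for hour, fare in trips:
    totals[hour][0] += fare
    totals[hour][1] += 1

avg_fare_by_hour = {h: s / n for h, (s, n) in sorted(totals.items())}
print(avg_fare_by_hour)   # → {8: 15.0, 17: 20.0, 23: 9.0}
```

the equivalent warehouse query groups by an hour expression extracted from the pickup timestamp and averages the fare column; prototyping the aggregation locally makes it easy to sanity-check the sql results.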
|
VimOrganizer | vimorganizer vimorganizer is partly a clone of emacs org mode and partly a front end to org mode itself do org in vim this project is abandoned sorry this project is definitely abandoned however you should be able to get all the info you need by reading the info txt and install txt files and there s way more information in the vimorg txt plugin help file you ll find in the doc folder i myself now use org mode in emacs using evil which is an excellent vim clone within emacs not quite vim but feels close enough and emacs has its advantages cheers herbert | front_end |
|
timestamp-microservice | timestamp microservice timestamp microservice is an fcc backend development and apis project challenge https www freecodecamp org learn back end development and apis back end development and apis projects timestamp microservice this app is written using node express typescript and jest requirements node 14 18 1 lts npm 6 x installation bash npm install how to run locally to run the app in development mode locally run the command below in the root directory bash npm run dev an express server will spin up on http localhost 5000 usage there are two endpoints 1 api v1 timestamps the response returns the current unix and utc date and time 2 api v1 timestamps date this response returns the unix and utc date and time based on the user s input how to run tests run all tests both unit and integration bash npm run test run unit tests bash npm run test unit run integration tests bash npm run test integration | server |
|
android-tutorials | android tutorials tutorials and sample code for android mobile development requirements android studio https developer android com studio index html crafted for api 21 kitkat and above recommended genymotion https www genymotion com for android emulator tutorials 1 android ui linearlayout https docs google com document d 1ghgpiqlld9bsolyc6ezhbpfyukaerlax8jqj8xosnei edit usp sharing 2 android ui relativelayout 3 android ui listview 4 android ui gridview 5 android ui recyclerview simple 6 android ui recyclerview custom 7 android intents implicit 8 android intents explicit 9 android storage sqlite database https docs google com document d 1mniqm2whhr6folwc05vcok8k paabq9vhmbilnibtzq edit usp sharing 9 android sensors gestures https docs google com document d 1o1 xrvs4hfw oskwvcaumd4z1xor4zjm2fpx2qggxii edit usp sharing | android android-development | front_end |
deepgaze | updates update 22 01 2020 you may be interested in following my new youtube channel https www youtube com channel uc6axkvw2y b3ab esldk0 g for weekly videos about computer vision machine learning deep learning and robotics update 16 07 2019 stable version of deepgaze 2 0 is available on branch 2 0 update 20 03 2019 started the porting on python opencv 3 0 check the branch 2 0 for a preliminary version update 10 06 2017 the pdf of the article head pose estimation in the wild using convolutional neural networks and adaptive gradient methods is available for free download in the next 50 days using this special link https authors elsevier com a 1vbdc77nkonot update 04 06 2017 article head pose estimation in the wild using convolutional neural networks and adaptive gradient methods have been accepted for publication in pattern recogntion elsevier the deepgaze cnn head pose estimator module is based on this work update 31 05 2017 implementation of the new package saliency map py deepgaze saliency map py the package contains an implementation of the fasa http ivrl epfl ch research saliency fast saliency algorithm for saliency detection example examples ex fasa saliency map ex fasa saliency map images py wiki http www scholarpedia org article saliency map update 22 03 2017 fixed a bug in mask analysis py and almost completed a more robust version of the cnn head pose estimator what is deepgaze deepgaze is a library for human computer interaction people detection and tracking which uses convolutional neural networks cnns for face detection head pose estimation and classification the focus of attention of a person can be approximately estimated finding the head orientation this is particularly useful when the eyes are covered or when the user is too far from the camera to grab the eye region with a good resolution when the eye region is visible it is possible to estimate the gaze direction which is much more informative and can give a good indication of the foa 
deepgaze contains useful packages for head pose estimation perspective n point convolutional neural networks face detection haar cascade skin and color detection range detection backprojection histogram based classification histogram intersection motion detection frame differencing mog mog2 motion tracking particle filter saliency map fasa deepgaze is based on opencv and tensorflow some of the best libraries in computer vision and machine learning deepgaze is an open source project and any contribution is appreciated feel free to fork the repository and propose integrations this library is the result of a recent work if you use the library in academic work please cite the following paper patacchiola m cangelosi a 2017 head pose estimation in the wild using convolutional neural networks and adaptive gradient methods pattern recognition http dx doi org 10 1016 j patcog 2017 06 009 why should i use deepgaze because deepgaze makes your life easier the implementation of many algorithms such as face detectors pose estimators and object classificators can be painful deepgaze has been designed to implement those algorithms in a few lines of code deepgaze is helpful for both beginners and advanced users who want to save time all the code contained in deepgaze is optimised and it is based on state of the art algorithms what is a convolutional neural network a convolutional neural network cnn or convnet is a type of feed forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization of the animal visual cortex whose individual neurons are arranged in such a way that they respond to overlapping regions tiling the visual field convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing they have wide applications in image and video recognition recommender systems and natural language processing wiki https en wikipedia org wiki 
convolutional neural network p align center img src doc images figure cnn png width 750 p main contributors this is an updated list of the main contributors of the project we are looking for contributors if you want to contribute adding a new module or improving an existing one send an email to our team https www inf ed ac uk people staff massimiliano patacchiola html massimiliano patacchiola http mpatacchiola github io project leader and main contributor joel gooch https www linkedin com in joel gooch 001458132 ppe 1 head pose estimation ishit mehta https github com ishit cnn cascade face detection luca surace https github com lukeoverride haar cascade multi face detection hrishikesh kamath https github com kamathhrishi version 2 0 porting notebooks test scripts prerequisites the current version of deepgaze is based on python 2 7 a porting for python 3 0 has been scheduled for the next year to use the libray you have to install numpy link http www numpy org shell sudo pip install numpy opencv 2 x not compatible with opencv 3 x link http opencv org shell sudo apt get install libopencv dev python opencv tensorflow link https www tensorflow org shell sudo pip install tensorflow some examples may require additional libraries dlib link http dlib net installation attention this version is obsolete please check the branch 2 0 on this repository https github com mpatacchiola deepgaze tree 2 0 download the repository from here https github com mpatacchiola deepgaze archive master zip or clone it using git shell git clone https github com mpatacchiola deepgaze git to install the package you have to enter in the deepgaze folder and run the setup py script it may require root privileges shell cd deepgaze sudo python setup py install if you want to track all the installed files you can record the installation process in a text file using the record flag shell sudo python setup py install record record txt done now give a look to the examples below examples head pose estimation 
using the perspective n point algorithm in opencv code examples ex pnp head pose estimation webcam py video https www youtube com watch v osni18xmag4 head pose estimation in the wild using perspective n point and dlib face detector code examples ex dlib pnp head pose estimation video py video https www youtube com watch v xures0g9ars head pose estimation in images using convolutional neural networks code examples ex cnn head pose estimation images ex cnn head pose estimation images py p align center img src doc images ex cnn head pose estimation images png width 750 p color detection using the histogram backprojection algorithm blog https mpatacchiola github io blog 2016 12 01 playing the google chrome dinosaur game with your hand html code examples ex color detection image ex color detection image py p align center img src doc images ex color detection image png width 750 p skin detection using the hsv range color detector code examples ex skin detection images ex skin detection images py p align center img src doc images ex skin detection images png width 750 p face detection using the hsv range color detector code examples ex face center color detection ex face center color detection py p align center img src doc images ex face center color detection png width 750 p corner detection comparison of four algorithms on a video streaming code examples ex corner detection video ex corner detection py video https www youtube com watch v 2fhd98k 6ag p align center img src doc images ex corner detection png width 750 p motion detection and tracking using frame differencing on a video streaming code examples ex diff motion detection video ex diff motion detection py p align center img src doc images ex diff motion detection video png width 750 p motion detection and tracking comparison of three algorithms on a video streaming code examples ex motion detectors comparison video ex motion detectors comparison video py video https www youtube com watch v xmi2ke2huge p align 
center img src doc images ex motion detectors comparison video png width 750 p motion tracking with unstable measurements using particle filter code examples ex particle filter object tracking video ex particle filter object tracking video py video https www youtube com watch v ktxvbn5 kpe p align center img src doc images ex particle filtering object tracking video png width 750 p motion tracking with multiple backprojection for playing chrome s dinosaur game blog https mpatacchiola github io blog 2016 12 01 playing the google chrome dinosaur game with your hand html code examples ex multi backprojection hand tracking gaming ex multi backprojection hand tracking gaming py video https www youtube com watch v eouokv5vvpu feature youtu be p align center img src doc images ex multi backprojection hand tracking gaming gif width 550 p classify object using their colour fingerprint histogram intersection blog https mpatacchiola github io blog 2016 11 12 the simplest classifier histogram intersection html code examples ex color classification images ex color classification image py p align center img src doc images ex color classification images png width 750 p implementation of the fasa fast accurate and size aware salient object detection algorithm code examples ex fasa saliency map ex fasa saliency map images py wiki http www scholarpedia org article saliency map link http ivrl epfl ch research saliency fast saliency p align center img src doc images ex fasa saliency map png width 750 p acknowledgements the example head pose estimation using perspective n point is partially based on the c version you can find here https github com severin lemaignan gazr and on the workshop developing an attention system for a social robot which was part of the 2nd international summer school on social human robot interaction to implement the bayes and particle filters i followed the great repository of rlabbe https github com rlabbe which you can find here https github com rlabbe 
kalman and bayesian filters in python | convolutional-neural-networks motion-tracking color-detection face-detection skin-detection motion-detection head-pose-estimation human-computer-interaction histogram-comparison histogram-intersection cnn particle-filter saliency-map | ai |
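the histogram intersection classifier mentioned in the examples above reduces to a very small computation; a minimal sketch over already-normalized color histograms, where the 4-bin "fingerprints" are made up for illustration and this is not deepgaze's actual api:

```python
def histogram_intersection(h1, h2):
    """Similarity in [0, 1] for two normalized histograms of equal length."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# made-up 4-bin color fingerprints for two known objects
model = {"cup": [0.7, 0.2, 0.1, 0.0], "book": [0.1, 0.1, 0.4, 0.4]}
query = [0.6, 0.3, 0.1, 0.0]

best = max(model, key=lambda name: histogram_intersection(query, model[name]))
print(best)   # → cup
```

because the histograms are normalized, identical distributions score 1.0 and disjoint ones score 0.0, so the classifier simply returns the model object whose fingerprint overlaps the query most.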
lundium | lundium ui by lundegaard https lundegaard eu beautiful react component library a library of ui components used across lundegaard projects goal of this library is to provide one base from which components could be shared and reused thus offering one central point for further development and improvements getting started setup your own project and add lundium as your dependency bash yarn add lundium bash npm install lundium documentation all available components with usage examples can be found in our storybook docs https lundium netlify app theming lundium also comes with its own themes you can use it by including lundium theme basic in your project learn how you can use our default theme in your project in more detail theming page packages theme basic readme md localisation for localisation usecases you can use the lundium locale package a detailed walk through can be found on the localisation page packages locale readme md contribution we are open to any ideas and suggestions feel free to make a pr see contribution guide https github com lundegaard lundium blob master contributing md for guidelines see our related projects redux tools https github com lundegaard redux tools modular redux is possible react union https github com lundegaard react union integrate react apps into various cmss seamlessly validarium https github com lundegaard validarium validations done right 2020 lundegaard a s | react react-components ui-kit
| front_end |
WebDevelopment | project overview this project is a web based application that reads rss feeds as a part of the debug it competition hosted by sparsh2k19 and acm nit surat you are required to solve the bugs in the project and bring out the app that it essentially is you are free to change the ui as per your liking please have a look at the submission md for info regarding submissions i have kept the project very open ended and it is up to your expertise to give it the shape you think best maintain the basic skeleton and the subtlety of the app the rest is your imagination work it parameters of evaluation 1 adding accessibility tags 2 enhancing the ui ux to anything presentable while maintaining the subtlety of the app 3 adding testing 4 making the app dynamic as essentially it should be overview of tests 1 a test that loops through each feed in the allfeeds object and ensures it has a url defined and that the url is not empty 2 a test that loops through each feed in the allfeeds object and ensures it has a name defined and that the name is not empty 3 a test that ensures the menu element is hidden by default 4 a test that ensures the menu changes visibility when the menu icon is clicked this test has two expectations does the menu display itself when clicked and does it hide when clicked again 5 a test that ensures when the loadfeed function is called and completes its work there is at least a single entry element within the feed container 6 a test that ensures when a new feed is loaded by the loadfeed function the content actually changes running the project the project is simple to run you just need to fork the repo and then clone it open the index html file with a browser all the tests are performed by mocha and are run automatically submitting the project the submission starts at 12am 23rd february 2019 the submission has a deadline of 10 am 23rd february 2019 projects with commits after 12am 23rd february 2019 will be disqualified to submit make a pull
request from your fork on the main repo | front_end |
|
conformal-risk | conformal risk control this is the official repository of conformal risk control http arxiv org abs 2208 02814 by anastasios n angelopoulos stephen bates adam fisch lihua lei and tal schuster technical background in the risk control problem we are given some loss function $L_i(\lambda) = \ell(X_i, Y_i, \lambda)$ for example in multi label classification you can think of the loss function as the false negative proportion $L_i(\lambda) = 1 - \frac{|Y_i \cap C_\lambda(X_i)|}{|Y_i|}$ where $C_\lambda(X_i)$ is the set valued output of a machine learning model as $\lambda$ grows so does the set $C_\lambda(X_i)$ which shrinks the false negative proportion we seek to choose $\hat{\lambda}$ based on the first $n$ data points to control the expected value of the loss on a new test point at some user specified risk level $\alpha$: $\mathbb{E}\big[L_{n+1}(\hat{\lambda})\big] \leq \alpha$ the conformal risk control algorithm is in core get lhat py it is 5 lines long including the function header examples each of the polyps coco hierarchical imagenet qa folders contains a worked example of conformal risk control with a different risk function polyps does gut polyp segmentation with false negative rate control coco does multi label classification with false negative rate control hierarchical imagenet does hierarchical classification and chooses the resolution of its prediction by bounding the graph
distance to an ancestor of the true label finally qa controls the f1 score in open world question answering setup for the computer vision experiments run conda env create f environment yml conda activate conformal risk this will install all dependencies for the vision experiments for the question answering task follow the instructions in qa readme md reproducing the experiments after setting up the environment enter the example folder and run the appropriate risk histogram py file to produce the grids of images in the paper run the python file containing the word grid in each folder citation article angelopoulos2022conformal title conformal risk control author angelopoulos anastasios n and bates stephen and fisch adam and lei lihua and schuster tal journal arxiv preprint arxiv 2208 02814 year 2022 | computer-vision conformal conformal-prediction natural-language-processing python pytorch pytorch-implementation uncertainty-estimation uncertainty-quantification | ai |
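the selection rule in core get lhat py picks the smallest lambda whose conservatively adjusted empirical risk stays below alpha; a pure-python sketch of that rule, assuming losses bounded by B = 1, a loss table precomputed on the calibration set, and invented numbers (not the repository's actual code or data):

```python
def get_lhat(loss_table, lambdas, alpha, B=1.0):
    """loss_table[i][j] = loss of calibration point i at lambdas[j].

    Returns the smallest lambda whose adjusted empirical risk satisfies
    (n / (n + 1)) * Rhat(lambda) + B / (n + 1) <= alpha.
    """
    n = len(loss_table)
    for j, lam in enumerate(lambdas):
        rhat = sum(row[j] for row in loss_table) / n
        if (n / (n + 1)) * rhat + B / (n + 1) <= alpha:
            return lam
    return lambdas[-1]          # fall back to the most conservative lambda

# invented calibration losses that shrink as lambda grows
losses = [[0.9, 0.5, 0.1], [0.8, 0.4, 0.0], [1.0, 0.6, 0.2]]
print(get_lhat(losses, [0.1, 0.5, 0.9], alpha=0.4))   # → 0.9
```

the B / (n + 1) correction is what turns the empirical risk bound into a guarantee on the unseen test point, so the guarantee tightens as the calibration set grows.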
vue-design-system | vue design system vue design system is an open source tool for building ui design systems with vue js https vuejs org it provides you and your team a set of organized tools patterns practices that work as the foundation for your application development the tool is built on top of vue js https vuejs org vue styleguidist https github com vue styleguidist vue styleguidist webpack https webpack js org and theo https github com salesforce ux theo and is aimed at designers and front end developers who have at least basic knowledge of component based workflows html scss javascript made by arielsalminen https twitter com arielsalminen and other contributors see also the official website https vueds com of vue design system and read my article https arielsalminen com 2018 vue design system on the processes and workflow i use to get started with a new design system project screenshot docs preview gif https vueds com features a set of interconnected patterns practices for you and your team a well thought out terminology naming conventions and hierarchy get an automated overview of how your design system progresses over time global design tokens in yaml format that you can use inside any component automatic generation of living user editable documentation easily export and use your design system as an npm dependency in another vue js or nuxt js project create a token an element or a pattern and it s immediately available across all components pre configured prettier setup for auto formatting code on both save and before commit live reloading autoprefixing scss and helper functions simple and sane defaults for svg
and webfont usage out of the box documentation and the app logic are separated so you can have public docs while the app itself stays private and more https vueds com documentation getting started https github com arielsalminen vue design system wiki getting started how to install and run vue design system terminology https github com arielsalminen vue design system wiki terminology introduction to the system concepts and its hierarchy naming of things https github com arielsalminen vue design system wiki naming of things naming is hard so it s good to have clear guidelines directory structure https github com arielsalminen vue design system wiki directory structure what goes where and why working with the system https github com arielsalminen vue design system wiki working with the system concrete examples on how to work with tokens elements patterns and templates editing living documentation https github com arielsalminen vue design system wiki editing living documentation how to customize the living system documentation spacing https github com arielsalminen vue design system wiki spacing a framework for creating a predictable and harmonious spacing component status https github com arielsalminen vue design system wiki component status clear labels that reflect the state of completion component qa https github com arielsalminen vue design system wiki component qa how to review new components and keep the quality high contributing https github com arielsalminen vue design system blob master contributing md a set of guidelines for contributing to the system code of conduct https github com arielsalminen vue design system blob master code of conduct md by participating you agree to abide by its terms frequently asked questions https github com arielsalminen vue design system wiki frequently asked questions faq how to use icons how to use font face etc examples official example https vueds com example using vue design system as npm dependency on vue js project https 
github com arielsalminen vue design system example using vue design system as npm dependency on nuxt js project https github com arielsalminen nuxt design system using vue design system as npm dependency on a static website https github com arielsalminen vue design system example website roadmap see roadmap tag https github com arielsalminen vue design system issues q is 3aissue is 3aopen label 3aroadmap in the issues changelog 3 5 7 is the latest release see releases page https github com arielsalminen vue design system releases for the full changelog need more help about to get started with a new design system i m an independent designer and developer specialized in helping companies to build design systems https vueds com i also conduct design system workshops https arielsalminen com 2018 vue design system and do consulting let s talk https twitter com arielsalminen authors and license ariel salminen https arie ls artem sapegin http sapegin me rafael escala https github com rafaesc react styleguidist contributors https github com styleguidist react styleguidist graphs contributors vue styleguidist contributors https github com vue styleguidist vue styleguidist graphs contributors vue js contributors https github com vuejs vue graphs contributors vue webpack boilerplate contributors https github com vuejs templates webpack graphs contributors theo contributors https github com salesforce ux theo graphs contributors and polaris contributors https github com shopify polaris licensed under the mit license https github com arielsalminen vue design system blob master license | vuejs design-systems design-system vue-styleguidist components component-library vue | os |
embedded-systems-project 2 d floor mapping rc car an unprecedented academic adventure with arduino ble and android if you want an overview of what this repo contains check out final report docx in the submission docs folder enjoy d | os |
|
boda | boda a c framework for efficient experiments in computer vision wip created by matthew w moskewicz license boda is bsd 2 clause licensed refer to the license license file for the full license introduction boda is a one grad student mostly run on one machine work in progress however for the brave there is now a considerable amount of functionality present overall documentation is still very much lacking but in particular if you re interested in a unified c framework from camera lmdbs images to generated cuda opencl for cnns this might be an interesting project for you to explore at this point i think it s now plausible that others would be interested in and capable of usage of experimentation with and contributions to boda so please file issues prs if that s the case getting started installation see the install md install md file june 2016 paper boda rtc https arxiv org abs 1606 00094 to appear wimob 2016 nyc june 2016 poster boda poster preview slides https docs google com presentation d 1ffg jzo2gutnoqfl6hwwt0h6gmmfcd7orfpxxpo5gl0 edit usp sharing boda poster https drive google com file d 0b2t3gdjzvy rvmu3mwjiztrtalk view usp sharing january 2016 poster boda poster preview slides https docs google com presentation d 170rz7ddnmdc6vgtjfnzswjcaubr0x2m6rzxqkz08efy edit usp sharing boda poster https drive google com open id 0b2t3gdjzvy rvxrynw9zbna1ehm may 2014 poster boda poster preview slides https docs google com presentation d 1kvytotbpmslkcxvpl4qf8nylabgriya8iyopl dksfw edit usp sharing boda poster https drive google com file d 0b2t3gdjzvy rt1n6skvonfp1smm edit usp sharing mid 2013 poster boda poster preview slides https docs google com presentation d 15oa9wilmeq5isio5wgjdm9 nmrw ap4bc9pamksomd0 pub start false loop false delayms 300000 boda poster https drive google com file d 0b2t3gdjzvy rmxj6mkprrlgywufxogjbel8wefdzowo2vfvn edit usp sharing boda modes ui boda overall diagram https docs google com drawings d 1oir3fzt sio17c vjsbolakwucx4n6le4kdqr uexfw 
pub w 670 h 266 in the above diagram the middle box is a boda mode a c class with a main function and a set of parameters this is the boda version of a standard c program with a c main and some command line argument processing such as gflags getopt etc boda makes it easy to support many modes in a single binary program and provides some magic comments meta programming to ease the burden of 1 command line xml based ui creation for many such modes with hierarchical sharing of uis parameters 2 testing including automated regression diffs over outputs 3 timing profiling the main magic is a nested structure initialization system nesi which uses magic comments python code generation and a steaming pile of down home style void pointers and c or at least c style functions to initialize c structures from nested key value trees in turn created from command line arguments and or xml files a la json or the like | ai |
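The NESI idea described above — initializing nested, typed structures from a key/value tree built from command-line arguments or XML — can be sketched as a toy in Python. This is an illustration only, not boda's actual C++/code-generation implementation; every name below is invented for the example.

```python
# Toy sketch of NESI-style nested structure initialization: a "spec" of typed
# defaults is filled in from a key/value tree parsed from CLI args or XML.

def init_nested(spec, tree):
    """Fill a nested dict of typed defaults from a key/value tree.

    Unknown keys are rejected; missing keys keep their defaults; leaf
    values are coerced to the type of the default they replace.
    """
    unknown = set(tree) - set(spec)
    if unknown:
        raise KeyError("unknown fields: %s" % sorted(unknown))
    out = {}
    for key, default in spec.items():
        if isinstance(default, dict):          # nested sub-structure
            out[key] = init_nested(default, tree.get(key, {}))
        else:
            out[key] = type(default)(tree.get(key, default))
    return out

# A hypothetical "mode" with a nested camera config, overridden from a CLI tree.
mode_spec = {"mode": "capture", "frames": 100, "camera": {"width": 640, "height": 480}}
cli_tree = {"frames": "25", "camera": {"width": "1280"}}
result = init_nested(mode_spec, cli_tree)
```

The string values from the CLI tree are coerced back to the declared types, which is the essence of what makes a shared UI/parameter system like this convenient across many modes.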
|
smart-retail-analytics | retail analytics details target os ubuntu 18 04 lts programming language python 3 5 time to complete 50 70min retail analytics docs images retail analytics png what it does this smart retail analytics application monitors people activity counts total number of people inside a retail store and keeps a check on the inventory by detecting the products specified by the user it detects objects on any number of screens by using video or camera resources requirements hardware 6th to 8th generation intel core processors with iris pro graphics or intel hd graphics software ubuntu 18 04 lts http releases ubuntu com 18 04 br note we recommend using a 4 14 linux kernel with this software run the following command to determine your kernel version uname a opencl runtime package intel distribution of openvino toolkit 2020 r3 release grafana v5 3 2 influxdb v1 6 2 how it works the application uses the inference engine included in the intel distribution of openvino toolkit it accepts multiple video input feeds and user can specify the feed type for each video there are three feed types that application supports shopper if the feed type of the video is shopper the application grabs the frame from that input stream and uses a deep neural network model for detecting the faces in it if there is anybody present in the frame it is counted as a shopper once the face is detected the application uses head pose estimation model to check the head pose of the person if the person is looking at the camera then his emotions are detected using emotions recognition model using the data obtained from this it infers if the person is interested or not and gives the total number of people detected it also measures the duration for which the person is present in the frame and the duration for which he was looking at the camera store traffic if the video feed type is traffic the application uses a deep neural network model to detect people in the frame the total number of people 
visited and the number of people currently present in front of the camera is obtained from this shelf this feed type is used to keep a check on the product inventory if the video feed type is shelf an object detection model is used to detect the product specified by the user in the frame from this video stream it detects the objects and gives the number of objects present in the frame the application is capable of processing multiple video input feeds each having a different feed type the data obtained from these videos is stored in influxdb for analysis and visualized on grafana it uses flask python web framework to live stream the output videos to grafana retail analytics docs images architectural diagram png architectural diagram setup get the code steps to clone the reference implementation smart retail analytics sudo apt get update sudo apt get install git git clone https github com intel iot devkit smart retail analytics git install the intel distribution of openvino toolkit refer to https software intel com en us articles openvino install linux on how to install and setup the intel distribution of openvino toolkit you will need the opencl runtime package if you plan to run inference on the gpu it is not mandatory for cpu inference other dependencies influxdb influxdb is a time series database designed to handle high write and query loads it is an integral component of the tick stack influxdb is meant to be used as a backing store for any use case involving large amounts of timestamped data including devops monitoring application metrics iot sensor data and real time analytics grafana grafana is an open source general purpose dashboard and graph composer which runs as a web application it supports graphite influxdb prometheus google stackdriver aws cloudwatch azure monitor loki mysql postgresql microsoft sql server testdata mixed opentsdb and elasticsearch as backends grafana allows you to query visualize alert on and understand your metrics no matter where
they are stored br ajax the ajax panel is a general way to load external content into a grafana dashboard which model to use the application uses intel pre trained models in the feed type shopper i e face detection adas 0001 https docs openvinotoolkit org 2020 3 models intel face detection adas 0001 description face detection adas 0001 html head pose estimation adas 0001 https docs openvinotoolkit org 2020 3 models intel head pose estimation adas 0001 description head pose estimation adas 0001 html emotion recognition retail 0003 https docs openvinotoolkit org 2020 3 models intel emotions recognition retail 0003 description emotions recognition retail 0003 html for the feed type traffic person detection retail 0002 https docs openvinotoolkit org 2020 3 person detection retail 0002 html is used and these can be downloaded using model downloader script for video feed type shelf mobilenet ssd model is used that can be downloaded using downloader script present in intel distribution of openvino toolkit the mobilenet ssd model is a single shot multibox detection ssd network intended to perform object detection this model is implemented using the caffe framework for details about this model check out the repository https github com chuanqi305 mobilenet ssd to install the dependencies and to download the models and optimize mobilenet ssd model run the below command cd path to the smart retail analytics python directory setup sh these models will be downloaded in the locations given below face detection opt intel openvino deployment tools open model zoo tools downloader intel face detection adas 0001 head pose estimation opt intel openvino deployment tools open model zoo tools downloader intel head pose estimation adas 0001 emotions recognition opt intel openvino deployment tools open model zoo tools downloader intel emotions recognition retail 0003 person detection retail opt intel openvino deployment tools open model zoo tools downloader intel person detection retail 
0002 br the labels file the shelf feed type in the application requires a labels file associated with the model being used for detection all detection models work with integer labels and not string labels e g for the ssd300 and mobilenet ssd models the number 15 represents the class person that is why each model must have a labels file which associates an integer the label the algorithm detects with a string denoting the human readable label the labels file is a text file containing all the classes labels that the model can recognize in the order that it was trained to recognize them one class per line br for mobilenet ssd model labels txt file is provided in the resources directory the config file the resources config json contains the videos along with the video feed type the config json file is of the form name value pair video path to video and type video feed type for example inputs video path to video type video feed type the path to video is the path on the local system to a video to use as input if the video type is shelf then the labels of the class person bottle etc to be detected on that video is provided in the next column the labels used in the config json file must be present in the labels from the labels file br the application can use any number of videos for detection i e the config json file can have any number of blocks but the more videos the application uses in parallel the more the frame rate of each video scales down this can be solved by adding more computation power to the machine the application is running on what input video to use the application works with any input video sample videos for object detection are provided here https github com intel iot devkit sample videos br for first use we recommend using the face demographics walking https github com intel iot devkit sample videos blob master face demographics walking mp4 head pose face detection female https github com intel iot devkit sample videos blob master head pose face 
detection female mp4 bottle detection https github com intel iot devkit sample videos blob master bottle detection mp4 videos the videos are automatically downloaded in the resources folder by setup sh for example br the config json would be inputs video sample videos head pose face detection female mp4 type shopper video sample videos bottle detection mp4 label bottle type shelf video sample videos face demographics walking mp4 type traffic to use any other video specify the path in config json file using camera stream instead of the video file replace path to video with the camera id in config json and the label to be found where the id is taken from the video device the number x in dev videox on ubuntu to list all available video devices use the following command ls dev video for example if the output of above command is dev video0 then config json would be inputs video 0 type shopper setup the environment you must configure the environment to use the intel distribution of openvino toolkit one time per session by running the following command source opt intel openvino bin setupvars sh note this command needs to be executed only once in the terminal where the application will be executed if the terminal is closed the command needs to be executed again run the application change the current directory to the git cloned application code location on your system cd path to the smart retail analytics python directory application a user can specify a target device to run on by using the device command line argument d model acronym ex d fm d pm d mm d om or d pd followed by one of the values cpu gpu myriad or hddl br not specifying any target device means by default all the models will run on cpu although this can also be explicitly specified by the device command line argument to run the application with the required models python3 smart retail analytics py fm opt intel openvino deployment tools open model zoo tools downloader intel face detection adas 0001 fp32 face 
detection adas 0001 xml pm opt intel openvino deployment tools open model zoo tools downloader intel head pose estimation adas 0001 fp32 head pose estimation adas 0001 xml mm opt intel openvino deployment tools open model zoo tools downloader intel emotions recognition retail 0003 fp32 emotions recognition retail 0003 xml om resources fp32 mobilenet ssd xml pr opt intel openvino deployment tools open model zoo tools downloader intel person detection retail 0002 fp32 person detection retail 0002 xml lb resources labels txt once the command is executed in the terminal configure the grafana dashboard using the instructions given in the next section to see the output br to run the application on sync mode use f sync as command line argument by default the application runs on async mode running on different hardware the application can use different hardware accelerator for different models the user can specify the target device for each model using the command line argument as below d fm device target device for face detection network cpu gpu myriad hetero fpga cpu or hddl d pm device target device for head pose estimation network cpu gpu myriad hetero fpga cpu or hddl d mm device target device for emotions recognition network cpu gpu myriad hetero fpga cpu or hddl d om device target device for mobilenet ssd network cpu gpu myriad hetero fpga cpu or hddl d pd device target device for person detection retail network cpu gpu myriad hetero fpga cpu or hddl for example br to run face detection model with fp16 and emotions recognition model with fp32 on gpu head pose estimation model on myriad mobilenet ssd and person detection model on cpu use the below command python3 smart retail analytics py fm opt intel openvino deployment tools open model zoo tools downloader intel face detection adas 0001 fp16 face detection adas 0001 xml pm opt intel openvino deployment tools open model zoo tools downloader intel head pose estimation adas 0001 fp16 head pose estimation adas 0001 xml 
mm opt intel openvino deployment tools open model zoo tools downloader intel emotions recognition retail 0003 fp32 emotions recognition retail 0003 xml om resources fp32 mobilenet ssd xml pr opt intel openvino deployment tools open model zoo tools downloader intel person detection retail 0002 fp32 person detection retail 0002 xml lb resources labels txt d fm gpu d pm myriad d mm gpu d pd cpu d om cpu to run with multiple devices use multi device1 device2 for example d fm multi cpu gpu myriad br note br the intel neural compute stick and intel movidius vpu can only run fp16 models the model that is passed to the application must be of data type fp16 br 2 to run the application on fpga follow the steps mentioned under run on the fpga section br fp32 fp32 is single precision floating point arithmetic uses 32 bits to represent numbers 8 bits for the magnitude and 23 bits for the precision for more information click here https en wikipedia org wiki single precision floating point format br fp16 fp16 is half precision floating point arithmetic uses 16 bits 5 bits for the magnitude and 10 bits for the precision for more information click here https en wikipedia org wiki half precision floating point format run on the intel movidius vpu to run the application on intel movidius vpu configure the hddldaemon by following the below steps br open the hddl service config using the below command sudo vi hddl install dir config hddl service config update device snapshot mode none to device snapshot mode full update hddl configuration for tags graph tag map tagface 1 tagpose 1 tagmood 2 tagmobile 2 tagperson 2 save and close the file run hddldaemon br hddl install dir bin hddldaemon to run the application on the intel movidius vpu use the d hddl command line argument python3 smart retail analytics py fm opt intel openvino deployment tools open model zoo tools downloader intel face detection adas 0001 fp16 face detection adas 0001 xml pm opt intel openvino deployment tools open 
model zoo tools downloader intel head pose estimation adas 0001 fp16 head pose estimation adas 0001 xml mm opt intel openvino deployment tools open model zoo tools downloader intel emotions recognition retail 0003 fp16 emotions recognition retail 0003 xml om resources fp16 mobilenet ssd xml pr opt intel openvino deployment tools open model zoo tools downloader intel person detection retail 0002 fp16 person detection retail 0002 xml lb resources labels txt d pd hddl d fm hddl d pm hddl d mm hddl d om hddl visualize on grafana 1 open a new tab on the terminal and start the grafana server using the following command sudo service grafana server start 2 in your browser go to localhost 3000 http localhost 3000 3 log in with user as admin and password as admin 4 click on configuration 5 select data sources 6 click on add data source and provide inputs below name retail analytics type influxdb url http localhost 8086 database retail analytics click on save and test retail analytics docs images grafana1 png 7 click on icon present on the left side of the browser select import 8 click on upload json file 9 select the file name retail analytics json from smart retail analytics python directory 10 select retail analytics in select a influxdb data source retail analytics docs images grafana2 png 11 click on import containerize the application to containerize the smart retail analytics python application using docker container follow the instruction provided here docker | reference-implementation deep-learning inference computer-vision edge-computing edge edge-ai image-recognition object-detection intel openvino machine-learning real-time live-demo pretrained-models video | ai |
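The config.json rules described above (every input needs a supported feed type, and shelf feeds must also carry a label) can be checked before launching the app with a small validator. This helper is a sketch and not part of the repository; it only mirrors the documented rules.

```python
import json

# Feed types the application documents: shopper, traffic, shelf.
VALID_TYPES = {"shopper", "traffic", "shelf"}

def validate_config(text):
    """Return the list of inputs, raising ValueError on a malformed entry."""
    inputs = json.loads(text)["inputs"]
    for entry in inputs:
        if entry.get("type") not in VALID_TYPES:
            raise ValueError("unsupported feed type: %r" % entry.get("type"))
        if entry["type"] == "shelf" and "label" not in entry:
            # shelf feeds must name the product class to count
            raise ValueError("shelf feed needs a label: %r" % entry.get("video"))
    return inputs

sample = json.dumps({"inputs": [
    {"video": "sample-videos/head-pose-face-detection-female.mp4", "type": "shopper"},
    {"video": "sample-videos/bottle-detection.mp4", "type": "shelf", "label": "bottle"},
]})
inputs = validate_config(sample)
```

Running such a check up front fails fast on a typo in a feed type rather than partway through inference.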
fall-detection fall detection https travis ci org computationalcore fall detection svg branch master https travis ci org computationalcore fall detection this project consists of showcasing the advantages of intel s openvino toolkit for inference in detecting people falling in an edge application this app performs single person fall detection using openvino s human pose estimation 0001 https docs openvinotoolkit org latest models intel human pose estimation 0001 description human pose estimation 0001 html pre trained model to detect falls the app uses the coordinates of the head nose eyes and ears neck and shoulders positions in a frame by frame comparison to determine if the person is falling it works with a video file input or webcam stream http img youtube com vi c s4oepptz8 0 jpg https www youtube com watch v c s4oepptz8 fall detection watch video https www youtube com watch v c s4oepptz8 prerequisites to run the application in this tutorial the openvino toolkit and its dependencies must already be installed alternatively you can create and build the docker image provided by this repository by following these instructions docker md installation instructions may be found at https software intel com en us articles openvino install linux when needed the following optional hardware can be used usb camera standard usb video class uvc camera intel core cpu with integrated graphics vpu usb intel movidius neural compute stick and what is being referred to as myriad a summary of what is needed hardware target and development platforms meeting the requirements described in the system requirements section of the openvino toolkit documentation which may be found at https software intel com en us openvino toolkit note while writing this tutorial an intel i7 8550u with intel hd graphics 520 gpu was used as both the development and target platform optional intel movidius neural compute stick usb uvc camera intel core cpu with integrated graphics software openvino toolkit
supported linux operating system this tutorial was run on 64 bit ubuntu 16 04 1 lts updated to kernel 4 15 0 43 following the openvino toolkit installation instructions the latest openvino toolkit installed and verified supported versions 2018 r4 0 lastest version supported 2019 r1 0 1 git git for downloading from the github repository boost library to install on ubuntu run bash apt get install libboost dev libboost log dev checks by now you should have completed the linux installation guide for the openvino toolkit however before continuing please ensure that after installing the openvino toolkit you have run the supplied demo samples if you have and intend to use a gpu you have installed and tested the gpu drivers if you have and intend to use a usb camera you have connected and tested the usb camera if you have and intend to use a myriad you have connected and tested the usb intel movidius neural compute stick build clone the repository at desired location bash git clone https github com computationalcore fall detection the first step is to configure the build environment for the opencv toolkit by sourcing the setupvars sh script bash source opt intel openvino bin setupvars sh for older versions than 2019 r1 openvino was installed in a different dir run this instead bash source opt intel computer vision sdk bin setupvars sh change to the top git repository bash cd fall detection install other project dependencies after run openvino env bash pip install r requirements txt run to check available options run bash python fall detection py h usage fall detection py h i input mp fp16 fp32 l cpu extension pp plugin dir d device detect a person falling from a webcam or a video file optional arguments h help show this help message and exit i input input input path to video file or image cam for capturing video stream from internal camera mp fp16 fp32 model precision fp16 fp32 the precision of the human pose model default is 32 bit integer l cpu extension cpu extension 
cpu extension mkldnn cpu targeted custom layers absolute path to a shared library with the kernels impl pp plugin dir plugin dir plugin dir path to a plugin folder d device device device specify the target device to infer on cpu gpu fpga or myriad is acceptable demo will look for a suitable plugin for device specified cpu by default detecting falls on a video file in this case i use example demo mp4 but it can be any other bash python fall detection py i example demo mp4 detecting falls on webcam bash python fall detection py i cam limitations it works only with a single person in the future i can add multiple people support but currently if there is more than one person in the scene it will confuse the detector the detector only takes into consideration the relative positions of head elements neck and shoulders in the future it can be improved to consider other aspects of the human pose elements authors vin busquet https github com computationalcore https github com computationalcore license this project is licensed under the mit license see the license license file for details changelog for details check out changelog md changelog md | openvino deep-learning deeplearning detector machine-learning python3 inference | ai |
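The frame-by-frame comparison of head, neck, and shoulder positions described above can be illustrated with a simple heuristic — this is an illustration only, not the repository's actual logic, and the threshold values are made up. In image coordinates y grows downward, so a fall shows up as a large positive jump in the mean y of the upper-body keypoints between consecutive frames.

```python
def detect_fall(prev_points, curr_points, drop_ratio=0.3, frame_height=480):
    """Flag a fall when upper-body keypoints drop sharply between frames.

    prev_points/curr_points: lists of (x, y) for head, neck, shoulders.
    Image y grows downward, so a fall is a large increase in mean y.
    """
    mean_y = lambda pts: sum(y for _, y in pts) / len(pts)
    drop = mean_y(curr_points) - mean_y(prev_points)
    return drop > drop_ratio * frame_height

# Keypoints roughly standing, then the same person near the floor.
standing = [(100, 100), (100, 130), (80, 150), (120, 150)]
fallen = [(100, 300), (100, 320), (80, 330), (120, 330)]
```

A real detector would also smooth over several frames to avoid flagging fast crouches, which is part of why single-frame deltas alone are listed as a limitation.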
llm-rk3588 llm rk3588 this repository is intended to provide a complete guide on how to run llms on rk3588 sbc specifically orange pi 5 plus but other rk3588 based boards should be able to run it without problems link blog https blog mlc ai 2023 08 09 gpu accelerated llm on orange pi mlc llm https github com mlc ai mlc llm apache tvm https github com apache tvm environment setup download and install the ubuntu 22 04 for your board from here https github com joshua riek ubuntu rockchip releases tag v1 22 download and install libmali g610 so cd usr lib sudo wget https github com jeffycn mirrors raw libmali lib aarch64 linux gnu libmali valhall g610 g6p0 x11 wayland gbm so check if file mali csffw bin exists under path lib firmware by running ls lib firmware if not then download it with command cd lib firmware sudo wget https github com jeffycn mirrors raw libmali firmware g610 mali csffw bin download opencl icd loader and manually add libmali to icd sudo apt install mesa opencl icd sudo mkdir p etc opencl vendors echo usr lib libmali valhall g610 g6p0 x11 wayland gbm so sudo tee etc opencl vendors mali icd download and install libopencl sudo apt install ocl icd opencl dev download and install dependencies for mali opencl sudo apt install libxcb dri2 0 libxcb dri3 0 libwayland client0 libwayland server0 libx11 xcb1 clinfo run clinfo to check if opencl runs successfully clinfo arm release ver g13p0 01eac0 rk so ver 3 number of platforms 2 platform name arm platform platform vendor arm platform version opencl 2 1 v1 g6p0 01eac0 2819f9d4dbe0b5a2f89c835d8484f9cd platform profile full profile feel free to check this https github com mlc ai mlc llm blob main docs install gpu rst article for other platforms mlc llm setup use prebuilt recommended clone mlc llm repository sudo apt install git git lfs git clone recursive https github com mlc ai mlc llm git cd mlc llm mkdir p dist prebuilt cd dist prebuilt git clone https github com mlc ai binary mlc llm libs git lib git clone https
huggingface co mlc ai mlc chat redpajama incite chat 3b v1 q4f16 1 cd build mlc chat cli from source cd mlc llm mkdir p build cd build python3 cmake gen cmake config py cmake cmake build parallel nproc cd verify installation expect to see mlc chat cli libmlc llm so and libtvm runtime so ls l build expect to see help message build mlc chat cli help run llms build mlc chat cli local id redpajama incite chat 3b v1 q4f16 1 device mali compile your own llms install mlc llm package git clone recursive https github com mlc ai mlc llm git cd mlc llm pip install verify installation expect to see help info python3 m mlc llm build help compile models make sure the model you are using is huggingface format read model description before you download python3 m mlc llm build hf path togethercomputer redpajama incite chat 3b v1 target opencl quantization q4f16 1 or you can use models you downloaded in local computer python3 m mlc llm build model path to model target opencl quantization q4f16 1 available quantization codes are q3f16 0 q4f16 1 q4f16 2 q4f32 0 q0f32 and q0f16 | ai |
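The environment setup above creates a handful of files (the libmali driver, the Mali firmware, and the OpenCL ICD entry). A small sanity-check helper like the one below can confirm they are in place before running `clinfo`. This helper is hypothetical and not part of the guide; the default paths are the files named in the steps above, with punctuation restored as an assumption.

```python
import os

# Assumed locations of the files the setup steps above create.
MALI_SO = "/usr/lib/libmali-valhall-g610-g6p0-x11-wayland-gbm.so"
FIRMWARE = "/lib/firmware/mali_csffw.bin"
ICD_FILE = "/etc/OpenCL/vendors/mali.icd"

def check_setup(so_path=MALI_SO, fw_path=FIRMWARE, icd_path=ICD_FILE):
    """Return a list of problems found with the Mali/OpenCL file setup."""
    problems = []
    for path in (so_path, fw_path):
        if not os.path.exists(path):
            problems.append("missing: " + path)
    if not os.path.exists(icd_path):
        problems.append("missing: " + icd_path)
    elif so_path not in open(icd_path).read():
        # the ICD file must point the loader at the Mali driver
        problems.append("ICD file does not reference " + so_path)
    return problems
```

An empty returned list means the file layout matches what `clinfo` expects to find.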
|
Progressive-Web-Application-Development-by-Example | progressive web application development by example a href https www packtpub com application development progressive web application development example utm source github utm medium repository utm campaign 9781787125421 img src https www packtpub com sites default files b06922 mockupcovernew png alt progressive web application development by example height 256px align right a this is the code repository for progressive web application development by example https www packtpub com application development progressive web application development example utm source github utm medium repository utm campaign 9781787125421 published by packt develop fast reliable and engaging user experiences for the web what is this book about are you a developer that wants to create truly cross platform user experiences with a minimal footprint free of store restrictions and features customers want then you need to get to grips with progressive web applications pwas a perfect amalgamation of web and mobile applications with a blazing fast response time this book covers the following exciting features explore the core principles of pwas study the three main technical requirements of pwas discover enhancing requirements to make pwas transcend native apps and traditional websites create and install pwas on common websites with a given https as the core requirement get acquainted with the service worker life cycle if you feel this book is for you get your copy https www amazon com dp 1787125424 today a href https www packtpub com utm source github utm medium banner utm campaign githubbanner img src https raw githubusercontent com packtpublishing github master github png alt https www packtpub com border 5 a instructions and navigations all of the code is organized into folders for example chapter02 the code will look like the following function renderresults results var template document getelementbyid search results template 
searchresults document queryselector search results following is what you need for this book progressive web application development by example is for you if you re a web developer or front end designer who wants to ensure improved user experiences if you are an application developer with knowledge of html css and javascript this book will help you enhance your skills in order to develop progressive web applications the future of app development with the following software and hardware list you can run all code files present in the book chapter 1 10 software and hardware list chapter software required os required 1 10 nodejs 6 1 or above windows macos or linux supported by nodejs and has a browser chrome microsoft edge firefox opera or safari visual studio code or sublime related products paste books from the other books you may enjoy section progressive web apps with react packt https www packtpub com web development progressive web apps react utm source github utm medium repository utm campaign 9781788297554 amazon https www amazon com dp 1788297555 hands on full stack development with angular 5 and firebase packt https www packtpub com application development hands full stack development angular 5 and firebase utm source github utm medium repository utm campaign 9781788298735 amazon https www amazon com dp 178829873x get to know the author chris love chris love is a frontend developer with 25 years of professional experience he has won the microsoft mvp award for 12 years and has authored multiple books he has helped over 1 000 businesses of all sizes and from various industries chris regularly speaks at user groups code camps and developer conferences and also writes articles and videos to help fellow developers when he s not working on frontend development you can find him spending time with his step kids doing karate and taking part in spartan races suggestions and feedback click here https docs google com forms d e 
1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib ow viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781787125421 https packt link free ebook 9781787125421 a p | front_end |
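The renderResults snippet quoted in this row is flattened by the dump; as an illustration of the kind of client-side templating such a helper performs, here is a hedged, DOM-free TypeScript sketch. The result shape and markup below are assumptions for illustration only — the book's actual helper fills a `<template>` element via the DOM API.

```typescript
// Hypothetical, DOM-free sketch of a renderResults helper: turn a list of
// search results into the HTML a results panel would show. Names and markup
// are illustrative, not taken from the book's source.
interface SearchResult {
  title: string;
  url: string;
}

function renderResults(results: SearchResult[]): string {
  // Each result becomes one list item linking to its URL.
  const items = results
    .map(r => `<li><a href="${r.url}">${r.title}</a></li>`)
    .join("");
  return `<ul class="search-results">${items}</ul>`;
}
```

In the book's version the same loop runs against a DOM template node and the `.search-results` container selected in the snippet above.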
|
CS224n-Assignments | cs224n assignments my solutions to the practical assignments of cs224n winter 2019 | ai |
|
llm_planning | llm planning framework for evaluating planning with llms how to install bash git clone https github com yessense llm planning git conda env create f environment yaml uncomment next line to login in wandb wandb login how to run experiment bash python3 m llm planning run you may change run parameters in llm planning infrastructure config py write them down in config config yaml or add them as a command line arguments language table dataset dataset notebook a target blank href https colab research google com github yessense llm planning blob master language table dataset ipynb img src https colab research google com assets colab badge svg alt open in colab a data exploration notebook a target blank href https colab research google com github yessense llm planning blob master speech recognition speech 20recognition 20in 20noise 20dolgushin ipynb img src https colab research google com assets colab badge svg alt open in colab a speech recognition in noisy environments notebook with data processing and wav2vec model finetuning and evaluation whisper and wav2vec models a target blank href https colab research google com github yessense llm planning blob master speech recognition speech 20recognition 20in 20noise 20dolgushin ipynb img src https colab research google com assets colab badge svg alt open in colab a | ai |
|
lotus | p align center a href https lotus filecoin io title filecoin docs img src documentation images lotus logo h png alt project lotus logo width 244 a p h1 align center project lotus h1 p align center a href https circleci com gh filecoin project lotus img src https circleci com gh filecoin project lotus svg style svg a a href https codecov io gh filecoin project lotus img src https codecov io gh filecoin project lotus branch master graph badge svg a a href https goreportcard com report github com filecoin project lotus img src https goreportcard com badge github com filecoin project lotus a a href img src https img shields io badge golang 3e 3d1 18 8 blue svg a br p lotus is an implementation of the filecoin distributed storage network for more details about filecoin check out the filecoin spec https spec filecoin io building documentation note the default master branch is the dev branch please use with caution for the latest stable version checkout the most recent latest release https github com filecoin project lotus releases for complete instructions on how to build install and setup lotus please visit https lotus filecoin io https lotus filecoin io lotus install prerequisites supported platforms basic build instructions can be found further down in this readme reporting a vulnerability please send an email to security filecoin org see our security policy security md for more details related packages these repos are independent and reusable modules but are tightly integrated into lotus to make up a fully featured filecoin implementation go fil markets https github com filecoin project go fil markets which has its own kanban work tracker available here https app zenhub com workspaces markets shared components 5daa144a7046a60001c6e253 board builtin actors https github com filecoin project builtin actors contribute lotus is a universally open project and welcomes contributions of all kinds code docs and more however before making a contribution we ask you to 
heed these recommendations 1 if the proposal entails a protocol change please first submit a filecoin improvement proposal https github com filecoin project fips 2 if the change is complex and requires prior discussion open an issue github com filecoin project lotus issues or a discussion https github com filecoin project lotus discussions to request feedback before you start working on a pull request this is to avoid disappointment and sunk costs in case the change is not actually needed or accepted 3 please refrain from submitting prs to adapt existing code to subjective preferences the changeset should contain functional or technical improvements enhancements bug fixes new features or some other clear material contribution simple stylistic changes are likely to be rejected in order to reduce code churn when implementing a change 1 adhere to the standard go formatting guidelines e g effective go https golang org doc effective go html run go fmt 2 stick to the idioms and patterns used in the codebase familiar looking code has a higher chance of being accepted than eerie code pay attention to commonly used variable and parameter names avoidance of naked returns error handling patterns etc 3 comments follow the advice on the commentary https golang org doc effective go html commentary section of effective go 4 minimize code churn modify only what is strictly necessary well encapsulated changesets will get a quicker response from maintainers 5 lint your code with golangci lint https golangci lint run ci will reject your pr if unlinted 6 add tests 7 title the pr in a meaningful way and describe the rationale and the thought process in the pr description 8 write clean thoughtful and detailed commit messages https chris beams io posts git commit this is even more important than the pr description because commit messages are stored inside the git history one good rule is if you are happy posting the commit message as the pr description then it s a good commit message 
basic build instructions system specific software dependencies building lotus requires some system dependencies usually provided by your distribution ubuntu debian sudo apt install mesa opencl icd ocl icd opencl dev gcc git bzr jq pkg config curl clang build essential hwloc libhwloc dev wget y sudo apt upgrade y fedora sudo dnf y install gcc make git bzr jq pkgconfig mesa libopencl mesa libopencl devel opencl headers ocl icd ocl icd devel clang llvm wget hwloc hwloc devel for other distributions you can find the required dependencies here https lotus filecoin io lotus install prerequisites supported platforms for instructions specific to macos you can find them here https lotus filecoin io lotus install macos go to build lotus you need a working installation of go 1 19 12 or higher https golang org dl bash wget c https golang org dl go1 19 12 linux amd64 tar gz o sudo tar xz c usr local tip you ll need to add usr local go bin to your path for most linux distributions you can run something like shell echo export path path usr local go bin bashrc source bashrc see the official golang installation instructions https golang org doc install if you get stuck build and install lotus once all the dependencies are installed you can build and install the lotus suite lotus lotus miner and lotus worker 1 clone the repository sh git clone https github com filecoin project lotus git cd lotus note the default branch master is the dev branch where the latest new features bug fixes and improvement are in however if you want to run lotus on filecoin mainnet and want to run a production ready lotus get the latest release here https github com filecoin project lotus releases 2 to join mainnet checkout the latest release https github com filecoin project lotus releases if you are changing networks from a previous lotus installation or there has been a network reset read the switch networks guide https lotus filecoin io lotus manage switch networks before proceeding for networks other 
than mainnet look up the current branch or tag commit for the network you want to join in the filecoin networks dashboard https network filecoin io then build lotus for your specific network below sh git checkout tag or branch for example git checkout vx x x tag for a release currently the latest code on the master branch corresponds to mainnet 3 if you are in china see lotus tips when running in china https lotus filecoin io lotus configure nodes in china 4 this build instruction uses the prebuilt proofs binaries if you want to build the proof binaries from source check the complete instructions https lotus filecoin io lotus install prerequisites note if you are building the proof binaries from source installing rustup https lotus filecoin io lotus install linux rustup is also needed 5 build and install lotus sh make clean all mainnet or to join a testnet or devnet make clean calibnet calibration with min 32gib sectors sudo make install this will put lotus lotus miner and lotus worker in usr local bin lotus will use the home lotus folder by default for storage configuration chain data wallets etc see advanced options https lotus filecoin io lotus configure defaults environment variables for information on how to customize the lotus folder 6 you should now have lotus installed you can now start the lotus daemon and sync the chain https lotus filecoin io lotus install linux start the lotus daemon and sync the chain 7 optional follow the setting up prometheus and grafana https github com filecoin project lotus tree master metrics readme md guide for detailed instructions on setting up a working monitoring system running against a local running lotus node license dual licensed under mit https github com filecoin project lotus blob master license mit apache 2 0 https github com filecoin project lotus blob master license apache | filecoin blockchain golang ipfs | blockchain |
orcs-design-system | orcs orchestrated design system featuring styled system styled components github repository https github com orchestrated io orcs design system deployed on github pages https orchestrated io github io orcs design system npm version https badge fury io js orcs design system svg https badge fury io js orcs design system github last commit https img shields io github last commit orchestrated io orcs design system github license https img shields io github license orchestrated io orcs design system svg https github com orchestrated io orcs design system blob master license fossa status https app fossa io api projects git 2bgithub com 2forchestrated io 2forcs design system svg type large https app fossa io projects git 2bgithub com 2forchestrated io 2forcs design system ref badge large getting started clone repository git clone https github com orchestrated io orcs design system git install dependencies npm install run the orcs storybook start the orcs development server npm run storybook a new browser window will open with a random localhost port orcs runs storybook 7 https github com storybookjs storybook releases working with orcs all library components and files are located in lib static files are located in assets viewing changes in pm td as an alternative to npm link you can run npm run dist and then copy the es folder directly into td or pm cp r es team directory node modules orcs design system symlinking with npm link we haven t had much success with this recently if you need to do npm link in your local environment you might encounter the following issues https reactjs org warnings invalid hook call warning html duplicate react https github com styled components styled components issues 2379 both react and styled components cause duplicate instance issue after npm link to fix that we need to ensure both app project and lib project are sharing the same instance of them in orcs design system folder npm link path app repo node modules react 
npm link path app repo node modules styled components npm link in path app repo npm i orcs design system npm link orcs design system npm start testing npm run test publishing changes in order to publish a new version you will have to patch and push your changes after your changes have been merged to master from your master branch npm version patch git push deploying to github pages orcs automatically deploys to github pages when a new version is published to manually deploy npm run deploy storybook using orcs in a project in your project s root install orcs and styled components npm i orcs design system npm i styled components using components once orcs is installed you can directly import components for example for box import box from orcs design system box box the orcs storybook https orchestrated io github io orcs design system contains documentation for each component including code examples and props tables for components with subcomponents each subcomponent must be imported for example to use tabs import tabscontainer tab from orcs design system using styled system props the design system components are built with styled system https styled system com props generally components can access the space and layout prop categories with additional prop categories on a per component basis check the properties section in a component s documentation to see what props it can use custom props will be listed in the props table as a guide to using these props see the styled system reference table https styled system com table the design system s theme scales are contained in systemtheme https github com orchestrated io orcs design system blob master packages orcs design system lib systemtheme js you may also find it useful to install the following styled system utilities theme get and css npm i styled system theme get npm i styled system css theming this design system uses styled components themeprovider and comes with a default theme systemtheme that uses styled system 
arrays to apply the theme to your app you can use systemtheme or your own theme object import themeprovider from styled components themeprovider theme theme app themeprovider variables can be referenced using theme arrayname variablealias when using styled system props components refer to the theme field associated with the prop as set out in the reference table https styled system com table using icons you will need to add the icon library that we are using which is font awesome to do this install the font awesome packages npm i fortawesome react fontawesome npm i fortawesome fontawesome svg core npm i fortawesome free regular svg icons npm i fortawesome free solid svg icons then add this to the top of your root app file import library from fortawesome fontawesome svg core import far from fortawesome free regular svg icons import fas from fortawesome free solid svg icons library add far fas alternatively you can use the full icon packages if you purchase font awesome pro license once you have purchased a license you need to add your font awesome npm token as an environment variable in a npmrc file at the root of your app in the following format fortawesome registry https npm fontawesome com npm fontawesome com authtoken font awesome npm token goes here for further usage guidelines for the icon component see the icon docs http localhost 55322 path docs components icon default icon sandbox environment for playing with orcs via playroom https github com seek oss playroom playroom allows you to simultaneously design across a variety of themes and screen sizes powered by jsx and your own component library playroom allows you to create a zero install code oriented design environment built into a standalone bundle that can be deployed alongside your existing design system documentation to run playroom use this command npm run playroom browser device support this design system is intended to work correctly on all modern desktop and mobile browsers a design system is a 
living funded product with a roadmap and backlog serving an ecosystem nathan curtis security vulnerability checks the audit ci command runs to detect any high and critical severity security vulnerabilities currently this runs as a part of the ci github action before publishing a package ci yml as part of the pr and push workflow pr yml this means we effectively stop the line for any high or critical security issues to run locally npm run audit ci or npx audit ci this uses the default configuration file audit ci json audit ci json addressing security issues more information on the specific vulnerabilities can be found by adding the report flag npm run audit ci report or npx audit ci report the built in npm audit command provides a useful output too but will not use the audit ci json audit ci json configuration file so npm audit fix will fix everything it can rather than a single vulnerability you may find it easier to update issues individually or by using npm update dep name depth nnn or similar allow list configuration presently we have a number of high criticality security issues in the allow list see audit ci json audit ci json todo add more information on the allowed vulnerabilities as comments in the config file strict mode checking we have an additional strict mode check that allows us to run a full audit including medium or low severity issues with suitable user permissions this can be run from a github action triggering the gh action directly on a specific branch including master gh workflow run security yml ref branch name add f mode notstrict to make it run the default audit ci config if you wish or locally npm run audit ci strict this gives a fuller view of the vulnerabilities relevant to the codebase | orcs styled-system design-system react teamform storybook styled-components | os
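The styled-system props described in this row resolve against theme scales that systemTheme defines as arrays (e.g. a space scale). A minimal TypeScript sketch of that lookup — the scale values and the function below are illustrative, not orcs's or styled-system's actual implementation:

```typescript
// Illustrative styled-system-style scale lookup, not the real library code.
// A numeric prop such as p={2} indexes into the theme's space scale; strings
// like "auto" pass through unchanged.
type Theme = { space: number[] };

const theme: Theme = { space: [0, 4, 8, 16, 32] }; // example scale only

function resolveSpace(theme: Theme, value: number | string): string {
  if (typeof value === "number") {
    // In-range numbers index the scale; out-of-range numbers fall back to raw pixels.
    const scaled = theme.space[value];
    return `${scaled !== undefined ? scaled : value}px`;
  }
  return value; // strings such as "auto" or "1rem" pass through
}
```

This is why `p={2}` on a component yields 8px of padding with the example scale above, while `m="auto"` is forwarded as-is.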
Palladio-Analyzer-SimuLizar | palladio analyzer simulizar simulizar is a palladio plug in for analyzing self adaptive systems such as cloud computing systems at design time with simulizar we want to provide modeling support for self adaptation rules as well as new analysis for scalability elasticity and efficiency a documentation of the contained testing framework for simulizar based simulations can be found here bundles org palladiosimulator simulizar test commons readme md extending simulizar simulizar provides an elaborate extension api which relies on dagger dependency injection simulizar imposes a strict scoping of dependencies in order to provide simplified lifetime management and therefore reusability more details on scoping in dagger can be found here https dagger dev dev guide in the section on singletons and scoped bindings the concept of components and scoping we used the concept of dagger components to modularize the parts of simulizar the current component hierarchy and the respective dependencies are visualized in the following diagram in principle we differentiate between four hierarchical levels simulizarplatformcomponent services with one instance for the entire application e g eclipse extension point management simulizarrootcomponent services with one instance per simulation run e g preparation of blackboard simulizarruntimecomponent central interpretation runtime entities simulatedthreadcomponent di scope per simulated user created anew for every new user how to provide your extension simulizar currently provides support for several different types of extension in general if you develop your extension please follow the following guidelines select an extension interface fitting for your needs if none are available consult with the simulizar developers how to best achieve what you are doing specify all dependencies of your extension as parameters to its constructor please refrain from doing any other work in the constructor particularly do not try 
to resolve any models or rely on the runtime state at that particular point in time every extension interface specifies an initialize method which is called by the framework once all preceding simulation entities are initialized do your lookups and initialization work here if you introduced new runtime state make sure to clean in up in the respective cleanup method in order to make your extensions known to simulizar please implement a dagger component interface which provides the appropriate implementation of your extension in order to access simulation entities and entities of other extensions specify explicit component dependencies e g component dependencies qualcomponent class to the required components have your component extend the extensioncomponent interface in line with dagger please add a nested static factory interface with one abstract method which takes an instance of each component dependency and returns your component type the component as well as the factory will be generated by dagger have the factory extend extensioncomponent factory please specify an implementation for the method extensioncomponent contributes have it return a set extensioncontribution for every extension contributed by your component create one extensioncontribution instance specifying the extension interface e g imodelobserver and the function to retrieve it given an instance of your component simulizar takes care about instantiating your component and uses the specified function to retrieve the concrete implementations of your extensions an example of how to build an extensioncomponent can be found here bundles org palladiosimulator simulizar elasticity src org palladiosimulator simulizar elasticity di components elasticityruntimeextensioncomponent java in order to register your extension with simulizar there are two options 1 define a custom launcher and provide your extensioncomponent factory to the simulizarrootcomponent factory create new extensioncomponentsmodule we 
suggest this option whenever you want to reuse and extend simulizar functionality in your own analysis 2 use the eclipse extension point to contribute the factory the factory needs to be instantiable without parameters we suggest this option when you want to transparently contribute additional support to the simulizar analysis in this case please ensure compatibility with existing simulizar models there are currently a few limitations to take into account with extensioncomponents extensioncomponents cannot have circular dependencies and really should not extensioncomponent factories can only have dependent components for parameters using bindsinstance or module instances is currently not possible as of now the set of available components consists of the following simulizarrootcomponent simenginecomponent qualcomponent simucomframeworkcomponent simulizarruntimecomponent any contributed extension component please note that the extension mechanism currently targets rootcomponent and runtimecomponent level the point in time when your extensions are available depends on the component dependencies of your extensioncomponent there is currently no support for extending the simulatedthread level although that might change in the future simulizar extensions simulizar currently provides the following extension points interface purpose modelloader allows adding further models to be loaded into the model partition modelcompletion allows modifying the loaded models before running the interpreters called after all resources have been loaded iconfigurer allows adapting selected run configuration parameters based on the current models in the blackboard called after all models have been resolved and all model completions have been executed runtimestateentitymanager initializes new entities which constitute runtime state e g user management usage evolver management please do not evaluate the pcmpartition here add an imodelobserver to listen to changes in the model and update the
runtime state accordingly runtimestateentityobserver allows listening to changes in the runtime state of the simulation e g to attach listeners to calculators imodelobserver watches the global partition and updates the respective simulation entity accordingly e g usagemodelsyncer updates simulatedusagemodels rdseffswitchcontributionfactory provides a factory for a custom package of rdseff actions when encountering actions from this package the rdseff interpreter dispatches the events to your custom interpreter if you traverse the model further please make sure to use the provided rdseffelementdispatcher to do so iinterpreterlistener registers for modelelementpassedevent s of the core interpretation logic | nightly-build release | os
TypeC_Lesson6_ChassisTask-FreeRTOS | typec lesson6 chassistask freertos robomaster | os |
|
MLResources | machine learning resources repository for learning resources related to machine learning and deep learning managed by the dlsu machine learning group img src https github com dlsucomet mlresources blob master assets logo2 jpg width 200 reference list https github com dlsucomet mlresources blob master reference md a curated list of lectures videos books and more for all levels of experience in ml dl check it if you re interested in learning ml or is looking to add to your already existing knowledge this list is being updated over time so be sure to check back often tools and frameworks a collection of useful tools libraries and frameworks either written by hand or sourced from somewhere else includes software for data collection annotation visualization etc coming soon | ai |
|
VFluent | p align center img src examples assert logo vfluent png div p align center a href license img src https img shields io badge license mit yellow svg a a href build img src https travis ci com aleversn vfluent svg branch master a a href img src https img shields io npm dw vfluentdesign a p fluent ui components based on vue client this repository provides a library of components based on microsoft s fluent design system https developer microsoft com en us fluentui we did our best to implement styling such as the acrylic and reveal effects of the native system s windows 11 windows 10 components on the web project structure bash build the scaffold config js component library configuration file examples docs vuepress index js vue cli lib packaging entrance lib ump package the ump file packages common theme common theme themename theme index scss scss global scss generated by the script componentscss scss componentname src source component source index vue component index js component entry index js all component entries generated by scripts components json component directory vue config js vue cli vue cli config npm script cmds bash pnpm run pub npm pack and push to npm pnpm run docs dev run as development document mode pnpm run bin new componentname chinesename create new component pnpm run bin rm componentname remove the component and re customize the entry pnpm run bin entry custom entry fluent design ui vue 2 7 docs documentation https aleversn github io vfluent how to use 1 install via pnpm bash pnpm i vfluentdesign recommend 2 import vue entry main js js import vue from vue import vuex from vuex import vuefluent from vfluentdesign import vfluentdesign lib index css vue use vuefluent vuex 3 sample example html fv button hello vue fluent fv button 4 about fabric ui our project has many use cases that use microsoft s fabric ui such as icons and shadows for more details you can click a href https developer microsoft com en us fabric styles here a to get more
information here is a sample of icon usage vue i class ms icon ms icon aadlogo i in particular if you re using a component of fluent vue design and it contains a prop that supports icons you only need to type the icon s name in the prop we have updated to the new windows 11 fluent icons check the icon dictionary on a href https docs microsoft com en us windows apps design style segoe fluent icons font here a we need you to become a contributor vfluent still needs to improve the documentation is not yet complete and there are still some details to work out we want to hear your issues and suggestions you are welcome to become a contributor to this project you could add some new components to vfluent or update the existing components to better support mobile if you have some other creative ideas we re happy to hear them from you license mit license copyright c 2023 creator sn permission is hereby granted free of charge to any person obtaining a copy of this software and associated documentation files the software to deal in the software without restriction including without limitation the rights to use copy modify merge publish distribute sublicense and or sell copies of the software and to permit persons to whom the software is furnished to do so subject to the following conditions the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software the software is provided as is without warranty of any kind express or implied including but not limited to the warranties of merchantability fitness for a particular purpose and noninfringement in no event shall the authors or copyright holders be liable for any claim damages or other liability whether in an action of contract tort or otherwise arising from out of or in connection with the software or the use or other dealings in the software status alt https repobeats axiom co api embed c3151fa6bc7f4329d5d136aff5300b5a858f8b67 svg repobeats analytics image |
fabric-ui fluent-design windows-11 sun-valley vue | os |
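Vue.use(VueFluent) works by handing the library's install function an app object to register components on. A hedged TypeScript sketch of that mechanism — component names other than fv-button, and the empty definitions, are placeholders, not VFluent's actual source:

```typescript
// Minimal sketch of a Vue-style plugin install, illustrative only.
// Vue.use(plugin) calls plugin.install(app), which registers each
// component under its tag name (e.g. <fv-button>).
interface AppLike {
  component(name: string, definition: object): void;
}

const components: Record<string, object> = {
  "fv-button": { /* component definition */ },
  "fv-example": { /* placeholder component, not a real VFluent name */ },
};

const VueFluentSketch = {
  install(app: AppLike): void {
    for (const [name, definition] of Object.entries(components)) {
      app.component(name, definition);
    }
  },
};
```

After installation, tags like `fv-button` resolve globally in templates without per-file imports.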
qsample | qsample qsample is a natural language processing tool for automatically detecting quotations in text example in the sentence witnesses said that several passengers have broken bones the span that several passengers have broken bones is a quotation requirements java jvm 1 7 and maven 3 0 0 need to be installed all other dependencies will be downloaded automatically the dependencies all together will amount to 250 mb the trained model files take up another 80 mb setup install the tool by running the following commands note this will trigger a 250 mb maven dependency download and will produce a jar file of comparable size git clone https github com christianscheible qsample git cd qsample mvn compile mvn package if the build was successful you will find two jar files in target with and without dependencies respectively next download and unpack the pre trained models 80 mb wget https github com christianscheible qsample releases download 0 1 models tar gz tar xzfv models tar gz usage now we are ready to detect quotations as a first step you can run the tool on the example documents we provide in example documents the expected format is a directory of plain text files each containing a single document to process the documents run the following command java jar target qsample 0 1 jar with dependencies jar sample example documents output qsample will produce several files in the output directory log file storing the messages that were also output to command line conf file documenting the configuration used by the tool for this run one quotations gz file for each document in the input directory containing the detected quotations the quotations gz files contain the predictions made by the model as an example take the following snippet witnesses 230 239 o o said 240 244 o c that 245 249 o b several 250 257 o i passengers 258 268 o i have 269 273 o i broken 274 280 o i bones 281 286 o e 286 287 o o the output format consists of five columns the first column contains 
the tokens the second and third columns contains the byte begin and end positions of the tokens in the original input file the fourth column contains the gold labels if there are any the fifth column contains the predicted quotes the predictions are encoded using bioe style labels the label c marks the occurrence of a cue and all words between the b begin and e end tag are the content of the quotation data this repository includes the following data example documents three news articles from wikinews for testing qsample expects one plain text document per file you can mark paragraph boundaries in the text by adding an empty line after each paragraph knowledge about paragraphs is useful for detecting quotations linguistic pre processing is performed by stanford corenlp resources parc configs configuration files for running experiments see below the acl2016 configurations use gold pre processing whereas the predpipeline configurations use corenlp processing for each setup we supply one file for each of the methods used in the paper resources parc listfeatures word lists for extracting features we supply lists of attribution nouns and verbs organizations and persons titles as well as a mapping of verbs to verbnet classes these lists were generated from third party resources see licenses license md resources news txt a list of wsj id s that contain news documents running an experiment to run an experiment on annotated data you need to obtain several resources penn attribution relations corpus parc3 http homepages inf ed ac uk s1052974 resources php penn treebank 2 https catalog ldc upenn edu ldc95t7 bbn pronoun coreference and entity type corpus https catalog ldc upenn edu ldc2005t33 afterwards you can run experiments based on the configuration files in resources parc configs to test the pre trained models you need to adapt the paths in the configuration files to train a model you can simply switch from test to train mode in the configuration more information for more 
information refer to our paper available at http www aclweb org anthology p p16 p16 1164 pdf inproceedings scheibleklingerpado2016 author scheible christian and klinger roman and pad o sebastian title model architectures for quotation detection booktitle proceedings of the 54th annual meeting of the association for computational linguistics pages 1736 1745 year 2016 or check the tool s website at http www ims uni stuttgart de data qsample for news license please see licenses license md | ai |
|
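The column format described in the qsample readme above (one token per row, byte begin/end positions, then gold and predicted BIOE-style labels) can be decoded with a few lines of Python. This is a sketch, not qsample's own code: the exact label inventory used here (O for outside, C for cue, B/I/E bracketing quotation content) is an assumption based on the readme's description.

```python
# Sketch: decode BIOE-style quotation labels as described in the qsample
# readme. The label set (O, C, B, I, E) is assumed from the description;
# the tool's actual output may differ in detail.

def extract_spans(labels):
    """Return (content_spans, cue_indices): content spans are (begin, end)
    token-index pairs for B ... E runs, cues are tokens labelled C."""
    spans, cues = [], []
    start = None
    for i, lab in enumerate(labels):
        if lab == "C":
            cues.append(i)
        elif lab == "B":
            start = i
        elif lab == "E" and start is not None:
            spans.append((start, i))
            start = None
    return spans, cues

labels = ["O", "C", "B", "I", "I", "E", "O"]
print(extract_spans(labels))  # ([(2, 5)], [1])
```

Pairing the returned token indices with the byte begin/end columns would recover each quotation's position in the original plain-text input file.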
LLM-s | llm s large language models | ai |
|
fswd-angular6 | readme leeme | front_end |
|
Udagram-Image-Filtering-Microservice-Project-02 | udagram image filtering microservice udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into three parts 1 the simple frontend https github com udacity cloud developer tree master course 02 exercises udacity c2 frontend a basic ionic client web application which consumes the restapi backend covered in the course 2 the restapi backend https github com udacity cloud developer tree master course 02 exercises udacity c2 restapi a node express server which can be deployed to a cloud service covered in the course 3 the image filtering microservice https github com udacity cloud developer tree master course 02 project image filter starter code the final project for the course it is a node express application which runs a simple script to process images your assignment tasks setup node environment you ll need to create a new node server open a new terminal within the project directory and run 1 initialize a new project npm i 2 run the development server with npm run dev create a new endpoint in the server ts file the starter code has a task for you to complete an endpoint in src server ts which uses query parameter to download an image from a public url filter the image and return the result we ve included a few helper functions to handle some of these concepts and we re importing it for you at the top of the src server ts file typescript import filterimagefromurl deletelocalfiles from util util deploying your system follow the process described in the course to eb init a new application and eb create a new environment to deploy your image filter service don t forget you can use eb deploy to push changes stand out optional refactor the course restapi if you re feeling up to it refactor the course restapi to make a request 
to your newly provisioned image server authentication prevent requests without valid authentication headers note if you choose to submit this make sure to add the token to the postman collection and export the postman collection file to your submission so we can review custom domain name add your own domain name and have it point to the running services try adding a subdomain name to point to the processing server note domain names are not included in aws free tier and will incur a cost | aws elasticbeanstalk http-server rds-postgres rest-api s3-storage | cloud |
Kobuki | kobuki unimelb kobuki embedded system design lab | os |
|
vota.dev | vota dev div align left https img shields io badge contributions welcome brightgreen svg https img shields io badge maintained 3f yes brightgreen svg div welcome to vota dev https vota dev this is a web platform oriented to help the worldwide development community the main goal is to give the development community a tool where they can find out the state of js stacks tools platforms libraries and much more and help them decide which to learn next or use in current projects through surveys we collect data and process it to produce metrics and reports for the community table of contents installation and deployment installation and deployment development development github set up an oauth application github set up an oauth application using railway using railway community and contributions community and contributions talk with us or report an issue talk with us or report an issue installation and deployment development 1 install the project with npm install 2 initialize the prisma client with npm prisma generate or npx prisma generate 3 set up your environment variables following the env example file note the environment file must be named like env you can get the github id and github secret following github set up an oauth application github set up an oauth application you can set in secret whatever you want or a strong character string like a base64 sha1 etc you need to uncomment nextauth url to remove the warning alert in localhost you can get the database url following using railway using railway 4 migrate the prisma generated database to the postgresql on railway with npm run migrate dev 5 you can now start developing for vota dev github set up an oauth application 1 login to login github http github com login 2 enter your applications in developer settings apps github https github com settings apps 3 inside oauth apps click on new oauth app and fill the fields you can set the homepage url to http vota dev and the callback url to http localhost
3000 api auth 4 you can retrieve the client id and the client secret there using railway 1 login to login railway https railway app login 2 accept the tos tos railway https railway app legal terms 3 create a new project with postgresql create railway https railway app new 4 claim the project 5 on environment click on postgresql then click on connect 6 you can retrieve the postgres connection url there community and contributions vota dev https vota dev is a community driven open source project we are committed to a fully transparent development process and highly appreciate any contributions whether you are helping us fix bugs proposing new features improving our documentation or spreading the word we would love to have you as part of the vota dev community talk with us or report an issue we are really happy to welcome you to the midudev https twitter com midudev community discord channel https discord gg midudev you can also report a bug issue via github issues https github com midudev vota dev issues or ask your questions via github discussions https github com midudev vota dev discussions | front_end |
|
business_closures_de_pipeline | codefactor https www codefactor io repository github dylanzenner business closures de pipeline badge https www codefactor io repository github dylanzenner business closures de pipeline python 3 6 https img shields io badge python 3 7 blue svg https www python org downloads release python 360 code style black https img shields io badge code 20style black 000000 svg https github com psf black github last commit https img shields io github last commit dylanzenner business closures de pipeline github repo size https img shields io github repo size dylanzenner business closures de pipeline sf business closures de pipeline tracking business closures in san francisco across corridor zipcode neighborhood and naic descriptions in the months following covid 19 completely hosted in the aws ecosystem including a dashboard built with amazon quicksight if you would like to replicate this project follow the walk through md file in the docs directory currently going through the process of finalizing the walk through after completing the walkthrough and going through it to double check everything i ve decided to just write up a cloudformation template for deployment the walkthrough file did not turn out the way i had envisioned it to be it was very dense and could be boring if people were to try and follow along i ll do my best to get that cf template up as soon as possible architecture architecture final de project architecture diagram png data is sourced from san francisco s open data api https data sfgov org economy and community registered business locations san francisco g8m3 pdis as json documents containing information on business closures throughout san francisco a series of lambda functions orchestrate the data movement and transformations throughout the pipeline the presentation layer is created using amazon quicksight infrastructure the project is housed in the aws ecosystem and utilizes the following resources vpc custom built vpc with 
two subnets 1 private 1 public igw natgw and route tables security groups ec2 t2 micro resource used to ssh into the documentdb database also initiates the ssm runcommand to extract the transformed data from documentdb load it into s3 and shut down the ec2 instance and documentdb cluster documentdb engine version 4 0 0 db t3 medium resource used for the primary instance of the database 3 lambda functions 1 for starting the ec2 instance and documentdb cluster 1 for pulling data from the api and loading it into the documentdb cluster 1 for transforming the data loading it to s3 and shutting down the services secrets manager for storing connection variables s3 bucket with versioning enabled for storing the transformed data in json format ssm runcommand for shutting down the services cloudwatch time based events for automating the pipeline amazon quicksight for the visualization layer dashboard dashboard images dashboard1 png dashboard images dashboard2 png dashboard images dashboard3 png dashboard images dashboard4 png dashboard images dashboard5 png points moving forward i had a lot of fun building this project but i do have some things i would like to mention this project is relatively expensive if you are not conscious about turning off the ec2 instance and the database i purposefully built a non highly available architecture in order to save on costs particularly the database if i were to go with 3 instances for the database instead of 1 it would cost roughly 0 32 per hour instead of 0 08 per hour that adds up fast if it were to be left on 24 hr like it would be in a production setting i am spending around 40 00 a month to keep this project running most of that cost is due to the natgw which costs 0 045 per hour and is always running i am a little disappointed in the lack of support that documentdb has for mongodb specifically the fact that documentdb does not support either geospatial geometry specifiers box center centersphere nearsphere geometry maxdistance
mindistance polygon uniquedocs geospatial query selectors geointersects geowithin near nearsphere polygon uniquedocs i originally wanted to do some querying with some of these geospatial operators but since documentdb does not support these operators i was unable to do that another thing i originally wanted to do was embed the quicksight dashboard here in the readme file however in order to do that i would need the enterprise edition of quicksight and i would also be getting charged 0 30 per reader session with a reader session counting as anyone who visits this page i just was unable to spend that amount of money on this project i wish i could ve but perhaps i will look into other methods of visualization for my next project i used cloudwatch time based events for automating the project while i really liked the time based events i would like to use more cloudwatch event based events to help cut down on costs and get some more experience with cloudwatch i had never worked with unit tests before this project but i really wanted to go all out and make this the best that i possibly could so i integrated some unit tests in order to touch on the subject while they may not be the greatest unit tests i believe that they are a start in the right direction i would like to read up more on the subject before i start my next project | documentdb data-engineering aws-ecosystem data-engineering-pipeline aws aws-secretsmanager aws-ec2-bastion slack quicksight aws-ssm-document aws-s3 aws-lambda aws-cloudwatch-events | server |
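The transform stage of the pipeline above (a lambda that takes raw JSON records from the SF open-data API, keeps the fields the dashboard needs, and drops still-open businesses) can be pictured with a small sketch. The field names used here (dba_name, dba_end_date, neighborhood, naic_description) are illustrative assumptions, not the pipeline's actual schema.

```python
import json

# Sketch of the "transform" lambda stage: filter raw API records down to
# closed businesses and project out the columns the dashboard tracks.
# Field names are assumptions for illustration only.

def transform(records):
    keep = ("dba_name", "neighborhood", "zipcode", "naic_description")
    # a record with a (truthy) closure date counts as a closed business
    return [{k: r.get(k) for k in keep} for r in records if r.get("dba_end_date")]

raw = json.loads(
    '[{"dba_name": "Cafe A", "zipcode": "94110", "dba_end_date": "2020-06-01"},'
    ' {"dba_name": "Cafe B", "zipcode": "94103"}]'
)
out = transform(raw)
print(len(out))  # 1 -- only the closed business survives the filter
```

A real lambda would wrap this in a handler, read its connection details from secrets manager, and write the result to the versioned S3 bucket, as described above.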
sputnikvm | sputnikvm a blockchain virtual machine build status https travis ci org etcdevteam sputnikvm svg branch master https travis ci org etcdevteam sputnikvm license https img shields io badge license apache 202 0 blue svg license name description crates io documentation sputnikvm core library for the ethereum virtual machine crates io https img shields io crates v sputnikvm svg https crates io crates sputnikvm documentation https docs rs sputnikvm badge svg https docs rs sputnikvm sputnikvm stateful merkle trie stateful wrapper for sputnikvm crates io https img shields io crates v sputnikvm stateful svg https crates io crates sputnikvm stateful documentation https docs rs sputnikvm stateful badge svg https docs rs sputnikvm stateful sputnikvm is an implementation of an ethereum virtual machine it aims to be an efficient pluggable virtual machine for different ethereum based blockchains we encourage all ethereum esque blockchains to adopt sputnikvm and to make use of sputnikvm s rfc governance project https etcrfc that world which governs the parameters of each blockchain s vm this way we can draw from the experience of the community and learn from other proposed rfcs features standalone can be launched as an independent process or integrated into other apps universal supports different ethereum chains such as etc eth or private ones stateless only an execution environment connected to independent state storage fast main focus is on performance iot compatible designed to support hardware used in embedded devices ffi protobuf and json interface written in rust can be used as a binary cargo crate or shared library supported networks network crates io documentation ethereum classic crates io https img shields io crates v sputnikvm network classic svg https crates io crates sputnikvm network classic documentation https docs rs sputnikvm network classic badge svg https docs rs sputnikvm network classic ethereum crates io https img shields io crates v sputnikvm 
network foundation svg https crates io crates sputnikvm network foundation documentation https docs rs sputnikvm network foundation badge svg https docs rs sputnikvm network foundation ellaism crates io https img shields io crates v sputnikvm network ellaism svg https crates io crates sputnikvm network ellaism documentation https docs rs sputnikvm network ellaism badge svg https docs rs sputnikvm network ellaism ubiq crates io https img shields io crates v sputnikvm network ubiq svg https crates io crates sputnikvm network ubiq documentation https docs rs sputnikvm network ubiq badge svg https docs rs sputnikvm network ubiq expanse crates io https img shields io crates v sputnikvm network expanse svg https crates io crates sputnikvm network expanse documentation https docs rs sputnikvm network expanse badge svg https docs rs sputnikvm network expanse musicoin crates io https img shields io crates v sputnikvm network musicoin svg https crates io crates sputnikvm network musicoin documentation https docs rs sputnikvm network musicoin badge svg https docs rs sputnikvm network musicoin precompiled contracts the core library has the initial four precompiled contracts embedded to use the bn128 and modexp precompiled contracts introduced by the byzantium hard fork pull the following crates name description crates io documentation sputnikvm precompiled bn128 bn128 precompiled contracts crates io https img shields io crates v sputnikvm precompiled bn128 svg https crates io crates sputnikvm precompiled bn128 documentation https docs rs sputnikvm precompiled bn128 badge svg https docs rs sputnikvm precompiled bn128 sputnikvm precompiled modexp modexp precompiled contracts crates io https img shields io crates v sputnikvm precompiled modexp svg https crates io crates sputnikvm precompiled modexp documentation https docs rs sputnikvm precompiled modexp badge svg https docs rs sputnikvm precompiled modexp related projects sputnikvm dev https github com etcdevteam sputnikvm dev 
sputnikvm instance for smart contract development provides testing environment and mock for json rpc api sputnikvm in browser https github com sorpaas sputnikvm in browser experimental version of sputnikvm compiled into webassembly therefore can be launched in a browser on node js sputnikvm for embedded devices https github com sorpaas sputnikvm on rux experimental project to run on full functional evm on embedded devices dependencies ensure you have at least rustc 1 26 2 594fb253c 2018 06 01 rust 1 25 0 and before is not supported documentation latest release documentation https docs rs sputnikvm unstable documentation https that world docs sputnikvm sputnikvm build from sources sputnikvm is written rust if you are not familiar with rust please see the getting started guide https doc rust lang org book ch01 00 getting started html build to start working with sputnikvm you ll need to install rustup https www rustup rs then you can do bash git clone git github com etcdevteam sputnikvm git cd sputnikvm cargo build release all testing we currently use two ways to test sputnikvm and ensure its execution aligns with other ethereum virtual machine implementations jsontests jsontests this uses part of the ethereum tests https github com etcdevteam tests those tests currently does not have good coverage for system operation opcodes besides some tests are incorrect so they are disabled regtests regtests a complete regression tests is done on the ethereum classic mainnet from genesis block to block 4 million some of the previously failed tests are also integrated into rust s test system see wiki https github com etcdevteam sputnikvm wiki building and testing for how to reproduce the regression tests to learn more about building sputnikvm from source please read wiki page building and testing https github com etcdevteam sputnikvm wiki building and testing license apache 2 0 | ethereum ethereum-classic rust evm virtual-machine | blockchain |
sql-challenge | sql challenge in this repo i use sql for data engineering on a mock database of employee records find csv files with company data in the folder employeesql data modeling alt text https github com samanthasains sql challenge blob main erd diagram only png raw true erd for employeesql data engineering find the table schema in the file sql challenge schema sql data analysis find query coding to generate various data queries | server |
|
iOS-Reverse-Engineering-Tools | ios reverse engineering tools png pp download 1 alfred url https www 25pp com ios search app 0 query https ws1 sinaimg cn large 006tkftcgy1fsaj8w1di8j30uk0jyags jpg 2 pp download js 3 alfred app https ws4 sinaimg cn large 006tkftcgy1fsaj9m46flj30900cf0u2 jpg 4 ios | reverse-engineering ios security | os |
viseron | div align center a href https github com roflcoopter viseron img width 150 height 150 src docs static img viseron logo svg a br h1 viseron h1 p self hosted local only nvr and ai computer vision software p p with features such as object detection motion detection face recognition and more it gives you the power to keep an eye on your home office or any other place you want to monitor p h1 h1 br div getting started getting started is easy you simply spin up a docker container and edit the configuration file using the built in web interface head over to the documentation https viseron netlify app and follow the instructions on how to get started components viserons functionality is enabled by components https viseron netlify app docs documentation configuration components you can find all the available components by using the component explorer https viseron netlify app components explorer contributing contributors to the project are very much appreciated see the contribution guidelines https viseron netlify app docs contributing on how to get started some things you can help with implement an open feature request or issue from the issue tracker https github com roflcoopter viseron issues improve the documentation answer questions in issues or discussions https github com roflcoopter viseron discussions you can also use the links below to sponsor viseron or make a one time donation a href https github com sponsors roflcoopter target blank img src docs static img sponsor button png alt sponsor style height 37px important width 170px important box shadow 0px 3px 2px 0px rgba 190 190 190 0 5 important webkit box shadow 0px 3px 2px 0px rgba 190 190 190 0 5 a a href https www buymeacoffee com roflcoopter target blank img src https www buymeacoffee com assets img custom images orange img png alt buy me a coffee style height 41px important width 174px important box shadow 0px 3px 2px 0px rgba 190 190 190 0 5 important webkit box shadow 0px 3px 2px 0px rgba 190 190 
190 0 5 important a | nvr network-video-capture network-video-recorder tensorflow darknet yolo hardware-acceleration object-detection motion-detection cuda surveillance rtsp ip-camera viseron coral edgetpu google-coral hacktoberfest face-recognition license-plate-recognition | ai |
nlp-workshops | nlp workshops introduction and tutorial about natural language processing this is meant to be presented at the data for good http www dataforgood fr nlp workshops it will be extended over time requirements python 2 7 http docs python guide org en latest starting installation pip package manager for python https pip pypa io en stable installing python modules bash pip install nltk word2vec unidecode jupyter notebook bash pip install jupyter nltk models bash python c import nltk nltk download in the models tab of the downloader interface install average perceptron tagger punkt tokenizer models and snowball data load the notebook go to the notebook directory and run bash jupyter notebook then select the notebook in your browser | ai |
|
ash-prompting | ash prompting gpt 3 prompting code for paper hierarchical prompting assists large language model on web navigation https arxiv org abs 2305 14257 abishek sridhar robert lo https robertlo tech frank f xu https frankxfz me hao zhu https www zhuhao me shuyan zhou https shuyanzhou github io setup you need to first have openai api key and fill it in the notebook follow webshop official repo https github com princeton nlp webshop to setup webshop environment citation bibtex inproceedings sridhar2023hierarchical bibtex show true title hierarchical prompting assists large language model on web navigation author abishek sridhar and robert lo and frank f xu and hao zhu and shuyan zhou booktitle arxiv year preprint html https arxiv org abs 2305 14257 tag nlp | ai |
|
Communication-Protocols-Interfaces | communication protocols interfaces this repository includes different projects that show my coding skills for microcontroller programming the projects go from using the microcontroller s ports to output a binary counter to creating a complex point of sale by communicating two microcontrollers as master slave all the projects are coded using c language and microchip ccs pic compiler using the pic18f4550 microcontroller most of the projects use a pcb designed by myself which include many different embedded systems to start interacting with the microcontroller some of the included components are 1 4x20 lcd display 1 rgb led 1 relay 1 rs232 communication port with the following schematic user interface board schematic userinterfaceboardschematic png some projects that include master slave or usb communication use another board for these purposes because of copyright conflicts i am not able to display many information about this board but some of the features it included are 1 2x16 lcd display 2 led 7 segment displays 4 push buttons 1 rs232 communication port 1 usb communication port 1 i2c communication port | os |
|
iot-device-management | iot device management introduction a system that leverages ethereum platform for identity authentication and reputation of iot devices devices are registered in a smart contract via a web interface and send cryptographically signed messages to a platform that validates them using blockchain this project is a part of my undergraduate thesis using blockchain for registration and control of iot devices https zir nsk hr en islandora object riteh 1085 abstract iot is facing identity security and interoperability problem current systems rely on centralized client server model that will soon be unsatisfactory due to the rapid increase in the number of devices connected to the internet blockchain is shared distributed and decentralized ledger that allows development of decentralized applications this thesis examines the concept of its use for registration and management of iot devices a system that consists of a smart contract web interface and device and platform has been developed systems users entities register devices within a smart contract with their control information via a web interface devices sign messages using private key which are sent to the platform along with control information and associated proof received messages are validated using blockchain which at the end provides authentication integrity and non repudiation concept below are presented four main concepts that apply to this system device identity device is registered without revealing it s private properties by using merkle tree public key or it s representation is used as an id example of properties being hashed into merkle root img src https i imgur com mt2tiqe png width 700 message authentication each message is signed and validated using blockchain on receiver s end generating signature img src https i imgur com 3ttqcqz png width 700 validation img src https i imgur com 5npkikw png width 700 firmware hashing it is possible to confirm that device is running valid firmware 
that hasn t been tampered with device reputation based on web of trust principle devices can form a network of trust the more signatures a device has from other reputable devices the more trusted it can be architecture consists of entities devices and an iot platform img src https i imgur com 91p9lkx png width 700 development technologies used are as follows ethereum solidity truffle framework web3 js react structure main folders and their content contracts smart contracts solidity frontend web interface react simulations device and platform simulations smart contract see file contracts devicemanager sol for full list and explanations of methods and events web interface home and network status https i imgur com 8ipc2jf png historical events for entity https i imgur com snkz5ze png device registration identifier https i imgur com 9s4bllf png metadata https i imgur com ef1kstt png firmware https i imgur com ourcixi png confirm https i imgur com gmdyehl png download configuration https i imgur com yvdlslq png list devices https i imgur com ydnmddz png edit device https i imgur com ga5sy0c png historical events for device https i imgur com jvieew6 png devices and platform example device configuration js identifier 0xf34d4c8f79657f1086f55b817837439c303dff19 metadatahash 43af4ba721cd8c9ba432ed6aca9adb96d16f82c25ba76 firmwarehash b01d2af9ea9dd59dd9c8af3f1639da03c79b7ed28adaa metadata olive grove 45 0270 14 61685 espressif systems 00 0a 95 9d 68 16 firmware 333f14cdb0a8520199257479ba126a10bca96b229b7924085 address 0xf34d4c8f79657f1086f55b817837439c303dff19 publickey d627bbb0a7c150f814a1960ebe69f0d8b4494e1033d9e72 privatekey 48a2e48b2d178e7d1f1508f2964a89079f1f8a301ebb85a curve secp256k1 deviceid 0 simulation example for device and platform can be found in files simulations device js and simulations platform js | internet-of-things blockchain digital-identity ethereum web-of-trust distributed-ledger-technology smart-contracts | server |
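The device-identity idea described above (figure 1: hash the device's private properties into a merkle root so the device can be registered without revealing the properties themselves) can be sketched in a few lines. The hashing details here (sha256, hex-string concatenation, duplicating the last node on odd levels) are assumptions for illustration, not the project's exact scheme.

```python
import hashlib

# Minimal merkle-root sketch for device registration: properties are
# hashed into leaves, then combined pairwise up to a single root.
# Concrete hashing conventions are illustrative assumptions.

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

def merkle_root(properties):
    level = [sha256(p) for p in properties]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

props = ["olive grove", "45.0270,14.61685", "espressif systems", "00:0a:95:9d:68:16"]
print(merkle_root(props))  # 64-character hex digest
```

Registering only the root on-chain keeps the properties private, while any single property can later be proven against the root with a short merkle proof.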
.github | github github profile | server |
|
transdim | transdim mit license https img shields io badge license mit green svg https opensource org licenses mit python 3 7 https img shields io badge python 3 7 blue svg repo size https img shields io github repo size xinychen transdim svg https github com xinychen transdim archive master zip github stars https img shields io github stars xinychen transdim svg logo github label stars logocolor white https github com xinychen transdim h6 align center made by xinyu chen globe with meridians a href https xinychen github io https xinychen github io a h6 logo https github com xinychen transdim blob master images transdim logo large png machine learning models make important developments in the field of spatiotemporal data modeling like how to forecast near future traffic states of road networks but what happens when these models are built on incomplete data commonly collected from real world systems e g transportation system br about this project in the transdim trans portation d ata im putation project we develop machine learning models to help address some of the toughest challenges of spatiotemporal data modeling from missing data imputation to time series prediction the strategic aim of this project is creating accurate and efficient solutions for spatiotemporal traffic data imputation and prediction tasks in a hurry please check out our contents as follows br tasks and challenges missing data are there whether we like them or not the really interesting question is how to deal with incomplete data p align center img align middle src https github com xinychen transdim blob master images missing png width 800 p p align center b figure 1 b two classical missing patterns in a spatiotemporal setting p we create three missing data mechanisms on real world data missing data imputation random missing rm each sensor lost observations at completely random non random missing nm each sensor lost observations during several days blockout missing bm all sensors lost their 
observations at several consecutive time points p align center img src https github com xinychen transdim blob master images framework png alt drawing width 800 p p align center b figure 2 b tensor completion framework for spatiotemporal missing traffic data imputation p spatiotemporal prediction forecasting without missing values forecasting with incomplete observations p align center img align middle src https github com xinychen transdim blob master images predictor explained png width 700 p p align center b figure 3 b illustration of our proposed low rank autoregressive tensor completion latc imputer predictor with a prediction window green nodes observed values white nodes missing values red nodes panel prediction blue panel training data to construct the tensor p br implementation open data in this project we have adapted some publicly available data sets into our experiments the original links for these data are summarized as follows multivariate time series birmingham parking data set https archive ics uci edu ml datasets parking birmingham california pems traffic speed data set https doi org 10 5281 zenodo 3939792 large scale guangzhou urban traffic speed data set https doi org 10 5281 zenodo 1205228 hangzhou metro passenger flow data set https doi org 10 5281 zenodo 3145403 london urban movement speed data set https movement uber com other cities are also available at uber movement project https movement uber com portland highway traffic data set https portal its pdx edu home including traffic volume speed occupancy see data documentation https portal its pdx edu static files fhwa freeway 20data 20documentation pdf seattle freeway traffic speed data set https github com zhiyongc seattle loop data multidimensional time series new york city nyc taxi data set https www1 nyc gov site tlc about tlc trip record data page pacific surface temperature data set http iridl ldeo columbia edu sources cac for example if you want to view or use these data sets please 
download them at the datasets https github com xinychen transdim tree master datasets folder in advance and then run the following codes in your python console python import scipy io tensor scipy io loadmat datasets guangzhou data set tensor mat tensor tensor tensor in particular if you are interested in large scale traffic data we recommend pems 4w 8w 12w and utd19 https utd19 ethz ch index html for pems data you can download the data from zenodo https doi org 10 5281 zenodo 3939792 and place them at the folder of datasets data path example datasets california data set pems 4w csv then you can use pandas to open data python import pandas as pd data pd read csv datasets california data set pems 4w csv header none for model evaluation we mask certain entries of the observed data as missing values and then perform imputation for these missing values model implementation in our experiments we implemented some machine learning models mainly on numpy and written these python codes with jupyter notebook if you want to evaluate these models please download and run these notebooks directly prerequisite download the data sets in advance in the following implementation we have improved python codes in jupyter notebook in terms of both readiability and efficiency our proposed models are highlighted in bold fonts imputer imputation models notebook guangzhou birmingham hangzhou seattle london nyc pacific bpmf https nbviewer jupyter org github xinychen transdim blob master imputer bpmf ipynb trmf https nbviewer jupyter org github xinychen transdim blob master imputer trmf ipynb btrmf https nbviewer jupyter org github xinychen transdim blob master imputer btrmf ipynb btmf https nbviewer jupyter org github xinychen transdim blob master imputer btmf ipynb bgcp https nbviewer jupyter org github xinychen transdim blob master imputer bgcp ipynb batf https nbviewer jupyter org github xinychen transdim blob master imputer batf ipynb bttf https nbviewer jupyter org github xinychen 
transdim blob master imputer bttf ipynb halrtc https nbviewer jupyter org github xinychen transdim blob master imputer halrtc ipynb lrtc tnn https nbviewer org github xinychen transdim blob master imputer lrtc tnn ipynb predictor prediction models notebook guangzhou birmingham hangzhou seattle london nyc pacific trmf https nbviewer jupyter org github xinychen transdim blob master predictor trmf ipynb btrmf https nbviewer jupyter org github xinychen transdim blob master predictor btrmf ipynb btrtf https nbviewer jupyter org github xinychen transdim blob master predictor btrtf ipynb btmf https nbviewer jupyter org github xinychen transdim blob master predictor btmf ipynb bttf https nbviewer jupyter org github xinychen transdim blob master predictor bttf ipynb cover does not cover under development for the implementation of these models we use both dense mat and sparse mat or dense tensor and sparse tensor as inputs however it is not necessary by doing so if you do not hope to see the imputation prediction performance in the iterative process you can remove dense mat or dense tensor from the inputs of these algorithms imputation prediction performance imputation example on guangzhou data example https github com xinychen transdim blob master images estimated series1 png a time series of actual and estimated speed within two weeks from august 1 to 14 example https github com xinychen transdim blob master images estimated series2 png b time series of actual and estimated speed within two weeks from september 12 to 25 the imputation performance of bgcp cp rank r 15 and missing rate 30 under the fiber missing scenario with third order tensor representation where the estimated result of road segment 1 is selected as an example in the both two panels red rectangles represent fiber missing i e speed observations are lost in a whole day prediction example example https github com xinychen transdim blob master images prediction hangzhou png example https github com xinychen 
transdim blob master images prediction nyc heatmap png example https github com xinychen transdim blob master images prediction nyc png

## Quick Start

This is an imputation example of Low-Rank Tensor Completion with Truncated Nuclear Norm minimization (LRTC-TNN). One notable thing is that, unlike the complex equations in our paper, our Python implementation is extremely easy to work with.

First, import some necessary packages:

```python
import numpy as np
from numpy.linalg import inv as inv
```

Define the operators of tensor unfolding (`ten2mat`) and matrix folding (`mat2ten`) using NumPy:

```python
def ten2mat(tensor, mode):
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order='F')
```

```python
def mat2ten(mat, tensor_size, mode):
    index = list()
    index.append(mode)
    for i in range(tensor_size.shape[0]):
        if i != mode:
            index.append(i)
    return np.moveaxis(np.reshape(mat, list(tensor_size[index]), order='F'), 0, mode)
```

Define Singular Value Thresholding (SVT) for Truncated Nuclear Norm (TNN) minimization:

```python
def svt_tnn(mat, tau, theta):
    [m, n] = mat.shape
    if 2 * m < n:
        u, s, v = np.linalg.svd(mat @ mat.T, full_matrices=0)
        s = np.sqrt(s)
        idx = np.sum(s > tau)
        mid = np.zeros(idx)
        mid[:theta] = 1
        mid[theta:idx] = (s[theta:idx] - tau) / s[theta:idx]
        return (u[:, :idx] @ np.diag(mid)) @ (u[:, :idx].T @ mat)
    elif m > 2 * n:
        return svt_tnn(mat.T, tau, theta).T
    u, s, v = np.linalg.svd(mat, full_matrices=0)
    idx = np.sum(s > tau)
    vec = s[:idx].copy()
    vec[theta:idx] = s[theta:idx] - tau
    return u[:, :idx] @ np.diag(vec) @ v[:idx, :]
```

Define performance metrics (i.e., RMSE, MAPE):

```python
def compute_rmse(var, var_hat):
    return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0])

def compute_mape(var, var_hat):
    return np.sum(np.abs(var - var_hat) / var) / var.shape[0]
```

Define LRTC-TNN:

```python
def lrtc(dense_tensor, sparse_tensor, alpha, rho, theta, epsilon, maxiter):
    """Low-rank tensor completion with truncated nuclear norm, LRTC-TNN."""
    dim = np.array(sparse_tensor.shape)
    pos_missing = np.where(sparse_tensor == 0)
    pos_test = np.where((dense_tensor != 0) & (sparse_tensor == 0))
    dense_test = dense_tensor[pos_test]
    del dense_tensor

    X = np.zeros(np.insert(dim, 0, len(dim)))  # \boldsymbol{\mathcal{X}}
    T = np.zeros(np.insert(dim, 0, len(dim)))  # \boldsymbol{\mathcal{T}}
    Z = sparse_tensor.copy()
    last_tensor = sparse_tensor.copy()
    snorm = np.sqrt(np.sum(sparse_tensor ** 2))
    it = 0
    while True:
        rho = min(rho * 1.05, 1e5)
        for k in range(len(dim)):
            X[k] = mat2ten(svt_tnn(ten2mat(Z - T[k] / rho, k), alpha[k] / rho,
                                   int(np.ceil(theta * dim[k]))), dim, k)
        Z[pos_missing] = np.mean(X + T / rho, axis=0)[pos_missing]
        T = T + rho * (X - np.broadcast_to(Z, np.insert(dim, 0, len(dim))))
        tensor_hat = np.einsum('k, kmnt -> mnt', alpha, X)
        tol = np.sqrt(np.sum((tensor_hat - last_tensor) ** 2)) / snorm
        last_tensor = tensor_hat.copy()
        it += 1
        if (it + 1) % 50 == 0:
            print('Iter: {}'.format(it + 1))
            print('MAPE: {:.6}'.format(compute_mape(dense_test, tensor_hat[pos_test])))
            print('RMSE: {:.6}'.format(compute_rmse(dense_test, tensor_hat[pos_test])))
            print()
        if (tol < epsilon) or (it >= maxiter):
            break

    print('Imputation MAPE: {:.6}'.format(compute_mape(dense_test, tensor_hat[pos_test])))
    print('Imputation RMSE: {:.6}'.format(compute_rmse(dense_test, tensor_hat[pos_test])))
    print()

    return tensor_hat
```

Let us try it on Guangzhou urban traffic speed data set:

```python
import scipy.io
import numpy as np
np.random.seed(1000)

dense_tensor = scipy.io.loadmat('datasets/Guangzhou-data-set/tensor.mat')['tensor']
dim = dense_tensor.shape
missing_rate = 0.2

# Random missing (RM)
sparse_tensor = dense_tensor * np.round(np.random.rand(dim[0], dim[1], dim[2]) + 0.5 - missing_rate)
```

Run the imputation experiment:

```python
import time
start = time.time()
alpha = np.ones(3) / 3
rho = 1e-5
theta = 0.30
epsilon = 1e-4
maxiter = 200
tensor_hat = lrtc(dense_tensor, sparse_tensor, alpha, rho, theta, epsilon, maxiter)
end = time.time()
print('Running time: %d seconds' % (end - start))
```

this example is from imputer lrtc tnn ipynb https nbviewer org github xinychen transdim blob master imputer lrtc tnn ipynb you can check out this jupyter notebook for details br toy examples time series forecasting bayesian vector autoregression forecasting https nbviewer jupyter org github xinychen transdim blob master toy examples bayesian var forecasting ipynb structured low rank matrix completion https nbviewer jupyter org github xinychen transdim blob master toy examples slrmc ipynb br documentation 1 intuitive understanding of
randomized singular value decomposition https towardsdatascience com intuitive understanding of randomized singular value decomposition 9389e27cb9de july 1 2020 2 matrix autoregressive model for multidimensional time series forecasting https t co fmsusc0tce amp 1 october 3 2021 3 understanding lyapunov equation through kronecker product and linear equation https towardsdatascience com understand the lyapunov equation through kronecker product and linear equation bfff9c1e59ab october 8 2021 4 generating random numbers and arrays in matlab and numpy https towardsdatascience com generating random numbers and arrays in matlab and numpy 47dcc9997650 october 9 2021 5 dynamic mode decomposition for multivariate time series forecasting https towardsdatascience com dynamic mode decomposition for multivariate time series forecasting 415d30086b4b october 10 2021 6 reduced rank vector autoregressive model for high dimensional time series forecasting https towardsdatascience com reduced rank vector autoregressive model for high dimensional time series forecasting bdd17df6c5ab october 16 2021 7 dynamic mode decomposition for spatiotemporal traffic speed time series in seattle freeway https towardsdatascience com dynamic mode decomposition for spatiotemporal traffic speed time series in seattle freeway b0ba97e81c2c ce4e 5f7c3f01d622 october 29 2021 8 analyzing missing data problem in uber movement speed data https medium com xinyu chen analyzing missing data problem in uber movement speed data 208d7a126af5 february 14 2022 9 using conjugate gradient to solve matrix equations https medium com p 7f16cbae18a3 february 23 2022 10 inpainting fluid dynamics with tensor decomposition numpy https medium com p d84065fead4d march 15 2022 11 temporal matrix factorization for multivariate time series forecasting https medium com p b1c59faf05ea march 20 2022 12 forecasting multivariate time series with nonstationary temporal matrix factorization https medium com p 4705df163fcf april 25 2022 
13 implementing kronecker product decomposition with numpy https medium com p 13f679f76347 june 20 2022 14 tensor autoregression a multidimensional time series model https medium com p 21681f696d79 september 3 2022 15 reproducing dynamic mode decomposition on fluid flow data in python https medium com xinyu chen reproducing dynamic mode decomposition on fluid flow data in python 94b8d7e1f203 september 6 2022 16 convolution nuclear norm minimization for time series modeling https medium com p 377c56e49962 october 3 2022 17 reinforce matrix factorization for time series modeling probabilistic sequential matrix factorization https medium com p 873f4ca344de october 5 2022 18 discrete convolution and fast fourier transform explained and implemented step by step https medium com p 83ff1809378d october 19 2022 19 matrix factorization for image inpainting in python https medium com p d7300e6afbfd december 8 2022 20 circulant matrix nuclear norm minimization for image inpainting in python https medium com p b98eb94d8e december 9 2022 21 low rank laplacian convolution model for time series imputation and image inpainting https medium com p a46dd88d107e december 10 2022 22 low rank laplacian convolution model for color image inpainting https medium com p e8c5cdb3cc73 december 17 2022 23 intuitive understanding of tensors in machine learning https medium com xinyu chen intuitive understanding of tensors in machine learning 33635c64b596 january 20 2023 24 low rank matrix and tensor factorization for speed field reconstruction https medium com p bb4807cb93c5 march 9 2023 br our publications xinyu chen zhanhong cheng nicolas saunier lijun sun 2022 laplacian convolutional representation for traffic time series imputation arxiv preprint arxiv 2212 01529 preprint https arxiv org abs 2212 01529 python code univariate imputation https github com xinychen transdim tree master univariate models python code multivariate imputation https github com xinychen transdim tree master 
multiviarate models xinyu chen lijun sun 2022 bayesian temporal factorization for multidimensional time series prediction ieee transactions on pattern analysis and machine intelligence 44 9 4659 4673 preprint https arxiv org abs 1910 06366v2 doi https doi org 10 1109 tpami 2021 3066551 slides https doi org 10 5281 zenodo 4693404 data python code https github com xinychen transdim xinyu chen mengying lei nicolas saunier lijun sun 2022 low rank autoregressive tensor completion for spatiotemporal traffic data imputation ieee transactions on intelligent transportation systems 23 8 12301 12310 preprint https arxiv org abs 2104 14936 doi https doi org 10 1109 tits 2021 3113608 data python code https github com xinychen transdim also accepted in part to milets workshop of kdd 2021 https kdd milets github io milets2021 see workshop paper https kdd milets github io milets2021 papers milets2021 paper 23 pdf xinyu chen yixian chen nicolas saunier lijun sun 2021 scalable low rank tensor learning for spatiotemporal traffic data imputation transportation research part c emerging technologies 129 103226 preprint https arxiv org abs 2008 03194 doi https doi org 10 1016 j trc 2021 103226 data https doi org 10 5281 zenodo 3939792 python code https github com xinychen transdim tree master large imputer xinyu chen lijun sun 2020 low rank autoregressive tensor completion for multivariate time series forecasting arxiv 2006 10436 preprint https arxiv org abs 2006 10436 data python code https github com xinychen tensor learning xinyu chen jinming yang lijun sun 2020 a nonconvex low rank tensor completion model for spatiotemporal traffic data imputation transportation research part c emerging technologies 117 102673 preprint https arxiv org abs 2003 10271v2 doi https doi org 10 1016 j trc 2020 102673 data python code https github com xinychen transdim xinyu chen zhaocheng he yixian chen yuhuan lu jiawei wang 2019 missing traffic data imputation and pattern discovery with a bayesian 
augmented tensor factorization model transportation research part c emerging technologies 104 66 77 doi https doi org 10 1016 j trc 2019 03 003 slides https doi org 10 5281 zenodo 2632552 data http doi org 10 5281 zenodo 1205229 matlab code https github com sysuits batf python code https github com xinychen transdim blob master imputer batf ipynb xinyu chen zhaocheng he lijun sun 2019 a bayesian tensor decomposition approach for spatiotemporal traffic data imputation transportation research part c emerging technologies 98 73 84 preprint https www researchgate net publication 329177786 a bayesian tensor decomposition approach for spatiotemporal traffic data imputation doi https doi org 10 1016 j trc 2018 11 003 data http doi org 10 5281 zenodo 1205229 matlab code https github com lijunsun bgcp imputation python code https github com xinychen transdim blob master experiments imputation bgcp ipynb xinyu chen zhaocheng he jiawei wang 2018 spatial temporal traffic speed patterns discovery and incomplete data recovery via svd combined tensor decomposition transportation research part c emerging technologies 86 59 77 doi http doi org 10 1016 j trc 2017 10 023 data http doi org 10 5281 zenodo 1205229 this project is from the above papers please cite these papers if they help your research br collaborators table tr td align center a href https github com xinychen img src https github com xinychen png size 80 width 80px alt xinyu chen br sub b xinyu chen b sub a br a href https github com xinychen transdim commits author xinychen title code a td td align center a href https github com vadermit img src https github com vadermit png size 80 width 80px alt jinming yang br sub b jinming yang b sub a br a href https github com xinychen transdim commits author vadermit title code a td td align center a href https github com yxnchen img src https github com yxnchen png size 80 width 80px alt yixian chen br sub b yixian chen b sub a br a href https github com xinychen transdim 
commits author yxnchen title code a td td align center a href https github com mengyinglei img src https github com mengyinglei png size 80 width 80px alt mengying lei br sub b mengying lei b sub a br a href https github com xinychen transdim commits author mengyinglei title code a td td align center a href https github com lijunsun img src https github com lijunsun png size 80 width 80px alt lijun sun br sub b lijun sun b sub a br a href https github com xinychen transdim commits author lijunsun title code a td td align center a href https github com hanty img src https github com hanty png size 80 width 80px alt tianyang han br sub b tianyang han b sub a br a href https github com xinychen transdim commits author hanty title code a td tr tr td align center a href https github com xxxx img src https github com xxxx png size 100 width 100px alt xxxx br sub b xxxx b sub a br a href https github com xinychen transdim commits author xxxx title code a td tr table principal investigator pi table tr td align center a href https github com lijunsun img src https github com lijunsun png size 80 width 80px alt lijun sun br sub b lijun sun b sub a br a href https github com xinychen transdim commits author lijunsun title code a td td align center a href https github com nsaunier img src https github com nsaunier png size 80 width 80px alt nicolas saunier br sub b nicolas saunier b sub a br a href https github com xinychen transdim commits author nsaunier title code a td tr table see the list of contributors https github com xinychen transdim graphs contributors who participated in this project our transdim is still under development more machine learning models and technical features are going to be added and we always welcome contributions to help make transdim better if you have any suggestion about this project or want to collaborate with us please feel free to contact xinyu chen email chenxy346 gmail com and send your suggestion statement we would like to thank everyone 
who has helped this project in any way recommended email subjects suggestion on transdim from your name collaboration statement on transdim from your name if you have any questions please feel free to create an issue https github com xinychen transdim issues br acknowledgements this research is supported by the institute for data valorization ivado https ivado ca en ivado scholarships excellence scholarships phd license this work is released under the mit license
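As a footnote to the quick-start section earlier in this README: the unfolding/folding pair used throughout the code can be sanity-checked in isolation. The sketch below re-states the two helpers as reconstructed from the text above (the index construction differs only cosmetically) and verifies on a small tensor that they are mutual inverses for every mode.

```python
import numpy as np

def ten2mat(tensor, mode):
    # Unfold: move axis `mode` to the front, then flatten column-major.
    return np.reshape(np.moveaxis(tensor, mode, 0),
                      (tensor.shape[mode], -1), order='F')

def mat2ten(mat, tensor_size, mode):
    # Fold: inverse of ten2mat for a target tensor of shape `tensor_size`.
    index = [mode] + [i for i in range(tensor_size.shape[0]) if i != mode]
    return np.moveaxis(np.reshape(mat, list(tensor_size[index]), order='F'),
                       0, mode)

tensor = np.arange(24.0).reshape(2, 3, 4)
ok = all(
    np.array_equal(mat2ten(ten2mat(tensor, mode), np.array(tensor.shape), mode),
                   tensor)
    for mode in range(3)
)
print(ok)
```

Unfolding mode `k` always yields a matrix of shape `(dim[k], prod(other dims))`, which is exactly what `svt_tnn` operates on inside the LRTC-TNN loop.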
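The truncated-nuclear-norm thresholding step from the quick start can likewise be checked on a matrix with known singular values. The sketch below keeps only the square-matrix branch of `svt_tnn` (the rectangular fast paths of the full implementation are omitted): with `tau = 2` and `theta = 1`, the top singular value of `diag(5, 3, 1)` is kept intact, the second is soft-thresholded to 1, and the third is dropped.

```python
import numpy as np

def svt_tnn(mat, tau, theta):
    # Keep the top-`theta` singular values, soft-threshold the rest by
    # `tau`, and drop singular values that fall at or below `tau`.
    u, s, v = np.linalg.svd(mat, full_matrices=0)
    idx = np.sum(s > tau)
    vec = s[:idx].copy()
    vec[theta:idx] = s[theta:idx] - tau
    return u[:, :idx] @ np.diag(vec) @ v[:idx, :]

out = svt_tnn(np.diag([5.0, 3.0, 1.0]), tau=2.0, theta=1)
```

The result is (up to floating-point error) `diag(5, 1, 0)`, i.e. a strictly lower-rank matrix, which is what drives the low-rank recovery in each ADMM iteration.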
awesome-llm-datasets | awesome llm datasets this repository is a collection of useful links related to datasets for language model models llms and reinforcement learning with human feedback rlhf it includes a variety of open datasets as well as tools pre trained models and research papers that can help researchers and developers work with llms and rlhf from a data perspective follow and star for the latest and greatest links related to datasets for llms and rlhf table of contents 1 datasets datasets 1 for pre training for pre training 1 2023 2023 2 before 2023 before 2023 2 for instruction tuning for instruction tuning 3 for rlhf for rlhf 4 for evaluation for evaluation 5 for other purposes for other purposes 2 models and their datasets models and their datasets 3 tools and methods tools and methods 4 papers papers datasets for pre training 2023 redpajama data https github com togethercomputer redpajama data 1 2 trillion tokens dataset in english dataset token count commoncrawl 878 billion c4 175 billion github 59 billion books 26 billion arxiv 28 billion wikipedia 24 billion stackexchange 20 billion total 1 2 trillion also includes code for data preparation deduplication tokenization and visualization created by ontocord ai mila qu bec ai institute eth ds3lab universit de montr al stanford center for research on foundation models crfm stanford hazy research research group and laion before 2023 for instruction tuning for rlhf alignment for evaluation for other purposes models and their datasets llama overview a collection of open source foundation models ranging in size from 7b to 65b parameters released by meta ai license non commercial bespoke model gpl 3 0 code release blog post https ai facebook com blog large language model llama meta ai arxiv publication https arxiv org abs 2302 13971 model card https github com facebookresearch llama blob main model card md vicuna overview a 13b parameter open source chatbot model fine tuned on llama and 70k chatgpt 
conversations that maintains 92 of chatgpt s performance and outperforms llama and alpaca license non commercial bespoke license model apache 2 0 code repo https github com lm sys fastchat vicuna weights release blog post https vicuna lmsys org sharegpt dataset https sharegpt com models https huggingface co lmsys gradio demo https chat lmsys org dolly 2 0 overview a fully open source 12b parameter instruction following llm fine tuned on a human generated instruction dataset licensed for research and commercial use license cc by sa 3 0 model cc by sa 3 0 dataset apache 2 0 code repo https github com databrickslabs dolly release blog post https www databricks com blog 2023 04 12 dolly first open commercially viable instruction tuned llm models https huggingface co databricks llava overview a multi modal llm that combines a vision encoder and vicuna for general purpose visual and language understanding with capabilities similar to gpt 4 license non commercial bespoke model cc by nc 4 0 dataset apache 2 0 code repo https github com haotian liu llava project homepage https llava vl github io arxiv publication https arxiv org abs 2304 08485 dataset models https huggingface co liuhaotian gradio demo https llava hliu cc stablelm overview a suite of low parameter 3b 7b llms trained on a new dataset built on the pile with 1 5 trillion tokens of content license cc by sa 4 0 models repo https github com stability ai stablelm release blog post https stability ai blog stability ai launches the first of its stablelm suite of language models models https huggingface co stabilityai gradio demo https huggingface co spaces stabilityai stablelm tuned alpha chat alpaca overview a partially open source instruction following model fine tuned on llama which is smaller and cheaper and performs similarly to gpt 3 5 license non commercial bespoke model cc by nc 4 0 dataset apache 2 0 code release blog post https crfm stanford edu 2023 03 13 alpaca html dataset https huggingface co datasets 
tatsu lab alpaca tools and methods papers
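As a quick consistency check on the RedPajama table above, the per-source token counts should sum to the quoted total of roughly 1.2 trillion tokens:

```python
# Per-source token counts (in billions) taken from the RedPajama table above.
counts_billion = {
    "CommonCrawl": 878,
    "C4": 175,
    "GitHub": 59,
    "Books": 26,
    "ArXiv": 28,
    "Wikipedia": 24,
    "StackExchange": 20,
}
total = sum(counts_billion.values())
print(total)  # 1210 billion tokens, i.e. ~1.2 trillion as quoted
```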
ml-glossary | machine learning glossary looking for fellow maintainers apologies for my non responsiveness i ve been heads down at cruise buiding ml infra for self driving cars and haven t reviewed this repo in forever looks like we re getting 54k monthly active users now and i think the repo deserves more attention let me know if you would be interested in joining as a maintainer with priviledges to merge prs view the glossary http ml cheatsheet readthedocs io en latest how to contribute 1 clone repo git clone https github com bfortuner ml glossary git 2 install dependencies assumes you have the usual suspects installed numpy scipy etc pip install sphinx sphinx autobuild pip install sphinx rtd theme pip install recommonmark for python 3 x installed use pip3 install sphinx sphinx autobuild pip3 install sphinx rtd theme pip3 install recommonmark 3 preview changes if you are using make build cd ml glossary cd docs make html for windows cd ml glossary cd docs build bat html 4 verify your changes by opening the index html file in build 5 submit pull request https help github com articles creating a pull request short for time feel free to raise an issue https github com bfortuner ml glossary issues to correct errors or contribute content without a pull request style guide each entry in the glossary must include the following at a minimum 1 concise explanation as short as possible but no shorter 2 citations papers tutorials etc excellent entries will also include 1 visuals diagrams charts animations images 2 code python numpy snippets classes or functions 3 equations formatted with latex the goal of the glossary is to present content in the most accessible way possible with a heavy emphasis on visuals and interactive diagrams that said in the spirit of rapid prototyping it s okay to to submit a rough draft without visuals or code we expect other readers will enhance your submission over time why rst and not markdown rst has more features for large and complex 
documentation projects it s the logical choice https eli thegreenplace net 2017 restructuredtext vs markdown for technical documentation top contributors we re big fans of distill http distill pub prize and we like their idea of offering prizes for high quality submissions we don t have as much money as they do but we d still like to reward contributors in some way for contributing to the glossary for instance a cheatsheet cryptocurrency where tokens equal commits let us know if you have better ideas in the end this is an open source project and we hope contributing to a repository of concise accessible machine learning knowledge is enough incentive on its own tips and tricks adding equations http www sphinx doc org en stable ext math html working with jupyter notebook http louistiao me posts demos ipython notebook demo quickstart with jupyter notebook template graphs and charts importing images linking to code resources desmos graphing tool https www desmos com calculator 3d graphing tool https www geogebra org 3d how to submit pull requests https help github com articles creating a pull request rst cheatsheet http docutils sourceforge net docs user rst quickref html markdown cheatsheet https github com adam p markdown here wiki markdown cheatsheet citation generator http www citationmachine net mathjax cheatsheet https math meta stackexchange com questions 5020 mathjax basic tutorial and quick reference embedding math equations http www sphinx doc org en stable ext math html sphinx tutorial https pythonhosted org an example pypi project sphinx html sphinx docs http www sphinx doc org en stable markup code html sphinx cheatsheet http openalea gforge inria fr doc openalea doc build html source sphinx rest syntax html
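To make the style guide above concrete, here is a hypothetical example of the short Python/NumPy snippet a glossary entry might include (the function name and numbers are invented for illustration — this is a sketch of an entry, not project code):

```python
import numpy as np

def gradient_descent_step(theta, grad, lr=0.1):
    """One gradient descent update: theta := theta - lr * grad."""
    return theta - lr * grad

# A single update on a toy two-parameter model.
theta = gradient_descent_step(np.array([1.0, -2.0]), np.array([0.5, -1.0]))
print(theta)
```

Paired with the matching LaTeX equation and a one-paragraph explanation, a snippet of this size is all an entry needs at a minimum.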
zephyr-sifive-freedom-template

# Zephyr SiFive Freedom Board Template

This repository contains a template for automatically porting [Zephyr RTOS](https://zephyrproject.org) to a SiFive Freedom E-Series board, based on that board's DTS file.

## How to Use

1. Choose a location for storing custom board configurations. We'll refer to this directory as `<board-directory>`.
2. `git clone https://github.com/sifive/zephyr-sifive-freedom-template.git <your-board-name>`
3. `cd <your-board-name>`
4. Copy your DTS file into the current directory.
5. Customize `board.sh`.
6. Select your desired ROM boot address.

`customize-board.sh` is idempotent, so feel free to re-run it as many times as you like.

## To Build the "Hello World" Sample Project

1. `source /opt/zephyr/zephyr-<version>/zephyr-env.sh`
2. `cd /opt/zephyr/zephyr-<version>/samples/hello_world`
3. `mkdir build && cd build`
4. `cmake -DBOARD=<your-board-name> -DBOARD_ROOT=<board-directory> ..`
5. `make -j $(nproc)`
6. The output binary is in `zephyr/zephyr.elf`.

## To Clean / Reset

1. `git clean -dfx` will reset the template to a clean state.
Awesome-AIGC-Tutorials | awesome aigc tutorials awesome https camo githubusercontent com 64f8905651212a80869afbecbf0a9c52a5d1e70beab750dea40a994fa9a9f3c6 68747470733a2f2f617765736f6d652e72652f62616467652e737667 https github com luban agi awesome aigc tutorials license mit https img shields io badge license mit green svg https opensource org licenses mit https img shields io github last commit luban agi awesome aigc tutorials color green github repo stars https img shields io github stars luban agi awesome aigc tutorials style social https github com luban agi awesome aigc tutorials english readme zh md awesome aigc tutorials houses a curated collection of tutorials and resources spanning across large language models ai painting and related fields discover in depth insights and knowledge catered for both beginners and advanced ai enthusiasts recent updates 2023 10 14 added new tutorial 11 667 large language models methods and applications https cmu llms org in large language models 2023 09 28 added new tutorial tinyml and efficient deep learning computing https hanlab mit edu courses 2023 fall 65940 schedule in large language models how to contribute we warmly welcome contributions from everyone whether you ve found a typo a bug have a suggestion or want to share a resource related to aigc for detailed guidelines on how to contribute please see our contributing md contributing md file content introduction introduction large language models large language models prompt engineering prompt engineering llms in practice llms in practice theory of llms theory of llms ai painting ai painting art fundamentals and ai painting techniques art fundamentals and ai painting techniques stable diffusion principles and applications stable diffusion principles and applications ai audio ai audio multimodal multimodal deep learning deep learning ai system ai system miscellaneous miscellaneous star history star history friendship links friendship links introduction ai for everyone 
andrew ng https www deeplearning ai courses ai for everyone https img shields io badge level easy green https img shields io badge video blue ai for everyone is a beginner s guide to understanding ai s practical applications its limitations and its societal impact ideal for business professionals and leaders alike practical ai for teachers and students wharton school https www youtube com playlist list plwrdpyzpkkn302 rl5rrxvqe8j0jlp02j https img shields io badge level easy green https img shields io badge video blue wharton interactive s crash course delves into the mechanics and impacts of llms spotlighting models like openai s chatgpt4 microsoft s bing in creative mode and google s bard artificial intelligence for beginners microsoft https microsoft github io ai for beginners https img shields io badge level medium yellow this 12 week microsoft curriculum dives deep into ai methodologies spanning symbolic ai to neural networks while highlighting tensorflow and pytorch frameworks yet omits business applications classic machine learning and certain cloud specific topics generative ai learning path google cloud https www cloudskillsboost google journeys 118 https img shields io badge level medium yellow https img shields io badge video blue this learning path offers a comprehensive journey from the basics of large language models to deploying generative ai solutions on google cloud large language models prompt engineering chatgpt prompt engineering for developers deeplearning ai https www deeplearning ai short courses chatgpt prompt engineering for developers https img shields io badge level easy green https img shields io badge video blue https img shields io badge notebook orange co taught by openai and deeplearning ai this course guides learners in leveraging large language models for tasks like summarizing and text transformation with hands on experiences in a jupyter notebook environment building systems with the chatgpt api deeplearning ai https www 
deeplearning ai short courses building systems with chatgpt https img shields io badge level easy green https img shields io badge video blue https img shields io badge notebook orange led by experts from openai and deeplearning ai this course teaches automating workflows using language models creating prompt chains integrating python and designing chatbots all through hands on jupyter notebook exercises with just basic python knowledge required langchain for llm application development deeplearning ai https www deeplearning ai short courses langchain for llm application development https img shields io badge level easy green https img shields io badge video blue https img shields io badge notebook orange guided by the creator of langchain and andrew ng this course dives into advanced llm techniques like chaining operations and using models as reasoning agents empowering learners to craft robust applications quickly with foundational python knowledge langchain chat with your data deeplearning ai https www deeplearning ai short courses langchain chat with your data https img shields io badge level easy green https img shields io badge video blue https img shields io badge notebook orange delve into retrieval augmented generation and chatbot creation based on document content with langchain covering data loading splitting embeddings advanced retrieval techniques and interactive chatbot building designed for python savvy developers keen on harnessing large language models prompt engineering for chatgpt vanderbilt university https www coursera org learn prompt engineering utm medium sem utm source gg utm campaign b2c emea prompt engineering vanderbilt ftcof learn country gb country uk campaignid 20462816306 adgroupid 157715342052 device c keyword prompt 20engineering 20coursera matchtype b network g devicemodel adposition creativeid 670151312123 hide mobile promo gclid cj0kcqjwuzgnbhd1arisacxbavg8rcauf0lwfyvnmup7t7bhoh0jst0xxhq3s1vmdxtzc8o1wlj8fxqaatg ealw wcb https 
img shields io badge level easy green https img shields io badge video blue unlock the potential of large language models like chatgpt by mastering prompt engineering transitioning from basic to sophisticated prompts enabling diverse applications ranging from writing to simulation suitable for anyone with basic computer skills prompt engineering guide dair ai https www promptingguide ai https img shields io badge level easy green this guide introduces prompt engineering a discipline that optimizes interactions with large language models offering extensive resources research and tools learn prompting https learnprompting org https img shields io badge level medium yellow dive into a beginner friendly guide on generative ai and prompt engineering offering insights from industry giants and explore how these tools revolutionize content creation and the future of work langchain ai handbook james briggs and francisco ingham https www pinecone io learn series langchain https img shields io badge level medium yellow https img shields io badge book 2391672c explore the transformative world of langchain mastering core components crafting effective prompts and harnessing advanced ai agents conversational memories and custom tools for cutting edge applications llms in practice llm bootcamp the full stack https fullstackdeeplearning com llm bootcamp spring 2023 https img shields io badge level medium yellow https img shields io badge video blue delve deep into prompt engineering llm operations user experience design for language interfaces augmented language model techniques foundational llm insights hands on projects and the future of llms complemented by expert talks from industry leaders on training and agent design finetuning large language models deeplearning ai https www deeplearning ai short courses finetuning large language models https img shields io badge level medium yellow https img shields io badge video blue https img shields io badge notebook orange learn the 
techniques of finetuning large language models llms with sharon zhou gaining expertise in data preparation training and updating neural net weights for improved results tailored to your data tinyml and efficient deep learning computing massachusetts institute of technology https hanlab mit edu courses 2023 fall 65940 schedule https img shields io badge level medium yellow https img shields io badge video blue this course explores efficient ai computing techniques for deep learning on constrained devices covering model compression pruning quantization architecture search distributed training and quantum machine learning with hands on deployment of large models like llama 2 on laptops learn the fundamentals of generative ai for real world applications aws x deeplearning ai https www deeplearning ai courses generative ai with llms https img shields io badge level medium yellow https img shields io badge video blue this course in partnership with aws offers deep insights into generative ai and large language models llms participants will learn the mechanics optimization and real world applications of llms from aws ai experts suitable for professionals in ai and machine learning with a coursera certificate upon completion basic python and machine learning knowledge recommended theory of llms cs324 advances in foundation models stanford university https stanford cs324 github io winter2023 https img shields io badge level easy green cs 324 delves into foundation models like gpt 3 and dall e covering their principles systems ethics and application and culminates in a hands on research project or application design cs 601 471 671 nlp self supervised models johns hopkins university https self supervised cs jhu edu sp2023 index html https img shields io badge level medium yellow this course offers an in depth exploration of self supervised learning techniques for nlp training students to design and implement neural network models using pytorch with a focus on various language 
model architectures 11 667 large language models methods and applications carnegie mellon university https cmu llms org https img shields io badge level medium yellow this graduate course offers a comprehensive overview of large language models llms covering basics emergent capabilities applications scaling techniques deployment concerns and future challenges equipping students for research and applications in the ai era cs224n natural language processing with deep learning stanford university https web stanford edu class cs224n https img shields io badge level medium yellow https img shields io badge video blue this course provides a comprehensive insight into deep learning for nlp using pytorch emphasizing end to end neural models eliminating the need for task specific feature engineering and equipping students with the skills to craft their own neural network solutions speech and language processing dan jurafsky and james h martin https web stanford edu jurafsky slp3 https img shields io badge level medium yellow https img shields io badge book 2391672c authored by leading experts in the field this authoritative text provides an in depth exploration of the algorithms and mathematical models for modern natural language processing and speech recognition and is continually updated to reflect the rapid advancements in the nlp domain cos 597g fall 2022 understanding large language models princeton university https www cs princeton edu courses archive fall22 cos597g https img shields io badge level hard red an advanced exploration into the transformative realm of llms discussing state of the art models their profound capabilities and associated challenges with an emphasis on in depth research ethical considerations and hands on project experience tailored for seasoned students versed in machine learning and deep nlp frameworks ai painting art fundamentals and ai painting techniques lecture series an interesting topic every week on the fundamentals of art niji academy 
https www niji academy work lecture https img shields io badge level easy green niji academy blends art fundamentals with ai elevating painting skills and speeding up art learning stable diffusion principles and applications how diffusion models work deeplearning ai https www deeplearning ai short courses how diffusion models work https img shields io badge level medium yellow https img shields io badge video blue https img shields io badge notebook orange master generative ai in how diffusion models work an intermediate course by sharon zhou where you ll craft diffusion models from scratch enriched with hands on coding and labs ideal for those proficient in python tensorflow or pytorch hugging face diffusion models course https github com huggingface diffusion models class https img shields io badge level medium yellow https img shields io badge notebook orange the hugging face course offers an in depth look into diffusion models guiding participants through media generation hands on training and customization using the diffusers library with a foundational understanding of python and deep learning essential for the best experience practical deep learning for coders part 2 deep learning foundations to stable diffusion fast ai https course fast ai lessons part2 html https img shields io badge level medium yellow https img shields io badge video blue this course offers an in depth exploration of stable diffusion algorithms covering advanced deep learning techniques and hands on projects using pytorch empowering students with expertise in cutting edge diffusion models ai audio hugging face audio course https huggingface co learn audio course chapter0 introduction https img shields io badge level medium yellow the hugging face audio course teaches how to use transformers for various audio tasks from speech recognition to generating speech from text combining theory with hands on exercises for learners familiar with deep learning cs224s spoken language processing 
stanford university http web stanford edu class cs224s https img shields io badge level medium yellow an immersive course on spoken language technology covering dialog systems deep learning in speech recognition and synthesis with hands on projects using modern tools like pytorch alexa skills kit and speechbrain culminating in student driven research or system design projects multimodal tutorial on multimodal machine learning icml 2023 carnegie mellon university https cmu multicomp lab github io mmml tutorial icml2023 https img shields io badge level medium yellow https img shields io badge video blue this course offers an in depth look at multimodal machine learning drawing insights from the latest edition of a survey paper and cmu s academic teachings addressing its unique challenges and future directions 11 777 multimodal machine learning fall 2022 carnegie mellon university https cmu multicomp lab github io mmml course fall2022 https img shields io badge level medium yellow https img shields io badge video blue this course delves into multimodal machine learning mmml covering its mathematical foundations state of the art probabilistic models and key challenges while highlighting recent applications and techniques such as multimodal transformers and neuro symbolic models 11 877 advanced topics in multimodal machine learning fall 2022 carnegie mellon university https cmu multicomp lab github io adv mmml course spring2022 https img shields io badge level hard red this course explores multimodal machine learning mmml covering technical challenges and recent achievements it emphasizes critical thinking and future research trends with weekly updates discussion probes and research highlights on the course website deep learning neural networks deep learning statquest https www youtube com playlist list plblh5jkooluixgdqs4lffd 41vzf me1 https img shields io badge level easy green https img shields io badge video blue discover the intricacies of neural networks in this 
highly popular youtube playlist seamlessly blending informative graphics with expert teachings captivating countless students from basics to advanced image classification with convolutional neural networks neural networks 3blue1brown https www 3blue1brown com topics neural networks https img shields io badge level easy green https img shields io badge video blue 3blue1brown unveils the magic of neural networks through vivid animations and clear explanations diving deep into hand written digit recognition the nuances of gradient descent and the intricate calculus behind backpropagation neural networks zero to hero andrej karpathy https karpathy ai zero to hero html https img shields io badge level medium yellow https img shields io badge video blue andrej karpathy s course guides students from the foundational backpropagation to advanced neural networks like gpt emphasizing language models as a versatile gateway to mastering deep learning with prerequisites in python programming and basic math practical deep learning for coders fast ai https course fast ai https img shields io badge level medium yellow https img shields io badge video blue practical deep learning for coders 2022 is a free course offering hands on experience in building training and deploying deep learning models across various domains using tools like pytorch and fastai suitable for those with coding knowledge and without the need for advanced math deep learning specialization andrew ng https www deeplearning ai courses deep learning specialization https img shields io badge level medium yellow https img shields io badge video blue andrew ng s deep learning specialization is a top rated self paced program on coursera with over 1 million learners offering clear modules and practical techniques in ai supported by a vast community and breaking down the latest in machine learning into understandable content 6 s191 introduction to deep learning massachusetts institute of technology http 
introtodeeplearning com https img shields io badge level medium yellow https img shields io badge video blue mit s intensive bootcamp on deep learning fundamentals covering applications from computer vision to biology with hands on tensorflow practice and a culminating project competition basic calculus and linear algebra knowledge required python experience beneficial cs25 transformers united v2 stanford university https web stanford edu class cs25 https img shields io badge level medium yellow https img shields io badge video blue explore the transformative power of transformers in deep learning across diverse domains from nlp to biology in a seminar featuring expert lectures breakthrough discussions and insights from leading researchers aiming to foster understanding and cross collaborative innovation deep learning lecture series 2020 deepmind x university college london https www deepmind com learning resources deep learning lecture series 2020 https img shields io badge level medium yellow https img shields io badge video blue deepmind presents a 12 lecture series on deep learning diving from foundational topics to advanced techniques encompassing areas from object recognition to responsible ai innovation all delivered by leading research experts reinforcement learning lecture series 2021 deepmind x university college london https www deepmind com learning resources reinforcement learning lecture series 2021 https img shields io badge level hard red https img shields io badge video blue deepmind and ucl present a comprehensive 13 lecture series on modern reinforcement learning from foundational concepts to advanced deep rl techniques led by expert researchers hado van hasselt diana borsa and matteo hessel ai system ai sys sp22 machine learning systems university of california berkeley https ucbrise github io cs294 ai sys sp22 https img shields io badge level medium yellow delve into the symbiotic relationship between cutting edge ai applications and the 
systems supporting them exploring advancements in hardware software and ai driven optimization techniques through lectures discussions and collaborative hands on projects deep learning systems algorithms and implementation tianqi chen zico kolter https dlsyscourse org https img shields io badge level medium yellow https img shields io badge video blue explore the foundations of deep learning systems by constructing a complete library understanding every layer from model design to efficient algorithms utilizing python and c c cs 329s machine learning systems design stanford university https stanford cs329s github io https img shields io badge level medium yellow master the intricacies of designing robust scalable and deployable machine learning systems focusing on stakeholders evolving requirements and holistic system design while addressing critical issues like privacy fairness and security 15 849 machine learning systems carnegie mellon university https www cs cmu edu zhihaoj2 15 849 https img shields io badge level hard red dive into the architecture of modern ml systems unraveling the journey from high level model design to low level kernel execution on heterogeneous hardware while uncovering the principles and challenges of next gen ml applications and platforms computer science 598d systems and machine learning princeton university https www cs princeton edu courses archive spring21 cos598d general html https img shields io badge level hard red explore the synergy between systems and machine learning by dissecting recent research on efficient ml hardware software and applying ml to system design culminating in hands on projects and deep discussions for graduate students miscellaneous star history star history chart https api star history com svg repos luban agi awesome aigc tutorials type date https star history com luban agi awesome aigc tutorials date friendship links waytoagi http waytoagi com waytoagi com is the most comprehensive chinese resource hub for 
aigc guiding users on an optimized learning journey to understand and harness the power of ai awesome tool learning https github com luban agi awesome tool learning awesome tool learning is a github repository that offers a wealth of resources on tool learning including papers frameworks and applications | aigc llm ai midjourney stable-diffusion deep-learning tutorials courses-resource prompt-engineering nlp awesome chatgpt multimodal | ai |
clin-summ | clinical text summarization by adapting llms official implementation from stanford university coming soon title clinical text summarization adapting large language models can outperform human experts https arxiv org pdf 2309 07430 pdf authors dave van veen https davevanveen com cara van uden louis blankemeier jean benoit delbrouck asad aali christian bluethgen anuj pareek malgorzata polacin eduardo pontes reis anna seehofnerova nidhi rohatgi poonam hosamani william collins neera ahuja curtis p langlotz jason hom sergios gatidis john pauly akshay s chaudhari contact vanveen at stanford dot edu overview figure img overview png code and data we will soon publish our code and pre processed data feel free to star the repo so you don t miss it citation misc vanveen2023clinical title clinical text summarization adapting large language models can outperform human experts author dave van veen and cara van uden and louis blankemeier and jean benoit delbrouck and asad aali and christian bluethgen and anuj pareek and malgorzata polacin and william collins and neera ahuja and curtis p langlotz and jason hom and sergios gatidis and john pauly and akshay s chaudhari year 2023 eprint 2309 07430 archiveprefix arxiv primaryclass cs cl | ai |
|
OpenPlugin | openplugin pypi https img shields io pypi v openplugin py https pypi org project openplugin py pypi python version https img shields io pypi pyversions openplugin py hits of code https hitsofcode com github openrl lab openplugin branch main https hitsofcode com github openrl lab openplugin view branch main documentation status https readthedocs org projects openplugin badge version latest https openplugin readthedocs io en latest badge latest demo video https youtu be qbyu8i9zo04 bilibili video https www bilibili com video bv1am4y1s7qu openplugin v0 0 8 is updated on aug 17 2023 toolkit for managing plugins of large language model llm you can install uninstall run and list plugins with op installation pip install openplugin py or clone this repo and pip install e plugin store we provide plugins in plugin store https openrl net plugin store users can download these plugins and use them with op usage check openplugin s version with op version check system information op system info install a plugin op install plugin name you can also install local plugins with op install you can also install a plugin from a zip file op install zip file path uninstall a plugin op uninstall plugin name start a plugin op run plugin name you can use p to specify the port of the plugin by default the port is 5003 you can also run a local plugin with op run list installed plugins op list reinstall plugin op reinstall plugin name an example for using qrcode plugin install qrcode plugin op install qrcode plugin or you can install qrcode plugin from local go to the directory of qrcode plugin cd plugins qrcode plugin install qrcode plugin op install or you can install qrcode plugin from a zip file op install qrcode plugin zip start qrcode plugin op run qrcode plugin p server port or you can start qrcode plugin from local go to the directory of qrcode plugin cd plugins qrcode plugin start qrcode plugin op run p server port then you can get the ai plugin json file via visiting http 
server ip server port ai plugin json you can get the openapi yaml file via visiting http server ip server port openapi yaml plugins we provide some source codes of plugins you can find them in plugins plugins we call for contributions of plugins you can fork our repo add your plugin into plugins plugins and submit a pull request citing openplugin if our work has been helpful to you please feel free to cite us latex misc openplugin2023 title openplugin author openrl contributors publisher github howpublished url https github com openrl lab openplugin year 2023 star history star history chart https api star history com svg repos openrl lab openplugin type date https star history com openrl lab openplugin date | ai |
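The OpenPlugin row above describes fetching a running plugin's manifest from http://server-ip:server-port/ai-plugin.json; that file follows the OpenAI plugin-manifest format. A minimal sketch of parsing such a manifest — the field values here are illustrative placeholders, not the qrcode plugin's real manifest:

```python
import json

# illustrative manifest in the ai-plugin.json format; a real plugin
# serves this document at http://<server-ip>:<server-port>/ai-plugin.json
manifest = json.loads("""
{
  "schema_version": "v1",
  "name_for_model": "qrcode",
  "api": {"type": "openapi", "url": "http://localhost:5003/openapi.yaml"}
}
""")

# the api.url field points at the OpenAPI spec a language model uses
# to discover and call the plugin's endpoints
print(manifest["api"]["url"])  # http://localhost:5003/openapi.yaml
```

In a deployed setup one would fetch the manifest over HTTP from the plugin's port (5003 by default, per the README) instead of embedding it as a string.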
|
fabric-boilerplate | fabric boilerplate this is a boilerplate application to get you up and running quickly with your own blockchain application with this boilerplate you get an application that you can run locally as well as on ibm bluemix there is a simple angular frontend application a nodejs backend application and of course a blockchain network all running in containers locally the boilerplate starts up a blockchain network using docker containers on bluemix you can use the blockchain service the boilerplate uses hyperledger fabric v0 6 1 preview and hfc 0 6 5 it has been created and is maintained by the ibm cic benelux blockchain team pull requests are more than welcome prerequisites docker and docker compose https www docker com linux or mac to have good support in your ide it s advisable to also install npm typescript tslint and golang getting started 1 fork this repo 2 git clone your fork 3 cd into the main directory and run npm install or if you don t have npm install sh this will pull the baseimage peer and memberservice download the go dependencies of the chaincode and build your containers it will take a while to get rid of missing module errors in your ide also run npm install from the server and client directory this is not mandatory to run the application running the application to run the application simply do docker compose up this will start the three tiers of our application in separate containers 1 one validating peer 2 the memberservice 3 the nodejs server which registers the users and deploys the chaincode on first boot 4 the angular frontend which connects to the server through a rest api the app is running on http localhost 4200 you can login with the user credentials you find in server resources testdata json development both the frontend and the server use filewatchers any change in the source files will trigger the transpiler and restart that part of the application every time you pull changes update the package json of the server or 
client or change anything that might affect deployment of chaincode docker compose build when you end docker compose the containers still exist they keep state memberservice the registered users and webappadmin peer world state and ledger server chaincodeid of the last deployment keyvalstore with the private keys of the users so if you start the app again you can use your old chaincode if you want to clear just run with docker compose up force recreate currently if you change the chaincode you will have to recreate the containers in the future we will add a filewatcher for the chaincode as well notes if anyone updates the npm packages all developers have to rebuild the containers if you add angular components from your host environment make sure you have the correct angular cli version to be sure you can enter the client container and do it from there updating the fabric 1 update the hfc client in the package json 2 update the commit level in blockchain src build chaincode vendor yml 3 delete the blockchain src build chaincode vendor directory 4 npm run govend from the project root 5 update the baseimage and tag as latest 6 docker compose build tests we support unittests for the server client and chaincode especially for writing chaincode we recommend a test driven approach to save time you can find the commands to run the tests in the package json in the root npm run test go blockchain src build chaincode chaincode test go contains mock functions for the chaincode stub npm run test server see the server tests directory npm run test client each component has its own test courtesy of angular cli npm run test e2e needs the application to be running it hits the api endpoints for end to end testing npm test runs all the tests except for e2e you can also run the server tests directly from the server directory with npm test and npm run e2e troubleshooting no rows in result set the memberservice remembers something outdated stop your app and run clean sh name or token 
does not match the info in blockchain data keyvalstore does not match with the connected memberservice clean sh can t connect to docker daemon sudo usermod ag docker whoami logout and login again error usr src app node modules grpc src node extension binary grpc node node invalid elf header the node modules of the server were built outside of the container delete this directory and make a change in server package json then do docker compose build server hfc handshake failed is there a certificate pem in the blockchain src build chaincode dir npm modules not found docker compose build running on bluemix registering and enrolling users the sdk needs to register and enroll an admin user and any other users you would like to add when this takes place the sdk receives enrollment certificates ecerts for each user you only get these certificates once so if you would redeploy or restart your app on bluemix and the sdk wants to register and enroll the users again this would fail our solution to this problem is to register and enroll the users from your local environment before you deploy when the ecerts are received you can then push the app to bluemix including the ecerts so the app that runs on bluemix does not have to register and enroll the users again because the ecerts are already available don t lose server resources keyvaluestore bluemix or else you ll have to recreate the blockchain service on bluemix deploying chaincode and the app the easiest way to deploy a chaincode is to do it from you local environment before you push the app to bluemix we made a script that deploys the chaincode and stores the chaincodeid in a file after that you push the app to bluemix including the chaincodeid file your app can interact with the chaincode perform the following steps to run the application on bluemix credentials and urls create a blockchain service on bluemix update the manifest yml file in the root of the project replace the names and hosts of both servers the values can 
be anything as long as they are unique replace the name of the service on the last line of the manifest this should be the name of the blockchain service you just created change the settings in client src environments environment prod ts to refer to the correct api endpoint of the server copy the credentials of the blockchain service and overwrite the credentials in server resources credentials json if you retrieve your service credentials from a new console https new console ng bluemix net overview instance of bluemix then you will need to edit your credentials json add credentials to line 2 and then add a closing to the final line your finished payload should be 233 lines if needed change the cloudfoundry api url in the dockerfile delete the server resources keyvalstore bluemix directory if it exists it contains keys to the previously used service deployment get into our cloudfoundry container by running npm run cf from within the container register the users and deploy the chaincode with cd server npm run deploy this can take a few minutes open the dashboard of the blockchain service on bluemix wait until you see the chaincode id appear on the network tab ensure that it is running on all four peers and that all the way at the end it says up for x seconds minutes the blocks with transactions for the deployment and the invocation of the testdata function should be visible on the blockchain tab deploy app to bluemix cd up one level and do cf push still from within the container for assistance with the cloud foundry cli visit the cloud foundry https github com cloudfoundry cli downloads repo after the app has been pushed to bluemix you can view the logs with cf logs name of the app recent where name of the app is the app name you provided in the manifest yml file support and documentation hyperledger project https www hyperledger org official hyperledger slack channel https hyperledgerproject slack com irc hyperledger on freenode net wiki https wiki hyperledger org 
hfc https www npmjs com package hfc bluemix https console ng bluemix net docs ibm on blockchain https www ibm com blockchain what is blockchain html | blockchain |
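The fabric-boilerplate row above tells new-console Bluemix users to edit credentials.json by adding "credentials" on line 2 and a closing brace on the last line; that amounts to wrapping the raw service payload under a top-level credentials key. A sketch of that transformation — the payload fields below are placeholders, not real service credentials:

```python
import json

# placeholder payload standing in for credentials copied from the
# Bluemix blockchain service dashboard (real payloads are much larger)
raw_service_payload = {
    "peers": [{"api_host": "example.ibm.com", "api_port": 443}],
    "users": [{"enrollId": "WebAppAdmin"}],
}

# wrap the payload under a top-level "credentials" key, which is what the
# manual "add credentials to line 2 / closing brace on the last line" edit does
wrapped = {"credentials": raw_service_payload}

# line 2 of the serialized file now opens the credentials object
print(json.dumps(wrapped, indent=2).splitlines()[1])  # "credentials": {
```

Doing the wrap programmatically avoids the brittle hand-edit and guarantees the resulting file is still valid JSON.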
|
mega-comic-books | readme mega comic books built date early 2005 website url http mega comic books nfshost com http mega comic books nfshost com git repo https github com dchantzis mega comic books https github com dchantzis mega comic books system description pdf http mega comic books nfshost com files megacomicbookssystemdescription pdf database e r diagram pdf http mega comic books nfshost com files megacomicbooksdberdiagram pdf create database sql pdf http mega comic books nfshost com files megacomicbooksdb pdf database sql insert queries pdf http mega comic books nfshost com files megacomicbookssqlqueries pdf database entries print outs pdf http mega comic books nfshost com files megacomicbooksdbprintouts1 pdf and pdf http mega comic books nfshost com megacomicbooksdbprintouts2 pdf this website currently functions for demonstration purposes only this is a semester project participating in the special issues on databases class for the bsc hons in informatics engineering at the department of information technology thessaloniki greece motivation the design of a comic book store database for the mega comic books store and the implementation of a web interface to that database mega comics books is a fictional small comic book store that needs a small rdbms to keep track of its monthly orders subscriptions technologies and tools used sql postgresql mysql php html css javascript dimitrios chantzis dimitrioschantzis com http www dimitrioschantzis com https github com dchantzis | server |
|
IoTEdge-DevOps | iotedge devops a living repository of best practices and examples for developing azureiot edge https docs microsoft com azure iot edge wt mc id iot 0000 pdecarlo solutions doubly presented as a hands on lab purpose the internet of things https en wikipedia org wiki internet of things is a technology paradigm that involves the use of internet connected devices to publish data often in conjunction with real time data processing machine learning and or storage services development of these systems can be enhanced through application of modern devops principles which include such tasks as automation monitoring and all steps of the software engineering process from development testing quality assurance and release we will examine these concepts as they relate to feature offerings in azure devops services https azure microsoft com services devops wt mc id iot 0000 pdecarlo application insights https azure microsoft com services application insights wt mc id iot 0000 pdecarlo azure container registries https azure microsoft com services container registry wt mc id iot 0000 pdecarlo azure iot hub device provisioning service https docs microsoft com azure iot dps wt mc id iot 0000 pdecarlo and azure iot hubs https azure microsoft com services iot hub wt mc id iot 0000 pdecarlo ioteedge devops lab this lab will walk through creating an azure devops services project repo that employs continuous integration https docs microsoft com azure devops what is continuous integration wt mc id iot 0000 pdecarlo and continuous delivery https docs microsoft com azure devops what is continuous delivery wt mc id iot 0000 pdecarlo to publish an iot edge deployment to specific devices as part of a build definition https docs microsoft com cli vsts build definition wt mc id iot 0000 pdecarlo and release pipeline https docs microsoft com vsts pipelines release wt mc id iot 0000 pdecarlo step 1 creating azure resources step 1 creating azure resources step 2 setup azure devops 
services step 2 setup azure devops services step 3 setting up continuous integration step 3 setting up continuous integration step 4 creating a release pipeline with a smoke test step 4 creating a release pipeline with a smoke test step 5 monitoring devices with app insights step 5 monitoring devices with app insights step 1 creating azure resources to get started we will need to create a few cloud services that will be used in later portions of the lab these services are outlined below with a brief description of how they will be used in later steps service description application insights https azure microsoft com services application insights wt mc id iot 0000 pdecarlo used to monitor performance metrics of docker host and iot edge modules azure container registries https azure microsoft com services container registry wt mc id iot 0000 pdecarlo a private docker registry service used to store published iot edge modules azure iot hub device provisioning service https docs microsoft com azure iot dps wt mc id iot 0000 pdecarlo allows for automatic provisioning of iot devices in a secure and scalable manner azure iot hubs https azure microsoft com services iot hub wt mc id iot 0000 pdecarlo service which enables us to securely connect monitor and manage iot devices if you have already deployed any of these services into an existing environment you are welcome to reuse them in the lab however it is highly suggested to create brand new services to avoid issues deploy the required services by clicking deploy to azure button below deploy to azure http azuredeploy net deploybutton png https portal azure com create microsoft template uri https 3a 2f 2fraw githubusercontent com 2ftoolboc 2fiotedge devops 2fmaster 2fazuredeploy json on the resulting screen supply a globally unique value for the resource name suffix parameter deploy to azure content deploytoazure png if you encounter any issues in the deployment it is advised to delete the created resource group if any and 
retry with a new value for the resource name suffix parameter step 2 setup azure devops services azure devops services allows for building testing and deploying code in an easy to manage interface we will build out a base for iot edge devops practices using services provided by azure devops services if you have not already create a new azure devops services account here https azure microsoft com services devops wt mc id iot 0000 pdecarlo next create a new project and give it a descriptive name create project content createprojectvsts png next select repos then click the import button underneath import a repository and supply this url https github com toolboc iotedge devops git import gh to azure devops content importghtovsts png the import process should begin importing this repository into your azure devops project step 3 setting up continuous integration this repository contains an azure devops build definition which is preconfigured to build the included edgesolution in azure pipelines yml azure pipelines yml this build definition relies on an external plugin replace tokens https marketplace visualstudio com items itemname qetza replacetokens wt mc id iot 0000 pdecarlo begin by installing the replace tokens task from the visual studio marketplace by visiting this link https marketplace visualstudio com items itemname qetza replacetokens wt mc id iot 0000 pdecarlo and clicking the get it free button then install into the organization which contains your newly created azure devops project once this task is successfully installed return to the azure devops project and select repos files then edit the azure pipelines yml file edit build definition content editbuilddefvsts png add the following comment to the top of the file as shown below this repository is built using azure devops commit the changes as shown commit build definition content commitbuilddefvsts png navigate back to repos and select set up build then select run and you should see that a build has 
kicked off upon editing the build definition created build definition content builddefcreated png the build will fail this is to be expected as we need to add a few build variables in order for the build to run successfully we will need to obtain the hostname of the azure container registry which will be represented by acr host in addition we will need the azure container registry username which will be represented by acr user and finally the azure container registry password which will be represented by acr password all of these can be obtained in the azure portal by viewing your created azure container registry and selecting access keys as shown below azure container registry content acr png next we need to obtain the application insights instrumentation key which will be represented by appinsights instrumentationkey this can be obtained in the azure portal by viewing your created application insight resource as shown below application insights content appinsights png once you have obtained all of the necessary values create a build definition variable for acr host acr user acr password and appinsights instrumentationkey as shown below edit build definition variables content editbuilddefvars png build definition variables content builddefvars png finally select the run button and click run in the dialogue as shown below queue build definition content queuebuildvsts png the build should complete successfully as shown below queue build definition content buildsuccessvsts png with a successful build definition in place we can now enforce continuous integration by applying a branch policy to the master branch start by selecting repos branches then click the on the row for the master branch and select branch policies select branch policy content selectbranchpolicyvsts png next under build validation click add build policy and select the newly created build pipeline then click the save button configure build policy content buildpolicyvsts png while this policy is 
enabled all commits to feature branches will kick off an execution of the newly created build pipeline and it must succeed in order for a pull request of those changes to be made to the master branch step 4 creating a release pipeline with a smoke test deployments to devices need to be done under tight control in production environments to achieve this we will create a release pipeline which deploys to qa devices and smoke tests the edge runtime in a containerized device this is accomplished by running an instance of the azure iot edge device container https github com toolboc azure iot edge device container which is configured as a qa device then probing the iot hub to ensure that qa device receives the desired deployment configuration and is able to successfully run all configured modules this test is contained in edgesmoketest sh scripts edgesmoketest sh to begin select pipelines releases then create a new pipeline with an empty job and save it create empty job content emptyjobvsts png now head back to build and release releases new and select import a pipeline import a pipeline content importapipelinevsts png download the release pipeline json release pipeline json file located in the root of this repo and import it the initial pipeline content initialpipelinevsts png there are a few things that we will need to fix before we can successfully run the release pipeline specifically azure subscription endpoints agent pools and variable settings and artifact source to fix the azure subscription endpoints select tasks create deployment and supply the appropriate azure subscription and azure container registry for the azure iot edge push module images and azure iot edge deploy to iot edge devices tasks fix endpoints 1 content fixazureendpoints1 png fix endpoints 2 content fixazureendpoints2 png next select tasks smoke test and supply the appropriate azure subscription and azure container registry for the remove all registered qa devices and smoke test tasks fix 
endpoints 3 content fixazureendpoints3 png fix endpoints 4 content fixazureendpoints4 png to fix the agent pools select tasks create deployment agent job and change the agent pool to azure pipelines and set agent specification to ubuntu 18 04 fix agent pool 1 content agentpool1 png fix agent pool 2 content agentpool2 png with these fixes applied you should be able to save the release pipeline it is highly recommended to save at this point if azure devops allows to fix the variables select variables pipeline variables content pipelinevarsvsts png we will need to modify all variables in brackets you may use the same values for acr host acr user acr password and appinsights instrumentationkey that were used in the ci build definition in step 3 iothub name is the name of the iot hub that was created in step 1 for the additional variables we need to create a service principal by performing the following install the azure cli https docs microsoft com cli azure install azure cli view azure cli latest wt mc id iot 0000 pdecarlo run az login to sign in with the azure cli then run az account list to see available subscriptions and set the appropriate subscription with az account set subscription subscriptionid create a service principal for your subscription with the azure cli it is suggested to use a value of iotedge devops or similar for name az ad sp create for rbac name name you should see output similar to appid 12345678 1234 1234 1234 1234567890ab displayname iotedge devops name http iotedge devops password mypassword tenant abcdefgh abcd abcd abcd abcdefghijkl take note of the name password and tenant as these values will be used for spappurl sppassword and tenant respectively note that some passwords could be generated with characters that can cause issues when interpreted from the linux command line if this is the case for example if the resulting password contains a then you can either regenerate a new password by re running the command above or you could try to 
wrap this value with single quotes i e password any failures that may arise in the smoke test are usually attributed to these values obtain the following parameters and supply the appropriate values for the remaining release pipeline variables parameter description spappurl the service principal app url required sppassword the password for the service principal required tenantid the tenant id for the service principal required subscriptionid the azure subscription id where the iot hub is deployed required to test these parameters on a local docker on linux instance to rule out any potential issues you can use the following command docker run d e spappurl spappurl e sppassword sppassword e tenantid tenantid e subscriptionid subscriptionid e iothub name iothub name e environment qa name qa test restart no v var run docker sock var run docker sock toolboc azure iot edge device container if the container fails to start there is likely an issue with the parameters provided if these fail locally they will also likely fail in the release build once you have properly set the variables for the release we need to fix the artifact source select pipeline add an artifact add new artifact content addnewartifact png next select your ci build pipeline as source and configure to obtain the latest version add new artifact content addnewartifact2 png once you have configured everything appropriately select save then pipelines releases then select the newly created release pipeline and create a release create a release content createreleasevsts png the new release pipeline should begin running running release content runningreleasevsts png step 5 monitoring devices with app insights monitoring allows us to perform long running tests against edge modules and provide real time alerts using application insights our edgesolution includes a dockerappinsights module which is configured in deployment template json edgesolution deployment template json this module monitors the docker host of 
each containerized iot edge device assuming a device has been deployed and is running you can monitor the device by viewing the application insights resource deployed in step 1 app insights graph content appinsightsgraph png to configure a chart select metrics explorer add chart edit chart and add the following to monitor block io for all edge modules app insights block io content aiblkio png add the following to monitor the network traffic for all edge modules app insights network traffic content ainetworktraffic png | server |
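The troubleshooting tip in this row — wrapping a generated service-principal password in single quotes before passing it to `docker run -e` — can be automated when the smoke test is scripted. A minimal Python sketch using the stdlib `shlex.quote` (the password value here is made up, not a real credential):

```python
import shlex

def quote_for_shell(value: str) -> str:
    """Quote a value (e.g. a generated service-principal password) so the
    Linux shell passes it to `docker run -e` literally, even when it
    contains characters like $ that the shell would otherwise expand."""
    return shlex.quote(value)

# A made-up password containing `$`; unquoted, the shell would mangle it.
sp_password = "p4$$w0rd"
print(quote_for_shell(sp_password))       # -> 'p4$$w0rd'

# Values with only safe characters pass through unchanged:
print(quote_for_shell("iotedge-devops"))  # -> iotedge-devops
```

A quoted value can then be interpolated into the `docker run` command line from the lab without regenerating the service principal.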
|
google-data-engineering-on-google-cloud-platform | data engineering on google cloud platform specialization alt text https cloud google com static 5b514b28c7 images cloud cloud logo svg google cloud courses 1 x google cloud platform big data and machine learning fundamentals https github com skielosky google data engineering on google cloud platform tree master 01 google cloud platform big data and machine learning fundamentals 2 x leveraging unstructured data with cloud dataproc on google cloud platform https github com skielosky google data engineering on google cloud platform tree master 02 leveraging unstructured data with cloud dataproc on google cloud platform 3 x serverless data analysis with google bigquery and cloud dataflow https github com skielosky google data engineering on google cloud platform tree master 03 serverless data analysis with google bigquery and cloud dataflow 4 x serverless machine learning with tensorflow on google cloud platform https github com skielosky google data engineering on google cloud platform tree master 04 serverless machine learning with tensorflow on google cloud platform 5 x building resilient streaming systems on google cloud platform https github com skielosky google data engineering on google cloud platform tree master 05 building resilient streaming systems on google cloud platform certificate 00 ezequiel aguilar gonzalez data engineering on google cloud platform specialization png certification name data engineering on google cloud platform specialization certification authority coursera google cloud license number r78l9m2e2j5f time period from september 2018 time period to certification url https www coursera org account accomplishments specialization r78l9m2e2j5f https www coursera org account accomplishments specialization r78l9m2e2j5f | cloud |
|
blockchain-adventure | adventureum the world s first text based crowd sourced decentralised choose your own adventure game every situation and choice in this game has been defined by other players and all this is stored on the ethereum mainnet play the game here https anallergytoanalogy github io blockchain adventure playing the game you ll be presented with situations for each situation you ll be given one or more choices with what you want to do in that situation and you just need to click on whichever option seems like a good idea adding to the story if you reach a point in the story that nobody has written yet you ll be given the opportunity to write your own situation you can add as many choices as you want to the situation if you want to create a game over situation then just don t add any choices for the devs contract address 0x77b4acc38da51a0e77c77355cfd28c1a6619f6ba https etherscan io address 0x77b4acc38da51a0e77c77355cfd28c1a6619f6ba also wrote an article explaining some of the contract code https medium com coinmonks adventures with dumb contracts 18f8ce8414c9 | blockchain |
|
udagram-node | udagram image filtering microservice udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into three parts 1 the simple frontend https github com udacity cloud developer tree master course 02 exercises udacity c2 frontend a basic ionic client web application which consumes the restapi backend covered in the course 2 the restapi backend https github com udacity cloud developer tree master course 02 exercises udacity c2 restapi a node express server which can be deployed to a cloud service covered in the course 3 the image filtering microservice https github com udacity cloud developer tree master course 02 project image filter starter code the final project for the course it is a node express application which runs a simple script to process images your assignment tasks setup node environment you ll need to create a new node server open a new terminal within the project directory and run 1 initialize a new project npm i 2 run the development server with npm run dev create a new endpoint in the server ts file the starter code has a task for you to complete an endpoint in src server ts which uses query parameter to download an image from a public url filter the image and return the result we ve included a few helper functions to handle some of these concepts and we re importing it for you at the top of the src server ts file typescript import filterimagefromurl deletelocalfiles from util util deploying your system follow the process described in the course to eb init a new application and eb create a new environment to deploy your image filter service don t forget you can use eb deploy to push changes stand out optional refactor the course restapi if you re feeling up to it refactor the course restapi to make a request to your newly provisioned image 
server authentication prevent requests without valid authentication headers note if you choose to submit this make sure to add the token to the postman collection and export the postman collection file to your submission so we can review custom domain name add your own domain name and have it point to the running services try adding a subdomain name to point to the processing server note domain names are not included in aws free tier and will incur a cost submission note elastic beanstalk url http udagram node dev us east 1 elasticbeanstalk com http udagram node dev us east 1 elasticbeanstalk com elastic beanstalk url with test image url http udagram node dev us east 1 elasticbeanstalk com filteredimage image url https picsum photos 200 300 http udagram node dev us east 1 elasticbeanstalk com filteredimage image url https picsum photos 200 300 | cloud |
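The `/filteredimage` endpoint described in this row first needs to validate its `image_url` query parameter. A minimal TypeScript sketch of that validation, runnable without Express (the function name and error messages are illustrative; the real handler would then call the provided `filterImageFromURL` and `deleteLocalFiles` helpers from `util/util`):

```typescript
// Hypothetical helper: validates the image_url query parameter for the
// GET /filteredimage endpoint before any image processing happens.
function validateImageUrl(imageUrl?: string): { ok: boolean; error?: string } {
  if (!imageUrl) {
    return { ok: false, error: "image_url query parameter is required" };
  }
  try {
    const parsed = new URL(imageUrl);
    if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
      return { ok: false, error: "image_url must be an http(s) URL" };
    }
  } catch (e) {
    return { ok: false, error: "image_url is not a valid URL" };
  }
  return { ok: true };
}

console.log(validateImageUrl("https://picsum.photos/200/300")); // -> { ok: true }
```

In the Express handler, a failed validation would return a 400 with the error message; on success the handler would await `filterImageFromURL(image_url)`, send the resulting file, and clean it up with `deleteLocalFiles`.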
|
weight-loss | discovering ketosis how to effectively lose weight here is a chart of my weight vs time in the past 16 months or so weight vs time in the past 16 months or so weight 2015 png weight loss progress the chart was generated from a data set weight 2015 csv weight 2015 csv by the script date weight r date weight r in this git repository it requires r http r project org and ggplot2 http ggplot2 org in the following i ll describe the thought process some other people ideas and the code i used to separate signal from noise this separation was critical to help lead me in the right direction this github repository includes my code a q a section qanda md and links for further reading disclaimers the below is what worked for me your situation may be different listen to your own body the code here is designed to be used on your own data not on mine also this was not a scientific experiment or a study rather it was a personal journey of experimentation and discovery with these behind us i d like to channel galileo in the face of the inquisition https en wikipedia org wiki galileo affair evolution has been hard at work for 2 billion years shaping the chemistry of all eukaryotes multi cellular life and eventually mammals the krebs cycle glucose metabolism insulin spikes glycogen in the liver carnitine lipase are as real for you as they are for me we may be very different in our genes and traits some are more insulin resistant for example but we cannot be too different in our most fundamental metabolic chemistry the chemistry which drives fat synthesis and break up salient facts initial observations i used to be a pretty thin person my 1st dmv card below says 143 lb unfortunately since moving to the us i ve been gaining more and more weight i peaked in 2015 over 50 lbs higher the us is a country where obesity is an epidemic poorer demographics in the us have higher levels of obesity first dmv photo and weight with full clothing 1992 ariel dmv png 143 pounds sometime in 
the 90 s does a us typical lifestyle have anything to do with this epidemic after reading on the subject i could point at a few of the main suspects fast food is highly available and is very cheap compared to most alternatives most food we buy and eat is heavily processed watch food inc documentary http www takepart com foodinc film no fat and low fat labels are everywhere on supermarket shelves many foods are enriched and sweetened with high fructose corn syrup watch sugar coated documentary http sugarcoateddoc com as in many other instances i realized i need to think for myself ignore all expert advice question widely accepted ideas like the fda food pyramid start listening to my own body my own logic data i can collect myself and trust once i did the results followed what didn t work in the past i tried several times to change my diet after reading one of atkins books i realized checked and accepted the fact that excess carbs are a major factor in gaining weight but that realization alone has not led to success my will power apparently was insufficient i had too much love of pizza and bread i would reduce my carb consumption lose a few pounds typically 5 pounds and then break down go back to consuming excess carbs and gain all these pounds back and then some my longest diet stretch lasted just a few months it was obvious that something was missing in my method i just had to find it i could increase my physical activity say start training for a mini marathon but that s not something i felt comfortable with i realized early on that i need to adopt a lifestyle that not just reduces carbs or adds exercise but is also sustainable and even enjoyable so it can turn into a painless routine something that i could do for years never feel the urge to break habits is not hard or unpleasant for me to do early insights eureka moments early in the process i figured i could use machine learning https en wikipedia org wiki machine learning to identify the factors that made me gain
or lose weight i used a simple method every morning i would weigh myself and record both the new weights and whatever i did in the past 24 hours not just the food i ate but also whether i exercised slept too little or too much etc the file i kept was fairly simple a csv with 3 columns date morningweight yesterday s lifestyle food actions the last column is a arbitrary length list of word weight items the optional numerical weight following expresses higher lower quantities the default weight when missing is 1 comment lines ignored date morningweight yesterdayfactors 2012 06 10 185 0 2012 06 11 182 6 salad sleep bacon cheese tea halfnhalf icecream 2012 06 12 181 0 sleep egg 2012 06 13 183 6 mottsfruitsnack 2 pizza 0 5 bread 0 5 date 3 dietsnapple splenda milk nosleep 2012 06 14 183 6 coffeecandy 2 egg mayo cheese 2 rice meat bread 0 5 peanut 0 4 2012 06 15 183 4 meat sugarlesscandy salad cherry 4 bread 0 dietsnapple 0 5 egg mayo oliveoil 2012 06 16 183 6 caprise bread grape 0 2 pasadena sugaryogurt dietsnapple 0 5 peanut 0 4 hotdog 2012 06 17 182 6 grape meat pistachio 5 peanut 5 cheese sorbet 5 orangejuice 2 and so on then i wrote a script lifestyle csv2vw to convert this file to vowpal wabbit https github com johnlangford vowpal wabbit wiki training set regression format in the converted train set the label target feature is the change in weight delta in the past 24 hours and the input features are what i ve done or ate in the 24 hours leading to this delta a straight copy of the 3rd column i was not dieting at that time just collecting data the machine learning process error convergence after partly sorting the lines descending by abs delta to smooth it out and try to amplify very weak signals from the data and 4 passes over the data looks like this error convergence after partial descending sort by delta vw convergence png loss convergence in 4 data passes you can reproduce my work by compiling your own data file installing all prerequisites and running make in 
this directory i wrote a howto file with more detailed instructions howto md please open an issue if anything doesn t work for you when you type make in this directory some magic happens here s how a typical result looks like make output trimmed for brevity featurename hashval weight relscore nosleep 143407 0 6654 90 29 melon 234655 0 4636 62 91 sugarlemonade 203375 0 3975 53 94 trailmix 174671 0 3362 45 63 bread 135055 0 3345 45 40 caramelizedwalnut 148079 0 3316 44 99 bun 1791 0 3094 41 98 trimmed for brevity caveat data is too noisy anyway stayhome 148879 0 2690 36 50 bacon 64431 0 2998 40 69 egg 197743 0 3221 43 70 parmesan 3119 0 3385 45 94 oliveoil 156831 0 3754 50 95 halfnhalf 171855 0 4673 63 41 sleep 127071 0 7369 100 00 the positive top relative score values are life style choices that make you gain weight while the negative ones bottom make you lose weight and here s a variable importance chart made from a similar data set a href scores png target blank img src scores png width 900 a disclaimer please don t read too much into the particulars of this data working with this particular data set was pretty challenging since the number of original data points a bit over 100 days may be too small to establish enough significance typical daily changes in body weight are very small often 0 1 lb my scales are not accurate you may note that my data has 0 2 pound resolution this is not ideal getting scales with 0 1 pound resolution is highly recommended you may also note that the loss convergence chart hits a hard floor at 0 2 even when you do multiple passes over the data overfit the training set for a similar reason items that make you lose and gain weight often appear together on the same line so they cancel each other this throws the automatic learning process off course there were some misspellings in the original data i hope i fixed all of these by now so i focused mostly on the extremes start and end of the list as presented above and just used the hints as 
general guidance for further study experimentation and action despite the noisy insufficient data and the inaccuracies in weighting the machine learning experiments made 4 facts pretty clear pretty early sleeping longer consistently appeared as the 1 factor in losing weight lack of sleep did the opposite too little sleep lead to weight gains carbs made me gain weight the worst were high starch and sugary foods fatty and oily foods tended to do the opposite they were positively correlated with weight loss the stayhome lifestlye which fell mostly on weekends may have been a red herring i slept longer when i didn t have to commute to work otoh my diet on stay home days may have been different it took me a while to figure out the sleep part when we sleep we don t eat it is that simple moreover we tend to binge and snack while not particularly hungry but we never do it during sleep our sleeping time is our longest daily fasting time please note that my explanations of the effects may not in fact be accurate or deeply scientific the goal of all this was incremental discovery experiment check effect rinse repeat further progress you may note that in the top date vs weight chart there s a notable acceleration in the rate of weight loss the cause was deeper insights and better ability to sustain the diet the more i understood the problem extending the fasting time was one major accelerator of weight loss rate i did that by skipping breakfast and stop eating earlier in the evening before going to bed this gave me 14 16 hours of fasting each day rather than the more typical 10 12 hours day of fasting the 2nd accelerator was consuming fatty stuff instead of carbs in order to feel full the 3rd accelerator was understanding the concepts of glycemic index https en wikipedia org wiki glycemic index and glycemic load https en wikipedia org wiki glycemic load and shifting whatever i chose to eat towards lower glycemic loads i now believe and hope that i can go all the way back to my 
original weight when i first landed on us soil if i can keep the present rate it should take 1 2 years to completely reverse the damage of the past 20 years it is important to stress that i also feel much better the more weight i lose as a welcome side effect the few borderline high levels in my blood tests have moved significantly towards normal averages during the period i lost weight what was my data and clear improvement in health saying looking at my data and reading more convinced me that i should beware of doctors who push statins https www google com search q the truth about statins instead of suggesting a better diet i started doubting anyone who told me i need to reduce fat i run away if anyone now tells me high cholesterol in the diet is dangerous cholesterol by the way is an essential building block for many essential body by products the liver produces as much cholesterol as we need our body is an amazing machine billions of years of evolution have made it extremely adaptive it is not our high fat consumption it is the storage of fat process that makes us acummulate fat in the tissues and become unhealthy an enzyme called lipase breaks up fat raise the levels of lipase and our body fat gets consumed faster to get there we need to give the body fat as an alternative to carbohydrates when the body has depleted both the blood sugar and the glycogen hydrated sugar buffer in the liver it has no other choice but to adapt and compensate our source of energy atp synthesis https en wikipedia org wiki adenosine triphosphate switches from carbs to fats by producing more fat breaking agents the body is a flex fuel kind of machine that has simply replaced one fuel carbs with another fat when lipase and all other agents in the fat to atp chemical path aka beta oxidation https en wikipedia org wiki beta oxidation mobilize and their levels are elevated we burn more fat and lose weight over time in a low carb high fat lchf regime our night sleep fasting time becomes 
our friend the fat breaking agents keep working while we sleep breaking up the stored fat this leads to weight loss and a healthier state and when we push even further and cut carbs to really low levels we may reach a new steady state called ketosis in which practically all our energy comes from fat and that s when we really win big in the weight loss battle the above is a very simplified and hopefuly easy to digest version of what some diet books try to explain in hundreds of pages my bottom line recipe the hardest part especially at the beginning is reducing carbs the worst are starch rich foods pizza pasta bread etc then processed foods with high sugar content sweet sodas no pulp juices etc this doesn t mean no carbs you may afford yourself carbs from time to time say a pizza once a week as it turns out an occasional lapse isn t enough to completely reverse any steady state however you need to make sure you consume much less carbs and less frequently than before in particular you must avoid binging on snacks like chips pizza doughnuts pasta and bread or drinking sugar rich drinks look up glycemic index https en wikipedia org wiki glycemic index and glycemic load https en wikipedia org wiki glycemic load on wikipedia avoid foods with high glycemic load this prevents the blood sugar spikes which lead to insulin spikes and tell the body chemical cycles to revert back from ketosis or near ketosis to fat accumulation have a sweet tooth eat an orange instead of drinking orange juice the two have vastly different glycemic loads and this makes a huge difference if you must add sweetness to your cup of tea or coffee use a splenda sucralose dextrose tablet https en wikipedia org wiki splenda or a stevia drop tablet https en wikipedia org wiki stevia which typically weight just 0 1 gram rather than a tea spoon of sugar 4 2g about 40x more result similar sweetness effect but much lower glycemic load and resulting levels of blood glucose high fat i switched from milk to half 
and half and am considering heavy and unsweetened whipped cream it has less carbs lactose and more fat plus it tastes better eat avocados olive oil mayo coconut oil nuts i never worry about natural fat i eat as much fat as i want this is what makes it much easier to avoid carbs when i stuff myself with fat i feel much less hungry and miss the carbs less the body is very good at figuring this out i have too much fat in the blood so let s increase the amount of enzymes which break up fat and this makes me lose weight in the long run most importantly i always avoid any products labeled low fat or fat free the food industry usually replaces fat with sugar so it tastes better otherwise it tastes awful you ll often hear about bad vs good fat my take as long as it is natural it is ok the worst trans fat is fat that s artificially hydrogenated to increase shelf life by the food industry the less saturated fat is the better mono saturated plant liquid oil is the best then come the poly unsaturated fats and finally near saturated but not fully saturated fats that come from animals my buttery spread spectrum is margarine no butter ok earth balance no problem at any rate even the most saturated fat gets broken and depleted by the natural processes in the body a bit of exercise of course more is better but for many this may prove difficult i don t excercise too much i just bike to work and back about 20 min each way meaning 40 min day 5 out of 7 days week you can try walking the dog but walk faster or zumba dance to music the trick is to find something that you don t find hard to do or find company to do it together then do a little bit of it every day longer fasting periods this is the 1 contributor to weight loss sleep longer stop eating as early as possible before going to sleep and start eating as late as possible after sleeping skip breakfast after some time you won t feel hungry in the morning anymore after long periods of fasting the body chemistry adjusts it needs atp 
but the level of glucose in the blood is too low and the glycogen in the liver is fully consumed (this takes about 1-2 days of low or no carbs), so there's no other option but to start looking for other sources, like stored fat. This elevates the enzymes that help with breaking up fat, and the Krebs cycle reverses direction in the critical paths: instead of transforming excess carbs into stored fat, we break up stored fat for energy.
- **Eat eggs.** They are a wonderful combo of fat and protein, with no carbs at all. I read an interview with a Japanese woman MD who reached 114 years of longevity, and one of her secrets was to eat eggs daily. My favorite food is a scrambled egg with grilled onions (onions are a bit high on carbs, but too tasty to give up) and olives.
- **Eat slower, chew longer.** Don't swallow just yet: humans, just like dogs, tend to swallow too soon.
- **Stop eating when you feel full.** There's about a 20-minute delay before your brain registers that you are full, so don't over-eat.

## Further reading

- [The Krebs (aka citric acid) cycle](https://en.wikipedia.org/wiki/Citric_acid_cycle)
- [Spikes of insulin and their effects](https://en.wikipedia.org/wiki/Sugar_crash): what the body does when it has an excess of sugar vs an excess of fat
- [Glycemic index](https://en.wikipedia.org/wiki/Glycemic_index)
- [Glycemic load](https://en.wikipedia.org/wiki/Glycemic_load): a better metric for weight loss than glycemic index
- [Glycogen and its storage in the liver](https://en.wikipedia.org/wiki/Glycogen)
- [Ketone bodies](https://en.wikipedia.org/wiki/Ketone_bodies)
- [Ketosis](https://en.wikipedia.org/wiki/Ketosis) (not to be confused with keto-acidosis)
- [Ketogenic diet](https://en.wikipedia.org/wiki/Ketogenic_diet)
- [The Eating Academy / Peter Attia, M.D.](http://eatingacademy.com)
- [Why We Get Fat: And What to Do About It / Gary Taubes](http://www.amazon.com/gp/product/0307272702)
- [Summary of Good Calories, Bad Calories (Gary Taubes) by Lower Thought](https://lowerthought.wordpress.com/complete-notes-to-good-calories-bad-calories)
- [The Obesity Code: Unlocking the Secrets of Weight Loss / Jason Fung](https://www.amazon.com/obesity-code-unlocking-secrets-weight-ebook/dp/B01C6D0LCK)
- [The best summary about statins I've seen](http://www.newswithviews.com/howenstine/james23.htm)
- [High cholesterol doesn't cause heart disease](http://www.telegraph.co.uk/science/2016/06/12/high-cholesterol-does-not-cause-heart-disease-new-research-finds)
- [Dr. Mark Hyman's take on a good diet (a bit different than mine)](http://drhyman.com/blog/2014/08/18/one-test-doctor-isnt-save-life)

### Documentaries

- [Food, Inc. (2008)](https://www.netflix.com/title/70108783)
- [Sugar Coated (2015)](https://www.netflix.com/title/80100595)

### More videos

- [Reversing Type 2 diabetes starts with ignoring the guidelines / Sarah Hallberg / TEDxPurdueU](https://www.youtube.com/watch?v=da1vvigy5tq)
- A nice 7:41-minute video of James McCarter in Quantified Self (an eye-opener for me): [James McCarter: The Effects of a Year in Ketosis](https://vimeo.com/147795263)

## Questions, answers, comments

Some questions and comments I got, and tried to answer: [QandA.md](QandA.md)

## More friendly interface

[Shyal Beardsley](http://shyal.com) has built a starter front-end for this: [weightbrains.com](http://weightbrains.com). Note and fair warning: this is a prototype/experimental work in progress.

## Acknowledgements

Big thanks to the following people for contributing to this project in myriad ways (comments, references, corrections, etc.): Anat Faigon, Ingrid Kane, Hans Lee, Steve Malmskog, Eyal Friedman, Shiri Shoham, Gabi Harel, Shingi, Noa.

## Update: 2016-08-12

This project made [Hacker News](https://news.ycombinator.com/item?id=12279415) and reached the top place for a while. Thanks for some great comments by benkuhn, aab0, zzleeper and others, which helped me make it better.

![Image of this project on Hacker News, 2016-08-12](hackernews-2016-08-12.png)

Special thanks to John Langford and the many other contributors to [Vowpal Wabbit](https://en.wikipedia.org/wiki/Vowpal_Wabbit).

## License

This code and additional material are released under a permissive and simple [2-clause BSD licence](LICENCE.md). The one-sentence summary of this is: as long as you don't sue
me, and don't claim it as your own, you should be OK.