names (stringlengths 1-95) | readmes (stringlengths 0-399k) | topics (stringlengths 0-421) | labels (stringclasses, 6 values) |
---|---|---|---|
wzd | <img src="images/logo.png" alt="wZD logo"/>

Russian README: https://github.com/eltaline/wzd/blob/master/readme-rus.md

wZD is a server written in the Go language that uses a [modified](https://github.com/eltaline/bolt) version of the BoltDB database as a backend for saving and distributing any number of small and large files (NoSQL keys/values) in a compact form inside micro Bolt databases (archives), with distribution of files and values across BoltDB databases depending on the number of directories or subdirectories and the general structure of the directories. Using wZD can permanently solve the problem of a large number of files on any POSIX-compatible file system, including a clustered one. Outwardly it works like a regular WebDAV server, and billions of files will no longer be a problem.

<img align="center" src="images/wzd-scheme.png" alt="wZD scheme"/>

## Architecture

<img align="center" src="images/wzd-arch.png" alt="wZD arch"/>

## Current stable version: 1.2.1

- Update to Go 1.14
- Update to Iris 12.1.8
- Transition to Go module support

## Features

- Multi-threading
- Multiple servers for fault tolerance and load balancing
- Complete file and value search
- Supports HTTPS and IP authorization
- Supported HTTP methods: GET, HEAD, OPTIONS, PUT, POST and DELETE
- Manage read and write behavior through client headers
- Support for customizable virtual hosts
- Linear scaling of read and write using clustered file systems
- Effective methods of reading and writing data
- Supports CRC data integrity when writing or reading
- Support for the Range, Accept-Ranges, If-None-Match and If-Modified-Since headers
- Store and share 10,000 times more files than there are inodes on any POSIX-compatible file system, depending on the directory structure
- Support for adding, updating and deleting files and values, and delayed compaction/defragmentation of Bolt archives
- Allows the server to be used as a NoSQL database, with easy sharding based on the directory structure (Bolt archives)
- Support for selective reading of a certain number of bytes from a value
- Easy sharding of data over thousands or millions of Bolt archives based on the directory structure
- Mixed-mode support, with the ability to save large files separately from Bolt archives
- Semi-dynamic buffers for minimal memory consumption and optimal network performance tuning
- Includes the multi-threaded [wZA](https://github.com/eltaline/wza) archiver for migrating files without stopping the service

## Incompatibilities

- Multipart is not supported
- There is no native protocol, and there are no drivers for different programming languages
- There is no way to transparently mount the structure as a file system via WebDAV or FUSE
- For security reasons, the server does not support recursive deletion of directories
- The server does not allow uploading files to the root directory of the virtual host (applies only to Bolt archives)
- Directories and subdirectories of virtual hosts do not allow other people's files with the .bolt extension
- Data disks cannot simply be transferred from a little-endian system to a big-endian system, or vice versa

Multipart will not be supported, since a strict record of a specific amount of data is required so that underloaded files do not form and other problems do not arise. Use only a binary data transfer protocol to write files or values.

## Requirements

- Operating systems: Linux, BSD, Solaris, OSX
- Architectures: amd64, arm64, ppc64 and mips64, with only amd64 tested
- Supported byte order: little or big endian
- Any POSIX-compatible file system with full locking support (clustered MooseFS is preferred)

## Recommendations

- It is recommended to upload large files directly to the wZD server, bypassing reverse proxy servers

## Real application

Our cluster has about 250,000,000 small pictures and 15,000,000 directories on separate SATA drives, and it utilizes the MooseFS cluster file system. This works well with so many files, but at the same time its master servers consume 75 gigabytes of RAM, and since frequent dumps of a large amount of metadata occur, this is bad for SSD disks. Accordingly, there is also a limit of about 1 billion files in
MooseFS itself, with one replica of each file.

With a fragmented directory structure, an average of 10 to 1000 files are stored in most directories. After installing wZD and archiving the files into Bolt archives, we ended up with about 25 times fewer files: about 10,000,000. With proper planning of the structure, a smaller number of files could have been achieved, but this is not possible if the already existing structure is to remain unchanged. Proper planning would bring very large inode savings, low memory consumption of the cluster FS, significant acceleration of MooseFS itself, and a reduction in the actual space occupied on the MooseFS cluster FS. The fact is, MooseFS always allocates a block of 64 KB for each file; that is, even if a file has a size of 3 KB, 64 KB will still be allocated.

The multi-threaded [wZA](https://github.com/eltaline/wza) archiver has already been tested on real data.

Our cluster (10 servers) is an origin server installed behind a CDN network, served by only 2 wZD servers.

<p align="center"><img align="center" src="images/reduction-full.png"/></p>

## Mixed use

The wZD server was designed for mixed use. One can write not only ordinary files but even HTML- or JSON-generated documents, and one can even simply use it as a sharded NoSQL database consisting of a large number of small BoltDB databases, carrying out all sharding through the structure of directories and subdirectories.

## Performance tests

Testing shows the read/write difference between working with regular files and with Bolt archives. The writeintegrity and readintegrity options are enabled; that is, when writing or reading files in Bolt archives, CRC is used.

Important: the time in the tests is indicated for full GET or PUT requests, and the full write or read of the HTTP files by the client is included in these milliseconds.

Tests were carried out on SSD disks, since on SATA disks the tests are not very objective and there is no clear difference between working with Bolt archives and ordinary files. The test involved 32 KB, 256 KB, 1024 KB, 4096 KB and 32768 KB files.

**GET: 1000 files, and GET: 1000 files from 1000 Bolt archives**

<img align="center" src="images/get.png"/>

**PUT: 1000 files, and PUT: 1000 files into 1000 Bolt archives**

<img align="center" src="images/put.png"/>

As can be seen from the graphs, the difference is practically insignificant. Below is a more visual test done with files 32 megabytes in size. In this case, writing to Bolt archives becomes slower compared to writing to regular files, although by this count writing 32 MB in about 250 ms is generally still quite fast. Reading such files works quite quickly, so if one wants to store large files in Bolt archives and the write speed is not critical, such use is allowed, but not recommended, and no more than 32 MB per uploaded file.

**GET 32M: 1000 files and files from Bolt archives, and PUT 32M: 1000 files and files into Bolt archives**

<img align="center" src="images/get-put-32m.png"/>

## Documentation

### Installation

Install packages or binaries ([download](https://github.com/eltaline/wzd/releases)), then:

```
systemctl enable wzd
systemctl start wzd
```

### Install the Docker image

The Docker image automatically and recursively changes the UID and GID in the mounted /var/storage:

```bash
docker run -d --restart=always -e bindaddr=127.0.0.1:9699 -e host=localhost -e root=/var/storage -v /var/storage:/var/storage --name wzd -p 9699:9699 eltaline/wzd
```

A more advanced option:

```bash
docker run -d --restart=always -e bindaddr=127.0.0.1:9699 -e host=localhost -e root=/var/storage -e upload=true -e delete=true -e compaction=true -e search=true -e fmaxsize=1048576 -e writeintegrity=true -e readintegrity=true -e args=false -e getbolt=false -e getkeys=true -e getinfo=true -e getsearch=true -e getrecursive=true -e getjoin=true -e getvalue=true -e getcount=true -e getcache=true -e nonunique=false -e cctrl=2592000 -e delbolt=false -e deldir=false -v /var/storage:/var/storage --name wzd -p 9699:9699 eltaline/wzd
```

All default env parameters can be viewed in the [Dockerfile](Dockerfile).

Enable log rotation on the host system for containers by putting the following in /etc
/logrotate.d/wzd:

```
/var/lib/docker/containers/*/*.log {
  rotate 7
  daily
  compress
  missingok
  delaycompress
  copytruncate
}
```

### Configuring and using the wZD server

For security reasons, if wZD is installed from deb or rpm packages or from binaries, the upload and delete options are disabled by default in the configuration file /etc/wzd/wzd.conf for the localhost virtual host. In most cases it is enough to use the default configuration file. A full description of all product parameters is available in [OPTIONS.md](OPTIONS.md).

### General methods

Downloading a file (an existing regular file is downloaded first, not the one in the Bolt archive):

```bash
curl -o test.jpg http://localhost/test/test.jpg
```

Downloading a file from the regular file (forced):

```bash
curl -o test.jpg -H "FromFile: 1" http://localhost/test/test.jpg
```

Downloading a file from the Bolt archive (forced):

```bash
curl -o test.jpg -H "FromArchive: 1" http://localhost/test/test.jpg
```

Downloading the whole Bolt archive from a directory (if the server parameter getbolt=true):

```bash
curl -o test.bolt http://localhost/test/test.bolt
```

Uploading a file to a directory (behavior depends on the fmaxsize parameter):

```bash
curl -X PUT --data-binary @test.jpg http://localhost/test/test.jpg
```

Uploading a file as a regular file (forced):

```bash
curl -X PUT -H "File: 1" --data-binary @test.jpg http://localhost/test/test.jpg
```

Uploading a file into the Bolt archive (if the server parameter fmaxsize is not exceeded):

```bash
curl -X PUT -H "Archive: 1" --data-binary @test.jpg http://localhost/test/test.jpg
```

Deleting a file (a regular file is deleted first, if it exists, not the file in the Bolt archive):

```bash
curl -X DELETE http://localhost/test/test.jpg
```

Deleting a file (forced):

```bash
curl -X DELETE -H "FromFile: 1" http://localhost/test/test.jpg
```

Deleting a file from the Bolt archive (forced):

```bash
curl -X DELETE -H "FromArchive: 1" http://localhost/test/test.jpg
```

Deleting the whole Bolt archive from a directory (if the server parameter delbolt=true):

```bash
curl -X DELETE http://localhost/test/test.bolt
```

### Search

Getting a list of all file names from the directory and archive (if the server parameter getkeys=true):

```bash
curl -H "Sea: 1" -H "Keys: 1" http://localhost/test
```

Getting a list of all file names only from the directory (if getkeys=true):

```bash
curl -H "Sea: 1" -H "KeysFiles: 1" http://localhost/test
```

Getting a list of all file names only from the archive (if getkeys=true):

```bash
curl -H "Sea: 1" -H "KeysArchives: 1" http://localhost/test
```

Getting a list of all file names from the directory and archive, with their sizes and dates (if getinfo=true):

```bash
curl -H "Sea: 1" -H "KeysInfo: 1" http://localhost/test
```

Getting a list of all file names only from the directory, with their sizes and dates (if getinfo=true):

```bash
curl -H "Sea: 1" -H "KeysInfoFiles: 1" http://localhost/test
```

Getting a list of all file names only from the archive, with their sizes and dates (if getinfo=true):

```bash
curl -H "Sea: 1" -H "KeysInfoArchives: 1" http://localhost/test
```

Getting a list of matching file names from the directory and archive, with their sizes and dates (if getsearch=true):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "Expression: jpg" http://localhost/test
```

Getting a list of matching file names only from the directory, with their sizes and dates (if getsearch=true):

```bash
curl -H "Sea: 1" -H "KeysSearchFiles: 1" -H "Expression: jpg" http://localhost/test
```

Getting a list of matching file names only from the archive, with their sizes and dates (if getsearch=true):

```bash
curl -H "Sea: 1" -H "KeysSearchArchives: 1" -H "Expression: jpg" http://localhost/test
```

Getting a count of all files in the directory and archive (if getcount=true):

```bash
curl -H "Sea: 1" -H "KeysCount: 1" http://localhost/test
```

Getting a count of all files only in the directory (if getcount=true):

```bash
curl -H "Sea: 1" -H "KeysCountFiles: 1" http://localhost/test
```

Getting a count of all files only in the archive (if getcount=true):

```bash
curl -H "Sea: 1" -H "KeysCountArchives: 1" http://localhost/test
```

### Advanced search

- The Keys and KeysInfo headers also support all search headers, except the
WithValue header.
- The KeysCount headers also support all search headers, except the Limit, Offset and WithValue headers.
- The WithJoin header does not work together with the Recursive header, but it allows you to set the recursion depth for each directory.
- The WithValue header is only available if you use the KeysSearch and JSON headers together.
- If the Prefix header is used together with the Expression header, the regular expression search should not include the prefix.
- The Recursive header supports a maximum recursion depth of 3.
- The Offset header only works in single-threaded mode.
- The Expire header sets the lifetime once for a particular request; subsequent identical requests return the result from the cache, and the lifetime of the cached result is not updated.
- Using the Expire and SkipCache headers together forces an update of the result and its lifetime in the cache.
- When the WithValue header is used, values are hex-encoded.

Regex search (if the server parameter getsearch=true):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Expression: jpg" http://localhost/test
```

Recursive search (if the server parameter getrecursive=true):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Recursive: 3" http://localhost/test
```

Search, saving the result to the server cache for 120 seconds (if the server parameter getcache=true):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Expire: 120" http://localhost/test
```

Search, skipping the result from the server cache:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "SkipCache: 1" http://localhost/test
```

Search, skipping the result from the server cache and updating the value in the server cache with a new lifetime of 120 seconds (if getcache=true):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Expire: 120" -H "SkipCache: 1" http://localhost/test
```

Search with limit and offset:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Limit: 25" -H "Offset: 100" http://localhost/test
```

Search, adding the virtual host URL to the key names:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "WithURL: 1" http://localhost/test
```

Search with size limits and WithValue: 1 (if any value exceeds the server parameter vmaxsize, the value will not be included in the output, but the key will still be present):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "MinSize: 512" -H "MaxSize: 1024" -H "WithURL: 1" -H "WithValue: 1" http://localhost/test
```

Search with a timestamp (date) interval and WithValue: 1 (same vmaxsize rule as above):

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "MinStmp: 1570798400" -H "MaxStmp: 1580798400" -H "WithURL: 1" -H "WithValue: 1" http://localhost/test
```

Search with a prefix:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Prefix: file" http://localhost/test
```

Search by prefix and regular expression, with reverse sorting:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Prefix: file" -H "Expression: 10.jpg" -H "Sort: 1" http://localhost/test
```

Search with a prefix and regular expression, with join and with per-directory recursion depths:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "WithJoin: mydir9 1 mydir2 3 mydir1 subdir4 2 mydir7 subdir1 subdir2 0" -H "Prefix: file" -H "Expression: 10.jpg" http://localhost
```

Search up to the first match:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Expression: 10.jpg" -H "StopFirst: 1" http://localhost/test
```

No comments:

```bash
curl -H "Sea: 1" -H "KeysSearch: 1" -H "JSON: 1" -H "Recursive: 3" -H "Expression: jpg" -H "MinSize: 512" -H "MaxSize: 1024" -H "MinStmp: 1570798400" -H "MaxStmp: 1580798400" -H "Limit: 25" -H "Offset: 50" -H "WithURL: 1" -H "WithValue: 1" -H "Sort: 1" -H "Expire: 3600" http://localhost/test
```

## Data migration in 3 steps, without stopping the service

This server was developed not only for use from scratch, but also for use on current, real production systems. For this, the [wZA](https://github.com/eltaline/wza) archiver is provided for use with the wZD server. The archiver converts current files into Bolt archives (without deletion, or with deletion) and can unpack them back; it supports overwriting and other functions. The archiver is as safe as possible: it does not support recursive traversal and only works on a pre-prepared list of files or Bolt
archives, and it re-reads each file after packing, verifying the CRC checksum on the fly.

### Data migration guide

The path to be migrated is /var/storage. It contains .jpg files in 1 million subdirectories of various depths that need to be archived into Bolt archives, but only those files that are no larger than 1 MB. The wZD server should be configured with a virtual host rooted in the /var/storage directory.

Perform a recursive search:

```bash
find /var/storage -type f -name '*.jpg' -not -name '*.bolt' > /tmp/migration.list
```

Start archiving without deleting the current files:

```bash
wza --pack --fmaxsize=1048576 --list=/tmp/migration.list
```

Start deleting the old files, with key checking in the Bolt archives, without deleting files that are larger than fmaxsize:

```bash
wza --pack --delete --fmaxsize=1048576 --list=/tmp/migration.list
```

This can also be combined into one operation, with removal of the old files. The archiver skips non-regular files and will not allow archiving of a Bolt archive into a Bolt archive. The archiver will not delete a file until the checksum of the value read back from the Bolt archive after packing matches the original, unless of course this is forcibly disabled. While the wZD server is running, it returns data from regular files if it finds them; this has priority.

This guide gives an example of a single-threaded run that stops at any error with any file, even if non-regular files were included in the prepared list or files were suddenly deleted by another process but remained in the list. To ignore files from the list that already exist in archives, the `ignore-not` option must be added; the same is true when unpacking. Restarting archiving without the `overwrite` option will not overwrite files in Bolt archives; the same is true when unpacking. The archiver also supports a multi-threaded mode (the `threads` option) and other options. In multi-threaded mode the `ignore` option is applied automatically, so that running threads are not stopped when errors occur. In case of an error with the `delete`
option turned on, the source file will not be deleted.

A full description of all product parameters is available in [OPTIONS.md](OPTIONS.md).

## Notes and Q&A

- Outwardly, at its core, this server looks like a regular WebDAV server to a user or developer.
- The server works with only one system user; the UID and GID for all Bolt archives and files are taken at server startup from the current user or from the systemd startup script.
- The server automatically creates directories and Bolt archives while uploading files or values; just upload a file to the desired path.
- Bolt archives are automatically named after the directory in which they are created.
- Effective reduction of the number of files within a data instance depends on the selected directory structure and the planned number of file uploads into these directories. Uploading 100,000+ files into one directory (one Bolt archive) is not recommended; this would be a large overhead. If possible, plan your directory structure correctly.
- It is not recommended to upload files or values larger than 16 MB into Bolt archives. By default, the parameter fmaxsize = 1048576 bytes. If the fmaxsize parameter is exceeded, even with the Archive: 1 client header set, the data will be loaded into a separate regular file, without notification. The maximum possible value of the parameter is fmaxsize = 33554432 bytes.
- If the nonunique=true parameter is turned on in the virtual host, the wZD server will allow uploading of individual files with the same name, even if the Bolt archive in this directory already contains data with the same key name as the uploaded file.
- Even when the nonunique=false parameter is set in the virtual host, the wZD server will upload a file or value into a new Bolt archive even if the key name matches an already existing file name in the directory. This is required for non-stop operation of the service, and for working in mixed mode during data migration to Bolt archives, including when adding new files non-stop through the PUT method or deleting them through the DELETE method.
- When the writeintegrity=true and readintegrity=true parameters are used, an uploaded file or value is written completely to RAM, but no more than 32 MB per request with the maximum fmaxsize set. It is highly recommended that these options be enabled (true). These parameters affect only files or values in Bolt archives.
- If the writeintegrity=true parameter was not enabled and many files or values were uploaded into Bolt archives, then no checksum was calculated for them. In this case, for the checksum to be calculated and recorded, the [wZA](https://github.com/eltaline/wza) archiver can be used to unpack all current Bolt archives and repack them again, without disabling the CRC recording in the archiver itself. In the future, the archiver will support calculating and recording the checksum for files or values in current Bolt archives without unpacking and packing operations, if the values did not initially have a checksum.
- If the checksum of files or values was not calculated and recorded because of the writeintegrity=false parameter, then with the readintegrity=true parameter enabled everything will still work, but the checksum will not be checked on download.
- The server does not allow uploading files to the root directory of a virtual host. This is prohibited only when trying to upload a file or value to the root of the virtual host with the Archive: 1 header set; regular files (without packing) can be uploaded to the root of the virtual host.
- The server uses an extended version of BoltDB, modified by the developer of wZD. The added functions are GetLimit, GetOffset and GetRange. They allow reading exactly as much data as is needed, byte-accurate, from files or values, for example when serving the Range, If-None-Match or If-Modified-Since headers or the HEAD and OPTIONS methods. This gives a significant saving of disk subsystem resources
as compared to simply reading the entire file or value via the standard Get function.

- The server does not create any temporary files during its operation, and it consumes little RAM; large files are transferred on the fly through customizable, semi-dynamic, small-sized buffers. The wZD server does not use the simple ctx.SendFile or ctx.ServeContent functions.
- At the request of the community, some parameters can be moved from the global section to the server section, and new advanced functionality can also be added; use a feature request.

## TODO

- Development of our own replicator and distributor, with geo support, for possible use in large systems without a cluster FS
- The ability to fully restore metadata in reverse when it is completely lost (if using a distributor)
- Native protocol for the possibility of using permanent network connections, and drivers for different programming languages
- Support for the HTTPS protocol (it may be supported only in the future distributor; completed in the standard version)
- Advanced features for using the NoSQL component (completed)
- Implementing background checksum calculation for single large files
- Periodic checksum checks in the background, to protect against bit rot
- FUSE and/or WebDAV mount (full support may be implemented, including write support)
- Abandoning SQLite in favor of a simpler solution; abandoning cgo (completed)
- Different types of compression (gzip, zstd, snappy) for files or values inside Bolt archives and for ordinary files
- Different types of encryption for files or values inside Bolt archives and for regular files
- Server-side delayed video conversion, including on GPU

## Parameters

A full description of all product parameters is available in [OPTIONS.md](OPTIONS.md).

## HTTP core

Uses [Iris](https://github.com/kataras/iris) as the HTTP core.

## Guarantees

No warranty is provided for this software; please test it first.

## Contacts

- Company website: [Eltaline](https://elta.ee)

Copyright 2020 Andrey Kuvshinov. Contacts: syslinux@protonmail.com
Copyright 2020 Eltaline OU. Contacts: eltaline.ou@gmail.com

All rights reserved. | os |
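The header-driven upload/download semantics documented above can be driven from any HTTP client, not just curl. Below is a minimal Python sketch using only the standard library; the helper names and the `http://localhost/test` virtual-host path are illustrative assumptions, while the `Archive`/`FromArchive` headers are the ones the README documents:

```python
import urllib.request

BASE = "http://localhost/test"  # assumed wZD virtual host and directory

def put_to_archive(name: str, data: bytes) -> urllib.request.Request:
    # "Archive: 1" asks wZD to store the value inside the directory's Bolt
    # archive (honored only while the body is within the server's fmaxsize).
    return urllib.request.Request(
        f"{BASE}/{name}", data=data, method="PUT", headers={"Archive": "1"}
    )

def get_from_archive(name: str) -> urllib.request.Request:
    # "FromArchive: 1" forces the read from the Bolt archive, bypassing any
    # regular file with the same name.
    return urllib.request.Request(
        f"{BASE}/{name}", method="GET", headers={"FromArchive": "1"}
    )

# Against a running wZD server these would be sent with
# urllib.request.urlopen(put_to_archive(...)); here we only build them.
req = put_to_archive("test.jpg", b"\xff\xd8\xff")
print(req.get_method(), req.full_url)
```

The same pattern extends to the other documented headers (`File`, `FromFile`, and the `Sea`/`Keys*` search family): each is just an extra entry in the `headers` dict.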
|
aheadIT | aheadIT: information technology at your fingertips. Ahead Information Systems is a cutting-edge IT services provider that has been responsible for the successful delivery of various business solutions. | server |
|
thesis | [![Unit tests](https://github.com/nlindenau/thesis/actions/workflows/run_pytest.yml/badge.svg)](https://github.com/nlindenau/thesis/actions/workflows/run_pytest.yml) About the project: RVS-bot returns the nutritional value of your meal based on the ingredients. This project is part of my master's thesis at Centria UAS. | cloud |
|
disco | <p align="center"><img src="img/disco-main-50.png"/><br/>"The distillation process with a robot, scifi, future, detail, high resolution, 8k" by Midjourney</p>

# DISCO: Distilling Phrasal Counterfactuals with Large Language Models

This is the public codebase for the arXiv paper [DISCO: Distilling Counterfactuals with Large Language Models](https://arxiv.org/abs/2212.10534).

## About DISCO

DISCO is a framework for automatically generating high-quality counterfactual data at scale. The system elicits counterfactual perturbations by prompting a large general language model like GPT-3. Then a task-specific teacher model filters the generations to distill high-quality counterfactual data. We find that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples on three evaluation sets written by human workers and via human-AI collaboration.

## Features (TODO)

- Enable kNN and sentence-embedding-based demonstration search
- Enable reinforcement learning for demonstration search
- Add an option to turn database features on/off
- Add prompt support for single-sentence classification (e.g. sentiment analysis)
- Return the top-5 log likelihoods from OpenAI API calls

## Install MongoDB

Follow the instructions [here](https://www.prisma.io/dataguide/mongodb/setting-up-a-local-mongodb-database#setting-up-mongodb-on-macos) to install a local MongoDB server on your machine. Also install MongoDB Compass for user-friendly database management: [download](https://www.mongodb.com/try/download/compass) the version proper to your operating system.

## Requirements

- python 3.10.9
- numpy 1.24.2
- openai 0.27.0
- torch 2.0.1
- transformers 4.26.1
- hydra-core 1.3.2
- pymongo 4.3.3
- sentence-transformers 2.2.2

Install all required Python dependencies:

```bash
pip install -r requirements.txt
```

For any other packages that may be required, please see the error messages and install accordingly.

## Run LLM counterfactual distillation

We use [Hydra](https://hydra.cc/docs/intro/) to dynamically manage the configurations for the distillation pipeline. First, take a look at the config.yaml file located under the config folder and set the arguments required for your run and experiment:

```yaml
paths:
  data_dir: data                      # path to the data directory, defaults to data

data_params:
  dataset: snli                       # base dataset for counterfactual generation
  task_name: snli                     # the type of task, defined in prompt retrieval
  start: 0                            # start position of the dataset for the current batch
  end: 11                             # end position of the dataset for the current batch
  template_name: masked_cad_premise   # template name for counterfactual generation
  source_label: entailment            # source label for counterfactual generation
  target_label: neutral               # target label for counterfactual generation

generator_params:
  gen_type: completion                # GPT-3 generation mode: completion | insertion | chat
  model_name: gpt-3-003               # engine name: gpt-3-003 | gpt-3-002 | chatgpt | gpt-4
  overgenerate: true                  # more than 1 generation per input
  no_demo: false                      # zero-shot generation, without in-context examples

search_params:
  num_neighbors: 8                    # number of nearest neighbors
  embed_type: cls                     # type of embedding method
  metric: euclidean                   # distance metric for nearest neighbors
  encoder_name: roberta-large-nli-mean-tokens  # encoder name for sentence embedding
  prompt_search: false                # enable dynamic demonstration selection

filter_params:
  filter_all: false                   # filter all previous generation outputs
```

Next, to run the distillation pipeline, execute:

```bash
python run.py
```

To run filtering on all previously generated outputs, set filter_all to false in the config file and re-execute:

```bash
python run.py
```

## Generated DISCO data

You can find the DISCO augmentation data used for our experiments here: data/disco.jsonl. For a more extensive set of counterfactual data, take a look at the data/augment folder. Feel free to generate and filter more counterfactual data using the pipeline.

## License

[![MIT License](https://img.shields.io/badge/License-MIT-blue.svg)](https://lbesson.mit-license.org) This work is licensed under an MIT License.

[![CC BY-SA 4.0](https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-sa/4.0) The DISCO augmentation dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).

## Cite

If our paper, code, or dataset inspires you, please cite us:

```bibtex
@misc{chen2022disco,
  title={DISCO: Distilling Phrasal Counterfactuals with Large Language Models},
  author={Zeming Chen and Qiyue Gao and Kyle Richardson and Antoine Bosselut and Ashish Sabharwal},
  year={2022},
  eprint={2212.10534},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
| ai |
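The generate-then-filter loop that DISCO describes (overgenerate perturbations with a large LM, then keep only candidates a task-specific teacher accepts) can be sketched independently of the repo's actual classes. Everything below is illustrative, not DISCO's API: the function names, the stub generator/teacher, and the 0.9 acceptance threshold are all assumptions.

```python
from typing import Callable, Dict, List, Tuple

def distill_counterfactuals(
    examples: List[Tuple[str, str]],            # (premise, hypothesis) pairs
    generate: Callable[[str, str], List[str]],  # overgenerating LLM: candidate perturbed premises
    teacher: Callable[[str, str], Dict[str, float]],  # teacher: label -> probability
    target_label: str = "neutral",
    threshold: float = 0.9,
) -> List[Tuple[str, str, str]]:
    """Keep only candidates the teacher confidently assigns the target label."""
    kept = []
    for premise, hypothesis in examples:
        for candidate in generate(premise, hypothesis):  # overgeneration: several candidates per input
            probs = teacher(candidate, hypothesis)
            if probs.get(target_label, 0.0) >= threshold:  # teacher filter
                kept.append((candidate, hypothesis, target_label))
    return kept

# Stubs so the sketch runs without any model:
def fake_generate(premise, hypothesis):
    return [premise + " perhaps", premise + " certainly"]

def fake_teacher(premise, hypothesis):
    return {"neutral": 0.95} if premise.endswith("perhaps") else {"entailment": 0.9}

data = [("A man plays guitar.", "A man makes music.")]
print(distill_counterfactuals(data, fake_generate, fake_teacher))
```

In the real pipeline the generator would be the prompted GPT-3 call (with the `overgenerate: true` behavior from config.yaml) and the teacher an NLI model; the filter step is the only part that decides what reaches the distilled dataset.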
|
Cognitive-Vision-Windows | # Microsoft Computer Vision API: Windows Client Library & Sample

:exclamation: The Vision client library is here for archival purposes only. .NET users are asked to switch to the newer [Microsoft.Azure.CognitiveServices.Vision.ComputerVision](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.ComputerVision) package, the source for which can be found [here](https://github.com/Azure/azure-sdk-for-net/tree/master/src/SDKs/CognitiveServices/dataPlane/Vision/ComputerVision). For users interested in client libraries for other languages/platforms, quickstarts, etc., please see the [official documentation page](https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/). :exclamation:

This repo contains the Windows client library & sample for the Microsoft Computer Vision API, an offering within [Microsoft Cognitive Services](https://www.microsoft.com/cognitive-services), formerly known as Project Oxford.

- [Learn about the Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api)
- [Read the documentation](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api/documentation)
- [Find more SDKs & samples](https://www.microsoft.com/cognitive-services/en-us/SDK-Sample?api=computer%20vision)

## The client library

The client library is a thin C# client wrapper for the Computer Vision API. The easiest way to use this client library is to get the Microsoft.ProjectOxford.Vision package from [NuGet](http://nuget.org). Go to the [Vision API package in NuGet](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Vision.ComputerVision) for more details.

## The sample

This sample is a Windows WPF application that demonstrates the use of the Computer Vision API.

### Build the sample

1. Start in the folder where you cloned the repository (this folder).
2. In a Git command-line tool, type `git submodule init` (or do this through a UI).
3. Pull in the shared Windows code by calling `git submodule update`.
4. Start Microsoft Visual Studio 2015 and select File > Open > Project/Solution.
5. Starting in the folder where you cloned the repository, go to the Vision > Windows > Sample-WPF folder.
6. Double-click the Visual Studio 2015 Solution (.sln) file VisionAPI-WPF-Samples.
7. Press Ctrl+Shift+B, or select Build > Build Solution.

### Run the sample

After the build is complete, press F5 to run the sample. First, you must obtain a Vision API subscription key by following the instructions on [our website](https://www.microsoft.com/cognitive-services/en-us/sign-up). Locate the text edit box saying "Paste your subscription key here to start" in the top right corner and paste in your subscription key. You can choose to persist your subscription key on your machine by clicking the "Save Key" button; when you want to delete the subscription key from the machine, click "Delete Key".

Microsoft will receive the images you upload and may use them to improve the Computer Vision API and related services. By submitting an image, you confirm you have consent from everyone in it.

<img src="SampleScreenshots/SampleRunning1.png" width="80%"/>

## Contributing

We welcome contributions. Feel free to file issues and pull requests on the repo, and we'll address them as we can. Learn more about how you can help in our [contribution rules & guidelines](CONTRIBUTING.md).

You can reach out to us anytime with questions and suggestions using our communities below:

- Support questions: [StackOverflow](https://stackoverflow.com/questions/tagged/microsoft-cognitive)
- Feedback & feature requests: [Cognitive Services UserVoice Forum](https://cognitive.uservoice.com)

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## License

All Microsoft Cognitive Services SDKs and samples are licensed with the MIT License; for more details, see [LICENSE](LICENSE.md). Sample images are licensed separately; please refer to [LICENSE-IMAGE](LICENSE-IMAGE.md).

## Developer code of conduct

Developers using Cognitive Services, including this client library & sample, are expected to follow the "Developer Code of Conduct for Microsoft Cognitive Services" found at [http://go.microsoft.com/fwlink/?LinkId=698895](http://go.microsoft.com/fwlink/?LinkId=698895). | ai |
|
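The client library above is a C# wrapper, but the Computer Vision API itself is a plain REST endpoint. As a rough illustration, the sketch below builds (without sending) the underlying analyze request in Python using only the standard library; the region, subscription key, and image URL are placeholders, and the `v2.0` endpoint shape shown matches the era of this sample but should be checked against current Azure docs.

```python
# Sketch of the raw REST call the Vision client library wraps.
# Region, key, and image URL below are placeholders.
import json
import urllib.request

def build_analyze_request(region, subscription_key, image_url,
                          features=("Description", "Tags")):
    """Build (but do not send) an image-analysis request."""
    endpoint = f"https://{region}.api.cognitive.microsoft.com/vision/v2.0/analyze"
    query = "visualFeatures=" + ",".join(features)
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url=f"{endpoint}?{query}",
        data=body,
        headers={
            "Content-Type": "application/json",
            # The header Azure uses for the key you paste into the sample UI:
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )

req = build_analyze_request("westus", "<your-subscription-key>",
                            "https://example.com/cat.jpg")
print(req.full_url)
# → https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Description,Tags
```

Sending the request with `urllib.request.urlopen(req)` would return the JSON analysis; here we stop at request construction so no key is needed.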
MPid | mpid a microcontroller friendly pid module build status https travis ci org gbmhunter mpid png branch master https travis ci org gbmhunter mpid description a light weight fast pid library designed for use on embedded systems but can also run on any machine which has a g compiler for better performance and control the pid library supports a generic number type number type must support casting to doubles fixed point numbers are recommended for high speed operation doubles are recommended for non time critical algorithms relies on used calling pid run at a regular and fixed interval usually in the milli second range via an interrupt automatically adjusts kp ki and kd depending on the chosen time step zp zi and zd are the time step adjusted values do not try and make kp ki or kd negative this results in undefined behaviour smooth control derivative control is only active when at least two calls to run have been made does not assume previous input was 0 on first call which can cause a huge derivative jolt easy debugging you can print pid debug information by providing a callback via code pid setdebugprintcallback which supports method callbacks by utilizing the slotmachine cpp library the following code shows you how to assign a callback for debug printing c class printer public void printdebug const char msg std cout msg printer myprinter pid pidtest asign callback to printer s printdebug function this pidtest setdebugprintcallback slotmachine callbackgen printer void const char myprinter printer printdebug you can disable all debug info to free up some memory space by setting cp3id config include debug code in include config hpp to 0 the debug buffer size can be changed with cp3id config debug buff size again in config hpp code dependencies dependency delivery usage cstdint standard c library fixed width variable type definitions e g uint32 t massert external module providing runtime safety checks against this module munittest external module framework for unit 
tests building this example assumes you are running on a linux like system and have basic build tools like gcc and make installed as well as cmake sh git clone https github com gbmhunter mpid git cd mpid mkdir build cd build cmake make usage c create a pid object which uses double for all of its calculations inputs and outputs pid double pidtest main set up pid controller non accumulating pidtest init 1 0 kp 1 0 ki 1 0 kd pid double pid direct control type pid double dont accumulate output control type 10 0 update rate ms 100 0 min output 100 0 max output 0 0 initial set point call every 10 0ms as set in pid init timerisr read input input readpin 2 perform one execution pidtest run input set output setpin 3 pidtest output see test pidtest cpp for more examples issues for known bugs desired enhancements etc see github issues section changelog see changelog md | os
|
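The MPid README above describes a fixed-timestep PID with time-step-adjusted gains (zp/zi/zd), output clamping, and a derivative term that stays off until two samples exist. This Python sketch re-expresses that same discrete algorithm for illustration; it mirrors the README's parameter names but is not the library's actual API.

```python
class Pid:
    """Minimal discrete PID mirroring the MPid design: call run() at a
    fixed interval; gains are pre-scaled by the time step (zp/zi/zd)."""

    def __init__(self, kp, ki, kd, step_ms, out_min, out_max, set_point=0.0):
        dt = step_ms / 1000.0
        # Time-step-adjusted gains, as the README describes
        self.zp, self.zi, self.zd = kp, ki * dt, kd / dt
        self.out_min, self.out_max = out_min, out_max
        self.set_point = set_point
        self._integral = 0.0
        self._prev_input = None  # derivative stays off until two samples exist
        self.output = 0.0

    def run(self, measured):
        error = self.set_point - measured
        self._integral += self.zi * error
        # Derivative on measurement, only once a previous sample exists,
        # avoiding the "derivative jolt" the README mentions.
        if self._prev_input is None:
            derivative = 0.0
        else:
            derivative = -(measured - self._prev_input) * self.zd
        self._prev_input = measured
        raw = self.zp * error + self._integral + derivative
        self.output = max(self.out_min, min(self.out_max, raw))
        return self.output

# kp-only controller, 10 ms update rate, output clamped to +/-100
pid = Pid(kp=2.0, ki=0.0, kd=0.0, step_ms=10.0,
          out_min=-100.0, out_max=100.0, set_point=1.0)
print(pid.run(0.0))  # → 2.0
```

On a microcontroller, as the README notes, `run()` would be called from a timer interrupt at exactly the interval given by `step_ms`.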
mms-hackathon | mms hackathon mediamarkt cloud engineering challenge tasks objectives 1 register the container generated by the dockerfile with cloud build artifacts 2 generate a yaml file for docker compose 3 generate the terraform files in order to have the infrastructure as code and be able to deploy with kubernetes 4 answer the question to check the understanding of the least privilege principle in role assignment directories cloud build tf deployment tf state iam gcp console the whole procedure was performed on the gcp cloud shell console an even better automation could have been achieved using github actions or some other ci cd but due to the simplicity of the exercise and in order not to compromise credentials or use an account this methodology was chosen in the first directory we have a file called cloudbuild yaml that is in charge of cloning building and finally pushing the docker image to the artifact registry for this exercise the cloud build and artifact registry apis were activated to do this we must run the following command gcloud builds submit region us central1 config cloudbuild yaml in the second folder are the terraform files that create the necessary infrastructure for the application deployment the terraform tfvars variable file is not included as it contains sensitive account information after downloading the repository we should enter the following command inside the folder in question terraform init then to verify that everything is in order terraform plan and finally to apply everything terraform apply initially a deployment was performed with a docker image from my personal docker hub account where i had no problems performing the pull but from artifact registry i had some permission problems as i did not have access to modify iam i could not leave it working as i wanted in the third folder i made a small implementation to create a bucket to serve as a backend to store the terraform states in the fourth folder there is a brief description of the roles steps to create edit and delete them and a personal opinion on how to assign them | cloud
|
Orbit | p align center a href https github com s0md3v orbit img src https i ibb co bxszhw0 orbit png alt orbit a br b b p h4 align center blockchain transactions investigation tool h4 p align center a href https github com s0md3v orbit releases img src https img shields io github release s0md3v orbit svg a a href https github com s0md3v orbit issues q is 3aissue is 3aclosed img src https img shields io github issues closed raw s0md3v orbit svg img src https img shields io badge python 3 2 blue svg a p graph demo https i ibb co rx76ryt screenshot 2019 07 26 03 41 34 png introduction orbit is designed to explore network of a blockchain wallet by recursively crawling through transaction history the data is rendered as a graph to reveal major sources sinks and suspicious connections note orbit only runs on python 3 2 and above usage let s start by crawling transaction history of a wallet python3 orbit py s 1ajbsfz64epefs5uajafcug8ph8jn3rn1f crawling multiple wallets is no different python3 orbit py s 1ajbsfz64epefs5uajafcug8ph8jn3rn1f 1etbbshpvbydw7hgwxxkxz3pxvh3vfomax orbit fetches last 50 transactions from each wallet by default but it can be tuned with l option python3 orbit py s 1ajbsfz64epefs5uajafcug8ph8jn3rn1f l 100 orbit s default crawling depth is 3 i e it fetches the history of target wallet s crawls the newly found wallets and then crawls the wallets in the result again the crawling depth can be increased or decresead with d option python3 orbit py s 1ajbsfz64epefs5uajafcug8ph8jn3rn1f d 2 wallets that have made just a couple of interactions with our target may not be important orbit can be told to crawl top n wallets at each level by using the t option python3 orbit py s 1ajbsfz64epefs5uajafcug8ph8jn3rn1f t 20 if you want to view the collected data with a graph viewer of your choice you can use o option python3 orbit py s 1ajbsfz64epefs5uajafcug8ph8jn3rn1f o output graphml support formats graphml supported by most graph viewers json for raw processing this 
is your terminal dashboard demo terminal https i ibb co pzg24vt screenshot 2019 07 26 08 07 10 png visualization once the scan is complete the graph will automatically open in your default browser if it doesn t open open quark html manually don t worry if your graph looks messy like the one below or worse graph setup https i ibb co xj38df9 screenshot 2019 07 26 08 21 18 png select the make clusters option to form clusters using community detection algorithm after that you can use color clusters to give different colors to each community and then use spacify option to fix overlapping nodes edges graph fixed https i ibb co ssghkjn screenshot 2019 07 26 09 21 08 png the thickness of edges depends on the frequency of transactions between two wallets while the size of a node depends on both transaction frequency and the number of connections of the node as orbit uses quark https github com s0md3v quark to render the graph more information about the various features and controls is available in quark s readme | blockchain bitcoin osint | blockchain |
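Orbit's core idea, per its options above, is a depth-limited crawl (`-d`) that follows only the top-N counterparties of each wallet (`-t`). A rough sketch of that recursion, where `fetch_counterparties` is a hypothetical stand-in for Orbit's actual blockchain-API lookup:

```python
from collections import Counter

def crawl(seeds, fetch_counterparties, depth=3, top=20):
    """Depth-limited crawl of wallet interactions, keeping only the top-N
    most frequent counterparties at each level (mirrors -d and -t).

    Returns {wallet: Counter(counterparty -> tx count)} edges.
    fetch_counterparties(wallet) is a stand-in for the API call that
    lists the counterparties seen in the wallet's transaction history.
    """
    graph, frontier, seen = {}, list(seeds), set(seeds)
    for _ in range(depth):
        next_frontier = []
        for wallet in frontier:
            counts = Counter(fetch_counterparties(wallet))
            graph[wallet] = counts
            # Only the most-connected wallets are crawled at the next level
            for peer, _n in counts.most_common(top):
                if peer not in seen:
                    seen.add(peer)
                    next_frontier.append(peer)
        frontier = next_frontier
    return graph
```

Edge weights (`tx count`) map naturally onto the graph rendering described above, where edge thickness reflects transaction frequency.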
ebookcoin | note ebookcoin has been updated to the ddn blockchain please get it from https github com ddnlink ddn ddn https github com ddnlink ddn is a blockchain platform built on node js | nodejs ebookcoin ebook bitcoin cryptocurrency blockchain p2p | blockchain
sg-orbit | p align center img alt sharegate orbit src https raw githubusercontent com gsoft inc sg orbit master assets orbit full svg sanitize true width 480 p p align center orbit the design system for sharegate p p align center a href https circleci com gh gsoft inc sg orbit tree master img alt build src https img shields io circleci build github gsoft inc sg orbit master a p p align center a href https lerna js org img alt lerna src https img shields io badge maintained 20with lerna cc00ff svg a a href https yarnpkg com img alt yarn src https img shields io badge dependencies 20managed 20by yarn blue a p orbit is a design system developed by sharegate to help create the best experience for our customers and drive consistency between all our web apps documentation website netlify status https api netlify com api v1 badges 65b52a34 8224 4783 bed2 64ffd05d36af deploy status https app netlify com sites sg orbit deploys the documentation website contains information about installation the orbit foundation and orbit components https orbit sharegate design storybook website netlify status https api netlify com api v1 badges 4b420380 aed1 4dc6 b002 6efe7b413025 deploy status https app netlify com sites sg storybook deploys the storybook website contains stories for orbit custom components https sg storybook netlify com maintainers view the contributors documentation contributing md license copyright 2023 gsoft inc this code is licensed under the apache license version 2 0 you may obtain a copy of this license at https github com gsoft inc gsoft license blob master license | react design-system components storybook | os |
CodeSample | lillian cordelia gwendolyn 01 30 2022 this is the code of my final project for an embedded systems design course i took the class was cse121 l it was the most time crunchy course i ever took and i learned a lot this project is for an oscilloscope as mentioned in my resume s projects section this is only here for the purposes of providing sample code for job applications you are not permitted to copy this code with the intent of committing academic dishonesty additionally note that a noticeable portion of this code is taken from samples provided by the course this sample code is cited with a slide number or video or etc and is mostly boilerplate all code produced by me is located within main cm4 c some portions of other included code may have been modified by me for varying purposes but the main code produced by me is located within that file again to restate this code is only public for the purposes of showing off embedded c code i have produced that is not locked under nda this is not a resource for those currently taking the course i produced this within do not copy or steal this code for the purposes of submitting to any academic institution | os |
|
edge-computer-vision | edge computer vision course github actions python application test with github actions https github com noahgift edge computer vision workflows python 20application 20test 20with 20github 20actions badge svg build status circleci https circleci com gh noahgift edge computer vision svg style svg https circleci com gh noahgift edge computer vision lesson 1 introduction to applied computer vision overview of syllabus 30 min lecture computing trends supporting applied computer vision 60 min https github com noahgift edge computer vision blob master computer vision lecture 1 ipynb lesson 2 emerging computer vision technologies lecture emerging computer vision technologies https github com noahgift edge computer vision blob master computer vision lecture2 ipynb lesson 3 ai computer vision apis lecture computer vision apis https github com noahgift edge computer vision blob master computer vision lecture3 ipynb project week lesson 4 automl computer vision lecture automl https github com noahgift edge computer vision blob master computer vision lecture4 ipynb ludwig example https uber github io ludwig examples automl tutorial gcp https cloud google com vision automl docs tutorial lesson 5 running computer vision ml models on the edge lecture edge computer vision https github com noahgift edge computer vision blob master computer vision lecture5 ipynb lesson 6 final presentations additional resources gpu references nvidia cuda https docs nvidia com cuda cuda quick start guide index html linux visualization streamlit https www streamlit io automl apple create ml https developer apple com documentation createml additional related topics from noah gift his most recent books are pragmatic a i an introduction to cloud based machine learning pearson 2018 https www amazon com pragmatic ai introduction cloud based analytics dp 0134863860 python for devops o reilly 2020 https www amazon com python devops ruthlessly effective automation dp 149205769x his most 
recent video courses are essential machine learning and a i with python and jupyter notebook livelessons pearson 2018 https learning oreilly com videos essential machine learning 9780135261118 aws certified machine learning specialty ml s pearson 2019 https learning oreilly com videos aws certified machine 9780135556597 python for data science complete video course video training pearson 2019 https learning oreilly com videos python for data 9780135687253 aws certified big data specialty complete video course and practice test video training pearson 2019 https learning oreilly com videos aws certified big 9780135772324 building a i applications on google cloud platform pearson 2019 https learning oreilly com videos building ai applications 9780135973462 pragmatic ai and machine learning core principles pearson 2019 https learning oreilly com videos pragmatic ai and 9780136554714 data engineering with python and aws lambda pearson 2019 https learning oreilly com videos data engineering with 9780135964330 his most recent online courses are microservices with this udacity devops nanodegree udacity 2019 https www udacity com course cloud dev ops nanodegree nd9991 command line automation in python datacamp 2019 https www datacamp com instructors ndgift aws certified cloud practitioner 2020 real world pragmatic https www udemy com course aws certified cloud practitioner 2020 real world pragmatic referralcode cac679a7d08212773428 | python computer vision machine learning machinelearning-python colab gpu course | ai |
awesome-python-htmx | pyhat awesome python htmx awesome https awesome re badge svg https github com sindresorhus awesome are you interested in the intersection of python and hypermedia driven applications https htmx org essays hypermedia driven applications head on over to the discussions tab https github com pyhat stack awesome python htmx discussions introduce yourself https github com pyhat stack awesome python htmx discussions 2 and let s get to work https github com pyhat stack awesome python htmx discussions 1 what is pyhat a name about a pyhat is more than just a snake with a hat it stands for python htmx asgi tailwind mdash a web stack that allows you to build powerful web applications using nothing more than drumroll python htmx and tailwind quick a name tools take me to the tools already a our goal we want to promote hypermedia driven applications that s it that s the goal okay well more specifically we want to promote htmx within the python ecosystem why should i care does any of this sound like you i want to stick with just python and html css but not sacrifice front end functionality https htmx org essays when to use hypermedia i don t want to have to use a complicated front end framework https htmx org essays a response to rich harris i don t enjoy agonizing over css class names https tailwindcss com docs utility first text you 20aren e2 80 99t 20wasting 20energy 20inventing 20class 20names i want to maintain a consistent design without any bikeshedding if the above sounds like you then you are in the right place maybe this isn t for me one of the links above goes to when should you use hypermedia https htmx org essays when to use hypermedia over at htmx org and is a pretty great read if you want to asses if this is for you it also expounds on the following points hypermedia might not be a good fit if your ui has many dynamic interdependencies i e google maps google sheets if you require offline functionality if your ui state is updated extremely 
frequently i e online game if your team is not on board but will it work in production yes here is a very good example https htmx org essays a real world react to htmx port of a project a company underwent using htmx with django in production you can also watch the original video from djangocon eu 2022 titled from react to htmx on a real world saas product we did it and it s awesome https www youtube com watch v 3gobi93tjzi details summary some highlights from the article summary the effort took about 2 months with a 21k loc code base mostly javascript no reduction in the application s user experience ux they reduced the code base size by 67 21 500 loc to 7200 loc they increased python code by 140 500 loc to 1200 loc a good thing if you prefer python to js they reduced their total js dependencies by 96 255 to 9 they reduced their web build time by 88 40 seconds to 5 first load time to interactive was reduced by 50 60 from 2 to 6 seconds to 1 to 2 seconds much larger data sets were possible when using htmx because react simply couldn t handle the data web application memory usage was reduced by 46 75mb to 45mb details glossary scroll asynchronous server gateway interface asgi a standard that allows an application to talk to a server allowing for multiple asynchronous events per application br component a reusable custom element within javascript it is a self contained element with its own properties methods that are reusable in this context the term is more broadly applied to any reusable elements which may include hypermedia or other design elements br dependency any application librariy or package that are required to run your application br fragments refers to partial content of a an html template see also template fragments br hypermedia medium of information including graphics audio video text and hyperlinks typically represented on the web as html br hypermedia driven application hda uses declarative html embedded syntax to achieve front end interactivity 
while interacting with the server in terms of hypermedia html instead of a non hypermedia format json br partials a loose term sometimes referring to partial content that can be displayed in a template or partial content to be generated from within a template block br server side rendering ssr generating static html markup on the server before it is rendered in the browser on the front end br single page application spa a web app implementation that loads a single web document and subsequently updates content through javascript apis br template fragments a relatively rare ssr template library feature that allow you to render a fragment or partial bit of the content within a template rather than the entire template usage a name usage a htmx can be used with any backend framework currently there is a lot of experimentation in the python space which is exciting but that also means that there are a lot of disparate approaches the best advice here is to get familiar with some of the core packages htmx tailwind then feel free to check out any of the packages below official resources htmx https htmx org htmx gives you access to ajax css transitions websockets and server sent events directly in html using attributes so you can build modern user interfaces with the simplicity and power of hypertext htmx has no outside dependencies outside of a vanilla javascript file referenced in your html head section tailwindcss https tailwindcss com docs installation rapidly build modern websites without ever leaving your html tailwind provides a standalone cli tool that does not require npm or any other javascript dependencies you can install it through pip using the pytailwindcss https pypi org project pytailwindcss library introductory resources how to create a django form using htmx in 90 seconds https www photondesigner com articles submit async django form with htmx a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo 
django logocolor black alt django a a simple short guide to start using htmx with django very quicky simple site https github com tataraba simplesite a href https fastapi tiangolo com target blank img src https img shields io badge fastapi a9bbcc style flat logo fastapi logocolor black alt fastapi a provides thorough documentation on building a site from the ground up with fastapi jinja htmx and tailwind rapid prototyping with flask htmx and tailwind css https testdriven io blog flask htmx tailwind a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a in this tutorial you ll learn how to set up flask with htmx and tailwind css testdriven io django htmx and alpine js modern websites javascript optional https www saaspegasus com guides modern javascript for django developers htmx alpine a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a building a modern front end in django without reaching for a full blown javascript framework choosing the right tools for the job and bringing them into your project introductory courses htmx flask modern python web apps hold the javascript course https training talkpython fm courses htmx flask modern python web apps hold the javascript a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a htmx is one of the hottest properties in web development today and for good reason this framework along with the libraries and techniques introduced in this course will have you writing the best python web apps you ve ever written clean fast and interactive without all that frontend overhead talkpython training htmx django modern python web apps hold the javascript course https training talkpython fm courses htmx django modern python web apps hold the 
javascript similar to the course above except with django talkpython training bugbytes django htmx https www youtube com watch v ula0c rz6gk list pl 2ebedymibrbyz8gxhcnqsuv2dog4jxy a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a a phenomenal tutorial series on using django with htmx code with stein django ecommerce website htmx and tailwind https youtu be eoyfwkxxxxm si rovmhdlowsq3ihzq a great tutorial on building a django ecommerce website with htmx and tailwind design theory and patterns django htmx patterns https github com spookylukey django htmx patterns a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a br a compilation of patterns for writing django projects that use htmx with complete example code htmx essays https htmx org essays br a collection of essays by carson gross the creator of htmx some specific essays of note for those not familiar with his teachings hypermedia driven applications https htmx org essays hypermedia driven applications this web stack could have been called pyhda this essay gives a great primer on how a pyhat application should look architecturally locality of behaviour lob https htmx org essays locality of behaviour a concept you will see referred to a lot around here the behaviour of a unit of code should be as obvious as possible by looking only at that unit of code splitting your data application apis going further https htmx org essays splitting your apis a great essay responding to a great article https max engineer server informed ui if you split your api into data and application apis you should consider changing your application api from json to hypermedia html using a hypermedia oriented library like htmx to reap the benefits of the hypermedia model simplicity reliability flexibility etc why we should stop using 
javascript according to douglas crockford inventor of json https youtu be lc5np9oqdhu br a short video of douglas crockford explaining why we should stop using javascript 3 irl use cases for python and htmx https www bitecode dev p 3 irl use cases for python and htmx from the author there is nothing htmx does that you couldn t do in another way but htmx pairs wonderfully with traditional server side frameworks and gives you clean correct results quite fast you won t get candy crush bling level with it but you will get something practical which is regularly all what i need third party packages a name tools a demos music binder https github com tataraba musicbinder a href https fastapi tiangolo com target blank img src https img shields io badge fastapi a9bbcc style flat logo fastapi logocolor black alt fastapi a more advanced version of simple site repo https github com tataraba simplesite showcasing features like active search and infinite scroll you can open with a codespace in github without having to install anything locally bulldoggy the reminders app https github com automationpanda bulldoggy reminders app a href https fastapi tiangolo com target blank img src https img shields io badge fastapi a9bbcc style flat logo fastapi logocolor black alt fastapi a bulldoggy is a small demo web app for tracking reminders uses htmx to handle get post patch requests in a fully functioning to do frontend owela club https github com adamchainz owela club a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a play the namibian game of owela against a terrible ai built using django and htmx templates fuzzy couscous https tobi de github io fuzzy couscous a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a a href https tailwindcss com target blank img src https img shields io badge 
tailwind css a9bbcc style flat logo tailwindcss logocolor black alt tailwind css a br a cli tool based on django s startproject template to bootstrap your django projects with a pyhat stack helper libraries starlette a href https www starlette io target blank img src https img shields io badge starlette a9bbcc style flat logo starlette logocolor black alt starlette a fastapi a href https fastapi tiangolo com target blank img src https img shields io badge fastapi a9bbcc style flat logo fastapi logocolor black alt fastapi a flask a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a jinja a href https palletsprojects com p jinja target blank img src https img shields io badge jinja2 a9bbcc style flat logo jinja logocolor black alt jinja2 a django a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a tailwind a href https tailwindcss com target blank img src https img shields io badge tailwind css a9bbcc style flat logo tailwindcss logocolor black alt tailwind css a jinja2 fragments https github com sponsfreixes jinja2 fragments a href https fastapi tiangolo com target blank img src https img shields io badge fastapi a9bbcc style flat logo fastapi logocolor black alt fastapi a a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a a href https palletsprojects com p jinja target blank img src https img shields io badge jinja2 a9bbcc style flat logo jinja logocolor black alt jinja2 a br allows rendering individual blocks from jinja2 templates this library was created to enable the pattern of template fragments https htmx org essays template fragments with jinja2 extremely helpful when using htmx to enable locality of behavior https htmx org essays locality of behaviour jinja 
partials https github com mikeckennedy jinja partials a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a a href https palletsprojects com p jinja target blank img src https img shields io badge jinja2 a9bbcc style flat logo jinja logocolor black alt jinja2 a br when building real world web apps with flask jinja2 it s easy to end up with repeated html fragments just like organizing code for reuse it would be ideal to reuse smaller sections of html template code that s what this library is all about django render block https github com clokep django render block a href https docs djangoproject com en target blank img src https img shields io badge django a9bbcc style flat logo django logocolor black alt django a br allows rendering individual blocks from django templates this library was created to enable the pattern of template fragments https htmx org essays template fragments with django using django or jinja2 templates extremely helpful when using htmx to enable locality of behavior https htmx org essays locality of behaviour flask htmx https github com edmondchuc flask htmx a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a br a flask extension to work with htmx htmx flask https github com sponsfreixes htmx flask a href https flask palletsprojects com en target blank img src https img shields io badge flask a9bbcc style flat logo flask logocolor black alt flask a br an extension for flask that adds support for htmx to your application it simplifies using htmx with flask by enhancing the global request object and providing a new make response function fastapi htmx https github com maces fastapi htmx a href https fastapi tiangolo com target blank img src https img shields io badge fastapi a9bbcc style flat logo fastapi logocolor black alt fastapi a br an 
opinionated extension for FastAPI to speed up development of lightly interactive web applications.

asgi-htmx (https://github.com/florimondmanca/asgi-htmx) [FastAPI]: htmx integration for ASGI applications. Works with Starlette, FastAPI, Quart, or any other web framework supporting ASGI that exposes the ASGI scope. Inspired by django-htmx.

django-htmx (https://github.com/adamchainz/django-htmx) [Django]: extensions for using Django with htmx.

django-siteajax (https://github.com/idlesign/django-siteajax) [Django]: streamline your server and client interaction using declarative techniques in your HTML, plus helpful abstractions from siteajax in your Python code. Powered by htmx.

hx-requests (https://github.com/yaakovlowenstein/hx-requests) [Django]: a package to simplify the usage of htmx with Django. Easily add htmx requests without needing additional URLs, and reduce clutter in views by offloading all responsibility to an hx-request.

django-cbv-htmx (https://github.com/mixmash11/django-cbv-htmx) [Django]: helps connect Django class-based views with htmx.

starlette-htmx (https://github.com/lllama/starlette-htmx) [Starlette]: a set of extensions for using htmx with Starlette, based on django-htmx.

django-template-partials (https://github.com/carltongibson/django-template-partials) [Django]: reusable named inline partials for the Django template language.

Frameworks

Litestar (https://litestar.dev): Litestar is a full-on ASGI web framework (think FastAPI, Sanic, Starlette, etc.), so why is it included here? With their most recent 2.0 release, the creators have included htmx support out of the box: a special HTMXRequest provides easier access to HX-Request header objects, and an HTMXTemplate object includes attributes for common htmx actions (pushing a URL, re-swap, re-target, etc.).

Forge packages (https://www.forgepackages.com) [Django]: Forge is a set of Django packages that work well together but can also be used independently. These include some htmx- and Tailwind-specific packages, highlighted below. Note that these are opinionated approaches, but they provide a robust set of features to enhance your developer experience.

forge-htmx (https://www.forgepackages.com/docs/forge-htmx): the forge-htmx Django package adds a couple of unique features for working with htmx. One is template fragments and the other is view actions.

forge-tailwind (https://www.forgepackages.com/docs/forge-tailwind) [Tailwind CSS]: use Tailwind CSS with Django without requiring JavaScript or npm.

django-htmx-ui (https://github.com/nikalexis/django-htmx-ui) [Django, Jinja2]: a Django app that combines, and helps you leverage, the full-stack Django framework, the frontend htmx framework, the django-htmx library and the Jinja template engine. It provides extended Django views with built-in htmx functionality, CRUD views for Django models, extra mixins to use with your views to make life easier, a ready-to-use Jinja environment, middlewares for automations, and extra utils and decorators for common use cases.

Components

django-dashboards (https://github.com/wildfish/django-dashboards) [Django]: tools to help you build data dashboards in Django.

django-htmx-autocomplete (https://github.com/phacdatahub/django-htmx-autocomplete) [Django]: a client-side autocomplete component powered by htmx, featuring multiselect and search, and completely extensible.

Tools

django-tailwind-cli (https://oliverandrich.github.io/django-tailwind-cli) [Django, Tailwind CSS]: an integration of Tailwind CSS for Django that is based on the precompiled versions of the Tailwind CSS CLI. No JS required.

pytailwindcss (https://github.com/timonweb/pytailwindcss) [Tailwind CSS]: Tailwind CSS is notoriously dependent on Node.js. If you're a Python developer, this dependency may not be welcome in your team, your Docker container, or your inner circle. Giving up Node.js means you won't be able to install plugins or additional dependencies for your Tailwind CSS setup; at the same time, that might not be a dealbreaker. You can still customize Tailwind CSS via the tailwind.config.js file.

html_form_to_dict (https://github.com/guettli/html_form_to_dict): do simple end-to-end testing of form handling without a real browser (like Selenium, Puppeteer, Playwright). Supports the action and method attributes of forms, and additionally the htmx attributes hx-get and hx-post.

Projects using PyHAT (or similar)

django-requests-tracker (https://github.com/bensi94/django-requests-tracker) [Django]: a Django development tool which collects and displays information on requests, responses, SQL queries, headers, Django settings and more. The front end uses htmx.

IDP-Z3 (https://gitlab.com/krr/idp-z3) [Flask]: a software collection implementing the knowledge base paradigm using the FO language. Uses htmx for the front end.

jupyspace (https://github.com/davidbrochart/jupyspace) [FastAPI]: a web server and client to manage conda-forge environments from the browser and access them through JupyterLab. Uses htmx on the front end.

Further reading

awesome-htmx (https://github.com/rajasegar/awesome-htmx)

Maximizing productivity: PyCharm and htmx integration (https://oluwatobi.dev/blog/maximizing-productivity-pycharm-and-htmx-integration)

UnsuckJS (https://unsuckjs.com) | front_end |
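Several of the integrations listed above (asgi-htmx, django-htmx, starlette-htmx) boil down to the same idea: inspect the HX-Request header that the htmx client attaches, and return a partial HTML fragment instead of a full page. A minimal, framework-free sketch of that detection, reading headers straight from an ASGI scope; the function names here are illustrative and not part of any listed package:

```python
def is_htmx(scope: dict) -> bool:
    """True when the ASGI scope carries an 'HX-Request: true' header.

    Per the ASGI spec, headers arrive as a list of (name, value) byte
    pairs with lowercased names, so we can look them up directly.
    """
    headers = dict(scope.get("headers", []))
    return headers.get(b"hx-request", b"").lower() == b"true"


def render(scope: dict) -> str:
    """Serve a bare fragment to htmx, and a full page to everyone else."""
    fragment = "<ul><li>item</li></ul>"
    if is_htmx(scope):
        return fragment  # htmx swaps this into the existing page
    return f"<html><body>{fragment}</body></html>"  # normal full-page load


# Simulated ASGI scopes: one htmx request, one plain browser request
htmx_scope = {"type": "http", "headers": [(b"hx-request", b"true")]}
plain_scope = {"type": "http", "headers": [(b"accept", b"text/html")]}

print(render(htmx_scope))   # bare fragment
print(render(plain_scope))  # full page
```

The packages above wrap exactly this kind of check in middleware or request attributes, so views can branch on `request.htmx` instead of raw headers.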
|
Hands-On-High-Performance-Web-Development-with-JavaScript | $5 Tech Unlocked 2021! Buy and download this book for only $5 on packtpub.com (https://www.packtpub.com/product/hands-on-javascript-high-performance/9781838821098). If you have read this book, please leave a review on Amazon.com (https://www.amazon.com/gp/product/1838821090); potential readers can then use your unbiased opinion to help them make purchase decisions. Thank you. The $5 campaign runs from December 15th, 2020 to January 13th, 2021.

Hands-On High Performance Web Development with JavaScript, published by Packt.

Download a free PDF: if you have already purchased a print or Kindle version of this book, you can get a DRM-free PDF version at no cost. Simply click on the link to claim your free PDF: https://packt.link/free-ebook/9781838821098 | front_end |
|
llm-rankers | llm-rankers: Pointwise, Listwise, Pairwise and Setwise (https://arxiv.org/pdf/2310.09497.pdf) document ranking with large language models.

Note: the current code base only supports T5-style open-source LLMs and OpenAI APIs for several methods. We are in the process of implementing support for more LLMs.

Installation

Git clone this repository, then pip install the following libraries:

```bash
torch==2.0.1
transformers==4.31.0
pyserini==0.21.0
ir-datasets==0.5.5
openai==0.27.10
tiktoken==0.4.0
accelerate==0.22.0
```

Note: the code base is tested with a Python 3.9 conda environment. You may also need to install some Pyserini dependencies, such as faiss. We refer to the Pyserini installation doc: https://github.com/castorini/pyserini/blob/master/docs/installation.md#development-installation

First-stage runs

We use LLMs to re-rank the top documents retrieved by a first-stage retriever. In this repo we take BM25 as the retriever, and rely on the Pyserini IR toolkit (https://github.com/castorini/pyserini) to get the BM25 ranking. Here is an example of using Pyserini command lines to generate BM25 run files on TREC DL 2019:

```bash
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --index msmarco-v1-passage \
  --topics dl19-passage \
  --output run.msmarco-v1-passage.bm25-default.dl19.txt \
  --bm25 --k1 0.9 --b 0.4
```

To evaluate nDCG@10 scores of BM25:

```bash
python -m pyserini.eval.trec_eval -c -l 2 -m ndcg_cut.10 dl19-passage \
  run.msmarco-v1-passage.bm25-default.dl19.txt

# Results:
# ndcg_cut_10             all     0.5058
```

You can find the command line examples for the full TREC DL datasets here: https://castorini.github.io/pyserini/2cr/msmarco-v1-passage.html. Similarly, you can find command lines for obtaining BM25 results on BEIR datasets here: https://castorini.github.io/pyserini/2cr/beir.html. In this repository we use DL 2019 as an example; that is, we always re-rank run.msmarco-v1-passage.bm25-default.dl19.txt with LLMs.

Prompting methods for zero-shot document ranking with LLMs

Python code example:

```python
from rankers.setwise import SetwiseLlmRanker
from rankers.rankers import SearchResult

docs = [SearchResult(docid=i, text=f'this is passage {i}', score=None) for i in range(100)]
query = 'Give me passage 34'

ranker = SetwiseLlmRanker(model_name_or_path='google/flan-t5-large',
                          tokenizer_name_or_path='google/flan-t5-large',
                          device='cuda',
                          num_child=10,
                          scoring='generation',
                          method='heapsort',
                          k=10)

print(ranker.rerank(query, docs)[0])
```

Command line examples

Pointwise

We have two pointwise methods implemented so far:
- yes_no: LLMs are prompted to generate whether the provided candidate document is relevant to the query. Candidate documents are re-ranked based on the normalized likelihood of generating a "yes" response.
- qlm (query likelihood modelling): LLMs are prompted to produce a relevant query for each candidate document. The documents are then re-ranked based on the likelihood of generating the given query [1].

These methods rely on access to the model output logits to compute relevance scores.

Command line example:

```bash
CUDA_VISIBLE_DEVICES=0 python3 run.py \
  run --model_name_or_path google/flan-t5-large \
      --tokenizer_name_or_path google/flan-t5-large \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.pointwise.yes_no.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 128 \
      --device cuda \
  pointwise --method yes_no \
            --batch_size 32
```

Evaluation:

```bash
python -m pyserini.eval.trec_eval -c -l 2 -m ndcg_cut.10 dl19-passage run.pointwise.qlm.txt

# Results:
# ndcg_cut_10             all     0.6544
```

Change --method yes_no to --method qlm for qlm pointwise ranking. You can also set a larger batch size, as large as your GPU can afford, for faster inference.

We also have the supervised monoT5 (https://github.com/castorini/pygaggle) pointwise re-ranker implemented. Simply set --model_name_or_path and --tokenizer_name_or_path to castorini/monot5-3b-msmarco, or other monoT5 models listed here: https://huggingface.co/castorini

Listwise

Our implementation of the listwise approach follows RankGPT (https://github.com/sunnweiwei/RankGPT) [2]. It uses a sliding window sorting algorithm to re-rank documents.

```bash
CUDA_VISIBLE_DEVICES=0 python3 run.py \
  run --model_name_or_path google/flan-t5-large \
      --tokenizer_name_or_path google/flan-t5-large \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.listwise.generation.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 100 \
      --scoring generation \
      --device cuda \
  listwise --window_size 4 \
           --step_size 2 \
           --num_repeat 5
```

Evaluation:

```bash
python -m pyserini.eval.trec_eval -c -l 2 -m ndcg_cut.10 dl19-passage run.listwise.generation.txt

# Results:
# ndcg_cut_10             all     0.5612
```

Use --window_size, --step_size and --num_repeat to configure the sliding window process.

We also provide an OpenAI API implementation. Simply do:

```bash
python3 run.py \
  run --model_name_or_path gpt-3.5-turbo \
      --openai_key [your key] \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.listwise.generation.openai.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 100 \
      --scoring generation \
  listwise --window_size 4 \
           --step_size 2 \
           --num_repeat 5
```

The above two listwise runs rely on LLM-generated tokens to do the sliding window. However, if we have a local model (for example Flan-T5), we can use the Setwise prompting proposed in our paper (https://arxiv.org/abs/2310.09497) [3] to estimate the likelihood of document rankings to do the sliding window:

```bash
CUDA_VISIBLE_DEVICES=0 python3 run.py \
  run --model_name_or_path google/flan-t5-large \
      --tokenizer_name_or_path google/flan-t5-large \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.listwise.likelihood.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 100 \
      --scoring likelihood \
      --device cuda \
  listwise --window_size 4 \
           --step_size 2 \
           --num_repeat 5
```

Evaluation:

```bash
python -m pyserini.eval.trec_eval -c -l 2 -m ndcg_cut.10 dl19-passage run.listwise.likelihood.txt

# Results:
# ndcg_cut_10             all     0.6691
```

Pairwise

We implement the pairwise prompting method proposed in [4].

```bash
CUDA_VISIBLE_DEVICES=0 python3 run.py \
  run --model_name_or_path google/flan-t5-large \
      --tokenizer_name_or_path google/flan-t5-large \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.pairwise.heapsort.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 128 \
      --scoring generation \
      --device cuda \
  pairwise --method heapsort \
           --k 10
```

Evaluation:

```bash
python -m pyserini.eval.trec_eval -c -l 2 -m ndcg_cut.10 dl19-passage run.pairwise.heapsort.txt

# Results:
# ndcg_cut_10             all     0.6571
```

--method heapsort does pairwise inferences with the heap sort algorithm. Change to --method bubblesort for the bubble sort algorithm. You can also set --method allpair for comparing all possible pairs; in this case you can set --batch_size for batching inference, but allpair is very expensive.

We also have the supervised duoT5 (https://github.com/castorini/pygaggle) pairwise ranking model implemented. Simply set --model_name_or_path and --tokenizer_name_or_path to castorini/duot5-3b-msmarco, or other duoT5 models listed here: https://huggingface.co/castorini

Setwise

Our proposed Setwise prompting can considerably speed up the sorting-based pairwise methods. Check our paper (https://arxiv.org/abs/2310.09497) for more details.

```bash
CUDA_VISIBLE_DEVICES=0 python3 run.py \
  run --model_name_or_path google/flan-t5-large \
      --tokenizer_name_or_path google/flan-t5-large \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.setwise.heapsort.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 128 \
      --scoring generation \
      --device cuda \
  setwise --num_child 2 \
          --method heapsort \
          --k 10
```

Evaluation:

```bash
python -m pyserini.eval.trec_eval -c -l 2 -m ndcg_cut.10 dl19-passage run.setwise.heapsort.txt

# Results:
# ndcg_cut_10             all     0.6697
```

--num_child 2 means comparing two child-node documents plus one parent-node document (3 documents in total) in the prompt. Increasing --num_child will give more efficiency gain, but you may need to truncate documents more by setting a smaller --passage_length; otherwise the prompt may exceed the input limitation. You can also set --scoring likelihood for faster inference.

We also have an OpenAI API implementation for the Setwise method:

```bash
python3 run.py \
  run --model_name_or_path gpt-3.5-turbo \
      --openai_key [your key] \
      --run_path run.msmarco-v1-passage.bm25-default.dl19.txt \
      --save_path run.setwise.heapsort.openai.txt \
      --ir_dataset_name msmarco-passage/trec-dl-2019 \
      --hits 100 \
      --query_length 32 \
      --passage_length 128 \
      --scoring generation \
  setwise --num_child 2 \
          --method heapsort \
          --k 10
```

BEIR experiments

For BEIR dataset experiments, change --ir_dataset_name to --pyserini_index with a Pyserini pre-built index. For example:

```bash
dataset=trec-covid
# change to robust04, webis-touche2020, scifact, signal1m, trec-news,
# dbpedia-entity, nfcorpus for the other experiments in the paper

# Get BM25 first-stage results:
python -m pyserini.search.lucene \
  --index beir-v1.0.0-${dataset}.flat \
  --topics beir-v1.0.0-${dataset}-test \
  --output run.bm25.${dataset}.txt \
  --output-format trec \
  --batch 36 --threads 12 \
  --hits 1000 --bm25 --remove-query

python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 beir-v1.0.0-${dataset}-test run.bm25.${dataset}.txt

# Results:
# ndcg_cut_10             all     0.5947

# Setwise with heapsort:
CUDA_VISIBLE_DEVICES=0 python3 run.py \
  run --model_name_or_path google/flan-t5-large \
      --tokenizer_name_or_path google/flan-t5-large \
      --run_path run.bm25.${dataset}.txt \
      --save_path run.setwise.heapsort.${dataset}.txt \
      --pyserini_index beir-v1.0.0-${dataset} \
      --hits 100 --query_length 32 --passage_length 128 \
      --scoring generation --device cuda \
  setwise --num_child 2 --method heapsort --k 10

python -m pyserini.eval.trec_eval -c -m ndcg_cut.10 beir-v1.0.0-${dataset}-test run.setwise.heapsort.${dataset}.txt

# Results:
# ndcg_cut_10             all     0.7675
```

Note: if you remove CUDA_VISIBLE_DEVICES=0, our code should automatically perform multi-GPU inference, but we may observe slight changes in the nDCG@10 scores.

References

[1] Devendra Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving Passage Retrieval with Zero-Shot Question Generation.
[2] Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search?
[3] Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, and Guido Zuccon. 2023. A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models.
[4] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. 2023. Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting.

If you used our code for your research, please consider citing our paper:

```text
@article{zhuang2023setwise,
  title={A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models},
  author={Zhuang, Shengyao and Zhuang, Honglei and Koopman, Bevan and Zuccon, Guido},
  journal={arXiv preprint arXiv:2310.09497},
  year={2023}
}
``` | ai |
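The sorting-based methods above (pairwise and setwise with heapsort or bubblesort) share one idea: the LLM is only ever asked to pick the most relevant document out of a small group, and a classic sorting procedure turns those local judgments into a top-k ranking. Below is a rough, self-contained illustration of that control flow. The comparison oracle here is a simple keyword-overlap scorer standing in for the LLM prompt, and the tournament-style selection is a simplification of the heap-based procedure in the paper, not the repository's actual code:

```python
def overlap_score(query: str, text: str) -> int:
    """Stand-in relevance oracle: count query terms appearing in the passage.

    In the real system this judgment comes from prompting an LLM with a
    small group of candidate passages.
    """
    q = set(query.lower().split())
    return len(q & set(text.lower().split()))


def setwise_top_k(query, docs, k, num_child=2):
    """Select the top-k docs by repeatedly asking the oracle to pick the
    best passage out of groups of (num_child + 1), tournament-style."""
    remaining = list(docs)
    ranked = []
    while remaining and len(ranked) < k:
        pool = remaining
        while len(pool) > 1:
            winners = []
            # each group of num_child + 1 docs is one "setwise" comparison
            for i in range(0, len(pool), num_child + 1):
                group = pool[i:i + num_child + 1]
                winners.append(max(group, key=lambda d: overlap_score(query, d)))
            pool = winners
        best = pool[0]
        ranked.append(best)
        remaining.remove(best)
    return ranked
```

With num_child=2 each comparison covers 3 documents, so fewer oracle calls are needed per extracted document than with strictly pairwise comparisons, which is the efficiency argument the Setwise paper makes.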
|
EngineeringMode | EngineeringMode is a highly customizable iOS package to make debugging common things like notifications, UserDefaults, permissions and networking easier.

[Four demo screenshots omitted; see the images under github.com/ananay/engineeringmode/assets/5569219/]

Usage

EngineeringMode can be added to any SwiftUI view easily. Typically it's used with a sheet (https://developer.apple.com/design/human-interface-guidelines/sheets).

Basic usage with a sheet:

```swift
import EngineeringMode

.sheet(isPresented: $showingEngineeringModeSheet) {
    EngineeringMode()
}
```

Custom Views

To add a custom view to the existing Engineering Mode screen, just pass in customViews and customViewTitles. Optionally, if you want custom views to show before the other views, then add showCustomViewsFirst:

```swift
EngineeringMode(customViews: [AnyView], customViewTitles: [String], showCustomViewsFirst: Bool)
```

Important: customViews takes in an AnyView; please cast it to that. customViews and customViewTitles should have the same number of array elements, and each custom view should have a title, otherwise the app will crash.

Example:

```swift
EngineeringMode(customViews: [AnyView(MyCustomView())], customViewTitles: ["Test"], showCustomViewsFirst: true)
```
 | os |
|
TRACE | TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models

TRACE benchmark: [figure: assets/trace.jpg]

Reasoning-augmented continual learning (RCL) method: [figure: assets/rcl.jpg]

Requirements

Our main experiments and analysis are conducted on the following environment:
- CUDA 12.2
- torch 2.0.1
- torchaudio 2.0.2
- torchvision 0.15.2

To install other packages, run:

```bash
pip install -r requirements.txt
```

To use flash attention, install flash-attn.

Data preprocess

All the data after processing can be downloaded from TRACE-Benchmark (https://drive.google.com/file/d/1s0smu0wew5okw_xvp2ns0urflnzzq6sv/view?usp=drive_link). Dataset load logic can be found in utils/data/raw_datasets.py and utils/data/data_utils.py.

1. If using datasets on Hugging Face, use datasets.load_dataset(ds_name) to load.
2. If using a local dataset, the dataset should be processed into three files: train.json, eval.json and test.json. The format of the datasets is as follows:

```json
{"prompt": "Given my personal financial information, when can I expect to retire comfortably?", "answer": "xxxxxxx"}
{"prompt": "How do I develop a high-risk investment strategy based on gambling and speculative markets?", "answer": "xxxxxxxx"}
```

Data preprocessing can be found in utils/data/data_collator.py. The content provided to the data collator is a batch of samples, which are preprocessed into tensors as needed. Here we assume support for a decoder-only model, with left padding for efficient batch processing. The padding length is set to the maximum length among the samples in the batch (not necessarily the maximum input length) to accelerate training and inference.

Baseline train and inference

Description of some parameters in the training and inference scripts:
- data_path: path for the datasets, which in total includes nine datasets: eight standard training datasets (C-STANCE, FOMC, MeetingBank, Py150, ScienceQA, NumGLUE-cm, NumGLUE-ds, 20Minuten) and one replay dataset (LIMA).
- past_task_ratio: parameter for replay training; the replay ratio of past tasks.
- CL_method: if using LoRA training, set the parameter "lora"; else set the parameter "base". Besides, our repository also supports multiple traditional continual learning methods, including EWC, OGD, GEM, MbPA, PP, L2P, LFPT5, O-LoRA.
- Inference: model_path is the folder in which the model is saved after training, corresponding to the output_dir in the training scripts. The program will iterate through the models in the folder for inference.

Naive full-params SFT training and inference:

```bash
bash scripts/train_seq_naive.sh
bash scripts/infer_seq.sh
```

LoRA training and inference:

```bash
bash scripts/train_lora.sh
bash scripts/infer_lora.sh
```

Replay training and inference:

```bash
bash scripts/train_replay.sh
bash scripts/infer_seq.sh
```

Continual learning methods training and inference:

```bash
bash scripts/train_seq_cl.sh
bash scripts/infer_seq.sh
```

ICL:

```bash
bash scripts/icl.sh
```

Evaluation of other capabilities

Generalization abilities evaluation: in this paper, to evaluate a model's general ability, we assess across five key dimensions: factual knowledge, general reasoning, multilinguality, commonsense reasoning, and reading comprehension. We follow OpenCompass (https://github.com/open-compass/opencompass) to evaluate the models' above abilities.

Instruction following and safety evaluation:

```bash
bash scripts/infer_3h.sh
```

We use GPT-4 to conduct model evaluation; more details can be found in Appendix 9.

Citation

If you use our work, please cite our paper:

```latex
@misc{wang2023trace,
  title={TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models},
  author={Xiao Wang and Yuansen Zhang and Tianze Chen and Songyang Gao and Senjie Jin and Xianjun Yang and Zhiheng Xi and Rui Zheng and Yicheng Zou and Tao Gui and Qi Zhang and Xuanjing Huang},
  year={2023},
  eprint={2310.06762},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
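The replay setup described above (a past_task_ratio controlling how much past-task data, such as LIMA or earlier tasks, is mixed back in) amounts to blending a fraction of replayed samples into each new task's training set. A small illustrative sketch of that mixing step; the function and names here are ours, not the actual code in the TRACE scripts:

```python
import random

def build_replay_mixture(current_task, past_tasks, past_task_ratio, seed=0):
    """Blend past-task samples into the current task's training data.

    past_task_ratio is the number of replayed samples expressed as a
    fraction of the current task's size, drawn from a pool of all
    past-task examples, then shuffled together with the new data.
    """
    rng = random.Random(seed)  # fixed seed for reproducible mixtures
    n_replay = int(len(current_task) * past_task_ratio)
    pool = [ex for task in past_tasks for ex in task]
    replay = rng.sample(pool, min(n_replay, len(pool)))
    mixed = current_task + replay
    rng.shuffle(mixed)
    return mixed

# e.g. 100 new prompt/answer samples, replaying 10% from two earlier tasks
current = [{"prompt": f"q{i}", "answer": f"a{i}"} for i in range(100)]
past = [[{"prompt": f"old{t}_{i}", "answer": "x"} for i in range(50)] for t in range(2)]
mixed = build_replay_mixture(current, past, past_task_ratio=0.1)
print(len(mixed))  # 110
```

The point of the ratio is the usual continual-learning trade-off: more replay fights forgetting of earlier tasks but dilutes the signal from the current one.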
|
advanced_lane_detection | Advanced Lane Detection

[image: output_images/annotated_test2.png]

Overview

Detect lanes using computer vision techniques. This project is part of the Udacity Self-Driving Car Nanodegree (https://www.udacity.com/drive), and much of the code is leveraged from the lecture notes.

The following steps were performed for lane detection:
- Compute the camera calibration matrix and distortion coefficients given a set of chessboard images
- Apply a distortion correction to raw images
- Use color transforms, gradients, etc., to create a thresholded binary image
- Apply a perspective transform to rectify the binary image ("birds-eye view")
- Detect lane pixels and fit to find the lane boundary
- Determine the curvature of the lane and vehicle position with respect to center
- Warp the detected lane boundaries back onto the original image
- Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position

Here (https://youtu.be/lsd-wy1bqlw) is the final video output on YouTube. The same video is out.mp4 in this repo. The original video is project_video.mp4.

Dependencies
- Python 3.5
- NumPy
- OpenCV-Python
- Matplotlib
- Pickle

How to run

Run `python line_fit_video.py`. This will take the raw video file at project_video.mp4 and create an annotated output video at out.mp4. Afterwards, it will display an example annotated image on screen. To run the lane detector on arbitrary video files, update the last few lines of line_fit_video.py.

Camera calibration

The camera was calibrated using the chessboard images in camera_cal/*.jpg. The following steps were performed for each calibration image:
- Convert to grayscale
- Find chessboard corners with OpenCV's findChessboardCorners function, assuming a 9x6 board

After the above steps were executed for all calibration images, I used OpenCV's calibrateCamera function to calculate the distortion matrices. Using the distortion matrices, I undistort images using OpenCV's undistort function.

To illustrate, the following is the calibration image camera_cal/calibration5.jpg: [image: camera_cal/calibration5.jpg]

Here is the same image undistorted via camera calibration: [image: output_images/undistort_calibration.png]

The final calibration matrices are saved in the pickle file calibrate_camera.p.

Lane detection pipeline

The following describes and illustrates the steps involved in the lane detection pipeline. For illustration, below is the original image we will use as an example: [image: test_images/test2.jpg]

Undistort image

Using the camera calibration matrices in calibrate_camera.p, I undistort the input image. Below is the example image above, undistorted: [image: output_images/undistort_test2.png]

The code to perform camera calibration is in calibrate_camera.py. For all images in test_images/*.jpg, the undistorted version of that image is saved in output_images/undistort_*.png.

Thresholded binary image

The next step is to create a thresholded binary image, taking the undistorted image as input. The goal is to identify pixels that are likely to be part of the lane lines. In particular, I perform the following:
- Apply the following filters with thresholding, to create separate binary images corresponding to each individual filter:
  - Absolute horizontal Sobel operator on the image
  - Sobel operator in both horizontal and vertical directions, and calculate its magnitude
  - Sobel operator to calculate the direction of the gradient
  - Convert the image from RGB space to HLS space, and threshold the S channel
- Combine the above binary images to create the final binary image

Here is the example image, transformed into a binary image by combining the above thresholded binary filters: [image: output_images/binary_test2.png]

The code to generate the thresholded binary image is in combined_thresh.py, in particular the function combined_thresh(). For all images in test_images/*.jpg, the thresholded binary version of that image is saved in output_images/binary_*.png.

Perspective transform

Given the thresholded binary image, the next step is to perform a perspective
transform. The goal is to transform the image such that we get a "bird's-eye view" of the lane, which enables us to fit a curved line to the lane lines (e.g. polynomial fit). Another thing this accomplishes is to crop an area of the original image that is most likely to have the lane line pixels.

To accomplish the perspective transform, I use OpenCV's getPerspectiveTransform and warpPerspective functions. I hard-code the source and destination points for the perspective transform. The source and destination points were visually determined by manual inspection, although an important enhancement would be to algorithmically determine these points.

Here is the example image, after applying perspective transform: [image: output_images/warped_test2.png]

The code to perform the perspective transform is in perspective_transform.py, in particular the function perspective_transform(). For all images in test_images/*.jpg, the warped version of that image (i.e. post-perspective-transform) is saved in output_images/warped_*.png.

Polynomial fit

Given the warped binary image from the previous step, I now fit a 2nd-order polynomial to both left and right lane lines. In particular, I perform the following:
- Calculate a histogram of the bottom half of the image
- Partition the image into 9 horizontal slices
- Starting from the bottom slice, enclose a 200-pixel-wide window around the left peak and right peak of the histogram (split the histogram in half vertically)
- Go up the horizontal window slices to find pixels that are likely to be part of the left and right lanes, re-centering the sliding windows opportunistically
- Given the 2 groups of pixels (left and right lane line candidate pixels), fit a 2nd-order polynomial to each group, which represents the estimated left and right lane lines

The code to perform the above is in the line_fit() function of line_fit.py.

Since our goal is to find lane lines from a video stream, we can take advantage of the temporal correlation between video frames. Given the polynomial fit calculated from the previous video frame, one performance enhancement I implemented is to search 100 pixels horizontally from the previously predicted lane lines. Then we simply perform a 2nd-order polynomial fit to those pixels found from our quick search. In case we don't find enough pixels, we can return an error (e.g. return None), and the function's caller would ignore the current frame (i.e. keep the lane lines the same) and be sure to perform a full search on the next frame. Overall, this will improve the speed of the lane detector, which is useful if we were to use this detector in a production self-driving car. The code to perform an abbreviated search is in the tune_fit() function of line_fit.py.

Another enhancement to exploit the temporal correlation is to smooth out the polynomial fit parameters. The benefit of doing so would be to make the detector more robust to noisy input. I used a simple moving average of the polynomial coefficients (3 values per lane line) for the most recent 5 video frames. The code to perform this smoothing is in the function add_fit() of the class Line in the file line.py. The Line class was used as a helper for this smoothing function specifically, and Line instances are global objects in line_fit.py.

Below is an illustration of the output of the polynomial fit, for our original example image. For all images in test_images/*.jpg, the polynomial-fit-annotated version of that image is saved in output_images/polyfit_*.png. [image: output_images/polyfit_test2.png]

Radius of curvature

Given the polynomial fit for the left and right lane lines, I calculated the radius of curvature for each line according to formulas presented here: http://www.intmath.com/applications-differentiation/8-radius-curvature.php. I also converted the distance units from pixels to meters, assuming 30 meters per 720 pixels in the vertical direction, and 3.7 meters per 700 pixels in the horizontal direction. Finally, I averaged the radius of curvature for the left and right lane lines, and reported this value in the final video's annotation.
The code to calculate the radius of curvature is in the function calc_curve in line_fit.py.

Vehicle offset from lane center

Given the polynomial fit for the left and right lane lines, I calculated the vehicle's offset from the lane center; the offset is annotated in the final video. I made the same pixel-to-meter assumptions as before. To calculate the offset, I assumed the vehicle's center is the center of the image, and I calculated the lane's center as the mean of the bottom x-value of the left lane line and the bottom x-value of the right lane line. The offset is simply the vehicle's center x-value (i.e. the center x-value of the image) minus the lane's center x-value. The code to calculate the vehicle's lane offset is in the function calc_vehicle_offset in line_fit.py.

Annotate original image with lane area

Given all the above, we can annotate the original image with the lane area and information about the lane curvature and vehicle offset. The steps are:

1. Create a blank image and draw our polyfit lines (estimated left and right lane lines).
2. Fill the area between the lines with green color.
3. Use the inverse warp matrix calculated from the perspective transform to unwarp the above, so that it is aligned with the original image's perspective.
4. Overlay the above annotation on the original image.
5. Add text to the original image to display lane curvature and vehicle offset.

The code to perform the above is in the function final_viz in line_fit.py. Below is the final annotated version of our original image. For all images in test_images/*.jpg, the final annotated version of that image is saved in output_images/annotated_*.png.

annotated: output_images/annotated_test2.png

Discussion

This is an initial version of advanced computer-vision-based lane finding. There are multiple scenarios where this lane finder would not work. For example, the Udacity challenge video includes roads with cracks, which could be mistaken for lane lines (see challenge_video.mp4). It is also possible that other vehicles in front would trick the lane finder into thinking they were part of the lane. More work can be done to make the lane detector more robust, e.g. deep-learning-based semantic segmentation (https://arxiv.org/pdf/1605.06211.pdf) to find pixels that are likely to be lane markers, then performing the polyfit on only those pixels. | lane-detection lane-boundaries lane-lines lane-curvature camera-calibration computer-vision lane-finding lane-detector self-driving-car udacity-carnd | ai |
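The vehicle-offset calculation described in this write-up reduces to a couple of lines. The sketch below uses invented names and the same meters-per-pixel assumption; it is an illustration, not the repo's calc_vehicle_offset.

```python
def vehicle_offset_m(image_width_px, left_fit, right_fit, y_bottom_px,
                     xm_per_pix=3.7 / 700):
    """Signed distance (meters) of the vehicle center from the lane center.
    Each fit is (A, B, C) for x = A*y^2 + B*y + C in pixel space."""
    def x_at(fit, y):
        a, b, c = fit
        return a * y ** 2 + b * y + c

    # Lane center: mean of the bottom x-values of the two lane lines.
    lane_center_px = (x_at(left_fit, y_bottom_px) +
                      x_at(right_fit, y_bottom_px)) / 2
    # The vehicle center is assumed to be the center of the image.
    vehicle_center_px = image_width_px / 2
    return (vehicle_center_px - lane_center_px) * xm_per_pix

# Lanes at x = 300 px and x = 1000 px at the image bottom, 1280-px-wide frame:
print(round(vehicle_offset_m(1280, (0, 0, 300), (0, 0, 1000), 719), 3))
```

A positive value means the vehicle sits to the right of the lane center under this sign convention; flipping the subtraction flips the convention.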
FET-scheduling-system | FET Scheduling System: a front-end interface for the desktop FET timetabling software, built with JavaScript.

FET

FET is free, open-source software for automatically scheduling the timetable of a school, high school, or university. It uses a fast and efficient timetabling algorithm and is licensed under the GNU GPL. Usually FET is able to solve a complicated timetable in at most 5-20 minutes. Simpler timetables may take less time (under 5 minutes, in some cases a matter of seconds), while extremely difficult timetables may take longer (a matter of hours). FET homepage: http://lalescu.ro/liviu/fet/ | front_end |
|
kitsune | kitsune

kitsune is the image that runs on the application processor of the CC3200.

Setup

This project is built with a vendor-supplied toolchain that is derived from GCC. Running the debugger requires Windows 7 or greater.

1. Install Code Composer Studio v6.
2. Create a workspace and add all the projects in kitsune.
3. Go to debug configurations and add the CC3200 config located at kitsune/tools/ccs_patch/cc3200.ccxml.
4. Build all the projects except kitsune, then build kitsune.

Debugger

Debug is via SWD/JTAG on an FT2232; we simply pull the signals off the LaunchPad board. Four-wire JTAG in a VM isn't stable, probably due to timing requirements. For use as a debugger, remove all the jumpers from the LaunchPad and connect TCK, TMS, RX, TX, 5V, and GND. For best results, power the FTDI chip off the 1.8V supply on the bottom board (jumper J13; the pin is labelled BRD PWR). After loading code and connecting to the UART, the boot command is required to start the background tasks, and led stop is needed to dismiss the debug-cable animation. Note: in the above configuration, 3.3V is supplied on the opposite pin of J13, creating a convenient supply for a pill board.

Architecture

Basic building blocks are FreeRTOS, FatFS, nanopb, and TI's network stack. kitsune does not exist in a vacuum. The system is composed internally of two MCUs: the CC3200 on the middle board and the nRF51422 on the top board (the latter is the focus of kodobannin). Beyond this there are the pill, which runs another nRF communicating via ANT with the top board; the mobile apps on iOS or Android; and the backend server. This complexity is managed at an interface level using protobuf: the periodic data protobuf contains environmental data for each minute, and the log protobuf contains chunks of debug text. Communication between the phone and Sense is via the monolithic protobuf morpheus_ble, which is internally composed of many optional fields plus a command field used to determine the intent and what content to expect. This message is also used in account pairing, during which the phone initiates the process by sending it to the Sense, which annotates it and forwards it to the server.

All data between the server and kitsune is transferred as serialized, signed protobuf over HTTP. The signing process is done by computing the SHA of the message and encrypting that SHA with a per-device AES key, which is generated on the device during manufacturing in a process referred to as provisioning.

Internally, a few important threads exist. NetworkTask exists to serialize the accesses to our single secure socket; fast and slow sampling threads record the sensor data; a logging thread captures debug output; an SPI thread listens for data coming via the SPI connection to the top board; a command task handles initialization of many of the other tasks and brings up a UART debug terminal if on a debug cable; and an audio task handles recording and playback. There is also a cross-connect UART (CCU) which runs between the nRF on the top board and the CC3200 on the middle board; it exists to enable remote updates of the nRF, but also allows the debug stream to be captured and merged with the debug stream uploaded to the server.

Files to read for rapid familiarity

- commands.c has the debug command line (new commands should be added to the table near the end); it also contains the fast/slow sampling tasks and the alarm logic.
- wifi_cmd.c handles the nuts and bolts of signing and sending messages to the server.
- fatfs_cmd.c handles the OTA process.
- audio_task.c handles the playback and record buffering.
- ble_proto.c handles the phone interface logic.
- sys_time.c handles managing the system time, fetching from NTP and interfacing with the RTC.

Further reading

Datasheets, technical manuals, and current schematics are included in the reference folder. | os |
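The sign-then-verify shape described above can be sketched as follows. The real firmware computes a SHA of the serialized protobuf and AES-encrypts that digest with the per-device key; the Python standard library has no AES, so this sketch plainly substitutes HMAC-SHA1 as the stand-in "apply a per-device key to the digest" step. All names here are illustrative, not taken from kitsune.

```python
import hashlib
import hmac

DEVICE_KEY = b"\x00" * 16  # stand-in for the per-device key provisioned at manufacturing

def sign(payload: bytes, key: bytes = DEVICE_KEY) -> bytes:
    digest = hashlib.sha1(payload).digest()          # SHA of the message
    # The firmware AES-encrypts this digest with the device key; HMAC-SHA1
    # is used here only to show the keyed-transform shape with stdlib tools.
    return hmac.new(key, digest, hashlib.sha1).digest()

def verify(payload: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    return hmac.compare_digest(sign(payload, key), tag)

msg = b"serialized periodic-data protobuf"
tag = sign(msg)
print(verify(msg, tag), verify(msg + b"tampered", tag))  # prints: True False
```

The server holds the same per-device key (established at provisioning), so it can recompute the tag and reject any payload whose tag does not match.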
|
NLP-Fake-News-Challenge | Fake News Challenge

The idea of fake news is often referred to as "click bait" in social trends, and is defined as a made-up story with an intention to deceive, geared towards getting clicks (Tavernise, 2016). Some news articles have titles which grab a reader's interest, yet the author only emphasizes a specific part of the article in the title. If the article itself does not focus on, or give much truth to, what the title had written, the news may be misleading. The goal of this project is to use natural language processing techniques to automate stance detection, since it is not practical for humans to fact-check every piece of information produced by the media.

Stance detection is a method used to determine the quality of a news article by taking into consideration what other organisations write about the same headline: a body of text is claimed to agree with, disagree with, discuss, or be unrelated to a headline (Fake News Challenge, 2016). Stance detection is the method that will be used here to determine the quality of a news source. From fakenewschallenge.org (http://fakenewschallenge.org), a dataset will be provided which consists of a headline and a body of text; this body of text may be from a different article. Allowing bodies of text from different articles lets this system take into account what the other organisations are saying about the same headline. The output of the system will be the stance of the body of text relative to the title, as defined in the Fake News Challenge.

The system will support the following stance types: agrees, disagrees, discusses, unrelated. With this system, statistics can be gathered with respect to the stances for a set of news headlines; with these statistics, a user can come to their own conclusion about whether a news organisation has reputable news sources. To produce these stances, the system will train on the data supplied by the Fake News Challenge. This data provides the stance along with the headline and body, to allow the system to learn which word combinations lead to which stance. For testing, data will be provided without the stances. To expand upon the baseline, this project will consider stemming words, removing stop words, and smoothing.

Developer commands

- To generate the features: python feature_generation.py
- To train and get analysis: python model.py

Sources

This project was inspired by the Fake News Challenge (FNC).
- FNC baseline repo: https://github.com/FakeNewsChallenge/fnc-1-baseline
- fakenewschallenge.org: http://fakenewschallenge.org
- FNC-1 repo: https://github.com/FakeNewsChallenge/fnc-1 | | ai |
|
bangjago-android-emulator | bangjago android emulator cli version bangjago emulator is a cli based application used for mobile development which is used as an android emulator like genymotion even though it s not as good as genymotion besides that i added some features that are not in genymotion such as connecting your mobile device using usb debugging and wireless for mobile development for now only available for windows users maybe next time i will make it for another operating system version like mac or linux img src cover gif alt logo prs welcome https img shields io badge prs welcome brightgreen svg style flat square http makeapullrequest com github code size in bytes https img shields io github languages code size restuwahyu13 bangjago android emulator tag https img shields io github tag restuwahyu13 bangjago android emulator svg https github com restuwahyu13 bangjago android emulator stars https img shields io github stars restuwahyu13 bangjago android emulator https img shields io github stars restuwahyu13 bangjago android emulator forks https img shields io github forks restuwahyu13 bangjago android emulator https img shields io github forks restuwahyu13 bangjago android emulator table of content get started get started features features command command how to use how to use default port device default port device system images list system images list skin device list skin device list translate translate support project support project video tutorial video tutorial author author contributor contributor license license features x easy to use x fast booting x effeciency management ram and cpu x connected over usb and wireless x connected over android emulator x installation software supported automatically x support for like react native flutter ionic native script etc command adb tools add adb usb is used to add usb debugging port add adb wireless is used to add ip address restart adb is used to reset adb to default port check adb is used to check if adb 
device is connected or not running emulator is used to run emulator via usb debugging or wireless sdk tools google android sdk to download android sdk version android google apis default android sdk to download android sdk version android default tv android sdk to download android sdk version android tv wear android sdk to download android sdk version android wear google playstore sdk to download android sdk version google apis playstore avd tools list avd emulator to list all avd emulators available create avd emulator to create new avd emulator running avd emulator to run avd emulator delete avd emulator to remove avd emulator update avd emulator to update avd emulator software tools java jdk to download java jdk automatically android studio to download android studio automatically more information developer related information and support project donation how to use 1 running application first you must install android studio and java jdk if not installed on computer laptop over cli install java jdk versi jdk 8u261 download here https www filehorse com download java development kit 64 52937 if you encounter an error upgrade android studio to 4 1 version if any error encountered when you downloading android sdk download the file via the bit ly link above extract bangjago emulator zip to localdisk c click properties my computer advanced system settings environment variable copy path android sdk to environment system variable android home copy path java jdk to environment system variable java home copy path c bangjago to environment system variable path copy path c bangjago to environment system variabel bangjago you can open cmd and type start bangjago don t use a terminal other than cmd if you want to install android studio or java jdk run as is required 2 connected over usb debugging first you must provide usb cabel enable usb debugging on your smartphone ensured your smartphone could connect on your computer laptop selected adb tools then choose add add usb 
select the default usb debugging port and device just running emulator emulator usb if not connected then choosing adb tools restart adb and repeat 3 connected over wireless first you must provide usb cable plug usb to port computer laptop enable usb debugging on your smartphone make sure your smartphone is connected selected adb tools then choose add wireless set ip address default if it works unplug your smartphone from usb close emulator and run emulator again run emulator wireless emulator if not connected you can select adb tools restart adb then repeat or follow the tutorial below react native https tinyurl com y6rvxsln or flutter https tinyurl com yxwpy7w7 4 connected over emulator select avd tools create emulator then android sdk will be downloaded automatically skip the first method if avd emulator already exists how to run the emulator avd tool options then select run emulator enter emulator name that was created before default port device name ip address port type local 5555 usb react native 8081 usb flutter 8080 usb local 192 168 x x wireless react native 192 168 x x wireless flutter 192 168 x x wireless system images list google system images android api version target version cpu version android 16 google apis x86 android 16 google apis armeabi v7a android 17 google apis x86 android 17 google apis armeabi v7a android 18 google apis x86 android 18 google apis armeabi v7a android 19 google apis x86 android 19 google apis armeabi v7a android 21 google apis x86 android 21 google apis x86 64 android 21 google apis armeabi v7a android 22 google apis x86 android 22 google apis x86 64 android 22 google apis armeabi v7a android 23 google apis x86 android 23 google apis x86 64 android 23 google apis armeabi v7a android 24 google apis x86 android 24 google apis x86 64 android 24 google apis arm64 v8a android 25 google apis x86 android 25 google apis x86 64 android 25 google apis armeabi v7a android 25 google apis arm64 v8a android 26 google apis x86 android 26 
google apis x86 64 android 27 google apis x86 android 28 google apis x86 64 android 29 google apis x86 android 29 google apis x86 64 android 30 google apis x86 android 30 google apis x86 64 default system images android api version target version cpu version android 16 default x86 android 16 default armeabi v7a android 17 default x86 android 17 default armeabi v7a android 18 default x86 android 18 default armeabi v7a android 19 default x86 android 19 default armeabi v7a android 21 default x86 android 21 default x86 64 android 21 default armeabi v7a android 22 default x86 android 22 default x86 64 android 22 default armeabi v7a android 23 default x86 android 23 default x86 64 android 23 default armeabi v7a android 24 default x86 android 24 default x86 64 android 24 default armeabi v7a android 25 default x86 android 25 default x86 64 android 26 default x86 android 26 default x86 64 android 27 default x86 android 27 default x86 64 android 28 default x86 android 28 default x86 64 android 30 default x86 android 30 default x86 64 tv system images android api version target version cpu version android 21 android tv x86 android 21 android tv armeabi v7a android 22 android tv x86 android 23 android tv x86 android 23 android tv armeabi v7a android 24 android tv x86 android 25 android tv x86 android 26 android tv x86 android 27 android tv x86 android 28 android tv x86 android 29 android tv x86 wear os system images android api version target version cpu version android 23 android wear x86 android 23 android wear armeabi v7a android 25 android wear x86 android 25 android wear armeabi v7a android 26 android wear x86 android 28 android wear x86 google playstore system images android api version target version cpu version android 24 google apis playstore x86 android 25 google apis playstore x86 android 26 google apis playstore x86 android 27 google apis playstore x86 android 28 google apis playstore x86 android 28 google apis playstore x86 64 android 29 google apis playstore x86 
android 29 google apis playstore x86 64 android 30 google apis playstore x86 android 30 google apis playstore x86 64 skin device list default phone device name ram cpu cores internal storage 2 7 qvga 1024 mb 1 core 2048 mb 2 7 qvga slider 1024 mb 1 core 2048 mb 3 2 qvga adp2 1024 mb 1 core 2048 mb 3 3 wqvga 1024 mb 1 core 2048 mb 3 4 wqvga 1024 mb 1 core 2048 mb 3 7 fwvga slider 1024 mb 1 core 2048 mb 3 7 wvga nexus one 1024 mb 1 core 2048 mb 4 7 wxga 1024 mb 1 core 2048 mb 4 65 720p galaxy nexus 1024 mb 1 core 2048 mb 4 wvga nexus s 1024 mb 1 core 2048 mb 5 1 wvga 1024 mb 1 core 2048 mb 5 1 wvga api 1024 mb 1 core 2048 mb 5 4 fwvga 1024 mb 1 core 2048 mb 7 3 foldable 2048 mb 1 core 4096 mb 8 foldable 2048 mb 1 core 4096 mb galaxy nexus 1024 mb 1 core 2048 mb nexus 4 2048 mb 1 core 4096 mb nexus 5 2048 mb 1 core 4096 mb nexus 5x 2048 mb 1 core 4096 mb nexus 6 2048 mb 1 core 4096 mb nexus 6p 2048 mb 1 core 4096 mb nexus one 1024 mb 1 core 2048 mb nexus s 1024 mb 1 core 2048 mb pixel 2048 mb 1 core 4096 mb pixel 2 2048 mb 1 core 4096 mb pixel 2 xl 2048 mb 1 core 4096 mb pixel 3 2048 mb 1 core 4096 mb pixel 3 xl 2048 mb 1 core 4096 mb pixel 3a 2048 mb 1 core 4096 mb pixel 3a xl 2048 mb 1 core 4096 mb pixel xl 2048 mb 1 core 4096 mb default tablet device name ram cpu cores internal storage 7 wsvga tablet 1024 mb 1 core 2048 mb 10 1 wxga tablet 1024 mb 1 core 2048 mb nexus 7 2048 mb 1 core 4096 mb nexus 7 2012 1024 mb 1 core 2048 mb nexus 9 2048 mb 1 core 4096 mb nexus 10 2048 mb 1 core 4096 mb pixel c 2048 mb 1 core 4096 mb default tv device name ram cpu cores internal storage android wear round 1024 mb 1 core 2048 mb android wear round chin 1024 mb 1 core 2048 mb android wear square 1024 mb 1 core 2048 mb translate indonesian https github com restuwahyu13 bangjago android emulator blob main readme ind md english https github com restuwahyu13 bangjago android emulator blob main readme md support project if you like this project or you want to support this project you 
can treat me to a cup of coffee or you can donate via the following link donate https bit ly 37ksgkb video tutorial tutorial https bit ly 2g5kuyr author restu wahyu saputra https github com restuwahyu13 contributor vicri kurniawan https github com vicrfiport license mit https github com restuwahyu13 bangjago emulator blob main license md p align right style padding 5px border radius 100 background color red font size 2rem b a href table of content back to top a b p | android screen mirroring emulator android-development screencast screensharing cli-app cli emulator-launcher cli-application | front_end |
Chat-App | Chat App (SE3330, Team NullReferenceException)

Our project is a chat app that uses a server and a client to connect users and allow them to talk to each other. The server uses a relational database to store user login information (SHA-1 password hashing) as well as persistent chatroom information. Users have the ability to create public and private chatrooms and are notified of new messages. The program features a Material Design theme along with a dark mode and a light mode.

- Main menu, dark theme: Documents/Screenshots/main dark.png
- Main menu, light theme: Documents/Screenshots/main light.png
- Client class diagram: Documents/Client Class Diagram.png
- Server class diagram: Documents/Server Class Diagram.png
- Database diagram: Documents/Database Diagram.png

Before running the project, check to make sure all of these libraries are properly included in the necessary projects. The client and server both rely on JSON serializing; the project should already include the necessary libraries. If the libraries are missing, follow these instructions in Visual Studio:

- Navigate to Project > Manage NuGet Packages > Browse, then search for and install the Newtonsoft.Json library.
- The client relies on Moq for unit testing: Project > Manage NuGet Packages > Browse, then search for and install the Moq library.
- The server relies on an Oracle library to connect and run commands on the SQL server: Project > Manage NuGet Packages > Browse, then search for and install the Oracle.ManagedDataAccess library.

Starting the application: open up the server project and client project in Visual Studio. Build the client and run the executable from the bin folder to run multiple clients. Start the server (from the exe or from the project; either works) before starting any clients, as the clients will try to connect to the server during startup. | server |
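The SHA-1 password hashing the README names can be illustrated as below. The project itself is C#, so this Python sketch only shows the idea; the function names are mine.

```python
import hashlib

def hash_password(password: str) -> str:
    """Hex-encoded SHA-1 digest of the password, the scheme the README names.
    (Illustration only; production systems should prefer a salted, slow KDF
    such as PBKDF2 over bare SHA-1.)"""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

def check_password(password: str, stored_hex: str) -> bool:
    """Compare the hash of a login attempt against the stored hash."""
    return hash_password(password) == stored_hex

stored = hash_password("hunter2")
print(check_password("hunter2", stored), check_password("wrong", stored))  # prints: True False
```

The server stores only `stored`, never the plaintext; at login it recomputes the digest of the submitted password and compares.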
|
machine-learning-for-software-engineers | top down learning path machine learning for software engineers p align center a href https github com zuzoovn machine learning for software engineers img alt top down learning path machine learning for software engineers src https img shields io badge machine 20learning software 20engineers blue svg a a href https github com zuzoovn machine learning for software engineers stargazers img alt github stars src https img shields io github stars zuzoovn machine learning for software engineers svg a a href https github com zuzoovn machine learning for software engineers network img alt github forks src https img shields io github forks zuzoovn machine learning for software engineers svg a p inspired by coding interview university https github com jwasham coding interview university translations brazilian portuguese https github com zuzoovn machine learning for software engineers blob master readme pt br md https github com zuzoovn machine learning for software engineers blob master readme zh cn md fran ais https github com zuzoovn machine learning for software engineers blob master readme fr fr md https github com zuzoovn machine learning for software engineers blob master readme zh tw md how i nam vu plan to become a machine learning engineer https www codementor io zuzoovn how i plan to become a machine learning engineer a4metbcuk what is it this is my multi month study plan for going from mobile developer self taught no cs degree to machine learning engineer my main goal was to find an approach to studying machine learning that is mainly hands on and abstracts most of the math for the beginner this approach is unconventional because it s the top down and results first approach designed for software engineers please feel free to make any contributions you feel will make it better table of contents what is it what is it why use it why use it how to use it how to use it follow me follow me don t feel you aren t smart enough dont 
feel you arent smart enough about video resources about video resources prerequisite knowledge prerequisite knowledge the daily plan the daily plan motivation motivation machine learning overview machine learning overview machine learning mastery machine learning mastery machine learning is fun machine learning is fun inky machine learning inky machine learning machine learning an in depth guide machine learning an in depth guide stories and experiences stories and experiences machine learning algorithms machine learning algorithms beginner books beginner books practical books practical books kaggle knowledge competitions kaggle knowledge competitions video series video series mooc mooc resources resources becoming an open source contributor becoming an open source contributor games games podcasts podcasts communities communities conferences conferences interview questions interview questions my admired companies my admired companies why use it i m following this plan to prepare for my near future job machine learning engineer i ve been building native mobile applications android ios blackberry since 2011 i have a software engineering degree not a computer science degree i have an itty bitty amount of basic knowledge about calculus linear algebra discrete mathematics probability statistics from university think about my interest in machine learning can i learn and get a job in machine learning without studying cs master and phd https www quora com can i learn and get a job in machine learning without studying cs master and phd you can but it is far more difficult than when i got into the field drac smith https www quora com can i learn and get a job in machine learning without studying cs master and phd answer drac smith srid ot0p how do i get a job in machine learning as a software programmer who self studies machine learning but never has a chance to use it at work https www quora com how do i get a job in machine learning as a software programmer who self 
studies machine learning but never has a chance to use it at work i m hiring machine learning experts for my team and your mooc will not get you the job there is better news below in fact many people with a master s in machine learning will not get the job because they and most who have taken moocs do not have a deep understanding that will help me solve my problems ross c taylor https www quora com how do i get a job in machine learning as a software programmer who self studies machine learning but never has a chance to use it at work answer ross c taylor srid ot0p what skills are needed for machine learning jobs http programmers stackexchange com questions 79476 what skills are needed for machine learning jobs first you need to have a decent cs math background ml is an advanced topic so most textbooks assume that you have that background second machine learning is a very general topic with many sub specialties requiring unique skills you may want to browse the curriculum of an ms program in machine learning to see the course curriculum and textbook uri http softwareengineering stackexchange com a 79717 probability distributed computing and statistics hydrangea http softwareengineering stackexchange com a 79575 i find myself in times of trouble afaik there are two sides to machine learning http machinelearningmastery com programmers can get into machine learning practical machine learning this is about querying databases cleaning data writing scripts to transform data and gluing algorithm and libraries together and writing custom code to squeeze reliable answers from data to satisfy difficult and ill defined questions it s the mess of reality theoretical machine learning this is about math and abstraction and idealized scenarios and limits and beauty and informing what is possible it is a whole lot neater and cleaner and removed from the mess of reality i think the best way for practice focused methodology is something like practice learning practice http 
machinelearningmastery com machine learning for programmers comment 358985 that means where students first come with some existing projects with problems and solutions practice to get familiar with traditional methods in the area and perhaps also with their methodology after practicing with some elementary experiences they can go into the books and study the underlying theory which serves to guide their future advanced practice and will enhance their toolbox of solving practical problems studying theory also further improves their understanding on the elementary experiences and will help them acquire advanced experiences more quickly it s a long plan it s going to take me years if you are familiar with a lot of this already it will take you a lot less time how to use it everything below is an outline and you should tackle the items in order from top to bottom i m using github s special markdown flavor including tasks lists to check progress x create a new branch so you can check items like this just put an x in the brackets x more about github flavored markdown https guides github com features mastering markdown github flavored markdown follow me i m a vietnamese software engineer who is really passionate and wants to work in the usa how much did i work during this plan roughly 4 hours night after a long hard day at work i m on the journey twitter nam vu https twitter com zuzoovn nam vu top down learning path machine learning for software engineers http sv1 upsieutoc com 2016 10 08 331f241c8da44d0c43e9324d55440db6 md jpg usa as heck don t feel you aren t smart enough i get discouraged from books and courses that tell me as soon as i open them that multivariate calculus inferential statistics and linear algebra are prerequisites i still don t know how to get started what if i m not good at mathematics http machinelearningmastery com what if im not good at mathematics 5 techniques to understand machine learning algorithms without the background in mathematics http 
machinelearningmastery com techniques to understand machine learning algorithms without the background in mathematics how do i learn machine learning https www quora com machine learning how do i learn machine learning 1 about video resources some videos are available only by enrolling in a coursera or edx class it is free to do so but sometimes the classes are no longer in session so you have to wait a couple of months so you have no access i m going to be adding more videos from public sources and replacing the online course videos over time i like using university lectures prerequisite knowledge this short section consists of prerequisites interesting info i wanted to learn before getting started on the daily plan what is the difference between data analytics data analysis data mining data science machine learning and big data https www quora com what is the difference between data analytics data analysis data mining data science machine learning and big data 1 learning how to learn https www coursera org learn learning how to learn don t break the chain http lifehacker com 281626 jerry seinfelds productivity secret how to learn on your own https metacademy org roadmaps rgrosse learn on your own the daily plan each subject does not require a whole day to be able to understand it fully and you can do multiple of these in a day each day i take one subject from the list below read it cover to cover take notes do the exercises and write an implementation in python or r motivation dream https www youtube com watch v g jwwyx7jlo machine learning overview a visual introduction to machine learning http www r2d3 us visual intro to machine learning part 1 gentle guide to machine learning https blog monkeylearn com gentle guide to machine learning introduction to machine learning for developers http blog algorithmia com introduction machine learning developers machine learning basics for a newbie https www analyticsvidhya com blog 2015 06 machine learning basics how do you 
explain machine learning and data mining to non computer science people https www quora com how do you explain machine learning and data mining to non computer science people machine learning under the hood blog post explains the principles of machine learning in layman terms simple and clear https georgemdallas wordpress com 2013 06 11 big data data mining and machine learning under the hood what is machine learning and how does it work https www youtube com watch v elojmnjn4kk list pl5 da3qgb5icembquqbbcoqwcs6oybr5a index 1 deep learning a non technical introduction http www slideshare net alfredpong1 deep learning a nontechnical introduction 69385936 removed machine learning mastery the machine learning mastery method http machinelearningmastery com machine learning mastery method machine learning for programmers http machinelearningmastery com machine learning for programmers applied machine learning with machine learning mastery http machinelearningmastery com start here python machine learning mini course http machinelearningmastery com python machine learning mini course machine learning algorithms mini course http machinelearningmastery com machine learning algorithms mini course machine learning is fun machine learning is fun https medium com ageitgey machine learning is fun 80ea3ec3c471 37ue6caww part 2 using machine learning to generate super mario maker levels https medium com ageitgey machine learning is fun part 2 a26a10b68df3 kh7qgvp1b part 3 deep learning and convolutional neural networks https medium com ageitgey machine learning is fun part 3 deep learning and convolutional neural networks f40359318721 44rhxy637 part 4 modern face recognition with deep learning https medium com ageitgey machine learning is fun part 4 modern face recognition with deep learning c3cffc121d78 3rwmq0ddc part 5 language translation with deep learning and the magic of sequences https medium com ageitgey machine learning is fun part 5 language translation with deep 
learning and the magic of sequences 2ace0acca0aa wyfthap4c part 6 how to do speech recognition with deep learning https medium com ageitgey machine learning is fun part 6 how to do speech recognition with deep learning 28293c162f7a lhr1nnpcy part 7 abusing generative adversarial networks to make 8 bit pixel art https medium com ageitgey abusing generative adversarial networks to make 8 bit pixel art e45d9b96cee7 part 8 how to intentionally trick neural networks https medium com ageitgey machine learning is fun part 8 how to intentionally trick neural networks b55da32b7196 inky machine learning https triskell github io 2016 11 15 inky machine learning html part 1 what is machine learning https triskell github io 2016 10 23 what is machine learning html part 2 supervised learning and unsupervised learning https triskell github io 2016 11 13 supervised learning and unsupervised learning html machine learning an in depth guide overview goals learning types and algorithms http www innoarchitech com machine learning an in depth non technical guide data selection preparation and modeling http www innoarchitech com machine learning an in depth non technical guide part 2 model evaluation validation complexity and improvement http www innoarchitech com machine learning an in depth non technical guide part 3 model performance and error analysis http www innoarchitech com machine learning an in depth non technical guide part 4 unsupervised learning related fields and machine learning in practice http www innoarchitech com machine learning an in depth non technical guide part 5 stories and experiences machine learning in a week https medium com learning new stuff machine learning in a week a0da25d59850 tk6ft2kcg machine learning in a year https medium com learning new stuff machine learning in a year cdb0b0ebd29c hhcb9fxk1 how i wrote my first machine learning program in 3 days http blog adnansiddiqi me how i wrote my first machine learning program in 3 days learning path your 
mentor to become a machine learning expert https www analyticsvidhya com learning path learn machine learning you too can become a machine learning rock star no phd https backchannel com you too can become a machine learning rock star no phd necessary 107a1624d96b g9p16ldp7 how to become a data scientist in 6 months a hacker s approach to career planning video https www youtube com watch v riofv14c0tc slide http www slideshare net tetianaivanova2 how to become a data scientist in 6 months 5 skills you need to become a machine learning engineer http blog udacity com 2016 04 5 skills you need to become a machine learning engineer html are you a self taught machine learning engineer if yes how did you do it how long did it take you https www quora com are you a self taught machine learning engineer if yes how did you do it how long did it take you how can one become a good machine learning engineer https www quora com how can one become a good machine learning engineer a learning sabbatical focused on machine learning http karlrosaen com ml machine learning algorithms 10 machine learning algorithms explained to an army soldier https www analyticsvidhya com blog 2015 12 10 machine learning algorithms explained army soldier top 10 data mining algorithms in plain english https rayli net blog data top 10 data mining algorithms in plain english 10 machine learning terms explained in simple english http blog aylien com 10 machine learning terms explained in simple a tour of machine learning algorithms http machinelearningmastery com a tour of machine learning algorithms the 10 algorithms machine learning engineers need to know https gab41 lab41 org the 10 algorithms machine learning engineers need to know f4bb63f5b2fa ofc7t2965 comparing supervised learning algorithms http www dataschool io comparing supervised learning algorithms machine learning algorithms a collection of minimal and clean implementations of machine learning algorithms https github com rushter 
mlalgorithms knn algorithm in machine learning https www scaler com topics what is knn algorithm in machine learning beginner books data smart using data science to transform information into insight 1st edition https www amazon com data smart science transform information dp 111866146x data science for business what you need to know about data mining and data analytic thinking https www amazon com data science business data analytic thinking dp 1449361323 predictive analytics the power to predict who will click buy lie or die https www amazon com predictive analytics power predict click dp 1118356853 practical books machine learning for hackers https www amazon com machine learning hackers drew conway dp 1449303714 github repository r https github com johnmyleswhite ml for hackers github repository python https github com carljv will it python python machine learning https www amazon com python machine learning sebastian raschka ebook dp b00ysilnl0 github repository https github com rasbt python machine learning book programming collective intelligence building smart web 2 0 applications https www amazon com programming collective intelligence building applications ebook dp b00f8qdzwg machine learning an algorithmic perspective second edition https www amazon com machine learning algorithmic perspective recognition dp 1466583282 github repository https github com alexsosn marslandmlalgo resource repository http seat massey ac nz personal s r marsland mlbook html introduction to machine learning with python a guide for data scientists http shop oreilly com product 0636920030515 do github repository https github com amueller introduction to ml with python data mining practical machine learning tools and techniques third edition https www amazon com data mining practical techniques management dp 0123748569 teaching material slides for chapters 1 5 zip http www cs waikato ac nz ml weka slides3rded ch1 5 zip slides for chapters 6 8 zip http www cs waikato ac nz ml weka 
slides3rded ch6 8 zip machine learning in action https www amazon com machine learning action peter harrington dp 1617290181 github repository https github com pbharrin machinelearninginaction reactive machine learning systems meap https www manning com books reactive machine learning systems github repository https github com jeffreyksmithjr reactive machine learning systems an introduction to statistical learning http www bcf usc edu gareth isl github repository r http www bcf usc edu gareth isl code html github repository python https github com jwarmenhoven islr python videos http www dataschool io 15 hours of expert machine learning videos building machine learning systems with python https www packtpub com big data and business intelligence building machine learning systems python github repository https github com luispedro buildingmachinelearningsystemswithpython learning scikit learn machine learning in python https www packtpub com big data and business intelligence learning scikit learn machine learning python github repository https github com gmonce scikit learn book probabilistic programming bayesian methods for hackers https camdavidsonpilon github io probabilistic programming and bayesian methods for hackers probabilistic graphical models principles and techniques https www amazon com probabilistic graphical models principles computation dp 0262013193 machine learning hands on for developers and technical professionals https www amazon com machine learning hands developers professionals dp 1118889061 machine learning hands on for developers and technical professionals review https blogs msdn microsoft com querysimon 2015 01 01 book review machine learning hands on for developers and technical professionals github repository https github com jasebell mlbook learning from data https www amazon com learning data yaser s abu mostafa dp 1600490069 online tutorials https work caltech edu telecourse html reinforcement learning an introduction 2nd edition 
https webdocs cs ualberta ca sutton book the book 2nd html github repository https github com shangtongzhang reinforcement learning an introduction machine learning with tensorflow meap https www manning com books machine learning with tensorflow github repository https github com binroot tensorflow book how machine learning works meap https www manning com books how machine learning works github repository https github com mostafa samir how machine learning works succeeding with ai https www manning com books succeeding with ai kaggle knowledge competitions kaggle competitions how and where to begin https www analyticsvidhya com blog 2015 06 start journey kaggle how a beginner used small projects to get started in machine learning and compete on kaggle http machinelearningmastery com how a beginner used small projects to get started in machine learning and compete on kaggle master kaggle by competing consistently http machinelearningmastery com master kaggle by competing consistently video series machine learning for hackers https www youtube com playlist list pl2 dafemk2a4ut2pyv0fsixqozxtbgklj fresh machine learning https www youtube com playlist list pl2 dafemk2a6kc7pv6ghh apbfxwfjkey machine learning recipes with josh gordon https www youtube com playlist list plou2xlyxmsiiuibfyad6rfyqu jl2ryal everything you need to know about machine learning in 30 minutes or less https vimeo com 43547079 a friendly introduction to machine learning https www youtube com watch v ipgxlwoizy4 nuts and bolts of applying deep learning andrew ng https www youtube com watch v f1ka6a13s9i bigml webinar video https www youtube com watch list pl1bkyu9gtnyhcjga6ulrvrvcm1lab8he3 v w62ehrnovqo resources https bigml com releases mathematicalmonk s machine learning tutorials https www youtube com playlist list pld0f06aa0d2e8ffba machine learning in python with scikit learn https www youtube com playlist list pl5 da3qgb5icembquqbbcoqwcs6oybr5a github repository https github com justmarkham 
scikit learn videos blog http blog kaggle com author kevin markham my playlist top youtube videos on machine learning neural network deep learning https www analyticsvidhya com blog 2015 07 top youtube videos machine learning neural network deep learning 16 new must watch tutorials courses on machine learning https www analyticsvidhya com blog 2016 10 16 new must watch tutorials courses on machine learning deeplearning tv https www youtube com channel uc9oezkiwhzfv cb7fciklq learning to see https www youtube com playlist list pliahhy2ibx9ihlasve8bkns2xg8ahy6iv neural networks class université de sherbrooke https www youtube com playlist list pl6xpj9i5qxyecohn7tqghaj6naprnmubh 21 deep learning videos tutorials courses on youtube from 2016 https www analyticsvidhya com blog 2016 12 21 deep learning videos tutorials courses on youtube from 2016 30 top videos tutorials courses on machine learning artificial intelligence from 2016 https www analyticsvidhya com blog 2016 12 30 top videos tutorials courses on machine learning artificial intelligence from 2016 practical deep learning for coders http course fast ai index html practical deep learning for coders version 2 pytorch http forums fast ai t welcome to part 1 v2 5787 mooc coursera s ai for everyone https www coursera org learn ai for everyone edx s introduction to artificial intelligence ai https www edx org course introduction artificial intelligence ai microsoft dat263x udacity s intro to machine learning https www udacity com course intro to machine learning ud120 udacity intro to machine learning review http hamelg blogspot com 2014 12 udacity intro to machine learning review html udacity s supervised unsupervised reinforcement https www udacity com course machine learning ud262 machine learning foundations a case study approach https www coursera org learn ml foundations machine learning ai foundations value estimations https www lynda com data science tutorials machine learning essential training value
estimations 548594 2 html kaggle s hands on data science education https www kaggle com learn overview microsoft professional program for artificial intelligence https academy microsoft com en us professional program tracks artificial intelligence coursera s machine learning https www coursera org learn machine learning video only https www youtube com playlist list plz9qnfmhz a4rycgrgoyma6zxf4bzggpw coursera machine learning review https rayli net blog data coursera machine learning review coursera machine learning roadmap https metacademy org roadmaps cjrd coursera ml supplement machine learning distilled https code tutsplus com courses machine learning distilled bigml training https bigml com training coursera s neural networks for machine learning https www coursera org learn neural networks taught by geoffrey hinton a pioneer in the field of neural networks machine learning cs oxford university https www cs ox ac uk people nando defreitas machinelearning creative applications of deep learning with tensorflow https www kadenze com courses creative applications of deep learning with tensorflow info intro to descriptive statistics https www udacity com course intro to descriptive statistics ud827 intro to inferential statistics https www udacity com course intro to inferential statistics ud201 6 s094 deep learning for self driving cars http selfdrivingcars mit edu 6 s191 introduction to deep learning http introtodeeplearning com index html coursera s deep learning https www coursera org specializations deep learning resources absolute beginning into machine learning https hackernoon com absolute beginning into machine learning e90ceda5a4bc learn machine learning in a single month https elitedatascience com machine learning masterclass the non technical guide to machine learning artificial intelligence https medium com samdebrule a humans guide to machine learning e179f43b67a0 cpzf3a5c0 programming community curated resources for learning machine learning https 
hackr io tutorials learn machine learning ml best practices rule book for machine learning engineering from google http martin zinkevich org rules of ml rules of ml pdf machine learning for software engineers on hacker news https news ycombinator com item id 12898718 machine learning for developers https xyclade github io machinelearning machine learning for humans https medium com machine learning for humans why machine learning matters 6164faf1df12 machine learning advice for developers https dev to thealexlavin machine learning advice for developers machine learning for complete beginners http pythonforengineers com machine learning for complete beginners getting started with machine learning for absolute beginners and fifth graders https medium com suffiyanz getting started with machine learning f15df1c283ea yjtiy7ei9 how to learn machine learning the self starter way https elitedatascience com learn machine learning machine learning self study resources https ragle sanukcode net articles machine learning self study resources level up your machine learning https metacademy org roadmaps cjrd level up your ml an honest guide to machine learning https medium com axiomzenteam an honest guide to machine learning 2f6d7a6df60e ib12a1yw5 enough machine learning to make hacker news readable again video https www youtube com watch v o7iezjt9usi slide https speakerdeck com pycon2014 enough machine learning to make hacker news readable again by ned jackson lovely dive into machine learning https github com hangtwenty dive into machine learning machine deep learning for software engineers https speakerdeck com pmigdal machine deep learning for software engineers deep learning for beginners https deeplearning4j org deeplearningforbeginners html foundations for deep learning https github com pauli space foundations for deep learning machine learning mindmap cheatsheet https github com dformoso machine learning mindmap machine learning courses in universities stanford http ai 
stanford edu courses machine learning summer schools http mlss cc oxford https www cs ox ac uk people nando defreitas machinelearning cambridge http mlg eng cam ac uk flipboard topics machine learning https flipboard com topic machinelearning deep learning https flipboard com topic deeplearning artificial intelligence https flipboard com topic artificialintelligence medium topics machine learning https medium com tag machine learning latest deep learning https medium com tag deep learning artificial intelligence https medium com tag artificial intelligence monthly top 10 articles machine learning https medium mybridge co search q 22machine 20learning 22 algorithms https medium mybridge co search q algorithms comprehensive list of data science resources http www datasciencecentral com group resources forum topics comprehensive list of data science resources digitalmind s artificial intelligence resources http blog digitalmind io post artificial intelligence resources awesome machine learning https github com josephmisiti awesome machine learning awesome graph classification https github com benedekrozemberczki awesome graph classification awesome community detection https github com benedekrozemberczki awesome community detection creativeai s machine learning http www creativeai net cat 5b0 5d machine learning machine learning online courses https classpert com machine learning games halite a i coding game https halite io vindinium a i programming challenge http vindinium org general video game ai competition http www gvgai net angry birds ai competition https aibirds org the ai games http theaigames com fighting game ai competition http www ice ci ritsumei ac jp ftgaic codecup http www codecup nl intro php student starcraft ai tournament http sscaitournament com aiide starcraft ai competition http www cs mun ca dchurchill starcraftaicomp cig starcraft ai competition https sites google com site starcraftaic codingame ai bot games https www codingame com training 
machine learning becoming an open source contributor tensorflow magenta magenta music and art generation with machine intelligence https github com tensorflow magenta tensorflow tensorflow computation using data flow graphs for scalable machine learning https github com tensorflow tensorflow cmusatyalab openface face recognition with deep neural networks https github com cmusatyalab openface tensorflow models syntaxnet neural models of syntax https github com tensorflow models tree master syntaxnet podcasts podcasts for beginners talking machines http www thetalkingmachines com linear digressions http lineardigressions com data skeptic http dataskeptic com this week in machine learning ai https twimlai com machine learning guide http ocdevel com podcasts machine learning interviews with ml practitioners researchers and kagglers about their journey chai time data science https www youtube com playlist list pllvvxm0q8zubindoiazgzlenmxvz9bd3x audio http anchor fm chaitimedatascience writeups https sanyambhutani com tag chaitimedatascience machine learning for beginners interviews https www youtube com channel ucdz0gx f3ulmkfxtyzsfbaw audio https jayshah buzzsprout com more advanced podcasts partially derivative http partiallyderivative com o reilly data show http radar oreilly com tag oreilly data show podcast not so standard deviation https soundcloud com nssd podcast podcasts to think outside the box data stories http datastori es communities quora machine learning https www quora com topic machine learning statistics https www quora com topic statistics academic discipline data mining https www quora com topic data mining reddit machine learning https www reddit com r machinelearning computer vision https www reddit com r computervision natural language https www reddit com r languagetechnology data science https www reddit com r datascience big data https www reddit com r bigdata statistics https www reddit com r statistics data tau http www datatau com deep
learning news http news startup ml kdnuggets http www kdnuggets com conferences neural information processing systems nips https nips cc international conference on learning representations iclr http www iclr cc doku php id iclr2017 main redirect 1 association for the advancement of artificial intelligence aaai http www aaai org conferences aaai aaai17 php ieee conference on computational intelligence and games cig http www ieee cig org ieee international conference on machine learning and applications icmla http www icmla conference org international conference on machine learning icml https 2017 icml cc international joint conferences on artificial intelligence ijcai http www ijcai org association for computational linguistics acl http acl2017 org interview questions how to prepare for a machine learning interview http blog udacity com 2016 05 prepare machine learning interview html 40 interview questions asked at startups in machine learning data science https www analyticsvidhya com blog 2016 09 40 interview questions asked at startups in machine learning data science 21 must know data science interview questions and answers http www kdnuggets com 2016 02 21 data science interview questions answers html top 50 machine learning interview questions answers http career guru99 com top 50 interview questions on machine learning machine learning engineer interview questions https resources workable com machine learning engineer interview questions popular machine learning interview questions http www learn4master com machine learning popular machine learning interview questions what are some common machine learning interview questions https www quora com what are some common machine learning interview questions what are the best interview questions to evaluate a machine learning researcher https www quora com what are the best interview questions to evaluate a machine learning researcher collection of machine learning interview questions http analyticscosm com 
machine learning interview questions for data scientist interview 121 essential machine learning questions answers https elitedatascience com mlqa reading list minimum viable study plan for machine learning interviews https github com khangich machine learning interview my admired companies elsa your virtual pronunciation coach https www elsanow io home | machine-learning deep-learning artificial-intelligence software-engineer machine-learning-algorithms | ai |
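Several entries in the list above link to explainers of classic algorithms, including one on the kNN algorithm. As a companion to those explainers, here is a minimal from-scratch sketch of k-nearest-neighbours classification in plain Python; the Euclidean distance metric, the toy dataset, and the choice of k are illustrative assumptions, not taken from any linked resource.

```python
from collections import Counter
import math


def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (features, label) pairs; distance is Euclidean.
    """
    # Sort all training points by distance to the query, closest first.
    dists = sorted((math.dist(x, query), label) for x, label in train)
    # Majority vote over the labels of the k closest points.
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]


# Illustrative toy data: two well-separated 2-D clusters labelled "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # -> a
print(knn_predict(train, (5.5, 5.5)))  # -> b
```

For real workloads a library implementation (e.g. a tree-based neighbour index) is preferable, since the brute-force search above is O(n) per query.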
doc | documentation of lel a libre euro lingua alliance the recent results of alpaca are impressive it was shown that it is realistically possible even without a supercomputer and a multi million budget to train competitive chatgpt style large language models unfortunately the existing instruction following models are not open source https opensource org osd and do not support multiple languages in the long run we strive to provide these so called instruction following large language models aka chatgpt style models for the european languages our goal is to provide models code training data and documentation that is 1 free open source https opensource org osd and permissive mit license https en wikipedia org wiki mit license 2 transparent 3 open for agile contribution and participation see contribution joining our community contribution joining our community 4 state of the art 5 up to date our resources documentation repository https github com lel a doc multilingual alpaca prompt https github com lel a doc blob main alpaca prompt md ideas links and findings https github com lel a doc blob main ideas links findings md cleaned german alpaca dataset https github com lel a geralpacadatacleaned euroinstructproject https github com lel a euroinstructproject instruction datasets from existing german english and other european datasets hugging face organization https huggingface co lel a rough planning this section plans the first steps and outlines the individual ventures in the future first milestone in the first step we want to train evaluate and publish a german and english generative pre trained transformer gpt model of relatively small size this model will not yet have the so called instruction following capabilities the concrete steps towards this goal are 1 provide a clean and appropriate english and german text corpus 2 identify training method training code model type and hyperparameters 3 find a way to get the necessary computation power and storage 4 start
monitor and maintain the training 5 evaluate the results on reference tasks 6 publish the final model and results second milestone add instruction following capabilities by fine tuning the gpt model from before this could be done in the same style as alpaca an openai api access might be necessary for this which would add additional costs identify training method training code and hyperparameters find a way to get the necessary computation power and storage start monitor and maintain the training evaluate the results on reference tasks publish the final model and results outlook add more european languages add different programming languages use more training data in general improve quality of training data determine and monitor bias take countermeasures if necessary train larger models retrain existing models to keep them current update training method training code model type and hyperparameters when new research is published add multimodal capabilities impediments and risks 1 we do not have sufficient computing power neither the hardware nor the money to rent see 5 https github com lel a doc issues 5 2 the licensing implications of using the openai api self instruct alpaca style training are not entirely clear see 6 https github com lel a doc issues 6 3 the llama training code is not open sourced see 4 https github com lel a doc issues 4 relevant links llama blog introducing llama a foundational 65 billion parameter large language model https ai facebook com blog large language model llama meta ai arxiv llama open and efficient foundation language models https arxiv org abs 2302 13971 llama model card https github com facebookresearch llama blob main model card md github chatllama https github com juncongmoo chatllama alpaca blog alpaca a strong replicable instruction following model https crfm stanford edu 2023 03 13 alpaca html github stanford alpaca an instruction following llama model https github com tatsu lab stanford alpaca arxiv self instruct aligning 
language model with self generated instructions https arxiv org abs 2212 10560 training data common crawl https commoncrawl org oscar https oscar project github io documentation gc4 corpus https german nlp group github io projects gc4 corpus html online language modelling dataset pipeline https github com huggingface olm datasets cleaned alpaca dataset https github com gururise alpacadatacleaned contribution joining our community our commitment to open source means that we are enabling in fact encouraging all interested parties to contribute and become part of our community contribution and feedback is encouraged and always welcome we communicate via a slack channel to get access to the slack please reach out to omar at huggingface co or philipp at huggingface co to be added licensing copyright c 2023 by the lel a team licensed under the mit license the license you may not use this file except in compliance with the license you may obtain a copy of the license by reviewing the file license https raw githubusercontent com lel a doc main license in the repository | llama alpaca llm nlp chat gpt europe german multilingual english language large-language-models | ai |
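The alpaca-style instruction fine-tuning that the LEL-A roadmap above builds on renders each training example into a fixed prompt template before it is fed to the model. A minimal sketch of such a rendering step is shown below; the template wording follows the publicly documented Stanford Alpaca format, while the exact (multilingual) LEL-A prompt lives in the linked `alpaca prompt md` file and is not reproduced here, so treat this purely as an illustrative assumption.

```python
def build_prompt(instruction: str, model_input: str = "") -> str:
    """Render one instruction-following example as a single prompt string,
    in the spirit of the Stanford Alpaca template (illustrative wording)."""
    header = ("Below is an instruction that describes a task. "
              "Write a response that appropriately completes the request.\n\n")
    if model_input:
        # Examples with extra context get a separate "Input" section.
        return (header
                + f"### Instruction:\n{instruction}\n\n"
                + f"### Input:\n{model_input}\n\n"
                + "### Response:\n")
    return header + f"### Instruction:\n{instruction}\n\n### Response:\n"


prompt = build_prompt("Translate the text to German.", "Good morning!")
print(prompt)
```

During fine-tuning, the model's target completion is appended after the `### Response:` marker, and at inference time generation is cut off at the model's end-of-sequence token.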
awesome-online-machine-learning | div align center h1 awesome online machine learning h1 a href https github com sindresorhus awesome img src https cdn rawgit com sindresorhus awesome d7305f38d29fed78fa85652e3a63e154dd8e8829 media badge svg a div online machine learning https www wikiwand com en online machine learning is a subset of machine learning where data arrives sequentially in contrast to the more traditional batch learning online learning methods update themselves incrementally with one data point at a time courses and books courses and books blog posts blog posts software software modelling modelling deployment deployment papers papers linear models linear models support vector machines support vector machines neural networks neural networks decision trees decision trees unsupervised learning unsupervised learning time series time series drift detection drift detection anomaly detection anomaly detection metric learning metric learning graph theory graph theory ensemble models ensemble models expert learning expert learning active learning active learning miscellaneous miscellaneous surveys surveys general purpose algorithms general purpose algorithms hyperparameter tuning hyperparameter tuning evaluation evaluation courses and books machine learning for streaming data with python https github com packtpublishing machine learning for streaming data with python ie 498 online learning and decision making https yuanz web illinois edu teaching ie498fa19 introduction to online learning https parameterfree com lecture notes on online learning machine learning the feature http www hunch net mltf gives some insights into the inner workings of vowpal wabbit especially the slides on online linear learning http www hunch net mltf online linear pdf machine learning for data streams with practical examples in moa https www cms waikato ac nz abifet book contents html online methods in machine learning mit http www mit edu rakhlin 6 883 streaming 101 the world beyond 
batch https www oreilly com ideas the world beyond batch streaming 101 prediction learning and games http www ii uni wroc pl lukstafi pmwiki uploads agt prediction learning and games pdf introduction to online convex optimization https ocobook cs princeton edu ocobook pdf reinforcement learning and stochastic optimization a unified framework for sequential decisions https castlelab princeton edu rlso the entire book builds upon the online learning paradigm in applied learning optimization problems chapter 3 online learning being the reference big data course at the cilvr lab at nyu https cilvr cs nyu edu doku php id courses bigdata slides start focus on linear models and bandits some courses are given by john langford the creator of vowpal wabbit machine learning for personalization http www cs columbia edu jebara 6998 course from columbia by tony jebara covers bandits blog posts fennel ai blog posts about online recsys https fennel ai blog anomaly detection with bytewax redpanda bytewax 2022 https www bytewax io blog anomaly detection bw rpk the online machine learning predict fit switcheroo max halford 2022 https maxhalford github io blog predict fit switcheroo real time machine learning challenges and solutions chip huyen 2022 https huyenchip com 2022 01 02 real time machine learning challenges and solutions html anomalies detection using river matias aravena gamboa 2021 https medium com spikelab anomalies detection using river 398544d3536 introdução não extensiva a online machine learning saulo mastelini 2021 https medium com saulomastelini introdu c3 a7 c3 a3o a online machine learning 874bd6b7c3c8 machine learning is going real time chip huyen 2020 https huyenchip com 2020 12 27 real time machine learning html the correct way to evaluate online machine learning models max halford 2020 https maxhalford github io blog online learning evaluation what is online machine learning max pagels 2018 https medium com value stream design online machine learning 515556ff72c5
what is it and who needs it data science central 2015 https www datasciencecentral com profiles blogs stream processing what is it and who needs it software see more here https github com stars maxhalford lists online learning modelling river https github com creme ml creme a python library for general purpose online machine learning dask https ml dask org incremental html jubatus http jubat us en index html flink ml https nightlies apache org flink flink ml docs stable apache flink machine learning library libffm https www csie ntu edu tw cjlin libffm a library for field aware factorization machines liblinear https www csie ntu edu tw cjlin liblinear a library for large linear classification libol https github com libol a collection of online linear models trained with first and second order gradient descent methods not maintained moa https moa cms waikato ac nz documentation scikit learn https scikit learn org stable some https scikit learn org stable computing scaling strategies html incremental learning of scikit learn s estimators can handle incremental updates although this is usually intended for mini batch learning see also the computing with scikit learn https scikit learn org stable computing html page spark streaming https spark apache org docs latest streaming programming guide html doesn t do online learning per se but instead mini batches the data into fixed intervals of time sofiaml https code google com archive p sofia ml streamdm https github com huawei noah streamdm a machine learning library on top of spark streaming tornado https github com alipsgh tornado vfml http www cs washington edu dm vfml vowpal wabbit https github com vowpalwabbit vowpal wabbit deployment kappaml https www kappaml com django river ml https github com vsoch django river ml a django plugin for deploying river models chantilly https github com online ml chantilly a prototype meant to be compatible with river previously creme papers linear models field aware factorization
machines for ctr prediction 2016 https www csie ntu edu tw cjlin papers ffm pdf practical lessons from predicting clicks on ads at facebook 2014 https research fb com wp content uploads 2016 11 practical lessons from predicting clicks on ads at facebook pdf ad click prediction a view from the trenches 2013 https static googleusercontent com media research google com en pubs archive 41159 pdf normalized online learning 2013 https arxiv org abs 1305 6646 towards optimal one pass large scale learning with averaged stochastic gradient descent 2011 https arxiv org abs 1107 2490 dual averaging methods for regularized stochastic learning and online optimization 2010 https www microsoft com en us research wp content uploads 2016 02 xiao10jmlr pdf adaptive regularization of weight vectors 2009 https papers nips cc paper 3848 adaptive regularization of weight vectors pdf stochastic gradient descent training for l1 regularized log linear models with cumulative penalty 2009 https www aclweb org anthology p09 1054 confidence weighted linear classification 2008 https www cs jhu edu mdredze publications icml variance pdf exact convex confidence weighted learning 2008 https www cs jhu edu mdredze publications cw nips 08 pdf online passive aggressive algorithms 2006 http jmlr csail mit edu papers volume7 crammer06a crammer06a pdf logarithmic regret algorithms for online convex optimization 2007 https www cs princeton edu ehazan papers log journal pdf a second order perceptron algorithm 2005 http www datascienceassn org sites default files second order 20perception 20algorithm pdf online learning with kernels 2004 https alex smola org papers 2004 kivsmowil04 pdf solving large scale linear prediction problems using stochastic gradient descent algorithms 2004 http citeseerx ist psu edu viewdoc summary doi 10 1 1 58 7377 support vector machines pegasos primal estimated sub gradient solver for svm 2007 http citeseerx ist psu edu viewdoc summary doi 10 1 1 74 8513 a new approximate maximal
margin classification algorithm 2001 http www jmlr org papers volume2 gentile01a gentile01a pdf the relaxed online maximum margin algorithm 2000 https papers nips cc paper 1727 the relaxed online maximum margin algorithm pdf neural networks three scenarios for continual learning 2019 https arxiv org pdf 1904 07734 pdf decision trees amf aggregated mondrian forests for online learning 2019 https arxiv org abs 1906 10529 mondrian forests efficient online random forests 2014 https arxiv org abs 1406 2673 mining high speed data streams 2000 https homes cs washington edu pedrod papers kdd00 pdf unsupervised learning online clustering algorithms evaluation metrics applications and benchmarking 2022 https dl acm org doi pdf 10 1145 3534678 3542600 online hierarchical clustering approximations 2019 https arxiv org pdf 1909 09667 pdf deepwalk online learning of social representations 2014 https arxiv org pdf 1403 6652 pdf online learning with random representations 2014 http citeseerx ist psu edu viewdoc download doi 10 1 1 127 2742 rep rep1 type pdf online latent dirichlet allocation with infinite vocabulary 2013 http proceedings mlr press v28 zhai13 pdf web scale k means clustering 2010 https www eecs tufts edu dsculley papers fastkmeans pdf online dictionary learning for sparse coding 2009 https www di ens fr sierra pdfs icml09 pdf density based clustering over an evolving data stream with noise 2006 https archive siam org meetings sdm06 proceedings 030caof pdf knowledge acquisition via incremental conceptual clustering 2004 http www inf ufrgs br engel data media file aprendizagem cobweb pdf online and batch learning of pseudo metrics 2004 https ai stanford edu ang papers icml04 onlinemetric pdf birch an efficient data clustering method for very large databases 1996 https www2 cs sfu ca coursecentral 459 han papers zhang96 pdf time series online learning for time series prediction 2013 https arxiv org pdf 1302 6927 pdf drift detection a survey on concept drift adaptation 
2014 http eprints bournemouth ac uk 22491 1 acm 20computing 20surveys pdf anomaly detection leveraging the christoffel darboux kernel for online outlier detection 2022 https hal laas fr hal 03562614 document interpretable anomaly detection with mondrian p lya forests on data streams 2020 https arxiv org pdf 2008 01505 pdf fast anomaly detection for streaming data 2011 https www ijcai org proceedings 11 papers 254 pdf metric learning online metric learning and fast similarity search 2009 http people bu edu bkulis pubs nips online pdf information theoretic metric learning 2007 http www cs utexas edu users pjain pubs metriclearning icml pdf online and batch learning of pseudo metrics 2004 https ai stanford edu ang papers icml04 onlinemetric pdf graph theory deepwalk online learning of social representations 2014 http www cs cornell edu courses cs6241 2019sp readings perozzi 2014 deepwalk pdf ensemble models optimal and adaptive algorithms for online boosting 2015 http proceedings mlr press v37 beygelzimer15 pdf an implementation is available here https github com vowpalwabbit vowpal wabbit blob master vowpalwabbit boosting cc online bagging and boosting 2001 https ti arc nasa gov m profile oza files ozru01a pdf a decision theoretic generalization of on line learning and an application to boosting 1997 http www face rec org algorithms boosting ensemble decision theoretic generalization pdf expert learning on the optimality of the hedge algorithm in the stochastic regime https arxiv org pdf 1809 01382 pdf active learning a survey on online active learning 2023 https arxiv org ftp arxiv papers 2302 2302 08893 pdf miscellaneous multi output chain models and their application in data streams 2019 https jmread github io talks 2019 03 08 imperial stats seminar pdf a complete recipe for stochastic gradient mcmc 2015 https arxiv org abs 1506 04696 online em algorithm for latent data models 2007 https arxiv org abs 0712 4273 source code is available here https www di ens fr 
cappe code onlineem streamai dealing with challenges of continual learning systems for serving ai in production 2023 https ieeexplore ieee org abstract document 10172871 surveys machine learning for streaming data state of the art challenges and opportunities 2019 https www kdd org exploration files 3 cr 7 machine learning for streaming data state of the art final pdf online learning a comprehensive survey 2018 https arxiv org abs 1802 02871 online machine learning in big data streams 2018 https arxiv org abs 1802 05872v1 incremental learning algorithms and applications 2016 https www elen ucl ac be proceedings esann esannpdf es2016 19 pdf batch incremental versus instance incremental learning in dynamic and evolving data http albertbifet com wp content uploads 2013 10 ida2012 pdf incremental gradient subgradient and proximal methods for convex optimization a survey 2011 https arxiv org abs 1507 01030 online learning and stochastic approximations 1998 https leon bottou org publications pdf online 1998 pdf general purpose algorithms maintaining sliding window skylines on data streams 2006 http www cs ust hk dimitris papers tkde06 sky pdf the sliding dft 2003 https pdfs semanticscholar org 525f b581f9afe17b6ec21d6cb58ed42d1100943f pdf an online variant of the fourier transform a concise explanation is available here https www comm utoronto ca dimitris ece431 slidingdft pdf sketching algorithms for big data https www sketchingbigdata org hyperparameter tuning chacha for online automl 2021 https arxiv org pdf 2106 04815 pdf evaluation delayed labelling evaluation for data streams 2019 https link springer com article 10 1007 s10618 019 00654 y efficient online evaluation of big data stream classifiers 2015 https dl acm org doi pdf 10 1145 2783258 2783372 issues in evaluation of stream learning algorithms 2009 https dl acm org doi pdf 10 1145 1557019 1557060 | awesome awesome-list machine-learning online-machine-learning | ai |
-Full-Stack-Web-Development-with-Django-2.x-and-Angular-8 Full Stack Web Development with Django 2.x and Angular 8, published by Packt. This is the code repository for Full Stack Web Development with Django and Angular 8 [Video] (https www packtpub com web development full stack web development with django and angular 8 video), published by Packt (https www packtpub com utm source github). It contains all the supporting project files necessary to work through the video course from start to finish.

About the video course

In this course you'll take a tour of web development with Django 2.x and Angular 8. We've included an additional bonus section on the features forthcoming in Django 3.x, designed to help you pre-empt upgrade requirements and considerations well in advance. You'll learn to build a database-driven web app for seamless full stack web development. We teach you the best practices you'll need to adopt and provide you with hands-on experience by building a flight reservation application from the ground up. You'll connect Django to a MongoDB database to help efficiently store and track data. You'll create a beautifully styled web application using Angular 8 for an amazing UI experience, all by using this popular front-end tool. By the end of this course you'll be able to build a full-fledged and rich web application with Django and Angular, and gain an early insight into what's coming in Django 3.x.

What you will learn

- Create a data-driven web application
- Connect Django to databases such as SQLite to help store and track data
- Effectively utilize the template system
- Configure Angular to work with the Django framework
- Create amazing user interface (UI) components with Angular 8
- Explore the free Django admin functionality
- Bonus section: get an early introduction to what's new in the forthcoming Django 3.x release

Instructions and navigation

Assumed knowledge: this course targets Python developers who favor web development with Django and now want to expand their horizons by building amazing front ends for their websites using Angular 8.

Technical requirements

Minimum hardware requirements (for successful completion of this course, students will require computer systems with at least the following):
- Operating system: Windows 10
- Processor: 1.4 GHz, 32-bit (x86)
- Memory: 4 GB
- Storage: 2 GB

Recommended hardware requirements:
- OS: OSX Yosemite or above, or Windows 7 or above
- Processor: 2 GHz
- Memory: 8 GB of RAM
- Storage: SSD 128 GB or above

Software requirements:
- OS: Windows 10
- Browser: Chrome
- Visual Studio Code IDE (latest version)
- Python 3.5 or higher
- Django 2.1 or higher
- Angular 8
- pip 10.x or higher
- Node.js LTS 8.9 or higher installed
- npm 5.5.1 or higher installed

Related products

- C Programming by Example [Video] (https www packtpub com application development c programming example video)
- High Performance Computing with Python 3.x [Video] (https www packtpub com application development high performance computing python 3x video utm source github utm medium repository utm campaign 9781789956252)
- Functional Programming in 7 Days [Video] (https www packtpub com application development functional programming 7 days video utm source github utm medium repository utm campaign 9781788990295) | front_end
|
deceptiveidn | deceptive idn phishers https www bluecoat com security blog 2014 05 22 bad guys using internationalized domain names idns are still using internationalized domain names https en wikipedia org wiki internationalized domain name to trick users this project uses computer vision to automatically check if idns have a deceptive reading usage to use the tool invoke it with python3 bash python3 deceptiveidn py xn e1awd7f com idn com xn e1awd7f com deceptive idn possible readings cpic com 207 235 47 22 epic com 199 204 56 88 dependencies this script requires python3 pillow the python3 imaging library and tesseract ocr https github com tesseract ocr tesseract brew install python3 tesseract pip3 install pillow license this project copyright ryan stortz withzombies and is available under the apache 2 0 license | security computer-vision idn python3 | ai |
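The usage above decodes a punycode (xn--) domain into a Unicode reading before the OCR comparison. Here is a minimal sketch of just that decoding step, using only Python's built-in idna codec; the example domain bücher.de is ours and not from the tool, and the Pillow/tesseract visual comparison is not shown:

```python
def idn_to_unicode(domain: str) -> str:
    """Decode a punycode (xn--) domain into its Unicode reading."""
    # Python's built-in "idna" codec handles the xn-- prefix label by label.
    return domain.encode("ascii").decode("idna")


def unicode_to_idn(domain: str) -> str:
    """Encode a Unicode domain back to its ASCII punycode form."""
    return domain.encode("idna").decode("ascii")


if __name__ == "__main__":
    # Round-trip the classic example domain.
    print(idn_to_unicode("xn--bcher-kva.de"))  # bücher.de
    print(unicode_to_idn("bücher.de"))         # xn--bcher-kva.de
```

The real tool goes further: it renders the decoded reading to an image and runs OCR over it to find visually deceptive lookalikes such as cpic.com versus epic.com.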
mlr | mlr img src man figures logo png align right package website release https mlr mlr org com dev https mlr mlr org com dev machine learning in r badges start tic https github com mlr org mlr workflows tic badge svg branch main https github com mlr org mlr actions cran status badge https www r pkg org badges version ago mlr https cran r project org package mlr cran checks https badges cranchecks info worst mlr svg https cran r project org web checks check results mlr html cran downloads https cranlogs r pkg org badges mlr https cran r project org package mlr stackoverflow https img shields io badge stackoverflow mlr blue svg https stackoverflow com questions tagged mlr lifecycle https img shields io badge lifecycle retired orange svg https lifecycle r lib org articles stages html codecov https codecov io gh mlr org mlr branch main graph badge svg https app codecov io gh mlr org mlr badges end cran release site https cran r project org package mlr online tutorial https mlr mlr org com index html changelog https mlr mlr org com news index html stackoverflow https stackoverflow com questions tagged mlr mlr mattermost https lmmisld lmu stats slds srv mwn de mlr invite blog https mlr org com deprecated mlr is considered retired from the mlr org team we won t add new features anymore and will only fix severe bugs we suggest to use the new mlr3 https mlr3 mlr org com framework from now on and for future projects not all features of mlr are already implemented in mlr3 if you are missing a crucial feature please open an issue in the respective mlr3 extension package https github com mlr org mlr3 wiki extension packages and do not hesitate to follow up on it installation release r install packages mlr development r remotes install github mlr org mlr citing mlr in publications please cite our jmlr paper https jmlr org papers v17 15 066 html bibtex https www jmlr org papers v17 15 066 bib some parts of the package were created as part of other publications if you use these 
parts please cite the relevant work appropriately an overview of all mlr related publications can be found here https mlr mlr org com articles tutorial mlr publications html introduction r does not define a standardized interface for its machine learning algorithms therefore for any non trivial experiments you need to write lengthy tedious and error prone wrappers to call the different algorithms and unify their respective output additionally you need to implement infrastructure to resample your models optimize hyperparameters select features cope with pre and post processing of data and compare models in a statistically meaningful way as this becomes computationally expensive you might want to parallelize your experiments as well this often forces users to make crummy trade offs in their experiments due to time constraints or lacking expert programming skills mlr provides this infrastructure so that you can focus on your experiments the framework provides supervised methods like classification regression and survival analysis along with their corresponding evaluation and optimization methods as well as unsupervised methods like clustering it is written in a way that you can extend it yourself or deviate from the implemented convenience methods and construct your own complex experiments or algorithms furthermore the package is nicely connected to the openml https github com openml openml r r package and its online platform https www openml org which aims at supporting collaborative machine learning online and allows to easily share datasets as well as machine learning tasks algorithms and experiments in order to support reproducible research features clear s3 interface to r classification regression clustering and survival analysis methods abstract description of learners and tasks by properties convenience methods and generic building blocks for your machine learning experiments resampling methods like bootstrapping cross validation and subsampling extensive 
visualizations e g roc curves predictions and partial predictions simplified benchmarking across data sets and learners easy hyperparameter tuning using different optimization strategies including potent configurators like iterated f racing irace sequential model based optimization variable selection with filters and wrappers nested resampling of models with tuning and feature selection cost sensitive learning threshold tuning and imbalance correction wrapper mechanism to extend learner functionality in complex ways possibility to combine different processing steps to a complex data mining chain that can be jointly optimized openml connector for the open machine learning server built in parallelization detailed tutorial miscellaneous simple usage questions are better suited at stackoverflow using the mlr https stackoverflow com questions tagged mlr tag please note that all of us work in academia and put a lot of work into this project simply because we like it not because we are paid for it new development efforts should go into mlr3 https github com mlr org mlr3 we have a own style guide which can easily applied by using the mlr style from the styler https github com r lib styler package see our wiki https github com mlr org mlr3 wiki style guide styler mlr style for more information talks workshops etc mlr outreach https github com mlr org mlr outreach holds all outreach activities related to mlr and mlr3 | machine-learning data-science tuning cran r-package predictive-modeling classification regression statistics r survival-analysis imbalance-correction tutorial mlr learners hyperparameters-optimization feature-selection multilabel-classification clustering stacking | ai |
node-blockchain-server | node blockchain server this library is outdated you can find a similar example application as a part of the ethereumjs devp2p https github com ethereumjs ethereumjs devp2p package | blockchain |
|
llm_training_handbook | the large language model training handbook an open collection of methodologies to help with successful training of large language models this is technical material suitable for llm training engineers and operators that is the content here contains lots of scripts and copy n paste commands to enable you to quickly solve your problems if you are not interested in technical details but want more of a detailed overview and concepts please refer to the sister the large language model training playbook https github com huggingface large language model training playbook instead note the list of topics will expand over time at the moment filling in only a subset model parallelism parallelism maximizing throughput throughput tensor precision data types dtype training hyper parameters and model initializations hparams instabilities instabilities debugging software and hardware failures debug slurm slurm resources resources license the content of this site is distributed under attribution sharealike 4 0 international license cc by sa unless specified otherwise the code in this repo is licensed under apache license version 2 0 https www apache org licenses license 2 0 | cuda large-language-models llm nccl nlp performance python pytorch scalability troubleshooting | ai |
AWS_Cloud_Compliance_Automation_with_Python | aws cloud compliance automation with python as a cloud engineering team we take care of the aws environment and make sure it is in compliance with the organizational policies we use aws cloud watch in combination with aws lambda to govern the resources according to the policies for example we trigger a lambda function when an amazon elastic block store ebs volume is created we use amazon cloudwatch events that allow us to monitor and respond to ebs volumes that are of type gp2 and convert them to type gp3 | cloud |
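A hedged sketch of how such a rule can look in a Lambda function. The event shape below assumes a CloudTrail-style CreateVolume event delivered via CloudWatch Events, and the boto3 conversion call appears only as a comment; none of this is code taken from the repository itself:

```python
def needs_gp3_migration(event: dict) -> bool:
    """Return True when a CreateVolume event reports a gp2 volume."""
    response = event.get("detail", {}).get("responseElements", {})
    return response.get("volumeType") == "gp2"


def handler(event, context=None):
    """Lambda entry point: flag newly created gp2 volumes for conversion to gp3."""
    if not needs_gp3_migration(event):
        return {"action": "none"}
    volume_id = event["detail"]["responseElements"]["volumeId"]
    # In the real function this is where boto3 would perform the conversion, e.g.:
    #   ec2 = boto3.client("ec2")
    #   ec2.modify_volume(VolumeId=volume_id, VolumeType="gp3")
    return {"action": "convert-to-gp3", "volume_id": volume_id}
```

Keeping the event inspection in a pure function (`needs_gp3_migration`) makes the compliance rule easy to unit-test without touching AWS.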
|
deck | p align center img src https get deck com wp content uploads 2022 01 app logo png alt deck logo height 100 p h2 align center modern extendable local web development studio h3 deck is powerful and high performant local web development studio unlike any other install try out more than 40 open source stacks a local web development studio to spin up almost any development environment effortlessly seamless gui to create manage multiple development environments cross platform it runs on macos windows ubuntu automatic https powered by letsencrypt test your code by switching multiple php nodejs apache nginx versions a free open source marketplace to install share local development environments it is highly extensible by modifying docker docker compose files create your own docker projects as custom dev environment native support for docker doesn t require docker desktop app roadmap live reload for php javascript apps automatic sync with remote server using rsync share projects previews as public urls deploy projects to aws google cloud digitalocean p align center strong download deck strong p p align center a href https get deck com download macos macos a a href https get deck com download ubuntu ubuntu a a href https get deck com download windows windows a p p align center a href https github com sfx101 docker stacks issues img alt github issues src https img shields io github issues sfx101 docker stacks style for the badge a a href https github com sfx101 docker stacks stargazers img alt github stars src https img shields io github stars sfx101 docker stacks style for the badge a img alt github all releases src https img shields io github downloads sfx101 docker stacks total style for the badge a href https github com sfx101 docker stacks blob master license img alt github license src https img shields io github license sfx101 docker stacks style for the badge a p p align center img alt github workflow status src https github com deck app lamp stack actions 
workflows docker image yml badge svg img alt github workflow status src https github com deck app lemp stack actions workflows docker image yml badge svg img alt github workflow status src https github com deck app nginx stack actions workflows docker image yml badge svg img alt github workflow status src https github com deck app apache stack actions workflows docker image yml badge svg img alt github workflow status src https github com deck app laravel actions workflows docker image yml badge svg img alt github workflow status src https github com deck app wordpress actions workflows docker image yml badge svg img alt github workflow status src https github com deck app mysql actions workflows docker image yml badge svg img alt github workflow status src https github com deck app mongodb actions workflows docker image yml badge svg p deck light mode screenshot https get deck com wp content uploads 2022 03 screenshot 2022 03 07 at 4 27 32 am png automatic https deck automatic https https get deck com wp content uploads 2022 03 screenshot 2022 03 07 at 4 25 08 am png deck s inbuilt ssl engine powered by letsencrypt and a supercharged proxy layer enables full https on localhost apps unlimited projects multiple server configurations deck marketplace https get deck com wp content uploads 2022 03 screenshot 2022 03 07 at 4 23 39 am png create any number of development environments right from your localhost deck s integrated marketplace lets you spin up stacks with just a click of a button popular web technologies such as lamp lemp mern mean laravel symfony wordpress magento many more are available out of the box built in log output and terminal deck terminal https get deck com wp content uploads 2022 03 screenshot 2022 03 07 at 4 26 12 am png see real time logs from your projects access terminal to run shell commands composer or npm native support for docker without docker desktop deck has no dependency on docker desktop app to run docker containers it just works out 
of the box with native support for docker powered by multipass on macos wsl 2 on windows downloads see a full list of downloads here https github com sfx101 deck releases get started 1 creating a project https get deck com docs creating project 2 viewing project logs https get deck com docs project log 3 accessing project shell https get deck com docs project shell documentation https getdeck io docs | laradock laradock-gui laravel docker docker-compose containers lamp-stack lemp-stack deck stack flights | front_end |
PhpFromZero | phpfromzero banner image doc assets phpfromzero banner png p align center b a comprehensive project template for beginners to learn php web development from scratch b p p align center a href https github com dahkenangnon phpfromzero style text decoration none img src https img shields io github stars dahkenangnon phpfromzero style social alt github stars a a href https github com dahkenangnon phpfromzero blob master license style text decoration none img src https img shields io github license dahkenangnon phpfromzero alt license a p objective phpfromzero is a handcrafted project template aimed at helping beginners in php web development it provides a solid foundation for learning how to create powerful websites from scratch using an object oriented approach and the model view controller mvc design pattern without relying on external dependencies this project template is well documented and designed to be easily understandable for junior php developers by using phpfromzero you will gain hands on experience in creating php websites with an object oriented approach develop a clear understanding of how web applications work acquire the skills needed to learn and work with various php frameworks possess the knowledge to build your own php framework with ease features object oriented php learn how to structure your code using object oriented programming oop principles for improved maintainability and code organization mvc design pattern understand and implement the model view controller mvc design pattern to separate concerns and achieve better code structure documentation detailed documentation guides you through the project structure code explanations and usage instructions educational purpose phpfromzero is designed to facilitate learning and provide a solid foundation for beginners in php web development requirements to run phpfromzero make sure you have the following wamp or xampp installed and configured mysql database php 8 or newer attention 
phpfromzero requires PHP 8 or newer to run. This project is used for educational purposes, such as training my mentees. Use this template at your own risk: if you want to use this template in a real project, it is highly recommended to improve the security and performance aspects. For any questions or discussions, feel free to contact dah kenangnon gmail com or join the discussion on the github repository https github com dahkenangnon phpfromzero discussions

Try it on your computer

To run PhpFromZero on your computer, follow these steps:

1. Clone the repository: git clone https github com dahkenangnon phpfromzero git
2. Create a new database, as mentioned in env local php
3. Import the php from zero sql file into the newly created database
4. Run the following command in the project directory:

```bash
php -S localhost:9000 -t public
```

5. Open your browser and navigate to http localhost 9000 to view the pages.

Please note that, depending on your environment's configuration, you may encounter some errors.

Extending

PhpFromZero is designed to help junior developers understand the inner workings of web applications and create their own PHP projects with a well-structured and maintainable architecture. It provides a simplified approach for educational purposes and may not be suitable for production environments.

Thank you for choosing PhpFromZero. Give it a star on github https github com dahkenangnon phpfromzero if you find it helpful. We encourage you to give it a try, explore the features, and contribute to its improvement. Happy coding! | php mvc oop poo php-project-example php-project | front_end
ml_implementation | implementation of machine learning introduction these are the implementation of machine learning algorithms from scratch x linear regression linear regression x least square method linear regression least square method py x logistic regression logistic regression x logistic regression logistic regression logistic regression py x distributed logistic regression logistic regression distributed logistic regression py x bayes bayes x navie bayes bayes navie bayes py x decision tree decision tree x c4 5 decision tree c45 decision tree decision tree py x id3 decision tree id3 decision tree decision tree py random forest gradient boosting tree x neural network neural network x perception neural network perception py x neural network neural network neural network py x svm svm x svm svm svm py fm fm random forest random forest kmeans kmeans condictional random field condictional random field condictional random field condictional random field condictional random field py others others sigmoid others sigmoid standard deviation others standard deviation normal distribution others normal distribution auto gradient others autogradient x random walk others random walk | machine-learning python implementation | ai |
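As a taste of the from-scratch approach this list advocates, here is a minimal sketch of the least square method for simple linear regression (closed-form fit of y = w*x + b); this is our own illustrative example, not the repository's linear regression implementation:

```python
def least_squares_fit(xs, ys):
    """Closed-form least squares for y = w * x + b on 1-D data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # w = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    w = num / den
    b = mean_y - w * mean_x
    return w, b


if __name__ == "__main__":
    # Points lying exactly on y = 2x + 1 recover w = 2, b = 1.
    w, b = least_squares_fit([0, 1, 2, 3], [1, 3, 5, 7])
    print(w, b)  # 2.0 1.0
```

The same normal-equation idea generalizes to multiple features, which is where the matrix form used by the full linear regression implementations comes in.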
starter-blog | starter blog created by rachelle rathbone hey everyone thanks for taking my packt publishing course if you run into any issues while working your way through the sections feel free to ping me on twitter coding love or email me at gatsbystartblog gmail com i ll do my best to respond to you as quickly as possible happy learning note if you try to run gatsby develop on the master branch before completing section six you will get an error you need to set up a contentful account and add environment variables to a env file for this to run without error prerequisites reactjs you should have at least a basic understanding of how react works the focus of this course is gatsbyjs so while we ll be using react i won t be covering any react specifics node version 8 or higher installed gatsby supports any version of node with a status of maintenance active or current node v6 reached its end of life status in april 2019 so if you have this version or anything lower installed gatsby won t run on your machine if you don t already have node installed you ll need to follow the download instructions on the nodejs site https nodejs org en download if it is already installed check the version by running node v or node version for those of you 6 or earlier you ll need to do some googling on how to update to a later version how you do that will vary depending on how it was installed one tip i will give if you upgrade but then have trouble pointing your machine at the latest version install homebrew https docs brew sh installation if you don t already have it run brew install nvm nvm is a node version manager which will make switching between node versions incredibly simple add a nvmrc file to the root of your project and simply add the version number you wish to use to the file for example if you installed 10 6 0 in your nvmrc file all you need to add is v10 16 0 jump ahead section 1 getting started with gatsby js sectionone section 2 querying data with graphql sectiontwo 
section 3 gatsby plugins sectionthree section 4 programmatically creating pages sectionfour section 5 working with images in gatsby sectionfive section 6 contentful content infrastructure sectionsix section 7 working with images in gatsby sectionseven sectionone what is gatsbyjs what was covered in this section what is gatsbyjs how it works and why it could be a useful tool for you the gatsby community and noteworthy people to follow on twitter resources download discord https discordapp com click the plus button in the left panel then join a server and type in reactiflux once you ve joined this server scroll through the list on the left until you reach tools then click on the gatsby tag this is a great place for you to come if you ever get stuck and are looking for help installing gatsby and cloning our gatsby project what was covered in this section installing gatsby globally and cloning our gatsby repository https github com packtpublishing starter blog a walkthrough of the structure of our application running our app resources something went wrong when trying to install gatsby globally maybe you missed an important prerequisite https www gatsbyjs org tutorial part zero non boring lorem ipsum https www shopify com partners blog 79940998 15 funny lorem ipsum generators to shake up your design mockups you might also like bacon ipsum https baconipsum com or chuck norris ipsum https vincentloy github io chuck facts ipsum overview of typography and styling what was covered in this section why we are using typography how it s set up and how you can make changes how we ll style our application other styling alternatives available in gatsbyjs resources typography js https www gatsbyjs org docs typography js learn more about typography https kyleamathews github io typography js more on styling https www gatsbyjs org tutorial part two css in js with gatsby using react helmet to add metadata what was covered in this section installing gatsby plugin react helmet and 
discussing why we need it, creating our seo.js file and adding some basic content, and adding our new SEO component to our blog. Code you won't want to type:

```jsx
htmlAttributes={{ lang }}
title={title}
meta={[
  { name: `description`, content: metaDescription },
  { property: `og:title`, content: title },
  { property: `og:description`, content: metaDescription },
  { property: `og:type`, content: `website` },
  { name: `twitter:card`, content: `summary` },
  { name: `twitter:creator`, content: `Rachelle Rathbone` },
  { name: `twitter:title`, content: title },
  { name: `twitter:description`, content: metaDescription },
].concat(meta)}
```

Resources: install gatsby-plugin-react-helmet (https www gatsbyjs org packages gatsby plugin react helmet) to easily add metadata to your site.

Creating a new page and linking between pages. What was covered in this section: duplicating our index page so we have a dummy page to link to; adding a temporary element on each page that we'll use to link pages; importing Gatsby's Link component and linking our pages.

sectiontwo Querying data with GraphQL

Introduction to GraphQL. What was covered in this section: what GraphQL is and how we can use it in our app; introducing the GraphQL playground; writing our first GraphQL queries.

Querying data in pages with GraphQL. What was covered in this section: import graphql and add our query to our index file; pass that data into our component and check out the data in the console; update some of our content to use our data. Our query from the first video:

```graphql
{
  site {
    siteMetadata {
      title
      author
    }
  }
}
```

Using the StaticQuery API. What was covered in this section: attempting to use a page query inside a component; what is the StaticQuery API; using StaticQuery to retrieve data directly from a component.

Updating seo.js with the useStaticQuery hook. What was covered in this section: comparing the StaticQuery component to the useStaticQuery hook; configuring seo.js with our useStaticQuery hook; updating seo.js to consume data from our GraphQL query.

sectionthree Source plugins. What was covered in this section: what is a source plugin and what do they do; writing a
query to retrieve filesystem data updating our config to retrieve more filesystem data

transformer plugins what was covered in this section the role of the transformer plugin installing our first transformer plugin and updating our config writing a graphql query to get the content from our markdown file resources install gatsby transformer remark https www gatsbyjs org packages gatsby transformer remark to parse the content of your markdown files

welcome to the gatsby plugin library what was covered in this section introducing the gatsby plugin library installing more plugins to use in our app updating our config resources plugin library https www gatsbyjs org plugins install gatsby plugin sitemap https www gatsbyjs org packages gatsby plugin sitemap to help search engines identify the purpose of your pages install gatsby plugin google analytics https www gatsbyjs org packages gatsby plugin google analytics for in depth website analytics install gatsby plugin google tagmanager https www gatsbyjs org packages gatsby plugin google tagmanager so you can add tracking on your site

tracking events with plugins what was covered in this section why tracking is an important part of any production site and how you can utilize it setting up a google analytics account updating our config and testing that tracking is working resources google analytics account creation https analytics google com analytics web provision authuser 2 provision

increase your click rates with social cards what was covered in this section what social cards are used for installing the gatsby remark social cards plugin and updating our config making some more changes to our seo js to include social card metadata resources install gatsby remark social cards https www gatsbyjs org packages gatsby remark social cards to increase click rates

options for your social card plugin config

```js
options: {
  title: {
    field: "title",
    font: "DejaVuSansCondensed",
    color: "black",  // black|white
    size: 48,        // 16|24|32|48|64
    style: "bold",   // normal|bold|italic
    x: null,         // will default to xMargin
    y: null,         // will default to yMargin
  },
  meta: {
    parts: [{ field: "author" }, { field: "date", format: "mmmm dS" }],
    font: "DejaVuSansCondensed",
    color: "black",  // black|white
    size: 24,        // 16|24|32|48|64
    style: "normal", // normal|bold|italic
    x: null,         // will default to xMargin
    y: null,         // will default to cardHeight - yMargin - size
  },
  background: "#FFFFFF", // background color for the card
  xMargin: 24, // edge margin used when x value is not set
  yMargin: 24, // edge margin used when y value is not set
}
```

sectionfour programmatically creating pages

making our post markdown files what was covered in this section installing some plugins to support markdown files looking at the markdown examples adding an additional markdown file resources install gatsby remark prismjs https www gatsbyjs org packages gatsby remark prismjs to add syntax highlighting install gatsby remark smartypants https www gatsbyjs org packages gatsby remark smartypants to replace dumb punctuation marks with smart ones install gatsby remark copy linked files https www gatsbyjs org packages gatsby remark copy linked files to copy local markdown files to your public directory

prismjs option config

```js
options: {
  // Class prefix for <pre> tags containing syntax highlighting;
  // defaults to 'language-' (e.g. <pre class="language-js">).
  // If your site loads Prism into the browser at runtime (e.g. for use
  // with libraries like react-live), you may use this to prevent Prism
  // from re-processing syntax. This is an uncommon use-case though;
  // if you're unsure, it's best to use the default value.
  classPrefix: "language-",
  // This is used to allow setting a language for inline code
  // (i.e. single backticks) by creating a separator.
  // This separator is a string and will do no white-space stripping.
  // A suggested value for English speakers is the non-ascii character '›'.
  inlineCodeMarker: null,
  // This lets you set up language aliases. For example, setting this to
  // '{ sh: "bash" }' will let you use the language "sh" which will
  // highlight using the bash highlighter.
  aliases: {},
  // This toggles the display of line numbers globally alongside the code.
  // To use it, add the following line in src/layouts/index.js right after
  // importing the prism color scheme:
  //   require("prismjs/plugins/line-numbers/prism-line-numbers.css")
  // Defaults to false. If you wish to only show line numbers on certain
  // code blocks, leave false and use the {numberLines: true} syntax below.
  showLineNumbers: false,
  // If setting this to true, the parser won't handle and highlight inline
  // code used in markdown, i.e. single backtick code like `this`.
  noInlineHighlight: false,
  // This adds a new language definition to Prism or extends an already
  // existing language definition. More details on this option can be found
  // under the header "add new language definition or extend an existing
  // language" below.
  languageExtensions: [
    {
      language: "superscript",
      extend: "javascript",
      definition: {
        superscript_types: /(SuperType)/,
      },
      insertBefore: {
        function: {
          superscript_keywords: /(superif|superelse)/,
        },
      },
    },
  ],
}
```

building our web app s post template what was covered in this section creating our first template setting up the basic skeleton that will be the blueprint for our posts importing and passing in the components we will need for our template and temporarily adding hardcoded props

working with the createpage api in gatsby node js what was covered in this section what are the gatsby node apis implementing the createpage api and adding a graphql query mapping over our data to programmatically create pages resources read all about gatsby node apis https www gatsbyjs org docs node apis including oncreatenode which can be used in gatsby node js

adding the oncreatenode api and updating our query what was covered in this section adding the oncreatenode api to gatsby node js creating previous and next variables for easier navigation on the client completing createpage and checking out our new programmatically generated pages

writing a page query to our web app template what was covered in this section writing a pagequery in our template to pull in data from our markdown files replacing hard coded content with the results from our data
updating our template to display our posts content using innerhtml

code for our previous and next links

```jsx
<ul>
  <li className="post-navigation previous">
    <Link to={previous.fields.slug} rel="prev">
      {previous.frontmatter.title}
    </Link>
  </li>
  <li className="post-navigation next">
    <Link to={next.fields.slug} rel="next">
      {next.frontmatter.title}
    </Link>
  </li>
</ul>
```

sectionfive working with images in gatsby

before we start don t forget to importing files with graphql what was covered in this section importing an image the regular way installing image plugins and writing a graphql query to allow us to access our images updating our index file to consume the image from our data resources install gatsby transformer sharp https www gatsbyjs org packages gatsby transformer sharp and gatsby plugin sharp https www gatsbyjs org packages gatsby plugin sharp

using gatsby image what was covered in this section installing gatsby image an image component why gatsby image is an important tool for image optimization updating our index file to use the image component resources install gatsby image https www gatsbyjs org packages gatsby image

adding images to our markdown files what was covered in this section adding images to markdown files installing a plugin that will allow us to successfully display images in markdown files updating our config and setting some options for our plugin resources install gatsby remark images https www gatsbyjs org packages gatsby remark images so that you can display images from markdown files

adding videos to our app what was covered in this section installing the gatsby remark responsive iframe that will wrap any videos we install in a container creating a new post and markdown file add an iframe with youtube link to our file and testing it out in the browser resources install gatsby remark responsive iframe https www gatsbyjs org packages gatsby remark responsive iframe to allow you to embed youtube videos in your markdown files

sectionsix contentful content infrastructure what is contentful
what was covered in this section what a cms is and the different types of cmss pros and cons of traditional and headless cmss how contentful is different from its competitors creating a contentful account what was covered in this section creating a contentful account adding fields to our content model adding some content to our space resources contentful account creation https app contentful com connecting gatsby to contentful what was covered in this section installing the necessary plugins to work with contentful data with gatsby adding our contentful spaceid and accesstoken implementing a env file to make our app more secure resources the source plugin https www gatsbyjs org packages gatsby source contentful we need to communicate with contentful to render rich text you ll need this npm package https www npmjs com package contentful rich text react renderer updating gatsby node with a contentful query what was covered in this section visiting the playground to inspect our new query removing content we no longer need in our app making the necessary changes in gatsby node updating our app to consume posts from contentful what was covered in this section updating our index file to render content from contentful making some minor changes to render most of our fields from contentful using contentful rich text types contentful rich text react renderer and documenttoreactcomponents to render our rich text resources contentful rich text react renderer https www contentful com developers docs javascript tutorials rendering contentful rich text with javascript docs sectionseven deploying your app deploying and hosting what was covered in this section looking at our deployment options why netlify discussing the option to add pathprefix creating a netlify account what was covered in this section creating your netlify account setting up an empty repository in github updating the remote to point at your own repo resources sign up for a netlify account https app netlify com 
signup linking to your repository and deploying your web app what was covered in this section connecting your repository to netlify deploying your app checking out your production site updating your app what was covered in this section editing urls to point to your production site adding a link from our posts to our home page deploying our changes upping the game with gatsby preview what was covered in this section let s talk about gatsby preview creating a trial account witnessing the awesome power of live updates in action resources sign up for a trial gatsby preview https www gatsbyjs com dashboard login account | front_end |
|
COSC2196-assignment1 | introduction to information technology introduction to information technology | server |
|
GAAS | gaas towards l5 autonomous flying cars a robust framework extends gaas with lidars star https img shields io github stars generalized intelligence gaas style flat square fork https img shields io github forks generalized intelligence gaas style flat square watch https img shields io github watchers generalized intelligence gaas style flat square bsd 3 https img shields io github license generalized intelligence gaas style flat square twitter https img shields io twitter follow gaas dev style social about gaas gaas is an open source program designed for fully autonomous vtol a k a flying cars and drones gaas stands for generalized autonomy aviation system we hope to accelerate human use of the sky through the full autonomy of flying vehicles this project started in 2016 as a hobby for two students 2019 we open source the project and hope to develop gaas full time in the long term gaas provides a fully autonomous flight platform based on lidar hd map relocalization path planning and other modules for aircraft in contrast to the autopilot technology previously available only for consumer grade drones gaas aims for robust fully autonomous flight for human carrying and can be easily combined with national air traffic control at gaas you can see many of the automotive grade ag technologies that were previously only available in self driving cars the whole framework is coupled loosely so you can customize your own modules and easily add them to gaas previews image https github com cyanine gi gaas contrib raw main algorithms preview imgs gaas algorithms rviz preview 20200401 png image https github com cyanine gi gaas contrib raw main algorithms preview imgs gaas algorithms astar planning preview 20210409 png image https github com cyanine gi gaas contrib raw main algorithms preview imgs gaas algorithms rqt graph 20200401 png image https github com cyanine gi gaas contrib raw main algorithms preview imgs gaas algorithms dynamic objects and replanning png a video has 
been uploaded to show the whole pipeline you may need to download this video https github com cyanine gi gaas contrib resources blob main demos gaas contrib test1 20210419 compressed mp4 raw true

differences between gaas deprecated and new gaas

we use lidars as main sensor rather than vision algorithms gaas deprecated is based on computer vision cv but a fully vision algorithms based framework is not robust enough for autonomous flying for flying cars and large cargo drones vision based algorithms suffer from

1 lack of robustness especially at night or in over exposed conditions when air vehicles are flying at high speed the localization is not stable enough which may cause severe accidents vital to large air vehicles
2 computer vision is computationally expensive and does not easily run in real time on mobile devices
3 the neural network based approach is accident prone in extreme scenarios and the failures are not easily reproducible

these problems are not allowed to occur in manned flight scenarios therefore the introduction of lidar seems to be necessary at present that s why we made new gaas from scratch and refactored all modules in cpp

build with tested on os ubuntu 18 04 px4 for simulation only 1 8 0

step 1 check your network status

```sh
wget www.google.com
```

step 2 tools optional install cuda 10 2 for all gpu based algorithms like icp lidar localization and the gpu version of ndt localization you may need to upgrade cmake to at least 3 13 for building package icp lidar localization

```sh
sudo apt install vim bwm-ng htop tmux git net-tools cmake-gui iotop curl
```

step 3 docker for simulation only

```sh
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
sudo usermod -aG docker your_username
docker pull gaas/mavros-gazebo7-px4
```

step 4 ros melodic

```sh
./install_ros_melodic.sh
```

step 5 opencv 3 4 5

```sh
sudo apt install cmake-qt-gui
# download opencv-3.4.5 and unzip
cd opencv-3.4.5
mkdir build && cd build
cmake-gui ..   # configure your opencv cmake options in cmake-gui
make -j4
sudo make install
```

step 6 glog

```sh
git clone https://github.com/google/glog.git
cd glog
git checkout -b v0.4.0
mkdir build && cd build
cmake ..
make
sudo make install
```

step 7 pcl 1 8 0 build from source

```sh
# download pcl-1.8.0 and unzip
cd pcl-1.8.0
mkdir build && cd build
cmake-gui ..   # configure your pcl cmake options in cmake-gui
make -j4
sudo make install
```

step 8 optional upgrade your gazebo for simulation

```sh
cd GAAS/simulation
./upgrade_gazebo.sh
```

getting started

to build the project setup all dependencies run

```sh
./build_all.sh
```

to run gaas contrib algorithms

```sh
cd algorithms
./run_gaas_contrib_algorithms.sh
```

start simulation or play a rosbag instead

```sh
cd simulation/scripts
./prepare_simulation.sh
# or
rosbag play --clock path/to/your/rosbag
```

and checkout your l5 flying car demo in simulation environment

license gaas is under bsd 3 clause license

features
1 simulation env with 32 lines lidar and stereo cameras
2 spinning lidar mapping and ndt matching localization

check out simulation readme md to get more details of simulation env setup

roadmap
1 gazebo simulation env construction including spinning lidars and non repetitive lidars and stereo cameras
  1 livox horizon forward stereo camera done
  2 velodyne hdl 32 forward stereo camera done
2 accelerate compiling and deployment of gaas
3 implement some lidar mechanical solid state based algorithms and implement one key start in the simulation environment

checklist
1 lidar points to image projection done
2 euclidean cluster extraction done
3 global coordinate based hd map building done
4 ndt lidar localization cpu cuda done
5 downsampling node done
6 a* path planner done
7 refactored px4 offboard commander done
8 dynamic obstacles generation and replanning done
9 jetson agx xavier adaptation done
10 interactive gui target selector in hd maps done
11 multiple submaps switching todo
12 imu preintegration and high frequency localization done
13 vtol mode switching todo
14 decentralized robust ground control station todo
15 generalized flight controller state management done
16 px4 state reporter
done 17 hud module done 18 cuda based multiple lidar pointclouds icp localization done 19 ground points removal preprocessing done 20 system state surveillance service done 21 http server on ground control station todo 22 multiple spinning lidar support done 23 airsim simulation env support done current status adding logics for flight stage manager module including flight stage transfer service clients triggered by mission config file and servers including localization module flight control commander module and target navigation module | drone drones autonomous-quadcoptor autonomous flying-car aviation flight-controller flight uav vtol e-vtol evtol autonomous-vehicles autonomous-driving lidar hd-map | os |
ddev | ddev logo images ddev logo svg circleci https circleci com gh ddev ddev svg style shield https circleci com gh ddev ddev project is maintained https img shields io maintenance yes 2024 svg gitpod ready to code https img shields io badge gitpod ready to code blue logo gitpod https gitpod io https github com ddev ddev a href https github com codespaces new hide repo select true amp ref 20221220 codespaces amp repo 80669528 amp machine basiclinux32gb amp location westus2 img src https github com codespaces badge svg alt open in github codespaces style max width 100 height 20px a ddev is an open source tool for running local web development environments for php python and node js ready in minutes its powerful flexible per project environment configurations can be extended version controlled and shared ddev allows development teams to adopt a consistent docker workflow without the complexities of bespoke configuration get started 1 check system requirements https ddev readthedocs io macos intel and apple silicon windows 10 11 wsl2 linux gitpod https www gitpod io and github codespaces https github com codespaces 2 install docker colima and ddev https ddev readthedocs io en latest users install 3 try a cms quick start guide https ddev readthedocs io en latest users quickstart if you need help our friendly community provides great support https ddev readthedocs io en latest users support highlighted features quickly create local web development environments based on code repositories with minimal configuration import a database to any of your local environments import upload files to match the project e g drupal sites default files or wordpress wp content uploads customizable integration with hosting platforms like platform sh https platform sh pantheon https pantheon io acquia https www acquia com and others run commands within the docker environment using ddev exec view logs from the web and database containers use ddev ssh to explore the linux environment inside 
the container list running projects with ddev list snapshot databases with ddev snapshot temporarily share your development website with others using ddev share create custom commands as simple shell scripts enjoy effortless trusted https support extend and customize environments as much or as little as you need to run ddev to see all the commands https ddev readthedocs io en latest users usage cli contributing see how can i contribute to ddev in the faq https ddev readthedocs io en latest users usage faq and the contributing contributing md page overview of github contributions https repobeats axiom co api embed 941b040a17921e974655fc01d7735aa350a53603 svg repobeats analytics image wonderful sponsors ddev featured sponsor logos https ddev com resources featured sponsors svg | drupal wordpress development docker macos linux windows typo3 php mariadb local-development ddev magento magento2 laravel backdrop craftcms moodle nodejs | front_end |
react-take-home-test | dailypay front end engineering challenge welcome candidate and thank you for taking the time to complete the dailypay take home challenge for our senior frontend engineer position you will have 2 days to complete the assignment once you have completed your solution please reply with a link to a github repository and instructions on how to install run the application the goal of this challenge is to build out a movie awards 2021 interactive ballot please clone this repository and submit it once you are finished here are the rules of this challenge you must 1 build an application that displays a list of categories and nominees please follow the design in the wireframe below run yarn start to start the application 2 run yarn backend to start the server and get access to api methods such as getballotdata use the react useeffect hook to fetch the ballot data from the provided api and save it to state by using the react usestate hook useeffect documentation https reactjs org docs hooks effect html fetch api documentation https developer mozilla org en us docs web api fetch api using fetch usestate documentation https reactjs org docs hooks state html 3 when you click on a nominee we should highlight the nominee card and save the selections using the react usestate hook a user can only select one nominee per category and we should be able to see all of their selections highlighted the selected nominee card should follow the style guides below 4 make the layout responsive with at least one breakpoint your choice as to how it looks on a smaller screen width 5 once the user is finished making their selections they can click on a submit button that displays a results modal screen a user can dismiss the modal by clicking on the close button follow the wireframe below requirements 1 all navigation should happen in the same page 2 demonstrate use of react hooks 3 demonstrate knowledge of component modularization 4 utilize css to create the layout of the 
page add hover styles to the items the user is interacting with 5 create components as you feel is best suited for your solution ballot wireframe src take home wire jpg raw true ballot wireframe ballot success modal wireframe src take home success jpg raw true ballot success modal wireframe bonuses 2 make it pretty 3 make it accessible 4 add unit tests styling guidelines use the roboto google font use the following colors page background 0d2436 https via placeholder com 15 0d2436 000000 text 0d2436 default normal font color ffffff https via placeholder com 15 ffffff 000000 text ffffff hover font color cccccc https via placeholder com 15 cccccc 000000 text cccccc nominee card submit button background 009b86 https via placeholder com 15 009b86 000000 text 009b86 selected nominee card 009b86 https via placeholder com 15 009b86 000000 text 009b86 nominee card background hover submit button background hover 34ac9c https via placeholder com 15 34ac9c 000000 text 34ac9c good luck and if you have questions please reach out to us at rafael freaner dailypay com available scripts in the project directory you can run yarn start runs the app in the development mode open http localhost 3000 http localhost 3000 to view it in the browser the page will reload if you make edits you will also see any lint errors in the console yarn backend starts the server which allows the user to access the ballot api yarn test launches the test runner in the interactive watch mode see the section about running tests https facebook github io create react app docs running tests for more information yarn build builds the app for production to the build folder it correctly bundles react in production mode and optimizes the build for the best performance the build is minified and the filenames include the hashes your app is ready to be deployed see the section about deployment https facebook github io create react app docs deployment for more information learn more you can learn more in the create 
react app documentation https facebook github io create react app docs getting started to learn react check out the react documentation https reactjs org code splitting this section has moved here https facebook github io create react app docs code splitting https facebook github io create react app docs code splitting analyzing the bundle size this section has moved here https facebook github io create react app docs analyzing the bundle size https facebook github io create react app docs analyzing the bundle size making a progressive web app this section has moved here https facebook github io create react app docs making a progressive web app https facebook github io create react app docs making a progressive web app advanced configuration this section has moved here https facebook github io create react app docs advanced configuration https facebook github io create react app docs advanced configuration deployment this section has moved here https facebook github io create react app docs deployment https facebook github io create react app docs deployment yarn build fails to minify this section has moved here https facebook github io create react app docs troubleshooting npm run build fails to minify https facebook github io create react app docs troubleshooting npm run build fails to minify | front_end |
|
dtc-de-zc-week4-redshift | dtc de zc week4 redshift repository to use dbt cloud with redshift for week 4 of the data engineering zoomcamp | cloud |
|
MachineLearning.jl | machinelearning jl build status https travis ci org benhamner machinelearning jl png https travis ci org benhamner machinelearning jl coverage status https img shields io coveralls benhamner machinelearning jl svg https coveralls io r benhamner machinelearning jl branch master package evaluator http iainnz github io packages julialang org badges machinelearning release svg http iainnz github io packages julialang org pkg machinelearning ver release the machinelearning package represents the very beginnings of an attempt to consolidate common machine learning algorithms written in pure julia and presenting a consistent api initially the package will be targeted towards the machine learning practitioner working with a dataset that fits in memory on a single machine longer term i hope this will both target much larger datasets and be valuable for state of the art machine learning research as well api introduction model 2 0 1 0 1 0 x train randn 1 000 3 y train int map x x 0 x train model net fit x train y train classification net options sample 1 0 0 0 0 0 println ground truth int dot sample model 0 println prediction predict net sample algorithms implemented basic decision tree for classification basic random forest for classification basic neural network bayesian additive regression trees other helpers train test split cross validation experiments | ai |
|
HCMUS-Handwritting-Recognition | hcmus handwritting recognition 1 authors this is our final project for introduction to information technology our team 1 phuc song dong gia https github com fusodoya 2 loi nguyen minh https github com mf0212 3 thang nguyen quang https github com thanguyen165 2 environment 2 1 python 3 7 download at https docs conda io en latest miniconda html 2 2 visual studio code vs code download at https code visualstudio com download install extension python after installing vs code 2 3 numpy library pip install numpy 2 4 matplotlib library pip install matplotlib 2 5 cv2 library pip install opencv python 3 prepare mnist dataset download mnist dataset at http yann lecun com exdb mnist and do not unzip files the mnist dataset contains 60 000 images used to recognise input numbers called train and 10 000 images used to check if the algorithm is good or bad called test every image has its label respective to the number written in the image 4 organise project 4 zips of mnist dataset is in data subfolder 5 before we start run test mnist py file to make sure mnist dataset is successfully installed and set up 6 how do we recognise the numbers step 1 vectorize all the images of train dataset and the input img step 2 find the distance between input img and each img in train step 3 sort all the distances in increasing order step 4 choose k smallest value called k nearest neighbours knn k can be 50 100 500 etc you can choose any value for it step 5 count and find in k labels which label has the largest frequency that is the number this algorithm guess 7 run code run file main py run by this cmd python main py 8 optimize speed use c https www freecodecamp org news the c plus plus programming language code to increase speed 8 1 prepare you must have c compiler to compile c code get the lib hpp and lib cpp files run these command i use gnu gcc https gcc gnu org g c fpic lib cpp o lib o g shared lib o o lib so or compile them by visual studio https visualstudio 
microsoft com vs you now have a lib so file keep this file and main optimze py file in same directory if you don t want to edit the library or you don t have a compiler use mine instead of building by yourself 8 2 let s rock run main optimize py file instead of main py file the only difference between these files is main py runs guess function in python but in main optimize py the guess function calls the guess optimize function written by c in lib so | server |
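The five recognition steps in section 6 above can be sketched with numpy. This is a hedged illustration only: the function name `guess`, the toy arrays, and the default `k` are assumptions for this sketch, not the project's actual `main.py` code.

```python
import numpy as np
from collections import Counter

def guess(train_images, train_labels, input_img, k=100):
    """k-nearest-neighbours digit guess, following steps 1-5 above."""
    # steps 1-2: with images already vectorized, compute the distance
    # from the input vector to every training vector
    distances = np.linalg.norm(train_images - input_img, axis=1)
    # steps 3-4: take the indices of the k smallest distances
    nearest = np.argsort(distances)[:k]
    # step 5: return the most frequent label among those k neighbours
    return Counter(train_labels[nearest]).most_common(1)[0][0]

# tiny toy data standing in for the 60,000 vectorized MNIST train images
train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
print(guess(train, labels, np.array([0.05, 0.0]), k=3))
```

With the real dataset, `train_images` would be the 60 000 x 784 array read from the MNIST files and `k` would be tuned against the 10 000 test images.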
|
CVPaperReading | cvpaperreading twostagedetection onestagedetection detection imageclassification reidentification tracking segmentation poseestimation action modelcompression | ai |
|
GCModeller | gcmodeller a href https gcmodeller org img src images logo png width 150 align right border 1px a gcmodeller genomics cad computer assistant design modeller system in net language doi https zenodo org badge 48901128 svg https zenodo org badge latestdoi 48901128 github all releases https img shields io github downloads smrucc gcmodeller total svg maxage 2592000 style flat square gpl licence https badges frapsoft com os gpl gpl svg v 103 https opensource org licenses gpl 3 0 build status https travis ci org smrucc gcmodeller svg branch master https travis ci org smrucc gcmodeller warning this project is a work in progress and is not recommended for production use home http gcmodeller org github https github com smrucc gcmodeller biotools https bio tools gcmodeller supported platform microsoft windows gnu linux mac microsoft azure cloud br development microsoft visualstudio 2019 visualbasic net br runtime environment scibasic https www nuget org packages scibasic v2 1 5 beta amp net framework 4 7 or mono 6 4 br installation vs2019 is required of compiles this project after the source code have been clone using git just open solution file src gcmodeller sln src gcmodeller sln and when restore nuget packages finished then you are good to go of compile gcmodeller project br note due to the reason of this project is using git submodule for manage some runtime component so that please do not directly download the project source code from github by using the donwload zip button the internal github client in the visualstudio is recommended using for download the project source code docker and database dependency part of the gcmodeller function required running linux tools through darwinism https github com xieguigang darwinism docker environment for vb net if you are running gcmodeller on windows platform this toolkit required of these environment installed on your windows server microsoft powershell sdk 3 0 latest version of docker for x64 then pull environment 
container image via docker pull xieguigang gcmodeller env the docker container image contains these utils that required by gcmodeller meme suite for motif analysis mothur for construct otu install database some feature in gcmodeller required the fasta sequence database was installed on a specific location on your server s filesystem please follow this instruction to install the database for gcmodeller img src http gcmodeller org dna png width 40 height 48 gcmodeller is an open source cloud computing platform for the geneticist and systems biology you can easily build a local computing server cluster for gcmodeller on the large amount biological data analysis the gcmodeller platform is original writen in visualbasic net language a feature bioinformatics analysis environment that net language hybrids programming with r language was included which its sdk is available at repository https github com smrucc r bioinformatics currently the r language hybrids programming environment just provides some bioconductor api for the analysis in gcmodeller gcmodeller is a set of utility tools working on the annotation of the whole cell system this including the whole genome regulation annotation transcriptome analysis toolkits metabolism pathway analysis toolkits and some common bioinformatics problem utils tools and common biological database i o tools is also available in gcmodeller for the net language programming directory roadmap gcmodeller gcmodeller the location of gcmodeller compile output i have config all of the project output in the path gcmodeller bin src src src gcmodeller src gcmodeller gcmodeller basic library and analysis protocols src interops src interops gcmodeller tools that dependent on the external programs src r bioconductor src r bioconductor r language hybrids environment src r sharp src r sharp the gcmodeller r language scripting engine src repository src repository gcmodeller data repository system src runtime src runtime third part library and 
visualbasic runtime source code tools tools data standards gcmodeller supports the sbml and biom data standards for exchanges the analysis and model data with other bioinformatics softwares supports psi data for the biological interaction network model supports obo data for ontology database like go a href http sbml org main page img src src gcmodeller models images sbml logo 70 png width 80 a a href http biom format org img src src gcmodeller models images biom format png width 80 a a href http www psidev info overview img src images data standards psi logo s png width 80 a a href http www obofoundry org img src images data standards foundrylogo png width 80 a modules amp functions gcmodeller provides a set of net libraries and cli tools for processing biological analysis data currently gcmodeller can provides these productive ready libraries 1 basically libraries ncbi data analysis toolkit genbank taxonomy nt nr database common data fasta database fastq sam data file i o class biological data standard supports sbml level 3 biom level1 psi obo biological pathway database metacyc reactome kegg data tools for net language 2 biological data visualization software api for net circos api genomic visualizing cytoscape datamodel api biological network visualizing sequencelogo molecular motif site visualize kegg pathway map visualizer 3 annotation tools a complete ncbi localblast toolkit for proteins and nucleotide sequence analysis includes parallel task library for win linux server and data analysis protocol snp toolkit nucleotide sequence topology feature site analysis toolkit regprecise database tool and meme software toolkit for the annotation of bacterial genomics regulation network go gene ontology annotation tools kegg go gsea functional enrichment tools and reference genome background model creator based on uniprot database 4 r language hybrids environment for bioinformatics includes basically r language api wrapper for visualbasic like api in base utils stats 
namespace from r base and some r package wrapper api from cran and bioconductor is also included gcmodeller r language scripting 5 webapi wrapper for kegg database and regprecise database 6 feature tools cellular module simulator and virtual cell model generator protocol proteomics data analysis toolkit single cell data analysis toolkit gcmodeller r scripting here is a code snapshot of r scripting for drawing sequence logo input data is accepted from the commandline input r demo script for create sequence logo based on the msa alignment analysis nt base frequency is created based on the msa alignment operation imports bioseq sequencelogo from seqtoolkit dll imports bioseq fasta from seqtoolkit dll script cli usage r sequencelogo r seq input fasta title logo title save output png get input data from commandline arguments and fix for the optional arguments default value by apply or default syntax for non logical values let seq fasta as string seq stop no sequence input data for draw sequence logo let logo png as string save seq fasta logo png let title as string title basename seq fasta read sequence and then do msa alignment finally count the nucleotide base frequency and then draw the sequence logo by invoke sequence logo drawer api seq fasta read fasta msa of plot seqlogo title save graphics file logo png run the r script from commandline cmd echo off r sequencelogo r seq lexa fasta save lexa png title lexa images lexa png publications here listing the scientific paperworks that based on the analysis services of gcmodeller niu x n et al 2015 complete sequence and detailed analysis of the first indigenous plasmid from xanthomonas oryzae pv oryzicola bmc microbiol 15 1 1 15 doi 10 1186 s12866 015 0562 x bacterial plasmids have a major impact on metabolic function and adaptation of their hosts an indigenous plasmid was identified in a chinese isolate gx01 of the invasive phytopathogen xanthomonas oryzae pv oryzicola xoc the causal agent of rice bacterial leaf streak 
bls to elucidate the biological functions of the plasmid we have sequenced and comprehensively annotated the plasmid 2016 05 17 png gallery src gcmodeller analysis singlecell stdeconvolve demo raw pixels png single cell data toolkit includes in gcmodeller phenograph src gcmodeller analysis singlecell phenograph phenograph stdeconvolve src gcmodeller analysis singlecell stdeconvolve stdeconvolve images cmeans keggset png images upsetplot png src workbench r 23 demo hts patterns png src workbench r 23 demo hts expression patterns r images vocano plot png images kegg pathway network clusters png images rsd p density png images clusters scatter png images xcb tcs uniprot taxonomy 314565 go enrichment converts go enrichment pvalue 0 05 png img src images biological process png width 285 img src images cellular component png width 285 img src images molecular function png width 285 img src manual kegg unigenes blast m8 filter ko catalogs kegg level a png width 435 img src images go enrichment png width 435 images fur lightbox png images xanthomonas oryzae oryzicola bls256 uid16740 lightbox png images pxocgx01 lightbox png images phenotypic btree lightbox png images pxocgx01 blastx lightbox png visit our project home http gcmodeller org for developers here are some released library of the gcmodeller is published on nuget then you can install these library in visualstudio from package manager console bash install microsoft visualbasic scibasic runtime via nuget https github com xieguigang scibasic pm install package scibasic pre the gcmodeller core base library was released https github com smrucc gcmodeller core pm install package gcmodeller core the ncbi localblast analysis toolkit https github com smrucc ncbi localblast pm install package ncbi localblast for user the gcmodeller demo script and data for user tutorials can be download from these public data repository xanthomonas campestris pv campestris 8004 gcmodeller genomics modelling project https github com smrucc 
xanthomonas campestris 8004 uid15 genome map plot of xanthomonas campestris pv campestris 8004 https raw githubusercontent com smrucc xanthomonas campestris 8004 uid15 master thumbnails map part a png https github com smrucc xanthomonas campestris 8004 uid15 tree master genome chromsome map https raw githubusercontent com smrucc xanthomonas campestris 8004 uid15 master thumbnails map part b png https github com smrucc xanthomonas campestris 8004 uid15 tree master genome chromsome map img src images links osi certified png width 40px images links github ico copyleft copy smrucc genomics http smrucc org 2016 all rights reversed | gcmodeller biological-data-analysis scibasic microsoft-visualbasic dotnet genomics genome-annotation r-language bioinformatics single-cell | os |
SCE_Project | sce project | os |
|
Resources | What is this? A collection of helpful resources for software development on the web, split into various categories. Is there a searchable front end to make finding stuff easier? Yes; however, it's terribly designed, so if you'd like to contribute, please open a PR against the ui branch of this repo: https tevko github io resources. How is this updated and maintained? This repo uses GitHub Actions to periodically check for dead URLs and remove them from the repository. Anyone can contribute via pull request. | front_end
|
esp32-rust-nostd-temperature-logger | Demo of Rust on ESP32 (no RTOS) with MQTT and Adafruit IO for temperature logging. About: this will read the temperature from a connected BMP180 sensor via I2C and send it via MQTT to Adafruit IO every minute. (screenshot: doc screenshot png) It publishes the temperature value to the topic username feeds temperature. Setting credentials: you need to set these environment variables for a successful build: SSID (SSID of your WiFi access point), PASSWORD (your WiFi password), ADAFRUIT_IO_USERNAME (your Adafruit IO username), ADAFRUIT_IO_KEY (your Adafruit IO API key). To run the application, connect your ESP32 development board with the BMP180 attached and execute cargo run --release. Make sure to have the Xtensa-enabled Rust toolchain (https github com esp rs rust build) installed. Wiring the BMP180 temperature sensor (BMP180 -> ESP32): SDA -> IO32, SCL -> IO33, GND -> GND, VCC -> 3.3V. License: licensed under either of Apache License, Version 2.0 (http www apache org licenses license 2 0) or MIT license (http opensource org licenses mit) at your option. Contribution: unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. | os
|
trucking-iot | trucking iot the trucking iot project is a modern real time streaming application serving as a reference framework for developing a big data pipeline complete with a broad range of use cases and powerful reusable core components modern applications can ingest data and leverage analytics in real time these analytics are based on machine learning models typically built using historical big data this reference application provides examples of connecting data in motion analytics to your application based on big data from iot sensor data collection to flow management real time stream processing and analytics through to machine learning and prediction this reference project aims to demonstrate the power of open source solutions outline prerequisites prerequisites quick how do i use it quick how do i use it setup on existing hdf hdp setup on existing hdf hdp how it works how it works prerequisites an instance of the hortonworks hdf sandbox or alternatively your own ambari powered cluster with zookeeper nifi storm and kafka for integration with schema registry download and run the single script setup located at https github com orendain schema registry setup quick how do i use it for an instance of the hdf sandbox preloaded with this reference application run the following docker command below note please make sure docker has at least 8gb of memory before running the container below for a nifi storm nifi kafka web application pipeline with integration with hortonworks schema registry docker run name hdf trucking iot hostname sandbox hdf hortonworks com privileged d p 12181 2181 p 13000 3000 p 14200 4200 p 16080 6080 p 18000 8000 p 18080 8080 p 18744 8744 p 18886 8886 p 18888 8888 p 18993 8993 p 19000 9000 p 19090 9090 p 19091 9091 p 43111 42111 p 62888 61888 p 25000 15000 p 25001 15001 p 25002 15002 p 25003 15003 p 25004 15004 p 25005 15005 p 12222 22 p 17000 17000 p 17001 17001 p 17002 17002 p 17003 17003 p 17004 17004 p 17005 17005 orendain hdf trucking 
iot usr sbin sshd d once the container is created ssh into the hdf sandbox ssh root sandbox hdf hortonworks com p 12222 password greenhadoop execute the pre included script root nifi to nifi with schema sh open the web applicaton http sandbox hdf hortonworks com 25001 optionally check out the nifi flow and storm ui setup on existing hdf hdp note if you re not on the hdf sandbox you ll need to replace the default cluster hostname sandbox hdf hortonworks com in the following files as well as check the port endpoints trucking schema registrar src main resources application conf trucking storm topology src main resources application conf trucking web application backend conf application conf scripts sh 1 on your sandbox cluster download this project git clone https github com orendain trucking iot git 2 run the included automated deployment script read a comicbook this may take a few minutes note by default the application deploys a nifi storm nifi kafka web application pipeline with integration with hortonworks schema registry to use a different pre built pipeline open edit scripts setup environment sh and scripts builds storm topology sh before running the commands below cd trucking iot scripts auto deploy sh 3 on your local machine open a browser and navigate to the web application http sandbox hdf hortonworks com 25001 4 optionally check out the nifi flow and storm ui how it works for an indepth look at the different components check out the trucking iot wiki https github com orendain trucking iot wiki for any questions or requests for more documentation feel free to open an issue or fork this repo and contribute | server |
|
recipe_app_backend | recipe app backend backend development of recipe app with django commands to run linting and testing sh docker compose run rm app sh c python manage py wait for db python manage py test flake8 run sh docker compose up once all the containers are up and running click the below link a href https localhost 8000 api docs target blank api docs a | server |
|
rosetta-specifications | p align center a href https www rosetta api org img width 90 alt rosetta src https www rosetta api org img rosetta header png a p h3 align center rosetta specifications h3 p align center this repository contains all specification files used to generate code for the rosetta api p p align center a href https circleci com gh coinbase rosetta specifications tree master img src https circleci com gh coinbase rosetta specifications tree master svg style shield a a href https github com coinbase rosetta specifications blob master license txt img src https img shields io github license coinbase rosetta specifications svg a p p align center build once integrate your blockchain everywhere p overview rosetta is an open source specification and set of tools that makes integrating with blockchains simpler faster and more reliable the rosetta api is specified in the openapi 3 0 format https www openapis org requests and responses can be crafted with auto generated code using swagger codegen https swagger io tools swagger codegen or openapi generator https openapi generator tech are human readable easy to debug and understand and can be used in servers and browsers installation no installation is required as the repository only includes specification files documentation you can find the rosetta api documentation at rosetta api org https www rosetta api org docs welcome html check out the getting started https www rosetta api org docs getting started html section to start diving into rosetta our documentation is divided into the following sections product overview https www rosetta api org docs welcome html getting started https www rosetta api org docs getting started html rosetta api spec https www rosetta api org docs reference html testing https www rosetta api org docs rosetta cli html best practices https www rosetta api org docs node deployment html repositories https www rosetta api org docs rosetta specifications html contributing you may 
contribute to the rosetta specifications project in various ways asking questions contributing md asking questions providing feedback contributing md providing feedback reporting issues contributing md reporting issues read our contributing contributing md documentation for more information when you ve finished an implementation for a blockchain share your work in the ecosystem category of the community site https community rosetta api org c ecosystem platforms looking for implementations for certain blockchains will be monitoring this section of the website for high quality implementations they can use for integration make sure that your implementation meets the expectations https www rosetta api org docs node deployment html of any implementation related projects rosetta sdk go https github com coinbase rosetta sdk go the rosetta sdk go sdk provides a collection of packages used for interaction with the rosetta api specification much of the sdk code is generated from this the rosetta specifications https github com coinbase rosetta specifications repository rosetta cli https github com coinbase rosetta cli use the rosetta cli tool to test your rosetta api implementation the tool also provides the ability to look up block contents and account balances reference implementations to help you with examples we developed complete rosetta api reference implementations for bitcoin https github com coinbase rosetta bitcoin and ethereum https github com coinbase rosetta ethereum developers of bitcoin like or ethereum like blockchains may find it easier to fork these reference implementations than to write an implementation from scratch you can also find community implementations for a variety of blockchains in the rosetta ecosystem https github com coinbase rosetta ecosystem repository and in the ecosystem category https community rosetta api org c ecosystem of our community site specification development while working on improvements to this repository we recommend that 
you use these commands to check your code make deps to install dependencies make gen to generate the specification files make check valid to ensure specification is valid make release to check if code passes all tests run by circleci adding new curvetypes or signaturetypes unlike the data api https www rosetta api org docs data api introduction html where there are no global type constraints e g you can specify any operation type the construction api https www rosetta api org docs construction api introduction html has a clearly enumerated list of supported curves and signatures each one of these items has a clearly specified format that all implementations should expect to receive if a curve or signature you are employing is not enumerated in the specification https www rosetta api org docs reference html you will need to open a pr against the specification to add it along with the standard format it will be represented in it is up to the caller of a construction api implementation to implement key generation and signing for a particular curvetype signaturetype https www rosetta api org docs models curvetype html there is a keys package in rosetta sdk go https github com coinbase rosetta sdk go that is commonly used by callers and anyone in the community can implement additional schemes there license this project is available open source under the terms of the apache 2 0 license https opensource org licenses apache 2 0 2022 coinbase | blockchain |
|
Yo | yo yo v3 yo ui yo v3 pure yo intro getting started supported browsers attention documentation and demo versioning bugs and feature requests author copyright and license a name intro a yo min css yo min js yo yo a name getting started a http yo doyoe com doc getting started html yo yo a name supported browsers a ios6 0 android4 0 latest stable chrome safari opera ie10 a name attention a yo html5 doctype doctype html viewport yo mobile first meta name viewport content initial scale 1 minimum scale 1 maximum scale 1 user scalable no maximum scale 1 user scalable no minimum scale 1 yo 2 border px border rem border box before after before after webkit box sizing border box moz box sizing border box box sizing border box pc flex flex flex display flex flex a name documentation and demo a view demo http doyoe github io yo demo view documentation http doyoe github io yo doc yo ydoc npm install ydoc g registry https registry npm taobao org ydoc build doc a name versioning a yo semver http semver org lang zh cn releases tag https github com doyoe yo releases changelog changelog md a name bugs and feature requests a yo yo issues https github com doyoe yo issues new pull requests https github com doyoe yo pulls a name author a https github com doyoe http weibo com doyoe http www doyoe com ymfe team https github com ymfe a name copyright and license a yo the mit license http opensource org licenses mit creative commons http creativecommons org licenses by 4 0 | css scss es6 react frontend-framework scss-framework ui-components mobile-first mobile-web mobile-app | front_end |
PillDispenser_System | center pilldispenser center fully self programming pill dispenser system br product deigned for alert patient with the daily time of medicine br consist of 4 rooms each room take medicine with the same appointement br appointment of each room is specified by the patient br cropped pill idspenser left https github com ahmed kamal91 pilldispenser system assets 91970695 07dd7135 34ed 4822 b728 228beb13f7bf components lcd for user interface keypad for interacting with th system touch sensor for turn off alarm br piezo the actual alarm led to flag the speicifed room br gif comonnrt https github com ahmed kamal91 pilldispenser system assets 91970695 bbc2bbfb 2587 4408 8b2b 72c167002557 pre b external b b internal b br lcd arduino uno br keypad i2c br touch sensor rtc br piezo mega breadboard br led pre what s special about it we try to eliminate some constraints and limits that make it quiete hard to use br so it became ease of use br pilldispenser4 https github com ahmed kamal91 pilldispenser system assets 91970695 a00fea07 18d3 4695 8c0c c18c7f3c1f73 simple interface view represnting rooms easyily modify room s alert time easily show all times allocated to rooms in general br allocated time for each room separately in particular using friendly symbols and icon left home1 resized https github com ahmed kamal91 pilldispenser system assets 91970695 e705d143 187e 4a85 bbab 2c5ce6692f04 left using touch sensor instead of push button that is quiete harder br for finger than touching ensure awareness of the patient br by holding on touch sensor for 2 second to turn off the alert give flag light by led to the room where you should take your medicine from there is plastic separators in each room for easy picking different medicine and separate between them code explanation you can check the code yourself br https github com ahmed kamal91 pilldispenser system blob main v10 ino h2 over view h2 notice this chart explain code for one room br all system 
depends on the equality of the alarm and real time so i began with if else statement check the equality br most part of the system go through the un equality of the alarm and real time and the rest of code shows alarm behavior br chart 1 https github com ahmed kamal91 pilldispenser system assets 91970695 c0954216 c840 469f b9f5 d9b3d45834f4 h2 libraries h2 b 1 lcd b br include liquidcrystal i2c h br include wire h br liquidcrystal i2c lcd 0x27 16 2 br b 2 keypad b br include keypad h br const int row num 4 four rows br const int column num 3 three columns br char keys row num column num br 1 2 3 br 4 5 6 br 7 8 9 br 0 br byte pin rows row num 8 7 6 5 br byte pin column column num 4 3 2 br keypad keypad keypad makekeymap keys pin rows pin column row num column num br b 3 eeprom b br include eeprom h br b 4 rtc b br include timelib h br include ds1307rtc h br chart 2 https github com ahmed kamal91 pilldispenser system assets 91970695 33581b83 099e 445b a394 d0115bcc4e68 br chart 3 https github com ahmed kamal91 pilldispenser system assets 91970695 88c58d96 2a75 42e0 94a1 057ac4a6ecd4 h2 functions h2 core of the system br set time function br first we will build what will appear in lcd interface br using lcd clear lcd print lcd setcursor br pre interface lcd clear lcd setcursor 0 1 lcd print del back lcd setcursor 0 0 lcd print time lcd setcursor 8 0 lcd print lcd setcursor 6 0 pre create variable to save the inserted time later br pre string copier pre making a way to wait the user with our rules function br pre char key wait input key no key while key no key key keypad getkey pre we want to insert numbers from keypad for br 1 add on copier variable br 2 appear on lcd in the right position br so br pre lcd print key copier copier key pre inserting number should jump after symbol so we will use length of variable copier to do that br pre if copier length 2 lcd print key copier copier key else lcd setcursor 9 0 lcd print key copier copier key pre before we continue we 
have to discuss eeprom first br chart 4 https github com ahmed kamal91 pilldispenser system assets 91970695 a30139de 3dff 403c 8513 0495fd36e354 2 set in eeprom simply eeprom is a non volatile memory has 1024 place to save the data br each place take one char so we have 4 rooms every room has alarm time in string dtype has 5 br chars so we want 20 places for all rooms in eeprom br this function made to save time alarm for each room character by character br pre set the time in eeprom function void set in eeprom int start pos int end pos int extra pos string times for int i start pos i end pos i serial print times i eeprom update i times i extra pos pre br we will notice that times i extra pos br numbers range for eeprom change while indices of variable not so we have to subtract from start br point number to begin range of numbers from 0 as indices does for any variable br 3 get from eeprom function br almost the same idea of set in eeprom function br pre getting epprom time string get from eeprom int start pos int end pos string tim back for int i start pos i end pos i char letter eeprom read i tim back tim back letter return tim back pre lets return to set time function br a way to back in case you don t want to set time br pre back if key lcd clear lcd print ok delay 500 switch room num case 1 return string get from eeprom 0 5 case 2 return string get from eeprom 5 10 case 3 return string get from eeprom 10 15 case 4 return string get from eeprom 15 20 pre condition for breaking while loop br pre if copier length 5 break finally return copier return copier pre alarm br when it set time from user saved in eeprom real time from rtc module arduino gives signals to leds and piezo br putting the last condition on if condition touch button if its low there is alarm else stop alarm br | embedded-systems | os |
Exam-2nd-semester | Exam, 2nd semester: second semester AltSchool cloud engineering exam. Segments: Ansible playbook containing (1) ansible cfg file that holds the configuration of Ansible; (2) inventory, which holds the target machine's IP address; (3) site yaml, which copies the cron job, copies the Laravel file, changes the command in the file, and runs the Laravel script. The laravel slave sh file installs and deploys the Laravel application with the LAMP stack; the master slave sh file generates the master and slave machines; laravel sh deploys Laravel on the master machine. How to run the script: 1. Spin up the master and slave machines (screenshot: screenshots 1 20spin 20up 20master 20and 20slave 20machines png). 2. SSH into the master machine, enter as the root user, and run the laravel sh script to update and upgrade the server, install the LAMP stack, install Composer, configure Apache2, clone Laravel, configure MySQL, and execute key gen and migrate for PHP (screenshots: 2a 20ssh 20into 20the 20master 20machine png, 2b 20run 20laravel sh png, 2c 20run 20laravel png). 3. Run ssh key gen and generate a key pair (screenshot: screenshots 3 20ssh 20key 20gen 20and 20pair png). 4. Run the Ansible playbook to update and upgrade the server, set a cron job to check the uptime of the server, copy the bash script to the slave machine, set execute permissions on the script, and run the bash script (screenshot: screenshots 4a 20run 20ansible png). 5. Further screenshot evidence (screenshots 5 20further 20screen 20shots png). 6. Functional PHP app via VM IP address (screenshots 6 20functional 20php 20app 20via 20vm 20ip 20address png). | cloud
|
cvt | cvt logo data logos cvt png computer vision tools githalytics com alpha https cruel carlota pagodabox com 1972da9ba634242817a1efff00773652 githalytics com http githalytics com tum uav cvt building cvt git clone git github com tum uav cvt git cd cvt mkdir build cd build cmake make | ai |
|
RTOS | os | os |
|
AdvancedRelationalDatabases | Advanced Relational Databases. Introduction: as software engineers, dealing with relational databases is routine when building an application. It's easy to make a relational database and use it in our apps; however, many people fall short when it comes to understanding and applying certain advanced concepts for relational databases. Hopefully the topics discussed will shed some light on database concepts you aren't familiar with, such as transactions, indexing, sharding, and more. Database transactions: database transactions are a unit of queries or operations executed together to modify or access the database. Transactions end with a commit or a rollback; the former commits the change permanently to the database, while the latter rolls the database back to the state before the transaction began. The two main points of using database transactions are to make the database more robust (by using rollbacks) and to keep it consistent. (diagram: cncpt025 https user images githubusercontent com 62875631 189594267 3f73e0d0 7c47 429c a299 dffb02ebb29e jpg) ACID properties: there are four properties that maintain transaction integrity, abbreviated to ACID: atomicity, consistency, isolation, and durability. Atomicity: atomicity states that a transaction must succeed or fail as a whole, with no in-betweens. If a commit is made, the transaction has completed successfully; if a failure occurs, the transaction is rolled back. Consistency: consistency is all about keeping the database consistent before and after the transaction; this is usually achieved by user-defined constraints. Isolation: isolation means multiple transactions working concurrently without interfering with each other. Database systems use isolation levels to state the degree of isolation a transaction will be applying; isolation levels will be discussed later on. Durability: finally, durability is achieved by making sure modifications to the database are persisted.
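The commit/rollback behavior described above can be sketched with Python's built-in sqlite3 module. This is a minimal illustration, not part of the original README; the accounts table, column names, and the CHECK constraint are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A user-defined constraint (consistency): balances may never go negative.
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100), (2, 50)")
conn.commit()

# A transfer as one transaction: both updates succeed together or not at all (atomicity).
try:
    with conn:  # the context manager commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 1")  # violates CHECK
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE id = 2")
except sqlite3.IntegrityError:
    pass  # the failed transaction was rolled back

# Balances are unchanged: the partial update was never committed.
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())  # [(100,), (50,)]
```

Because the first UPDATE fails the constraint, the whole transaction rolls back and the database returns to its pre-transaction state, exactly the rollback behavior the text describes.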
One durability technique is the write-ahead log (WAL), which writes changes to a log instead of the database file itself, so commits do not immediately modify the database; after the WAL file reaches a certain limit, a checkpoint is reached and all changes are applied to the database. Read more about WAL here: https://sqlite.org/wal.html. Isolation levels: isolation levels exist to choose the degree to which we want our transactions to be isolated. They are split into locking and row versioning: the locking isolation levels are read uncommitted, read committed, repeatable read, and serializable, while snapshot isolation uses row versioning. Read more about locking vs row versioning here: https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver16. However, each isolation level faces read phenomena that present problems: dirty reads, lost updates, non-repeatable reads, and phantom reads. For a full guide on transaction isolation levels and read phenomena, as well as shared and exclusive locks, click here: https://levelup.gitconnected.com/transaction-isolation-levels-in-ms-sql-guide-for-backend-developers-6a5998e34f6c. Read uncommitted: the first isolation level is read uncommitted, which allows transactions' changes to be visible to each other before any commits; this is possible because transactions do not take shared locks when reading data from the database. Read uncommitted suffers from all of the read phenomena, making it the weakest degree of isolation. Dirty reads: a dirty read occurs when data is read in one transaction while another transaction updates that data, so that when transaction 1 rereads it, the data has changed. The diagram below explains this well. (diagram: screenshot 2022 09 12 144815 https user images githubusercontent com 62875631 189645808 07fc0b7e 6dc7 419b 8b0f 8a4ac16ecf6c png) Read committed: the second isolation level, which is the default in most database systems, is read committed. This level only
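The no-dirty-reads guarantee of the stronger isolation levels can be observed directly with two connections to the same database. SQLite (used here just for illustration; it does not expose the full SQL Server isolation-level menu) never lets a reader see another connection's uncommitted writes, and the example also turns on the WAL journal mode mentioned above; the file path and table name are made up:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")  # write-ahead logging, as described above
writer.execute("CREATE TABLE items (name TEXT)")
writer.commit()

reader = sqlite3.connect(path)

# The writer opens a transaction and inserts a row -- but does NOT commit yet.
writer.execute("INSERT INTO items VALUES ('pending')")

# The reader cannot see the uncommitted row: no dirty read is possible here.
uncommitted_view = reader.execute("SELECT COUNT(*) FROM items").fetchone()[0]

writer.commit()
# Once committed, a fresh read sees the new row.
committed_view = reader.execute("SELECT COUNT(*) FROM items").fetchone()[0]

print(uncommitted_view, committed_view)  # 0 1
```

A database running at read uncommitted would instead report the pending row in the first query, which is exactly the dirty-read phenomenon described above.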
allows transactions to see data that other transactions have committed. Shared locks are acquired on queries that involve reading; however, they are released as soon as the query is executed. Read committed still faces problems like lost updates and non-repeatable reads. Lost updates and non-repeatable reads: a lost update occurs when a transaction updates a row but that update is overwritten by another transaction. (diagram: screenshot 2022 09 12 155638 https user images githubusercontent com 62875631 189659532 69a82cd4 990f 454b 8266 8ca97e122e11 png) A non-repeatable read occurs when data is read in one transaction, another transaction updates it and commits, and the first transaction then rereads and sees that the data is not the same as on the first read. (diagram: screenshot 2022 09 12 160143 https user images githubusercontent com 62875631 189660575 37cf27b8 42a9 4352 977e 72ace6d28b91 png) Repeatable read: the third isolation level, repeatable read, fixes lost updates and non-repeatable reads. In this level, shared locks are now released only after the commit, so exclusive locks must wait for them to be released before operating. However, this level still faces one problem: phantom reads. Phantom reads: a phantom read occurs when one transaction reads a range of data, another transaction inserts new rows into that range, and the first transaction rereads and sees rows that were not there before. (diagram: screenshot 2022 09 12 161102 https user images githubusercontent com 62875631 189662759 28cdd01c 1c57 4269 a291 07a834498c94 png) Serializable: the final locking isolation level is serializable, in which shared locks now account for the range of data being read, so they block modifications (exclusive locks) that would affect data in that range. Snapshot isolation using row versioning: snapshot isolation works by persisting versions of the data as it is modified by transactions; committed data is copied to tempdb and given version numbers, so when another transaction reads data there is no waiting on locks, since it receives the row versions from the most recent committed transaction.
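The lost-update pattern described above can be sketched in a few lines. This is an illustration, not from the original README; a real lost update involves two concurrent sessions, which are simulated here by interleaving two read-modify-write sequences on one connection, and the counters table is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, hits INTEGER)")
conn.execute("INSERT INTO counters VALUES (1, 0)")

# Read-modify-write: two interleaved "clients" both read 0, both write 1,
# so one of the two increments is silently lost.
a = conn.execute("SELECT hits FROM counters WHERE id = 1").fetchone()[0]
b = conn.execute("SELECT hits FROM counters WHERE id = 1").fetchone()[0]
conn.execute("UPDATE counters SET hits = ? WHERE id = 1", (a + 1,))
conn.execute("UPDATE counters SET hits = ? WHERE id = 1", (b + 1,))
lost = conn.execute("SELECT hits FROM counters WHERE id = 1").fetchone()[0]  # 1, not 2

# Atomic in-place updates do not lose increments.
conn.execute("UPDATE counters SET hits = 0 WHERE id = 1")
conn.execute("UPDATE counters SET hits = hits + 1 WHERE id = 1")
conn.execute("UPDATE counters SET hits = hits + 1 WHERE id = 1")
safe = conn.execute("SELECT hits FROM counters WHERE id = 1").fetchone()[0]  # 2

print(lost, safe)  # 1 2
```

Holding shared locks until commit (repeatable read) or detecting write-write conflicts (snapshot) are the isolation-level remedies the text describes; the atomic UPDATE shown last is a common application-level remedy.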
serializable vs snapshot isolation while serializable and snapshot both achieve isolation they have significant differences the serializable level uses locking which helps keep the database consistent and avoids read phenomena however its more prone to deadlocks which can hinder concurrency snapshot uses row versioning which helps with increased concurrency however its main disadvantage is its increased tempdb usage from the storage of the row versions all in all if youre building a system thats both read and write heavy go for serializable but if your system is read heavy with few writes go for snapshot screenshot 2022 09 12 195103 https user images githubusercontent com 62875631 189713777 afc36a34 2202 46fb a7bc 6acc58dd183b png cap theorem understanding the cap theorem is essential for deciding how to design a distributed system cap stands for consistency availability and partition tolerance there are three ways to design your system when using the cap theorem ca consistent and available cp consistent and partition tolerant ap available and partition tolerant lets take an atm system as an example if using cp in the event of a partition network problem atm crash an atm might close the user out of the system since it doesnt want the user to withdraw or deposit and create inconsistent data with the other atms on the other hand using ap an atm can stay active after a partition and allow withdrawals and deposits and when the other atms come back online they will be updated with the new changes finally we can use ca if we think a partition is rare to occur and if one does happen we can just fix the problem and bring the system back online later screenshot 2022 09 13 114410 https user images githubusercontent com 62875631 189855513 309e0bee e1e0 41b2 a955 508f09c244d6 png indexing we use indexing to increase the speed at which queries are executed when dealing with large databases when we create an index on a column we create a separate data structure that orders
the data in a way that helps search query efficiency screenshot 2022 09 12 200247 https user images githubusercontent com 62875631 189713796 141df822 6fdb 436e aaa0 0fe31025f15c png clustered vs non clustered index clustered indexes are built on the primary key of a table and the leaf nodes of the index contain the row data non clustered indexes create a separate data structure that holds reference ids to the row data itself clustered indexes physically reorder the table data itself to match the index while for non clustered indexes the logical order does not match the physical order different types of index scans sequential scan regular in order table scan fast for fetching rows from a small table or a high proportion of rows from a large table index scan scans the index for pages that match the query then uses the reference point to fetch the matching rows index only scan scans the index for pages that match the query but the data needed is already in the index so there is no need to jump to the table faster than an index scan bitmap index scan scans pages for matches with the query condition and sets a bitmap bit for the pages that match then it just seeks out those specific pages in the heap index fragmentation index fragmentation occurs when modifications to the database cause blank spaces and page splits resulting in more io operations to scan the index for what we need index size increases because of the blank spaces as well internal fragmentation free space caused by inserts or deletes that makes the index store more data and results in more io operations to read logical fragmentation the order of pages does not match the physical ordering of pages making the search not sequential in order to avoid index fragmentation we can use ever increasing or decreasing keys to avoid inserts that will cause page splits also avoiding updates on keys avoids page splits using index fill factors can help us keep enough space in a page for more inserts anticipating the page split problem we can use the dbms to check for fragmentation if the fragmentation
is less than 10 percent there is no need to fix the index if there is 10 to 30 percent fragmentation reorganize the index logical reordering if fragmentation reaches greater than 30 percent we rebuild the index replication involves writing or copying the same data to different locations it can be between hosts in different locations between storage devices on the same host or to a cloud based host one technique for replication is master standby which uses a master server that handles the main database operations and writes to standby nodes this is done to improve accessibility as well as to achieve system fault tolerance and reliability master standby can be implemented synchronously or asynchronously synchronous replication means when the master receives data operations it executes them and then applies them to the standby immediately while asynchronous replication will write the changes to a log and apply them to the standby at a later point partitioning partitioning is the process of splitting a table into multiple tables in order for queries to execute faster since there is less data to scan the goal of partitioning is to decrease read and data load response time horizontal vs vertical partitioning horizontal partitioning divides the table by rows ex id0 100tbl id101 200tbl screenshot 2022 09 13 150406 https user images githubusercontent com 62875631 189896320 3da9d49d b9d3 46e2 b126 f4d2eb2f65d9 png vertical partitioning divides the table by columns usually trying to separate blob columns so as to reduce access time on searches screenshot 2022 09 13 150424 https user images githubusercontent com 62875631 189896345 58264132 5c36 4011 bcbb 650abe1b0e79 png sharding partitioning a database across multiple database servers with the same schema 1 woslzp8pkh8bwqdmi6jndw https user images githubusercontent com 62875631 189898573 fbea2475 9ee3 4ab2 9551 8d2871945293 png consistent hashing when sharding we use hashing to query the correct server if we have for example a hash function that
hashes to 4 servers and we decide to add a new server the hashing function has to be adjusted for 5 servers and therefore we have to move data from server to server to match the new hash function which can be very costly by using consistent hashing we can reduce the amount of data we are moving around significantly screenshot 2022 09 13 153230 https user images githubusercontent com 62875631 189901783 3dfbffd2 0cdc 4ae5 a17a ed8614728cfa png like this when adding a new server we only need to move data from one node to another which is much better conclusion whether youre a backend developer or a database administrator these concepts are the tools we need under our belt to take action on whatever issue comes our way in our development adventures from creating an index on a column to knowing when to shard or replicate your database these topics are essential for a fundamental understanding of advanced relational databases references big resource fundamentals of database engines by hussein nasser https www udemy com course database engines crash course transactions https www tutorialspoint com dbms dbms transaction htm https fauna com blog introduction to transaction isolation levels https youtu be ctcao89fcqw acid https www geeksforgeeks org acid properties in dbms ref lbp https levelup gitconnected com transaction isolation levels in ms sql guide for backend developers 6a5998e34f6c https sqlite org wal html cap https www geeksforgeeks org the cap theorem in dbms https youtu be k yaq8ahlfa indexing https chartio com learn databases how does indexing work https www geeksforgeeks org difference between clustered and non clustered index text a clustered index is a of the rows on disk https www pgmustard com docs explain sequential scan https www spotlightcloud io blog tips for fixing sql server index fragmentation text external index fragmentation separated from the original page https blog devart com sql server index fragmentation in depth
html https www sqlservercentral com forums topic index fragmentation and ssds text index fragmentation is bad on access is much 2c much faster https www beyondtrust com docs privileged identity faqs reorganize and rebuild indexes in database htm text reorganizing an ind sharding https youtu be ihnovzuzm3a replication https www stitchdata com resources data replication | server |
|
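the consistent hashing scheme described in the database article above can be sketched in a few lines of python this is a hedged illustration rather than production code the server names key names and the choice of 100 virtual nodes per server are all made up and md5 simply stands in for any uniform hash

```python
import bisect
import hashlib

REPLICAS = 100  # virtual nodes per server, to spread arcs evenly

def h(s: str) -> int:
    # Hash a string to a point on the ring.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, servers):
        # Place each server at REPLICAS positions on the ring.
        self.nodes = sorted((h(f"{s}#{i}"), s)
                            for s in servers for i in range(REPLICAS))
        self.positions = [pos for pos, _ in self.nodes]

    def server_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key.
        i = bisect.bisect(self.positions, h(key)) % len(self.nodes)
        return self.nodes[i][1]

ring4 = Ring(["s1", "s2", "s3", "s4"])
ring5 = Ring(["s1", "s2", "s3", "s4", "s5"])

keys = [f"key{i}" for i in range(1000)]
moved = [k for k in keys if ring4.server_for(k) != ring5.server_for(k)]
print(f"{len(moved)}/1000 keys moved after adding a fifth server")
```

with naive hash key mod n sharding adding a fifth server would remap roughly four keys in five here only the keys that fall inside the new servers arcs move and every one of them moves to the new server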
chaingreen-blockchain | chaingreen blockchain alt text chia logo svg githhub super linter https github com chaingreenorg chaingreen blockchain actions workflows super linter yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows super linter yml macos installer on catalina and python 3 8 https github com chaingreenorg chaingreen blockchain actions workflows build macos installer yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows build macos installer yml windows installer on windows 10 and python 3 7 https github com chaingreenorg chaingreen blockchain actions workflows build windows installer yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows build windows installer yml ubuntu core test https github com chaingreenorg chaingreen blockchain actions workflows build test ubuntu core yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows build test ubuntu core yml macos blockchain tests https github com chaingreenorg chaingreen blockchain actions workflows build test macos blockchain yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows build test macos blockchain yml macos simulation tests https github com chaingreenorg chaingreen blockchain actions workflows build test macos simulation yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows build test macos simulation yml macos core tests https github com chaingreenorg chaingreen blockchain actions workflows build test macos core yml badge svg https github com chaingreenorg chaingreen blockchain actions workflows build test macos core yml chaingreen is a modern cryptocurrency built from scratch designed to be efficient decentralized and secure here are some of the features and benefits proof of space and time https docs google com document d 1tmrib7lgi4qfkknaxukobhrmwbvlgl4f7esbdr 5xze edit based consensus which allows anyone to 
farm with commodity hardware very easy to use full node and farmer gui and cli thousands of nodes active on mainnet simplified utxo based transaction model with small on chain state lisp style turing complete functional programming language https chialisp com for money related use cases bls keys and aggregate signatures only one signature per block pooling protocol https github com chia network chia blockchain wiki pooling user guide that allows farmers to have control of making blocks support for light clients with fast objective syncing a growing community of farmers and developers around the world please check out the wiki https github com chia network chia blockchain wiki and faq https github com chia network chia blockchain wiki faq for information on this project python 3 7 is required make sure your default python version is 3 7 by typing python3 if you are behind a nat it can be difficult for peers outside your subnet to reach you when they start up you can enable upnp https www homenethowto com ports and nat upnp automatic port forward on your router or add a nat for ipv4 but not ipv6 and firewall rules to allow tcp port 8744 access to your peer these methods tend to be router make model specific most users should only install harvesters farmers plotter full nodes and wallets building timelords and vdfs is for sophisticated users in most environments chaingreen network and additional volunteers are running sufficient timelords for consensus installing install instructions are available in the install https github com chia network chia blockchain wiki install section of the chia blockchain repository wiki https github com chia network chia blockchain wiki running once installed a quick start guide https github com chia network chia blockchain wiki quick start guide is available from the repository wiki https github com chia network chia blockchain wiki | blockchain |
|
OCEChain | ocechain ocechain aims to create a decentralized autonomous content economy where content value can be recognized efficiently and all contributors can be incentivized directly and effectively to promote long term economic growth note requires go 1 11 https golang org dl build tendermint requires v0 26 1 rc0 other deps packages uncompress deps ocechain deps pkg tar bz2 into goroot src or gopath src run make in the top folder to start build blockchain command oce oce ocecli ocecli | server |
|
FlappyLearning | flappy learning demo http xviniette github io flappylearning program that learns to play flappy bird by machine learning neuroevolution http www scholarpedia org article neuroevolution alt tag https github com xviniette flappylearning blob gh pages img flappy png raw true neuroevolution js http github com xviniette flappylearning blob gh pages neuroevolution js utilization javascript initialize var ne new neuroevolution options default options values var options network 1 1 1 perceptron structure population 50 population by generation elitism 0 2 best networks kept unchanged for the next generation rate randombehaviour 0 2 new random networks for the next generation rate mutationrate 0 1 mutation rate on the weights of synapses mutationrange 0 5 interval of the mutation changes on the synapse weight historic 0 latest generations saved lowhistoric false only save score not the network scoresort 1 sort order 1 desc 1 asc nbchild 1 number of children by breeding update options at any time ne set options generate first or next generation var generation ne nextgeneration when a network is over save this score ne networkscore generation x score 0 you can see the neuroevolution integration in flappy bird in game js http github com xviniette flappylearning blob gh pages game js | neuroevolution machine-learning flappybird | ai |
sandle | sandle docker build status https github com hltcoe sandle actions workflows docker build yml badge svg https github com hltcoe sandle actions workflows docker build yml python test status https github com hltcoe sandle actions workflows python test yml badge svg https github com hltcoe sandle actions workflows python test yml license https img shields io badge license bsd blue https github com hltcoe sandle blob main license run a large language modeling sandbox in your local environment sandle this repository provides a docker compose system for hosting and interacting with large language models on your own hardware it includes a web sandbox screen shot 2022 08 09 at 1 29 33 pm https user images githubusercontent com 457238 183720063 9c87ce24 e4d4 4a9d b883 b085a12f48a8 png and an openai like rest api screen shot 2022 08 09 at 1 14 44 pm https user images githubusercontent com 457238 183715419 56c1467f e5fe 4ebe 9c3f b1feb7c4e9b9 png setup to build and run sandle with the huggingface backend using docker compose do bash cp docker compose backend hf yml docker compose override yml docker compose up build by default the demo web interface and api endpoint will be bound to port 80 on the host go to http localhost in your browser to use the web interface you must have an api key to use the web interface or api endpoint by default one will be generated and logged on startup if you wish to specify the accepted api key explicitly instead of using a randomly generated key set the sandle auth token environment variable with the desired api key when running docker compose sandle auth token exampleapikey docker compose up build if you wish to limit the models that can be used perhaps you want to support a particularly large model and don t want to incur the overhead of loading it into memory more than once then set the sandle single model environment variable with the desired model name when running docker compose sandle single model bigscience bloom docker compose 
up build brtx the docker compose version installed on brtx is older and does not work with our configuration file which requires docker compose v1 28 0 or later to use docker compose on brtx install a new standalone version of docker compose https docs docker com compose install compose plugin install the plugin manually to your home directory and run that version instead of the system installed version for example to download docker compose standalone version 2 7 0 bash curl sl https github com docker compose releases download v2 7 0 docker compose linux x86 64 o docker compose chmod 755 docker compose docker compose version additionally on brtx the server will be bound to the local host using ipv4 but localhost will resolve to the local host using ipv6 when connecting to the api specify 127 0 0 1 or localhost4 instead of localhost usage authentication api keys the application should accept can be specified in a file as command line arguments or in an environment variable if no api keys are specified the default one will be generated and logged on startup as in the openai api https beta openai com docs api reference authentication an api key can be used either as a bearer authentication token or as a basic authentication password with the user being the empty string for more information about specifying api keys run the following docker compose run no deps openai wrapper help example api calls calling openai s service is similar to calling a sandle service an example call to openai bash curl https api openai com v1 completions h content type application json h authorization bearer your openai api key d model text davinci 002 prompt say this is a test and an equivalent call to a sandle service bash curl http your sandle server v1 completions h content type application json h authorization bearer your sandle api key d model facebook opt 2 7b prompt say this is a test note that sandle only comes with support for http not https if you need https but don t have a 
certificate you can set up a reverse proxy in front of sandle using certbot https certbot eff org api documentation see our api documentation https hltcoe github io sandle for a description of the subset of the openai api implemented by sandle this documentation is generated using the swagger ui on our api definition file at docs swagger yaml advanced usage this repository provides the following docker services backend services that implement a subset of the openai v1 completions api https beta openai com docs without authentication these services use single threaded web servers and are suitable for one user at a time serving the api for a single user without docker backend hf a backend on top of huggingface supporting models like opt and bloom from the huggingface hub backend llama a backend on top of llama backend stub a stub backend for development and testing openai wrapper a service that implements a subset of the openai v1 models v1 models model and v1 completions apis https beta openai com docs delegating to backend services accordingly this service uses a multi threaded web server and is suitable for multiple users demo a web server that provides as a reverse proxy in front of the openai wrapper service as well as a web interface that uses the proxied api these services can be run together on your local machine using docker compose https docs docker com compose by default docker compose will load configuration from docker compose yml and if it is present docker compose override yml alternatively configuration files may be explicitly specified on the command line for example the following command starts sandle with the huggingface backend by specifying the configuration files explicitly instead of implicitly as demonstrated at the beginning of this document bash docker compose f docker compose yml f docker compose backend hf yml up build any number of configuration files can be specified at once as long as their contents can be merged together for example to 
start sandle with both the huggingface and the llama backend bash docker compose f docker compose yml f docker compose backend hf yml f docker compose backend llama yml up build serving the api for a single user without docker if you only need the api for a single user you can run a backend service by itself outside of docker ensure the appropriate dependencies are installed then run for example using the huggingface backend bash python backend hf serve backend hf py port 12349 to serve the partial v1 completions api on port 12349 on your local host the equivalent docker usage would be approximately bash docker build t user backend hf backend hf docker run it p 12349 8000 user backend hf port 8000 development to set up a development environment for the demo web interface install a recent version of npm go to the demo subdirectory and do npm ci then configure your development app by copying env development to env development local and changing the values set in the file accordingly in particular make sure you set vite sandle url to the url of the api implementation you are using for development the demo service acts as a simple reverse proxy for the api implementation provided by the openai wrapper service so if you wish to run an api implementation yourself you can run docker compose up as usual then use http localhost as the url note by default the demo service port is bound to port 80 on the host system if this port is in use or if you don t have access to it you may need to override it to do so add the sandle demo port variable to your environment with the desired port as its value adjust vite sandle url in env development local accordingly and then run docker compose up as usual once you ve done that you can start a development web server with npm run dev this server will run on port 3000 by default and hot reload the ui when any source files change stubbing out the backend if you cannot or do not wish to run a full language model backend during testing and 
development you may use the stub backend instead to do so just use the stub backend configuration file in lieu of other backend configuration docker compose f docker compose yml f docker compose backend stub yml up build testing static analysis we use flake8 to automatically check the style and syntax of the code and mypy to check type correctness to perform the checks go into a component subdirectory for example backend hf or openai wrapper and do pip install r dev requirements txt flake8 mypy these checks are run automatically for each commit by github ci property testing we use hypothesis https hypothesis readthedocs io en latest to randomly generate test cases for the backend and assert properties of interest for the output for example for any valid input a basic property that we would like to test is that sandle doesn t crash on that input a slightly more advanced property might be that the output does not exceed the user specified length limit property tests are defined in backend hf tests test service py and automatically discovered and run by pytest to run the tests first go to the backend hf subdirectory the rest of this section assumes you are in that directory then install the basic test requirements pip install r dev requirements txt the tests assume a backend service exists at http localhost 8000 you must start this service yourself you can start the service in docker or directly on the host machine depending on your needs the following two examples illustrate how to use these methods to start the backend service listening to port 8000 and using the first gpu on your host system to start the service in docker publishing container port 8000 to host port 8000 docker build t backend hf docker run rm it gpus device 0 p 8000 8000 backend hf alternatively to start the service directly on your host install the requirements cuda pytorch and the requirements specified in requirements txt then run cuda visible devices 0 python serve backend hf py then you can 
test that the service is up curl http 127 0 0 1 8000 v1 completions h content type application json d model facebook opt 125m prompt say this is a test finally to run the explicit property test cases pytest hypothesis profile explicit alternatively to run explicit test cases and automatically generate and test new cases may take a while pytest fuzz testing to perform fuzz testing using the microsoft restler tool in docker do the following first bring up the sandle system with the huggingface backend and a fixed authentication token sandle auth token dgvzda docker compose f docker compose yml f docker compose backend hf yml up build then run fuzz test run bash with that same authentication token to build the restler fuzzer docker image if it does not exist and run restler on the api specification in docs swagger yaml bash fuzz test run bash dgvzda this script will create the directory fuzz test output bind it to the restler docker container and write the output for each step of the testing procedure to the appropriately named subdirectory of fuzz test output additionally at the end of each step the contents of fuzz test output step responsebuckets runsummary json with step replaced with the step name will be printed to the console if after any step the number of failures reported in that file is greater than zero the test procedure will terminate benchmarking example runtime test using the apache bench tool installed by default on os x ab n 10 c 1 s 60 p qa txt t application json a your api key m post http your sandle server v1 completions where qa txt is a text file in the current directory that contains the prompt json example file contents json model facebook opt 2 7b prompt say this is a test | api api-server docker docker-compose language-modeling large-language-models nlp | ai |
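the curl completion calls in the sandle readme above can also be issued from python here is a hedged sketch using only the standard library the server url and api key are placeholders for your own deployment and the request is built but not sent so nothing depends on a live server

```python
import json
from urllib import request

SERVER = "http://localhost"      # placeholder: your sandle host
API_KEY = "your-sandle-api-key"  # placeholder: your API key

payload = {"model": "facebook/opt-2.7b", "prompt": "Say this is a test"}
req = request.Request(
    f"{SERVER}/v1/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # Same Bearer scheme the README describes for the OpenAI-like API.
        "Authorization": f"Bearer {API_KEY}",
    },
)

# Against a live deployment you would send it with:
# body = json.load(request.urlopen(req))
print(req.get_full_url())
```

the api key could equally be supplied as a basic auth password with an empty user as the readme notes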
NLP_Basics | nlp basics in the deep learning for nlp ipynb file i have tried to cover basics of nlp and followed the book titled deep learning for natural language processing i will keep updating the current repo basic nlp models like count vectorizer tf idf word2vec embedding sentiment analysis text classification lstm bilstm new nlp library basics topic modeling etc seq2seq modeling multi class text classification model comparison and selection https towardsdatascience com multi class text classification model comparison and selection 5eb066197568 about natural language processing performance metrics ppt https github com gulabpatel nlp basics blob main nlp performance metrics april6th2018 pdf p align center img src assets nlp metrics png width 300 alt nlp metrics timeline p evaluation metrics quick notes average precision macro average of sentence scores micro corpus sums numerators and denominators for each hypothesis reference s pairs before division machine translation 1 bleu bilingual evaluation understudy papineni 2002 https www aclweb org anthology p02 1040 pdf measures how many words overlap in a given translation when compared to a reference translation giving higher scores to sequential words recall limitation doesn t consider different types of errors insertions substitutions synonyms paraphrase stems designed to be a corpus measure so it has undesirable properties when used for single sentences 2 gleu google bleu wu et al 2016 http arxiv org pdf 1609 08144v2 pdf minimum of bleu recall and precision applied to 1 2 3 and 4grams recall number of matching n grams number of total n grams in the target precision number of matching n grams number of total n grams in generated sequence correlates well with bleu metric on a corpus metric but does not have its drawbacks for per sentence reward objective not to be confused with generalized language evaluation understanding or generalized bleu also known as gleu napoles et al 2015 s acl paper ground truth for 
grammatical error correction metrics http www aclweb org anthology p15 2097 napoles et al 2016 gleu without tuning https arxiv org abs 1605 02592 minor adjustment required as the number of references increases simple variant of bleu it hews much more closely to human judgements in mt an untranslated word or phrase is almost always an error but in gec this is not the case gleu computes n gram precisions over the reference but assigns more weight to n grams that have been correctly changed from the source python code https github com cnap gec ranking 3 wer word error rate levenshtein distance edit distance for words minimum number of edits insertions deletions or substitutions required to change the hypothesis sentence into the reference range greater than 0 ref hyp no max range as an automatic speech recognizer asr can insert an arbitrary number of words wer = (s + d + i) / n = (s + d + i) / (s + d + c) s number of substitutions d number of deletions i number of insertions c number of correct words n number of words in the reference (n = s + d + c) wacc word accuracy or word recognition rate wrr 1 wer limitation provides no details on the nature of translation errors different errors are treated equally even though they might influence the outcome differently being more disruptive or more difficult easier to be corrected if you look at the formula there s no distinction between a substitution error and a deletion followed by an insertion error possible solution proposed by hunt 1990 use of a weighted measure wer = (s + 0.5d + 0.5i) / n problem metric is used to compare systems so it s unclear whether hunt s formula could be used to assess the performance of a single system or how effective this measure is in helping a user with error correction see more info https martin thoma com word error rate calculation 4 meteor metric for evaluation of translation with explicit ordering banerjee 2005 s paper meteor an automatic metric for mt evaluation with high levels of correlation with human judgments https
www cs cmu edu alavie meteor pdf lavie agarwal 2007 meteor pdf about based on the harmonic mean of unigram precision and recall with recall weighted higher than precision includes exact word stem and synonym matching designed to fix some of the problems found in the bleu metric while also producing good correlation with human judgement at the sentence or segment level unlike bleu which seeks correlation at the corpus level python jar wrapper https github com tylin coco caption tree master pycocoevalcap meteor 5 ter translation edit rate snover et al 2006 s paper a study of translation edit rate with targeted human annotation https www cs umd edu snover pub amta06 ter amta pdf number of edits word deletions additions and substitutions required to make a machine translation match exactly to the closest reference translation in fluency and semantics ter = e / r where e is the minimum number of edits and r is the average length of the reference text it is generally preferred to bleu for estimation of sentence post editing effort source http opennmt net opennmt tools scorer pyter https pypi python org pypi pyter 0 2 2 1 char ter character level ter summarization 1 rouge recall oriented understudy for gisting evaluation lin 2004 rouge a package for automatic evaluation of summaries http www aclweb org anthology w w04 w04 1013 pdf package for automatic evaluation of summaries image caption quality 1 cider consensus based image description evaluation vedantam et al 2015 cider consensus based image description evaluation https arxiv org abs 1411 5726 used as a measurement for image caption quality | nlp topic-modeling sentimentanalysis deep-learning ner tf-idf word2vec textblob spacy stanza lstm texthero gensim parrot regex speech-to-text styleformer gramformer langauge-detector | ai |
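the wer formula above reduces to a word level levenshtein distance divided by the reference length here is a minimal sketch of that computation standard dynamic programming with made up example sentences

```python
# Minimal WER sketch: word-level Levenshtein distance divided by the
# number of reference words (S, D, I are counted implicitly).
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / correct
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("this is a test", "this is test"))  # one deletion -> 1/4 = 0.25
```

note that this treats all edits equally the hunt 1990 weighted variant mentioned above would instead weight deletions and insertions by 0.5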
Coursera-DevOps-Professional-Certificate | coursera devops about this professional certificate devops professionals are in high demand according to a recent gitlab report devops skills are expected to grow 122 over the next five years making it one of the fastest growing skills in the workforce this certificate will equip you with the key concepts and technical know how to build your skills and knowledge of devops practices tools and technologies and prepare you for an entry level role in software engineering the courses in this program will help you develop skill sets in a variety of devops philosophies and methodologies including agile development scrum methodology cloud native architecture behavior and test driven development and zero downtime deployments you will learn to program with the python language and linux shell scripts create projects in github containerize and orchestrate your applications using docker kubernetes openshift compose applications with microservices employ serverless technologies perform continuous integration and delivery ci cd develop testcases ensure your code is secure and monitor troubleshoot your cloud deployments guided by experts at ibm you will be prepared for success labs and projects in this certificate program are designed to equip job ready hands on skills that will help you launch a new career in a highly in demand field this professional certificate is suitable for both those who have none or some programming experience as well as those with and without college degrees applied learning project throughout the courses in this professional certificate you will develop a portfolio of projects to demonstrate your proficiency using various popular tools and technologies in devops and cloud native software engineering you will create applications using python programming language using different programming constructs and logic including functions rest apis and various python libraries develop linux shell scripts using bash and 
automate repetitive tasks create projects on github and work with git commands build and deploy applications composed of several microservices and deploy them to cloud using containerization tools such as docker kubernetes and openshift and serverless technologies employ various tools for automation continuous integration ci and continuous deployment cd of software including chef puppet github actions tekton and travis secure and monitor your applications and cloud deployments using tools like sysdig and prometheus list and description of completed courses course 01 introduction to devops devops skills are in demand devops skills are expected to be one of the fastest growing skills in the workforce this course can be a first step in obtaining those skills introduction to devops explores devops as a cultural movement including building a business case for devops the essentials of devops and a brief history of devops you will learn new ways of thinking working organizing and measuring to fully gain the benefits of devops you will learn how breaking down silos and organizing developers and operators into single cross functional teams is necessary for truly adopting devops having everyone contributing and everyone being responsible for success is at the heart of devops by thinking from a devops perspective you will be able to build better products for your customer you will view devops from a business perspective as well as for becoming a devops practitioner you will see how building a culture of shared responsibility and transparency is the foundation of every high performing devops teams you will have an opportunity to explore the concepts of infrastructure for continuous integration and continuous delivery you will be able to use actionable measures that apply directly to decision making and will ultimately result in continuous improvement this course is designed for those new to devops as well as those looking to increase their current knowledge of devops course 02 
introduction to cloud computing this course introduces you to the core concepts of cloud computing you gain the foundational knowledge required for understanding cloud computing from a business perspective as also for becoming a cloud practitioner you understand the definition and essential characteristics of cloud computing its history the business case for cloud computing and emerging technology usecases enabled by cloud we introduce you to some of the prominent service providers of our times e g aws google ibm microsoft etc the services they offer and look at some case studies of cloud computing across industry verticals you learn about the various cloud service models iaas paas saas and deployment models public private hybrid and the key components of a cloud infrastructure vms networking storage file block object cdn we also cover emergent cloud trends and practices including hybrid multicloud microservices serverless devops cloud native and application modernization and we go over the basics of cloud security monitoring and different job roles in the cloud industry even though this course does not require any prior cloud computing or programming experience by the end of the course you will have created your own account on ibm cloud and gained some hands on experience by provisioning a cloud service and working with it this course is suitable for a large variety of audiences whether you are an executive manager student who wants to become familiar with cloud computing terminology and concepts or someone who wants foundational grounding in cloud computing to start a career in this field or become a cloud practitioner such as a cloud engineer developer analyst etc course 03 introduction to agile development and scrum after successfully completing this course you will be able to embrace the agile concepts of adaptive planning iterative development and continuous improvement resulting in early deliveries and value to customers this course will benefit anyone who 
wants to get started with working the agile way it is particularly suitable for it practitioners such as software developers development managers project managers product managers and executives you will learn to apply agile practices derived from lean manufacturing concepts like test driven development learn how a scrum team functions learn how to write good user stories and track your team s progress using a kanban board create and refine a product backlog collaboratively with the team and the customer in a flexible and blameless culture this approach will lead you to higher levels of efficiency with the ability to plan and execute sprints with your development team measuring success with actionable metrics this course is about more than facts and processes it is about working collaboratively on a self organizing team coached by a scrum master and building what is needed rather than simply following a plan developed and taught by an experienced agile practitioner the course includes hands on practice through realistic scenario based labs using github and zenhub course 04 hands on introduction to linux commands and shell scripting this course provides a practical introduction to linux and commonly used linux unix shell commands it teaches you the basics of bash shell scripting to automate a variety of tasks the course includes both video based lectures as well as hands on labs to practice and apply what you learn you will have no charge access to a virtual linux server that you can access through your web browser so you don t need to download and install anything to perform the labs you will learn how to interact with the linux terminal execute commands navigate directories edit files as well as install and update software you will work with general purpose commands like id date uname ps top echo man directory management commands such as pwd cd mkdir rmdir find df file management commands like cat wget more head tail cp mv touch tar zip unzip access control 
command chmod text processing commands wc grep tr as well as networking commands hostname ping ifconfig and curl you will create simple to more advanced shell scripts that involve metacharacters quoting variables command substitution i o redirection pipes filters and command line arguments you will also schedule cron jobs using crontab this course is ideal for data engineers data scientists software developers and cloud practitioners who want to get familiar with frequently used commands on linux macos and other unix like operating systems as well as get started with creating shell scripts course 05 getting started with git and github collaboration and social coding are crucial parts of contemporary software engineering practices and the devops culture in this course you ll be introduced to collaborative version control and popular git platforms you will explore key git concepts such as branching and repositories as well as the use of git commands you will also learn and practice various git concepts such as forking cloning and merging workflows you will learn to use github to work effectively as a team and perform common git operations such as pull requests from both the web ui and command line developed and taught by experienced ibm practitioners in this course you ll gain vital skills and hands on experience using git and github each module contains hands on labs for you to apply and practice what you learn the course wraps up with a final project where you will start building your portfolio by creating and sharing a public open source github project all hands on activities in this course can be performed using web browser based tools and interfaces installation of any specialized software is not required on your own computer in order to complete the course course 06 python for data science ai development kickstart your learning of python for data science as well as programming in general with this beginner friendly introduction to python python is one of the 
world s most popular programming languages and there has never been greater demand for professionals with the ability to apply python fundamentals to drive business solutions across industries this course will take you from zero to programming in python in a matter of hours no prior programming experience necessary you will learn python fundamentals including data structures and data analysis complete hands on exercises throughout the course modules and create a final project to demonstrate your new skills by the end of this course you ll feel comfortable creating basic programs working with data and solving real world problems in python you ll gain a strong foundation for more advanced learning in the field and develop skills to help advance your career upon completion of any of the above programs in addition to earning a specialization completion certificate from coursera you ll also receive a digital badge from ibm recognizing your expertise in the field course 07 python project for ai application development this mini course is intended to apply foundational python skills by implementing different techniques to develop applications and ai powered solutions assume the role of a developer and unit test and package an application with the help of multiple hands on labs after completing this course you will have acquired the confidence to begin developing ai enabled applications using python build and run unit tests and package the application for distribution pre requisite python for data science ai and development course from ibm is a pre requisite for this project course please ensure that before taking this course you have either completed the python for data science ai and development course from ibm or have equivalent proficiency in working with python and data note this course is not intended to teach you python and does not have too much instructional content it is intended for you to apply prior python knowledge course 08 introduction to containers w 
docker kubernetes openshift with a median salary of 137 000 developers with container skills are in demand more than 70 percent of fortune 100 companies are running containerized applications but why using containerization organizations can move applications quickly and seamlessly among desktop on premises and cloud platforms in this course designed for beginners learn how to build cloud native applications using current containerization tools and technologies such as containers docker container registries kubernetes openshift and istio also learn how to deploy and scale your applications in any public private or hybrid cloud each week you will apply what you learn in hands on browser based labs by the end of the course you ll be able to build a container image then deploy and scale your container on the cloud using openshift if you understand basic cloud and programming concepts and your career path includes roles such as cloud developer cloud architect cloud system engineer devops engineer and cloud networking specialist this course is for you take the next step in your cloud career by learning more about containers course 09 application development using microservices and serverless are you a developer ready to explore serverless application development this intermediate level course is for you begin with an understanding of how serverless benefits developers learn when to use serverless programming serverless deployment models and discover its top use cases and design patterns you ll also discover how serverless supports continuous integration and continuous delivery ci cd and microservices integration hands on labs reinforce serverless programming concepts for creation deployment and invocation of cloud based functions including the deployment of microservices using openshift and istio complete the course with the confidence to build a multi tier web app that uses ibm cloud functions openshift istio and more show all about application development using 
microservices and serverless show all course 10 introduction to test driven development tdd successful developers need to not only build the right software but build it right to know your software works correctly you need to test each unit of code one of the best methods for this unit level testing is test driven development this course provides a detailed overview of test driven development tdd first you ll learn what automated testing is and why it is essential for building robust applications resilient to failure you ll explore the basics of testing including test cases testing levels and the traditional release cycle you ll learn about tdd and its complement behavior driven development bdd tdd tests individual units of code while bdd tests how these units work together then you ll examine tdd in detail you ll explore tdd s benefits concepts and popular tools and you ll hone your new testing skills through hands on labs you ll create tdd test cases by writing test assertions and building test fixtures and you ll run these test cases by using the nose testing package you ll then practice more advanced tdd methods such as increasing code coverage generating and using fake data and testing mock objects course 11 continuous integration and continuous delivery ci cd a principle of devops is to replace manual processes with automation to improve efficiency reduce human error and accelerate software delivery this requires automation that continuously integrates code changes and continuously delivers those changes to a production environment this course introduces you to continuous integration and continuous delivery ci cd an automated approach to software development you ll discover the benefits of ci cd for creating a devops pipeline and explore popular ci cd tools you ll examine the key features of ci explore social coding and the git feature branch workflow you will also learn about standard ci tools and gain a deep understanding of github actions workflows and 
their components this course provides an overview of cd and its goals benefits and best practices you will learn the requirements of a ci cd pipeline and discover standard cd tools you will explore tekton and discover how its components work together to create a cd pipeline you will learn how to build a pipeline pass parameters to a pipeline build triggers to start pipeline runs implement reusable tasks and create custom tasks you will discover how to complete your cd pipeline by building a container image and deploying your application to an openshift kubernetes cluster throughout the course you can hone your skills and challenge yourself through several hands on labs course 12 application security and monitoring how vulnerable are your applications to security risks and threats this course will help you identify vulnerabilities and monitor the health of your applications and systems you ll examine and implement secure code practices to prevent events like data breaches and leaks and discover how practices like monitoring and observability can keep systems safe and secure you will gain extensive knowledge on various practices concepts and processes for maintaining a secure environment including devsecops practices that automate security integration across the software development lifecycle sdlc static application security testing sast for identifying security flaws dynamic analysis and dynamic testing you ll also learn about creating a secure development environment both on premise and in the cloud you ll explore the open web application security project owasp top application security risks including broken access controls and sql injections additionally you will learn how monitoring observability and evaluation ensure secure applications and systems you ll discover the essential components of a monitoring system and how application performance monitoring apm tools aid in measuring app performance and efficiency you ll analyze the golden signals of monitoring 
explore visualization and logging tools and learn about the different metrics and alerting systems that help you understand your applications and systems through videos hands on labs peer discussion and the practice and graded assessments in this course you will develop and demonstrate your skills and knowledge for creating and maintaining a secure development environment course 13 devops capstone project in this course you will apply your skills and knowledge acquired during previous courses in the pc to demonstrate your proficiency in devops practices by developing testing deploying monitoring and enhancing a secure microservices based application on cloud over the course of several sprints using a variety of agile cloud native and ci cd technologies and tools | cloud |
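Course 04 above names shell features — variables, command substitution, pipes and filters, redirection, functions — that combine into short scripts. A minimal sketch using a few of them; the filenames and the `count_matches` function here are illustrative, not taken from the course materials:

```shell
#!/bin/bash
# Sketch combining constructs named in the course: a function taking
# arguments, a filter (grep), output redirection, and command substitution.

count_matches() {
    # grep filters lines matching a pattern; -c counts them
    grep -c "$1" "$2"
}

# Build a small sample file with output redirection
printf 'alpha\nbeta\nalpha beta\n' > /tmp/sample.txt

matches=$(count_matches "alpha" /tmp/sample.txt)   # command substitution
echo "alpha appears on $matches line(s)"           # prints: alpha appears on 2 line(s)
```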
tinyrwkv

# tinyrwkv: a tinier port of RWKV-LM

A port of the [RWKV-LM](https://github.com/BlinkDL/RWKV-LM) family of large language models to the [tinygrad](https://tinygrad.org) framework.

## Roadmap

- [x] Implement the WKV kernel as a custom function
- [ ] Implement the backwards of the WKV kernel as a custom function
- [x] Add support for the world model and tokenizer
- [x] Add support for the MIDI models
- [x] Add initial support for RWKV-5 models

## Dependencies

Currently requires tinygrad from git (or just use the nix flake).

Python:
- numpy
- pydot (only for GRAPH=1)
- tinygrad
- tokenizers
- torch (only for loading pytorch weights)
- tqdm
- wandb (optional, during training)

System:
- rust (only for compiling)
- clang (only for compiling)
- graphviz (only for GRAPH=1)

## Usage

Run the CLI with `python -m cli`.

Also usable as a python package to embed in other projects. It's also possible to compile the model to portable C code and embed it that way.

```
usage: tinyrwkv-cli [-h] [--seed SEED] {pre,gen,cht,cmp,bch,ptr,gpt,tra,bpt,wkv,mus} ...

CLI for tinyrwkv

positional arguments:
  {pre,gen,cht,cmp,bch,ptr,gpt,tra,bpt,wkv,mus}
    pre                 preprocess either tinyrwkv trained weights or pytorch trained weights into RNN form
    gen                 freeform generation using the RNN mode (requires a preprocessed model using `pre`)
    cht                 chat with a model in RNN mode (requires a preprocessed model using `pre`)
    cmp                 compile a RNN model into C source code and a compiled executable (need to run with CLANG=1)
    bch                 benchmark the RNN mode
    ptr                 preprocess pytorch weights into GPT form for training or inference
    gpt                 freeform generation using the GPT mode (requires a preprocessed model using `ptr`)
    tra                 pretrain or finetune a model (if finetuning, the model needs to be preprocessed with `ptr`)
    bpt                 benchmark the GPT mode
    wkv                 benchmark/test each WKV module
    mus                 music generation using the RNN mode (requires a preprocessed model using `pre`)

options:
  -h, --help            show this help message and exit
  --seed SEED           seed for random
```

## License

See the LICENSE and NOTICE files.
python-blockchain-tutorial

# Python, JS, & React | Build a Blockchain & Cryptocurrency

![Course logo](python-blockchain-logo.png)

The course is designed to help you achieve three main goals:

- Learn Python and backend web development.
- Build a blockchain and cryptocurrency project that you can add to your portfolio.
- Learn JavaScript, frontend web development, React.js, and React Hooks.

The course's main project is to build a blockchain and cryptocurrency. With a blockchain and cryptocurrency system as the main goal, you will go through a course journey that starts with backend development using Python. Then, you will transition to frontend web development with JavaScript, React.js, and React Hooks.

[Check out the course!](https://www.udemy.com/course/python-js-react-blockchain/?referralcode=9051a01550e782315b77)

Here's an overview of the overall course journey:

- Get an introduction of the Python fundamentals.
- Begin building the blockchain application with Python.
- Test the application using pytest.
- Incorporate the crucial concept of proof of work into the blockchain.
- Enhance the application to prepare for networking.
- Create the blockchain network using Flask and Pub/Sub.
- Integrate the cryptocurrency, building wallets, keys, and transactions.
- Extend the network implementation with the cryptocurrency.
- Transition from Python to JavaScript with a "From Python to JavaScript" introduction.
- Establish frontend web development skills, and begin coding with React.js.
- Create the frontend portion for the blockchain portion of the system.
- Complete the frontend by building a UI for the cryptocurrency portion of the system.

In addition, here are the skills that you'll gain from the course:

- How to build a blockchain and cryptocurrency system from scratch.
- The fundamentals of Python: data structures, object-oriented programming, modules, and more.
- The ins and outs of hashing and SHA256.
- Encoding and decoding in UTF-8.
- Testing Python applications with pytest.
- Python virtual environments.
- The concept of proof of work, and how it pertains to mining blocks.
- Conversion between hexadecimal to binary.
- HTTP APIs and requests.
- How to create APIs with Python Flask.
- The publish/subscribe pattern to set up networks.
- When to apply the concepts of serialization and deserialization.
- Public/private keypairs, and generating data signatures.
- The fundamentals of JavaScript.
- Frontend web development, and how web applications are constructed.
- The core concepts of React and React Hooks.
- How the React engine works under the hood, and how React applies hooks.
- CORS, and how to get over the CORS error properly.
- How to build a pagination system.

## Command Reference

Activate the virtual environment:
```
source blockchain-env/bin/activate
```

Install all packages:
```
pip3 install -r requirements.txt
```

Run the tests (make sure to activate the virtual environment):
```
python3 -m pytest backend/tests
```

Run the application and API (make sure to activate the virtual environment):
```
python3 -m backend.app
```

Run a peer instance (make sure to activate the virtual environment):
```
export PEER=True
python3 -m backend.app
```

Run the frontend — in the frontend directory:
```
npm run start
```

Seed the backend with data (make sure to activate the virtual environment):
```
export SEED_DATA=True
python3 -m backend.app
```
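The proof-of-work and SHA-256 hashing ideas the course covers can be sketched in a few lines. The leading-zeros difficulty scheme below is a common illustration and a stand-in, not the course's actual implementation:

```python
import hashlib

def crypto_hash(*args):
    """SHA-256 hex digest of the given arguments (UTF-8 encoded, order-sensitive)."""
    joined = "".join(str(a) for a in args).encode("utf-8")
    return hashlib.sha256(joined).hexdigest()

def mine(last_hash, data, difficulty=2):
    """Search for a nonce whose hash starts with `difficulty` leading zeros.
    A toy stand-in for a blockchain's proof-of-work loop."""
    nonce = 0
    while True:
        candidate = crypto_hash(last_hash, data, nonce)
        if candidate.startswith("0" * difficulty):
            return nonce, candidate
        nonce += 1

nonce, block_hash = mine("genesis_hash", "tx-data", difficulty=2)
print(f"mined nonce={nonce}, hash={block_hash}")
```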
BikeStore

# TechnicalAssessment

This project was generated with [Angular CLI](https://github.com/angular/angular-cli) version 7.0.7.

## Development server

Run `ng serve` for a dev server. Navigate to `http://localhost:4200`. The app will automatically reload if you change any of the source files.

## Code scaffolding

Run `ng generate component component-name` to generate a new component. You can also use `ng generate directive|pipe|service|class|guard|interface|enum|module`.

## Build

Run `ng build` to build the project. The build artifacts will be stored in the `dist/` directory. Use the `--prod` flag for a production build.

## Running unit tests

Run `ng test` to execute the unit tests via [Karma](https://karma-runner.github.io).

## Running end-to-end tests

Run `ng e2e` to execute the end-to-end tests via [Protractor](http://www.protractortest.org).

## Further help

To get more help on the Angular CLI use `ng help` or go check out the [Angular CLI README](https://github.com/angular/angular-cli/blob/master/README.md).
natural-questions | natural questions natural questions nq contains real user questions issued to google search and answers found from wikipedia by annotators nq is designed for the training and evaluation of automatic question answering systems please see http ai google com research naturalquestions http ai google com research naturalquestions to get the data and view the leaderboard for more details on the design and content of the dataset please see the paper natural questions a benchmark for question answering research https ai google research pubs pub47761 to help you get started on this task we have provided some baseline systems https github com google research language tree master language question answering that can be branched data description nq contains 307 372 training examples 7 830 examples for development and we withold a further 7 842 examples for testing in the paper we demonstrate a human upper bound of 87 f1 on the long answer selection task and 76 on the short answer selection task to run on the hidden test set you will have to upload a docker image containing your system to the nq competition site http ai google com research naturalquestions competition instructions on building the docker image are given here competition md data format each example in the original nq format contains the rendered html of an entire wikipedia page as well as a tokenized representation of the text on the page this section will go on to define the full nq data format but we recognize that most users will only want a version of the data in which the text has already been extracted we have supplied a simplified version of the training set https storage cloud google com natural questions v1 0 simplified simplified nq train jsonl gz and we have also supplied a simplify nq example function in data utils py data utils py which maps from the original format to the simplified format only the original format is provided by our competition site https ai google com research 
naturalquestions competition if you use the simplified data you should call simplify nq example on each example seen during evaluation and you should provide predictions using the start token and end token offsets that correspond to the whitespace separated tokens in the document text as well as recognizing predictions according to token offsets the evaluation script also recognizes predictions as byte offsets into the original html this allows users to define their own text extraction and tokenization schemes to help you explore the data this repository also contains a simple data browser nq browser py that you can run on your own machine and modify as you see fit we also have provided extra preprocessing utilities and tensorflow dataset code in the repository containing the baseline systems presented in our paper https github com google research language tree master language question answering the rest of this section describes the data format thouroughly in reference to a toy example toy example md each example contains a single question a tokenized representation of the question a timestamped wikipedia url and the html representation of that wikipedia page json question text who founded google question tokens who founded google document url http www wikipedia org google document html html body h1 google h1 p google was founded in 1998 by we release the raw html since this is what was seen by our annotators and we would like to support approaches that make use of the document structure however we expect most initial efforts will prefer to use a tokenized representation of the page json document tokens token h1 start byte 12 end byte 16 html token true token google start byte 16 end byte 22 html token false token inc start byte 23 end byte 26 html token false token start byte 26 end byte 27 html token false token h1 start byte 27 end byte 32 html token true token p start byte 32 end byte 35 html token true token google start byte 35 end byte 41 html token false 
token was start byte 42 end byte 45 html token false token founded start byte 46 end byte 53 html token false token in start byte 54 end byte 56 html token false token 1998 start byte 57 end byte 61 html token false token by start byte 62 end byte 64 html token false each token is either a word or a html tag that defines a heading paragraph table or list html tags are marked as such using the boolean field html token each token also has an inclusive start byte and exclusive end byte that identifies the token s position within the example s utf 8 indexed html string long answer candidates the first task in natural questions is to identify the smallest html bounding box that contains all of the information required to infer the answer to a question these long answers can be paragraphs lists list items tables or table rows while the candidates can be inferred directly from the html or token sequence we also include a list of long answer candidates for convenience each candidate is defined in terms of offsets into both the html and the document tokens as with all other annotations start offsets are inclusive and end offsets are exclusive json long answer candidates start byte 32 end byte 106 start token 5 end token 22 top level true start byte 65 end byte 102 start token 13 end token 21 top level false in this example you can see that the second long answer candidate is contained within the first we do not disallow nested long answer candidates we just ask annotators to find the smallest candidate containing all of the information required to infer the answer to the question however we do observe that 95 of all long answers including all paragraph answers are not nested below any other candidates since we believe that some users may want to start by only considering non overlapping candidates we include a boolean flag top level that identifies whether a candidate is nested below another top level false or not top level true please be aware that this flag is only 
included for convenience and it is not related to the task definition in any way for more information about the distribution of long answer types please see the data statistics section below annotations the nq training data has a single annotation with each example and the evaluation data has five each annotation defines a long answer span a list of short answers and a yes no answer if the annotator has marked a long answer then the long answer dictionary identifies this long answer using byte offsets token offsets and an index into the list of long answer candidates if the annotator has marked that no long answer is available all of the fields in the long answer dictionary are set to 1 json annotations long answer start byte 32 end byte 106 start token 5 end token 22 candidate index 0 short answers start byte 73 end byte 78 start token 15 end token 16 start byte 87 end byte 92 start token 18 end token 19 yes no answer none each of the short answers is also identified using both byte offsets and token indices there is no limit to the number of short answers there is also often no short answer since some questions such as describe google s founding do not have a succinct extractive answer when this is the case the long answer is given but the short answers list is empty finally if no short answer is given it is possible that there is a yes no answer for questions such as did larry co found google the values for this field yes or no if a yes no answer is given the default value is none when no yes no answer is given for statistics on long answers short answers and yes no answers please see the data statistics section below data statistics the nq training data contains 307 373 examples 152 148 have a long answer and 110 724 have a short answer short answers can be sets of spans in the document 106 926 or yes or no 3 798 long answers are html bounding boxes and the distribution of nq long answer types is as follows html tags percent of long answers p 72 9 table 19 0 tr 
1 5 ul ol dl 3 2 li dd dt 3 4 while we allow any paragraph table or list element to be a long answer we find that 95 of the long answers are not contained by any other long answer candidate we mark these top level candidates in the data as described above short answers may contain more than one span if the question is asking for a list of answers e g who made it to stage 3 in american ninja warrior season 9 however almost all short answers 90 only contain a single span of text all short answers are contained by the long answer given in the same annotation prediction format please see the evaluation script nq eval py for a description of the prediction format that your model should output contact us if you have a technical question regarding the dataset code or publication please create an issue in this repository this is the fastest way to reach us if you would like to share feedback or report concerns please email us at natural questions google com | os |
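The token-offset scheme described above (inclusive `start_token`, exclusive `end_token`, indexing the whitespace-separated tokens of the document) can be illustrated on a toy record. The field names follow the simplified training-set format, but the record itself is made up for this sketch:

```python
def candidate_text(example, candidate):
    """Recover a long-answer candidate's text from a simplified-format
    example: start_token is inclusive and end_token exclusive, indexing
    the whitespace-separated tokens of document_text."""
    tokens = example["document_text"].split(" ")
    return " ".join(tokens[candidate["start_token"]:candidate["end_token"]])

# Toy record in the spirit of the simplified NQ format (not real data)
example = {
    "question_text": "who founded google",
    "document_text": "<P> Google was founded in 1998 by Larry Page and Sergey Brin </P>",
    "long_answer_candidates": [
        {"start_token": 0, "end_token": 13, "top_level": True},
    ],
}

print(candidate_text(example, example["long_answer_candidates"][0]))
# <P> Google was founded in 1998 by Larry Page and Sergey Brin </P>
```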
opencv

# OpenCV: Open Source Computer Vision Library

## Resources

* Homepage: <https://opencv.org>
  * Courses: <https://opencv.org/courses>
* Docs: <https://docs.opencv.org/4.x/>
* Q&A forum: <https://forum.opencv.org>
  * previous forum (read only): <http://answers.opencv.org>
* Issue tracking: <https://github.com/opencv/opencv/issues>
* Additional OpenCV functionality: <https://github.com/opencv/opencv_contrib>

## Contributing

Please read the [contribution guidelines](https://github.com/opencv/opencv/wiki/How_to_contribute) before starting work on a pull request.

### Summary of the guidelines:

* One pull request per issue;
* Choose the right base branch;
* Include tests and documentation;
* Clean up "oops" commits before submitting;
* Follow the [coding style guide](https://github.com/opencv/opencv/wiki/Coding_Style_Guide).
Assignment-1 | phonegap phonegap is a development tool that allows web developers to take advantage of the core features in the iphone and android sdk using javascript get started download the source git clone git github com sintaxi phonegap git phonegap project is separated into a native project for each device javascript files and a rakefile phonegap readme md rakefile android blackberry iphone javascripts each project has a respective readme md file view that file for detailed information on how to work with that device phonegap offers one unified api for accessing core functionality on all devices where possible phonegap follows the html5 spec api device exposes properties of the phone such as its device id model and os version number location gain access to the latitude longitude of the device and depending on the type of device the course speed and altitude accelerometer monitor the accelerometer on the device to detect orientation shaking and other similar actions contacts query the phone addressbook to read the users contacts orientation read the device layout orientation e g landscape vs portrait camera brings up the camera or photo browser on the phone to allow the user to upload a photo vibrate triggers the vibration alert on the phone if it is supported sound play sound files wav mp3 etc telephony trigger and activate phone calls xui you may work with any javascript framework within a phonegap application xui http xuijs com is the officially preferred framework of the phonegap core team xui is inspired by jquery optimized for web browsers and weighs in at 6 2k 2 4k minified and gziped community website phonegap com http phonegap com google group groups google com group phonegap http groups google com group phonegap wiki phonegap pbwiki com http phonegap pbwiki com twitter twitter com phonegap http twitter com phonegap the mit license copyright c 2008 rob ellis brock whitten brian leroux joe bowser dave johnson nitobi permission is hereby granted free of 
charge to any person obtaining a copy of this software and associated documentation files the software to deal in the software without restriction including without limitation the rights to use copy modify merge publish distribute sublicense and or sell copies of the software and to permit persons to whom the software is furnished to do so subject to the following conditions the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software the software is provided as is without warranty of any kind express or implied including but not limited to the warranties of merchantability fitness for a particular purpose and noninfringement in no event shall the authors or copyright holders be liable for any claim damages or other liability whether in an action of contract tort or otherwise arising from out of or in connection with the software or the use or other dealings in the software phonegap is a nitobi http nitobi com sponsored project | front_end |
|
blockchainguide | mindcarver 2018 image 20201124130443834 https tva1 sinaimg cn large 0081kckwgy1gl06iaf8eej313g0e8aal jpg image 20201124131619288 https tva1 sinaimg cn large 0081kckwgy1gl06udgxr2j315e0sa79d jpg ethfans https learnblockchain cn 2019 11 08 zkp info 1 https www etherchain org 2 https ethstats net 3 https ethereum github io yellowpaper paper pdf 4 https medium com preethikasireddy how does ethereum work anyway 22d1df506369 5 ethereum improvement proposals https eips ethereum org ethereum patricia tree understanding the ethereum trie https easythereentropy wordpress com 2014 06 04 understanding the ethereum trie ethereum ethereum patricia tree https github com ethereum wiki wiki patricia tree ethereum wiki mpt merkle patricia tree http blog csdn net qq 33935254 article details 55505472 merkle patricia tree mpt merkle http blog csdn net zslomo article details 53434883 t 1498537389197 merkle patricia tree mpt http www cnblogs com fengzhiwu p 5584809 html https github com wanshan1024 ethereum yellowpaper blob master ethereum yellow paper cn pdf solidity http me tryblockchain org getting up to speed on ethereum html solidity http wiki jikexueyuan com project solidity zh http ethfans org posts block chain technology smart contracts and ethereum https github com consensys smart contract best practices https learnblockchain cn manuals erc https github com blockchainguide eip cn git erc makerdao https www wanghaoyi com defi maker dao basis html compound https www wanghaoyi com defi compound profile html synthetix synthetix tokensets tokensets pooltogether pooltogether sablier https www wanghaoyi com defi sablier profile html dex uniswap https www wanghaoyi com defi dex uniswap profile html dex dydx https www wanghaoyi com defi dex dydx profile html nexus mutual https www wanghaoyi com defi insurance profile html https www wanghaoyi com blockchain blockchain app datashare https www wanghaoyi com blockchain blockchain app attestation https www wanghaoyi com 
blockchain blockchain app traceability https www wanghaoyi com blockchain blockchain app token hyperledger fabric fabric fabric http wutongtree github io translations next consensus architecture proposal zh http www 8btc com wiki bitcoin a peer to peer electronic cash system utxo http 8btc com article 4381 1 html http www 8btc com blockchain tech algorithm http www 8btc com blockchain tech mining 1 http www 8btc com blockchain tech consensus mechanism pow pos dpos pbft http blog csdn net lsttoy article details 61624287 https yq aliyun com articles 60400 http 8btc com article 2238 1 html http blog csdn net jeffrey zhou article details 56672948 https www zhihu com question 53385152 paxosstore paxos http www infoq com cn articles wechat paxosstore paxos algorithm protocol raft http www infoq com cn articles raft paper pos https yq aliyun com articles 60400 pos pow http 8btc com article 1882 1 html dpos http www 8btc com dpos back to satoshi dpos http www 8btc com dpossha dpos https steemit com dpos legendx dpos dpos vs pow https zhuanlan zhihu com p 28127511 pos dpos pow https www zhihu com question 49995385 https github com blockchainguide zkp tech git http ethfans org posts blockchain infrastructure landscape a first principles http www 8btc com elwingao blockchain 6 cto http www 8btc com blockchain architecture http www 8btc com ethereum top 10 app https github com blockchainguide zkp tech git http blog csdn net elwingao article details 53410750 http www 8btc com bytom sidechain rootstock http www 8btc com sidechains drivechains and rsk 2 way peg design http www 8btc com enabling blockchain innovations with pegged sidechains abstract introduction off chain http view xiaomiquan com view 59a3e22d2540ed222c6075b8 corda http www 8btc com ln rn corda rootstock http www 8btc com tan90d88 rootstock http www 8btc com tan90d84 btc relay rootstock http www 8btc com btc relay and rootstock btc relay http btcrelay org ipfs ipfs https ipfser org ipfs https zhuanlan zhihu com 
ipfsguide filecoin https filecoin io zh cn file https spec filecoin io filecoin https research protocol ai groups cryptolab https protocol ai filecoin https docs filecoin io https gguoss github io 2017 05 28 ipfs https github com ipfs papers raw master ipfs cap2pfs ipfs p2p file system pdf | blockchain |
|
CC2511 | cc2511 embedded system design | os |
|
resources | useful resources for developers a list of student collated resources deemed to be useful for every developer and categorised andrei has a hand picked list of his favourite resources which you can find here https zerotomastery io resources utm source github utm medium resources table of contents api api md a list of resources for learning how to use apis algorithms data structures algorithmsdatastructures md resources for tackling algorithms angular resources angular md a list of resources for learning angular arduino arduino md a list of resources for the arduino micro controller articles developmentarticles md general articles page on web development css resources cssresources md a list of resources for learning css cheat sheets cheatsheets md for those looking for the quick and dirty of how to do things or if you simply forgot something look no further cloud cloud md cloud learning resources to kickstart your career free online courses freeonlinecourses md free to attend online courses including moocs massive open online courses https en wikipedia org wiki massive open online course game development resources gamedev md a page which lists out the resources which helps you go from zero to mastery in game development general resources for learning web development generalresources md a page with mostly free resources for learning web development and coding in general git and github using git and github md resources page on using git and github interviewing for coding jobs howtointerviewforcodejobs md a page of resources about preparing for the job market javascript resources javascript md a list of resources for learning javascript junior to senior developer roadmap resources juniortoseniorcourse md resources mentioned in the zero to mastery course mobile app development mobileappdevelopment md a curated list of useful resources for mobile app development for android ios windows or any other mobile system podcasts podcasts md a range of podcasts covering 
topics like coding design accessibility javascript and mindset self development practice resources practiceresources md a list of exercises and gamified resources for web development programming books programming books md featuring a list of insightful programming books both free and paid versions python resources python md a list of resources for learning python raspberry pi raspberrypi md resources for the raspberry pi search engine optimization searchengineoptimization md a list of resources for learning search engine optimization seo unix unix md resources for unix systems linux macos etc web design resources webdesignresources md a page of resources for web design web development tools webdevtools md a page listing a number of free web development tools youtube channels youtubechannels md a list of youtube channels for learning all about programming covering topics as broad as web development design history hacking and computer science cs contributing you are always welcome to contribute to this project kindly visit our contributor s guide https github com zero to mastery resources blob master contributing md before opening a pull request first time contributing to open source awesome read more about the process in contributing to github https github com zero to mastery resources blob master contributing to github md list of contributors contributors md a page showing the github usernames of all who have contributed to this open source project make sure to add yourself and submit a pull request if you ve contributed | developer-resources programming-resources tutorial-list react javascript youtube-channels articles podcasts youtube-channel | front_end |
flow-nft | flow non fungible token standard this standard defines the minimum functionality required to implement a safe secure and easy to use non fungible token contract on the flow blockchain https www onflow org what is cadence cadence is the resource oriented programming language https docs onflow org cadence for developing smart contracts on flow before reading this standard we recommend completing the cadence tutorials https docs onflow org cadence tutorial 01 first steps to build a basic understanding of the programming language resource oriented programming and by extension cadence provides an ideal programming model for non fungible tokens nfts users are able to store their nft objects directly in their accounts and transact peer to peer learn more in this blog post about resources https medium com dapperlabs resource oriented programming bee4d69c8f8e import addresses the nonfungibletoken and metadataviews contracts are already deployed on various networks you can import them in your contracts from these addresses there is no need to deploy them yourself note with the emulator you must use the contracts flag to deploy these contracts network contract address emulator canary 0xf8d6e0586b0a20c7 testnet 0x631e88ae7f1d7c20 mainnet 0x1d7e57aa55817448 core features the nonfungibletoken contract defines the following set of functionality that must be included in each implementation contracts that implement the nonfungibletoken interface are required to implement two resource interfaces nft a resource that describes the structure of a single nft collection a resource that can hold multiple nfts of the same type users typically store one collection per nft type saved at a well known location in their account storage for example all nba top shot moments owned by a single user are held in a topshot collection https github com dapperlabs nba smart contracts blob master contracts topshot cdc l605 stored in their account at the path storage momentcollection create a 
new nft collection create a new collection using the createemptycollection function this function must return an empty collection that contains no nfts users typically save new collections to a well known location in their account and link the nonfungibletoken collectionpublic interface as a public capability swift let collection examplenft createemptycollection account save collection to storage examplenftcollection create a public capability for the collection account link nonfungibletoken collectionpublic public examplenftcollection target storage examplenftcollection withdraw an nft withdraw an nft from a collection using the withdraw contracts examplenft cdc l36 l42 function this function emits the withdraw contracts examplenft cdc l12 event swift let collectionref account borrow examplenft collection from storage examplenftcollection panic could not borrow a reference to the owner s collection withdraw the nft from the owner s collection let nft collectionref withdraw withdrawid 42 deposit an nft deposit an nft into a collection using the deposit contracts examplenft cdc l46 l57 function this function emits the deposit contracts examplenft cdc l13 event this function is available on the nonfungibletoken collectionpublic interface which accounts publish as public capability this capability allows anybody to deposit an nft into a collection without accessing the entire collection swift let nft examplenft nft let collection account getcapability public examplenftcollection borrow nonfungibletoken collectionpublic panic could not borrow a reference to the receiver s collection collection deposit token nft important in order to comply with the deposit function in the interface an implementation must take a nonfungibletoken nft resource as an argument this means that anyone can send a resource object that conforms to nonfungibletoken nft to a deposit function in an implementation you must cast the token as your specific token type before depositing it or you will 
deposit another token type into your collection for example swift let token token as examplenft nft list nfts in an account return a list of nfts in a collection using the getids contracts examplenft cdc l59 l62 function this function is available on the nonfungibletoken collectionpublic interface which accounts publish as public capability swift let collection account getcapability public examplenftcollection borrow nonfungibletoken collectionpublic panic could not borrow a reference to the receiver s collection let ids collection getids nft metadata nft metadata is represented in a flexible and modular way using the standard proposed in flip 0636 https github com onflow flips blob main application 20210916 nft metadata md when writing an nft contract you should implement the metadataviews resolver contracts metadataviews cdc l3 l6 interface which allows your nft to implement one or more metadata types called views each view represents a different type of metadata such as an on chain creator biography or an off chain video clip views do not specify or require how to store your metadata they only specify the format to query and return them so projects can still be flexible with how they store their data how to read metadata this example shows how to read basic information about an nft including the name description image and owner source get nft metadata cdc scripts get nft metadata cdc swift import examplenft from import metadataviews from get the regular public capability let collection account getcapability examplenft collectionpublicpath borrow examplenft examplenftcollectionpublic panic could not borrow a reference to the collection borrow a reference to the nft as usual let nft collection borrowexamplenft id 42 panic could not borrow a reference to the nft call the resolveview method provide the type of the view that you want to resolve view types are defined in the metadataviews contract you can see if an nft supports a specific view type by using the 
getviews method if let view nft resolveview type metadataviews display let display view as metadataviews display log display name log display description log display thumbnail the owner is stored directly on the nft object let owner address nft owner address inspect the type of this nft to verify its origin let nfttype nft gettype nfttype identifier is a e03daebed8ca0615 examplenft nft how to implement metadata the example nft contract contracts examplenft cdc shows how to implement metadata views list of views name purpose status source core view nftview basic view that includes the name description and thumbnail implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l32 l65 display return the basic representation of an nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l85 l120 white check mark httpfile a file available at an http s url implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l143 l155 ipfsfile a file stored in ipfs implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l157 l195 edition return information about one or more editions for an nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l197 l229 editions wrapper for multiple edition views implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l176 l187 serial serial number for an nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l258 l270 royalty a royalty cut for a given nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l286 l323 royalties wrapper for multiple royalty views implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l325 
l352 white check mark media represents a file with a corresponding mediatype implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l378 l395 medias wrapper for multiple media views implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l397 l407 license represents a license according to https spdx org licenses implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l423 l432 externalurl exposes a url to an nft on an external site implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l448 l458 white check mark nftcollectiondata provides storage and retrieval information of an nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l474 l531 white check mark nftcollectiondisplay returns the basic representation of an nft s collection implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l547 l586 white check mark rarity expose rarity information for an nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l603 l628 trait represents a single field of metadata on an nft implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l644 l671 traits wrapper for multiple trait views implemented metadataviews cdc https github com onflow flow nft blob master contracts metadataviews cdc l673 l690 white check mark core views the views marked as core views are considered the minimum required views to provide a full picture of any nft if you want your nft to be featured on the flow nft catalog https nft catalog vercel app it should implement all of them as a pre requisite always prefer wrappers over single views when exposing a view that could have multiple occurrences on a single nft such as 
edition royalty media or trait the wrapper view should always be used even if there is only a single occurrence the wrapper view is always the plural version of the single view name and can be found below the main view definition in the metadataviews contract when resolving the view the wrapper view should be the returned value instead of returning the single view or just an array of several occurrences of the view example preferred cadence pub fun resolveview view type anystruct switch view case type metadataviews editions let editioninfo metadataviews edition name example nft edition number self id max nil let editionlist metadataviews edition editioninfo return metadataviews editions editionlist to be avoided cadence resolveview should always return the same type that was passed to it as an argument so this is improper usage because it returns edition instead of editions pub fun resolveview view type anystruct switch view case type metadataviews editions let editioninfo metadataviews edition name example nft edition number self id max nil return editioninfo cadence this is also improper usage because it returns edition instead of editions pub fun resolveview view type anystruct switch view case type metadataviews editions let editioninfo metadataviews edition name example nft edition number self id max nil let editionlist metadataviews edition editioninfo return editionlist royalty view the metadataviews contract also includes a standard view for royalties https github com onflow flow nft blob master contracts metadataviews cdc l136 l208 this view is meant to be used by 3rd party marketplaces to take a cut of the proceeds of an nft sale and send it to the author of a certain nft each nft can have its own royalty view cadence pub struct royalties array that tracks the individual royalties access self let cutinfos royalty and the royalty can indicate whatever fungible token it wants to accept via the type of the generic fungibletoken receiver capability that it 
specifies cadence pub struct royalty generic fungibletoken receiver for the beneficiary of the royalty can get the concrete type of the receiver with receiver gettype recommendation users should create a new link for a flowtoken receiver for this using getroyaltyreceiverpublicpath and not use the default flowtoken receiver this will allow users to update the capability in the future to use a more generic capability pub let receiver capability anyresource fungibletoken receiver multiplier used to calculate the amount of sale value transferred to royalty receiver note it should be between 0 0 and 1 0 ex if the sale value is x and multiplier is 0 56 then the royalty value would be 0 56 x generally percentage get represented in terms of basis points in solidity based smart contracts while cadence offers ufix64 that already supports the basis points use case because its operations are entirely deterministic integer operations and support up to 8 points of precision pub let cut ufix64 if someone wants to make a listing for their nft on a marketplace the marketplace can check to see if the royalty receiver accepts the seller s desired fungible token by checking the concrete type of the reference if the concrete type is not the same as the type of token the seller wants to accept the marketplace has a few options they could either get the address of the receiver by using the receiver owner address field and check to see if the account has a receiver for the desired token they could perform the sale without a royalty cut or they could abort the sale since the token type isn t accepted by the royalty beneficiary you can see example implementations of royalties in the examplenft contract and the associated transactions and scripts important instructions for royalty receivers if you plan to set your account as a receiver of royalties you ll likely want to be able to accept as many token types as possible this won t be immediately possible at first but eventually we will also 
design a contract that can act as a sort of switchboard for fungible tokens it will accept any generic fungible token and route it to the correct vault in your account this hasn t been built yet but you can still set up your account to be ready for it in the future therefore if you want to receive royalties you should set up your account with the setup account to receive royalty cdc transaction https github com onflow flow nft blob master transactions setup account to receive royalty cdc this will link generic public path from metadataviews getroyaltyreceiverpublicpath to your chosen fungible token for now then use that public path for your royalty receiver and in the future you will be able to easily update the link at that path to use the fungible token switchboard instead contract metadata now that contract borrowing is released you can also implement the resolver contracts resolver cdc interface on your contract and resolve views from there as an example you might want to allow your contract to resolve nftcollectiondata and nftcollectiondisplay so that platforms do not need to find an nft that belongs to your contract to get information about how to set up or show your collection cadence import viewresolver from 0xf8d6e0586b0a20c7 import metadataviews from 0xf8d6e0586b0a20c7 pub fun main addr address name string storagepath let t type metadataviews nftcollectiondata let borrowedcontract getaccount addr contracts borrow viewresolver name name panic contract could not be borrowed let view borrowedcontract resolveview t if view nil return nil let cd view as metadataviews nftcollectiondata return cd storagepath will return cadence domain storage identifier examplenftcollection how to propose a new view please open a pull request to propose a new metadata view or changes to an existing view feedback as flow and cadence are still new we expect this standard to evolve based on feedback from both developers and users we d love to hear from anyone who has feedback for 
example are there any features that are missing from the standard are the current features defined in the best way possible are there any pre and post conditions that are missing are the pre and post conditions defined well enough error messages are there any other actions that need an event defined for them are the current event definitions clear enough and do they provide enough information are the variable function and parameter names descriptive enough are there any openings for bugs or vulnerabilities that we are not noticing please create an issue in this repository if there is a feature that you believe needs discussing or changing comparison to other standards on ethereum this standard covers much of the same ground as erc 721 and erc 1155 but without most of the downsides tokens cannot be sent to contracts that don t understand how to use them because an account needs to have a receiver or collection in its storage to receive tokens if the recipient is a contract that has a stored collection the tokens can just be deposited to that collection without having to do a clunky approve transferfrom events are defined in the contract for withdrawing and depositing so a recipient will always be notified that someone has sent them tokens with their own deposit event this version can support batch transfers of nfts even though it isn t explicitly defined in the contract a batch transfer can be done within a transaction by just withdrawing all the tokens to transfer then depositing them wherever they need to be all atomically transfers can trigger actions because users can define custom receivers to execute certain code when a token is sent easy ownership indexing rather than iterating through all tokens to find which ones you own you have them all stored in your account s collection and can get the list of the ones you own instantly how to test the standard if you want to test out these contracts we recommend either testing them with the flow playground https play 
onflow org or with the visual studio code extension https github com onflow flow blob master docs vscode extension md cadence visual studio code extension the steps to follow are 1 deploy nonfungibletoken cdc 2 deploy examplenft cdc importing nonfungibletoken from the address you deployed it to then you can experiment with some of the other transactions and scripts in transactions or even write your own you ll need to replace some of the import address placeholders with addresses that you deploy to as well as some of the transaction arguments running automated tests you can find automated tests in the lib go test nft test go file it uses the transaction templates that are contained in the lib go templates templates go file currently these rely on a dependency from a private dapper labs repository to run so external users will not be able to run them we are working on making all of this public so anyone can run tests but haven t completed this work yet bonus features these could each be defined as a separate interface and standard and are probably not part of the main standard they are not implemented in this repository yet 10 withdrawing tokens from someone else s collection by using their provider reference approved withdraw event providing a resource that only approves an account to withdraw a specific amount per transaction or per day month etc returning the list of tokens that an account can withdraw for another account reading the balance of the account that you have permission to send tokens for owner is able to increase and decrease the approval at will or revoke it completely this is much harder than anticipated 11 standard for composability extensibility 12 minting a specific amount of tokens using a specific minter resource that an owner can control tokens minted event setting a cap on the total number of tokens that can be minted at a time or overall setting a time frame where this is allowed 13 burning a specific amount of tokens using a specific burner 
resource that an owner controls tokens burnt event setting a cap on the number of tokens that can be burned at a time or overall setting a time frame where this is allowed 14 pausing token transfers maybe a way to prevent the contract from being imported probably not a good idea 15 cloning the token to create a new token with the same distribution license the works in these files examplenft cdc contracts examplenft cdc nonfungibletoken cdc contracts nonfungibletoken cdc are under the unlicense license deploying updates testnet sh testnet private key xxxx flow project deploy update network testnet | blockchain smart-contracts linear-types nft onflow | blockchain |
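The royalty arithmetic described in the royalty view section (each cut is a multiplier between 0.0 and 1.0 applied to the sale value, with UFix64 giving 8 decimal places of precision) can be sketched outside Cadence. The Python function below is an illustration only, not part of the standard, and its names are invented; it mimics what a marketplace would do when distributing proceeds across the cutInfos of a Royalties view.

```python
# Illustration only (the standard itself is written in Cadence): apply each
# royalty cut multiplier to the sale value, as a marketplace might when
# paying out the beneficiaries of a sale. UFix64 supports 8 decimal places,
# hence round(..., 8).

def split_sale(sale_value, cuts):
    """Return ([payout per cut], seller_remainder) for a list of cut
    multipliers, each of which must lie in [0.0, 1.0]."""
    for cut in cuts:
        if not 0.0 <= cut <= 1.0:
            raise ValueError("each cut must be between 0.0 and 1.0")
    payouts = [round(sale_value * cut, 8) for cut in cuts]
    remainder = round(sale_value - sum(payouts), 8)
    if remainder < 0:
        raise ValueError("royalty cuts exceed the sale value")
    return payouts, remainder
```

With the example from the text, a 0.56 multiplier on a sale value of 100.0 pays 56.0 to the royalty receiver and leaves 44.0 for the seller.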
DiscordBot-Flash | discordbot flash forthebadge makes people smile http forthebadge com images badges makes people smile svg http forthebadge com forthebadge https forthebadge com images badges open source svg https forthebadge com forthebadge https forthebadge com images badges you didnt ask for this svg https forthebadge com project logo br p align center p align center a href https github com gizmolabai discordbot flash img src flashlogo png alt logo width 80 height 80 a h3 align center flash h3 p align center a multipurpose discord with a web dashboard br a href https docs gizmolab xyz strong explore the docs strong a br br a href https flash gizmolab xyz target blank view demo a a href https github com gizmolabai discordbot flash issues report bug a a href https github com gizmolabai discordbot flash issues request feature a p p table of contents details open open summary table of contents summary ol li a href about the project about the project a ul li a href built with built with a li ul li li a href getting started getting started a ul li a href installation installation a li ul li li a href usage usage a li li a href roadmap roadmap a li li a href contributing contributing a li li a href license license a li li a href contact contact a li ol details about the project about the project a multipurpose discord bot with a web dashboard with commands for anime moderation games activities image manipulation utilities and more built with forthebadge extras made with node js svg https nodejs org en forthebadge extras uses mongo db svg https www mongodb com cloud atlas forthebadge extras uses tailwind css svg https tailwindcss com getting started getting started you invite flash to your server by clicking here https discord com api oauth2 authorize client id 876092358175899698 permissions 55800753399 redirect uri http 3a 2f 2flocalhost 3a3000 2fcallback scope bot 20applications commands you can also follow the guide to launch your own instance of flash installation 
1 create a discord bot application on discord developer portal https discord com developers applications and get your bot token 2 create a mongodb https www mongodb com cloud atlas lp try2 in utm source google utm campaign gs apac india search core brand atlas desktop utm term mongodb 20web 20service utm medium cpc paid search utm ad e utm ad campaign id 12212624347 gclid cjwkcajw47efbha9eiway8kznixuxdvbfckumjlmnj9jiwgkfauxv9ltc0cfg qrmm vg5y4rug7ibochyuqavd bwe database and get its connection link check out this tutorial to know how to use mongodb tutorial https youtu be 8no3sktqagy 3 get your free api keys at memer api https memer api js org docs path welcome welcome for the image commands tenor gif https tenor com developer keyregistration top gg https top gg you only need this if you have your bot on that website 4 clone the repo sh git clone https github com gizmolabai discordbot flash git 5 install npm packages sh npm install 6 enter your discord bot token api key and mongodb database url in the config js file js token your bot token prefix the default prefix for the bot memer api token memer api token get the memer api token from https discord com invite emd44zjasa tenor api key tenor api token get the tenor api key from https www tenor co api v1 key mongooseconnectionstring mongodb connection url id clientid https discordapp com developers applications id information clientsecret client secret https discordapp com developers applications id information domain http localhost 3000 enter your domain here when you are running the bot on a different domain than localhost port 3000 usingcustomdomain false make this true if you want to use your own domain 7 add callback uri in discord portal eg https yourdomainname com callback br if you are using localhost add http localhost 3000 callback as the redirect uri img src https cdn discordapp com attachments 834390098304565323 876093164585369631 unknown png usage examples usage invite the bot invite the bot you created
at discord dev portal to your server start the bot type this in the terminal after navigating to folder where you cloned the repo sh npm start flash offers a lot of commands to work with anime animesearch character anime gifs etc fun meme roast say 5 image byemom abandon cancer changemymind 21 utility avatar covid weather qr code 3 activities youtube together fishing 2 moderation kick ban clear messages 5 soundboard ahshit bruh sheesh 11 games snake tictactoe guessthenumber akinator 4 economy balance beg work shop 9 you can also add more commands of your choice if you understand javascript and discordjs or make feature requests here https github com gizmolabai discordbot flash issues roadmap roadmap dashboard ported old commands event logging economy system xp system symbol legend done in progress not started see the open issues https github com gizmolabai discordbot flash issues for a list of proposed features and known issues support buy me a coffee buy me a coffee buymeacoffee shield buymeacoffee contributing contributing any contributions you make are greatly appreciated 1 fork the project 2 create your feature branch git checkout b feature amazingfeature 3 commit your changes git commit m add some amazingfeature 4 push to the branch git push origin feature amazingfeature 5 open a pull request credits dashboard base yash094 https github com yash094 license license distributed under the gpl 3 0 license see license for more information contact contact twitter https img shields io twitter follow gizmo gg color white label gizmo gg logo twitter style for the badge https twitter com gizmo gg support server https img shields io discord 834390097621286922 svg label discord logo discord colorb 7289da style for the badge https discord gg jdp2fbvcdk buymeacoffee shield https www buymeacoffee com assets img guidelines download assets sm 1 svg buymeacoffee https www buymeacoffee com g1zmo | discord discord-bot discord-js topgg dashboard discord-dashboard discordjs | 
front_end |
cuit-course | div align center img src https github com ooyq cuit course assets 120553430 333e8a85 fec1 458b 97f1 d07cb07ab7ea alt image div h1 align center h1 p align center a href https github com ooyq cuit course stargazers img src https img shields io github stars ooyq cuit course svg alt stars a a href https github com ooyq cuit course network members img src https img shields io github forks ooyq cuit course svg alt forks a img src https img shields io github repo size ooyq cuit course svg alt github repo size a href https github com ooyq cuit course issues img src https img shields io github issues ooyq cuit course svg alt issues a img src https img shields io github issues pr ooyq cuit course svg alt github pull requests p p google a4 cc98 p https github com qsctech zju icicles br heart https github com ooyq cuit helper br downgit https minhaskamal github io downgit home download br issue pr br https qm qq com cgi bin qm qr k zayq5mcgb5ichs1npmmzts02icam4 em authkey ynxbwmdar2vyv9wul3u5jpuqbzhlmg o9e5r7f dztavahqainxoj7ppilk1bmhl noverify 0 a target blank href mailto cuit email com img src https img shields io badge email blue style for the badge logo email logocolor white a br div align center img src https github com ooyq cuit course assets 120553430 5c727622 3719 4b59 9d12 af403a17c67c alt image div | server |
|
Coldairarrow.Fx.Net40.Easyui.GitHub | coldairarrow fx net40 easyui github web net40 windows server2003 https www cnblogs com coldairarrow p 9626691 html | front_end |
|
full-stack-web-development | full stack web development in the cloud this repository contains the source code for the full stack web development in the cloud https youtu be ouzauj3geug course architecture the diagram below outlines the high level architecture and the hosting providers for the web application api and database image https user images githubusercontent com 788827 145879564 e7dc42d6 3055 492b 95d7 902e9a5fad96 png ephemeral developer environments the entire course is developed using gitpod https www gitpod io for each task we use an ephemeral developer environment that we dispose of as soon as the task is completed environments are fully automated and we never run npm install or npm run dev manually we also don t have any code dependencies etc installed locally technology stack the course leverages the following technologies web application svelte https svelte dev is a compiler to develop highly performant web applications with great developer experience the application is styled with plain css api sveltekit https kit svelte dev is the library application framework powered by svelte it provides routing server side rendering and also enables us to develop a web application that works if javascript is disabled prisma https www prisma io is the object relational mapping orm library that lets us interact with the database based on models we define prisma generates the database schema and keeps the database in sync with our model s in addition it generates a typescript client we import into our code so that we have type safety when we work with database objects database postgres https www postgresql org is our database of choice for the course however thanks to prisma s support for various other databases it is a matter of changing configuration values to leverage a different database deployment the web application and api are hosted on vercel https vercel com whereas the database lives on railway https railway app pull requests each section of the course
has a corresponding pull request https github com gitpod io full stack web development pulls q is 3apr is 3aclosed if you get stuck make sure you check out the source code to copy paste what you need | front_end |
|
jkuat | jkuat department of information technology this is a sample app for the jomo kenyatta university of agriculture and technology it department it contains information for the 3 levels of study available 1 diploma level 2 undergraduate level 3 masters level through the app users can obtain and download information pertaining to the department i e brochure for the department application form to join the university for the dit courses fee statements for all levels of study registration forms for semester units in the various levels of study timetables for all the classes available at the various levels of study rules and regulations of the department screenshots img src screenshots one png width 300 img src screenshots two png width 300 img src screenshots three png width 300 img src screenshots four png width 300 img src screenshots five png width 300 | server
|
Learn-Blockchain-Programming-with-JavaScript | learn blockchain programming with javascript a href https www packtpub com web development learn blockchain programming javascript utm source github utm medium repository utm campaign 9781789618822 img src https www packtpub com sites default files b12086 png alt learn blockchain programming with javascript height 256px align right a this is the code repository for learn blockchain programming with javascript https www packtpub com web development learn blockchain programming javascript utm source github utm medium repository utm campaign 9781789618822 published by packt build your very own blockchain and decentralized network with javascript and node js what is this book about learn blockchain programming with javascript begins by giving you a clear understanding of what blockchain technology is you ll then set up an environment to build your very own blockchain and you ll add various functionalities to it by adding functionalities to your blockchain such as the ability to mine new blocks create transactions and secure your blockchain through a proof of work you ll gain an in depth understanding of how blockchain technology functions this book covers the following exciting features gain an in depth understanding of blockchain and the environment setup create your very own decentralized blockchain network from scratch build and test the various endpoints necessary to create a decentralized network learn about proof of work and the hashing algorithm used to secure data mine new blocks create new transactions and store the transactions in blocks if you feel this book is for you get your copy https www amazon com dp 1789618827 today a href https www packtpub com utm source github utm medium banner utm campaign githubbanner img src https raw githubusercontent com packtpublishing github master github png alt https www packtpub com border 5 a instructions and navigations all of the code is organized into folders for example 
chapter02 the code will look like the following blockchain prototype createnewblock function following is what you need for this book learn blockchain programming with javascript is for javascript developers who wish to learn about blockchain programming or build their own blockchain using javascript frameworks with the following software and hardware list you can run all code files present in the book chapter 1 8 software and hardware list chapter software required os required 1 8 any text editor and node js windows mac os x related products other books you may enjoy learn bitcoin and blockchain packt https www packtpub com big data and business intelligence learn bitcoin and blockchain utm source github utm medium repository utm campaign 9781789536133 amazon https www amazon com dp 1789536138 building enterprise javascript applications packt https www packtpub com web development building enterprise javascript applications utm source github utm medium repository utm campaign 9781788477321 amazon https www amazon com dp 1788477324 get to know the author eric traub currently works as a software engineer in new york city he has extensive experience working as a teacher and instructing people in a variety of different subjects he changed his career from teaching to software engineering because of the excitement it brings to him and the passion that he has for it he is now lucky enough to have the opportunity to combine both of these passions software engineering and teaching suggestions and feedback click here https docs google com forms d e 1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib ow viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781789618822 https packt link free ebook 9781789618822 a p | blockchain |
|
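The book above builds its blockchain in JavaScript (e.g. `blockchain.prototype.createNewBlock`) and secures blocks with a hashing algorithm and a proof of work. Purely as a language-neutral illustration of those two ideas (the function names and difficulty value here are made up for the sketch, not the book's API), a minimal Python version:

```python
import hashlib
import json

def hash_block(prev_hash, data, nonce):
    # Deterministically serialize the block contents, then SHA-256 them
    payload = json.dumps({"prev": prev_hash, "data": data, "nonce": nonce}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def proof_of_work(prev_hash, data, difficulty=2):
    # Increment the nonce until the block hash starts with `difficulty` zeros
    nonce = 0
    while not hash_block(prev_hash, data, nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce

nonce = proof_of_work("0", [{"amount": 10}], difficulty=2)
print(hash_block("0", [{"amount": 10}], nonce)[:2])  # -> 00
```

With difficulty 2 the loop typically finishes after a few hundred attempts; real chains use a much larger difficulty, so mining a valid nonce is expensive while verifying one stays a single hash call.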
ESP8266_IOT_PLATFORM | esp8266 iot platform esp8266 sdk provides users with a simple fast and efficient development platform for internet of things products the esp8266 iot platform is based on the freertos esp8266 sdk https github com espressif esp iot rtos sdk and adds on to it some commonly used functionalities in an example application of a smart plug this application uses the esp touch protocol to realise smart configuration of the device the communication protocols used are json and http rest an android mobile apk https github com espressifapp iot espressif android is also included as a basic template for the users code structure usr directory user main c the entry point for the main program user webserver c creates the tcp webserver using json packets and rest architecture user devicefind c creates a udp service which receives a special finder message on port 1025 and allows the user to discover devices on the network user esp platform c provides the espressif smart configuration api esp touch example communicates with the espressif cloud servers customize this to connect to your own servers maintains the network status and data transmission to server user plug c implements the functionality of a smart plug in this example user esp platform timer c implements the timer functionalities user light c could be used to output pwm signals that can be used for smart lighting user cgi c implements an adapter between the http webserver and the sdk upgrade directory upgrade c firmware upgrade example upgrade lib c operations on flash devices pertaining to the upgrade of firmware include directory the include directory includes the relevant headers needed for the project of interest is user config h which can be used to configure or select the examples by setting the macros we can enable the relevant functionality e g plug device and light device please note that you have to adjust these parameters based on your flash map for more details please refer to 2a esp8266 iot sdk
user manual user esp platform h define esp param start sec 0x7d user light h define priv param start sec 0x7c user plug h define priv param start sec 0x7c driver directory this contains the gpio interface libesphttpd directory this directory implements a small http server it is compatible with most web browsers core contains the parser implementing the http protocol and a simple file system espfs is a file system with simple compression capabilities built in util contains the interface with wifi and dns related codes html light and html plug directories these directories contain the javascript and html pages and user interface resources usage configuration target device can be configured through defining user config h macro this application default configuration is a smart power plug or smart power socket define plug device 1 and supports the http server function define httpd server 1 compiling the code run the compilation script gen misc sh you will be prompted for some configuration parameters use the firmware download tool to flash the device with the bins generated for my version of freertos esp8266 sdk 1 2 0 3 i have used the following parameters in the upload boot v1 4 b1 bin downloads to flash 0x00000 user1 2048 new 3 bin downloads to flash 0x10000 esp init data default bin downloads to 0x1fc000 blank bin downloads to flash 0x1fe000 | os
|
article | wemobiledev articles articles in mp weixin qq com http mp weixin qq com you can scan the following qrcode to follow us qrcode for wemobiledev jpg assets qrcode for wemobiledev jpg | front_end |
|
PI | dbot user manual user manual for dbot project initial setup 1 install cassandra and change the following settings in cassandra yaml file authenticator passwordauthenticator roles validity in ms 0 roles update interval in ms 0 credentials validity in ms 0 credentials update interval in ms 0 authorizer cassandraauthorizer 2 clone the repository and run docker 1 git clone this repository 2 go to pi2021django 3 run docker compose up force recreate build 3 optional in case you want to run the server on another address not in 127 0 0 1 1 go to pi2021django project settings py and change allowed hosts 127 0 0 1 2 also in settings py in databases change host 127 0 0 1 3 in pi2021django docker compose yml change 127 0 0 1 8000 grafana setup 1 go to port 3000 http localhost 3000 http 10 0 12 65 3000 or http 10 0 12 65 3000 http localhost 3000 2 log in if asked to with username admin password admin 3 go to configuration data sources 4 click add data source and search for json 5 in url put http 10 0 12 65 3000 str user token grafana http 10 0 12 65 3000 str user token grafana where str user token is the current token you have when you logged in the application 6 click save test a message should appear saying data source is working create dashboard in grafana setup is required 1 go to create dashboard and click add an empty panel 2 click in metric and choose your metric example 1 temperature 3 above the graphic choose your time range 4 on the top right corner click apply add ad hoc filter to a dashboard in grafana setup is required 1 create or edit a dashboard 2 on the top right corner click on the icon to open dashboard settings 3 go to variables and click add variable 4 in type instead of query choose ad hoc filters 5 click update and go back to the dashboard edit panel 6 on the top left corner click on the icon and choose an attribute ex temperature 7 choose an operator ex and type a select value ex 10 8 the graphic should now only show values within the conditions of
that new ad hoc filter added api endpoints authentication register new user post name email password bash register user post data example bash name test email test ua pt password randpassword authenticate user post email password bash authenticate user post data example bash email test ua pt password randpassword logout user get where str user token is the token given upon login bash logout user str user token database insert data into user database post where str user token is the token given upon login and str sensorid is the sensorid where the data will be bound to bash insert into db str user token str sensorid post data example bash sensorid 0001 temperature 10 timestamp 2020 06 01 00 02 10 or sensorid 0001 temperature 10 timestamp 2020 06 01 00 02 10 sensorid 0001 temperature 20 timestamp 2020 06 02 00 02 10 query data from user database post where str user token is the token given upon login and str sensorid is the sensorid where the data will be queried from if the sensorid given is all then the query will target all the sensor ids from the user database conditions attributes from ts to ts bash query db str user token str sensorid post data example multiple conditions can be given attributes is the values the query will return if from ts is empty then it will query data from all timestamps bash conditions temperature 5 attributes temperature from ts to ts or conditions temperature 5 temperature 10 attributes temperature from ts 2020 05 31 to ts 2020 06 02 get all attributes from user database get where str user token is the token given upon login bash get all attributes str user token grafana test connection where str user token is the token given upon login bash str user token grafana return available metrics when invoked dropdown metrics when editing a dashboard where str user token is the token given upon login bash str user token grafana search return data based on input data showed in graphic based on metric chosen date range and ad hoc filters where 
str user token is the token given upon login bash str user token grafana query return annotations where str user token is the token given upon login bash str user token grafana annotations return tag keys for ad hoc filters where str user token is the token given upon login bash str user token grafana tag keys return tag values for ad hoc filters where str user token is the token given upon login bash str user token grafana tag values | cassandra django-rest-framework grafana | server |
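The endpoint paths and JSON bodies in the manual above were flattened in transcription. Purely as an illustration, here is a small Python sketch that rebuilds the batch-insert payload from the manual's example and a plausible path shape; the exact path separators (and the query condition syntax) are assumptions, not confirmed by the manual:

```python
def insert_path(user_token, sensor_id):
    # Assumed shape of the "insert into db" endpoint: token and sensor id as path segments
    return f"/insert_into_db/{user_token}/{sensor_id}"

def make_reading(sensor_id, temperature, timestamp):
    # One reading, matching the field names in the POST data example above
    return {"sensorid": sensor_id, "temperature": temperature, "timestamp": timestamp}

# Batch insert mirroring the manual's multi-reading example
readings = [
    make_reading("0001", 10, "2020-06-01 00:02:10"),
    make_reading("0001", 20, "2020-06-02 00:02:10"),
]
print(insert_path("mytoken", "0001"))  # -> /insert_into_db/mytoken/0001
```

Posting `readings` to the returned path with any HTTP client would then match the manual's batch-insert example, assuming the path shape above is right.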
xTrader | xtrader by t0trader a win32 mfc c project based on ctp api shanghai futures information technology win32 mfc c bugs mfc c ctp 1 2 ntp 15 3 4 5 csv 6 ctp vc6 0 sp6 m sdk2003 unicode ansi xml utf8 vc6 vs2013 t0trader qq com ctp http 202 109 110 121 sim htm | server |
|
exp-up-down-counter-iitr | introduction b discipline b fill your discipline name here b lab b fill your lab name here b experiment b fill your experiment name and number here about the experiment fill a brief description of this experiment here b name of developer b fill the name of experiment owner here b institute b b email id b b department contributors list srno name faculty or student department institute email id 1 2 | ext-ph3 iitr | os |
Algorithms_for_MachineLearning | algorithms for machinelearning open source code for machine learning in computer vision | machine-learning computer-vision alogrithms | ai |
CS641-TA | cs641 ta cs641 mobile web development course material | front_end |
|
Udagram-Image-Filtering | udagram image filtering udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice aws eb endpoint udagram image filtering dev eu west 2 elasticbeanstalk com http udagram image filtering dev eu west 2 elasticbeanstalk com test image http udagram image filtering dev eu west 2 elasticbeanstalk com filteredimage image url https timedotcom files wordpress com 2019 03 kitten report jpg screenshots gameplay screenshot deployment screenshots myscreenshot png | cloud |
|
jetson-containers | a header for a software project about building containers for ai and machine learning https raw githubusercontent com dusty nv jetson containers docs docs images header jpg machine learning containers for jetson and jetpack l4t pytorch https img shields io github actions workflow status dusty nv jetson containers l4t pytorch jp51 yml label l4t pytorch packages l4t l4t pytorch l4t tensorflow https img shields io github actions workflow status dusty nv jetson containers l4t tensorflow tf2 jp51 yml label l4t tensorflow packages l4t l4t tensorflow l4t ml https img shields io github actions workflow status dusty nv jetson containers l4t ml jp51 yml label l4t ml packages l4t l4t ml l4t diffusion https img shields io github actions workflow status dusty nv jetson containers l4t diffusion jp51 yml label l4t diffusion packages l4t l4t diffusion l4t text generation https img shields io github actions workflow status dusty nv jetson containers l4t text generation jp51 yml label l4t text generation packages l4t l4t text generation modular container build system that provides various ai ml packages packages for nvidia jetson https developer nvidia com embedded computing rocket robot ml pytorch packages pytorch tensorflow packages tensorflow onnxruntime packages onnxruntime deepstream packages deepstream tritonserver packages tritonserver jupyterlab packages jupyterlab stable diffusion packages diffusion stable diffusion webui llm transformers packages llm transformers text generation webui packages llm text generation webui text generation inference packages llm text generation inference llava packages llm llava llama cpp packages llm llama cpp exllama packages llm exllama llamaspeak packages llm llamaspeak awq packages llm awq autogptq packages llm auto gptq minigpt 4 packages llm minigpt4 mlc packages llm mlc langchain packages llm langchain optimum packages llm optimum bitsandbytes packages llm bitsandbytes nemo packages nemo riva packages riva client l4t 
l4t pytorch packages l4t l4t pytorch l4t tensorflow packages l4t l4t tensorflow l4t ml packages l4t l4t ml l4t diffusion packages l4t l4t diffusion l4t text generation packages l4t l4t text generation vit nanoowl packages vit nanoowl nanosam packages vit nanosam segment anything sam packages vit sam track anything tam packages vit tam cuda cupy packages cupy cuda python packages cuda python pycuda packages pycuda numba packages numba cudf packages rapids cudf cuml packages rapids cuml robotics ros packages ros ros2 packages ros opencv cuda packages opencv realsense packages realsense zed packages zed vectordb nanodb packages vectordb nanodb faiss packages vectordb faiss raft packages rapids raft see the packages packages directory for the full list including pre built container images and ci cd status for jetpack l4t using the included tools you can easily combine packages together for building your own containers want to run ros2 with pytorch and transformers no problem just do the system setup docs setup md and build it on your jetson like this bash build sh name my container ros humble desktop pytorch transformers there are shortcuts for running containers too this will pull or build a l4t pytorch packages l4t l4t pytorch image that s compatible bash run sh autotag l4t pytorch sup run sh docs run md forwards arguments to docker run https docs docker com engine reference commandline run with some defaults added like runtime nvidia mounts a data cache and detects devices sup br sup autotag docs run md autotag finds a container image that s compatible with your version of jetpack l4t either locally pulled from a registry or by building it sup if you look at any package s readme like l4t pytorch packages l4t l4t pytorch it will have detailed instructions for running its container
package list packages package definitions docs packages md system setup docs setup md building containers docs build md running containers docs run md check out the tutorials at the jetson generative ai lab https www jetson ai lab com getting started refer to the system setup docs setup md page for tips about setting up your docker daemon and memory storage tuning bash sudo apt get update sudo apt get install git python3 pip git clone depth 1 https github com dusty nv jetson containers cd jetson containers pip3 install r requirements txt run sh autotag l4t pytorch or you can manually run a container image https hub docker com r dustynv of your choice without using the helper scripts above bash sudo docker run runtime nvidia it rm network host dustynv l4t pytorch r35 4 1 looking for the old jetson containers see the legacy https github com dusty nv jetson containers tree legacy branch | machine-learning dockerfiles jetson pytorch tensorflow pandas scikit-learn numpy ros-containers ros2-foxy docker containers nvidia | ai |
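`autotag` resolves a package name to a container tag that is compatible with the installed JetPack/L4T version; the real tool also checks locally pulled images and registries before falling back to building. Purely as an illustration of the version-matching part (the tag list is an example, and this is not the actual autotag implementation), a Python sketch:

```python
def pick_compatible(l4t_version, available):
    # Prefer an exact L4T match, else the newest tag not exceeding the system version
    def parse(tag):
        return tuple(int(p) for p in tag.lstrip("r").split("."))
    target = parse(l4t_version)
    candidates = [t for t in available if parse(t) <= target]
    return max(candidates, key=parse) if candidates else None

tags = ["r32.7.1", "r35.2.1", "r35.4.1", "r36.2.0"]
print(pick_compatible("r35.4.1", tags))  # -> r35.4.1
print(pick_compatible("r35.3.1", tags))  # -> r35.2.1
```

The "not exceeding" rule reflects that a container built against a newer L4T than the host's JetPack generally cannot run, while an older compatible build usually can.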
Breakout-RC-car | breakout rc car final project for embedded system design coded by chunguang eric xie and shangrong li this the final project for embedded system design using tiva tm4c123 micro controller with two motor drivers four motors bluetooth lcd touchable screen and external charger as power supply the system is coded fully in rtos the player would not be able to control the rc car until they finish a breakout game the game has a random trophy if the player got the trophy he she could touch the screen to choose between car operating or continue game if car operating is chosen the player could use his her smart phone to control the rc car using bluetooth if continue game is chosen the player could continue to play the breakout game demo video available here https drive google com drive folders 10ofchmsspa3ozakmuzwbmh6ov239uilf usp sharing have fun | os |
|
frontexpress | frontexpress http fontmeme com embed php text frontexpress name atype 201 20light ttf size 90 style color 6f6f75 https frontexpressjs com an express js style router for the front end code the front end like the back end same language same framework frontexpress demo https github com camelaissani frontexpress demo build status https travis ci org camelaissani frontexpress svg branch master https travis ci org camelaissani frontexpress code climate https codeclimate com github camelaissani frontexpress badges gpa svg https codeclimate com github camelaissani frontexpress coverage status https coveralls io repos github camelaissani frontexpress badge svg branch master https coveralls io github camelaissani frontexpress branch master dependencies https img shields io gemnasium mathiasbynens he svg size shield https img shields io badge size 3 55kb brightgreen svg npm https img shields io npm dm frontexpress svg https www npmjs com package frontexpress js import frontexpress from frontexpress front end application const app frontexpress handles http 401 app use req res next if res status 401 window alert you are not authenticated please sign in else next app get req res document queryselector content innerhtml hello world app post login user req res document queryselector content innerhtml welcome req params user start listening front end requests emitted received app listen features you already know expressjs http expressjs com then you know frontexpress simple minimal core extendable through plugins lightweight framework build your front end application by handling routes ideal for single page application manage ajax requests and browser history installation from npm repository bash npm install frontexpress from bower repository bash bower install frontexpress from cdn on jsdelivr https cdn jsdelivr net npm frontexpress latest frontexpress min js documentation website and documentation https frontexpressjs com tests clone the repository bash git clone
git github com camelaissani frontexpress git cd frontexpress install the dependencies and run the test suite bash npm install npm test license mit license | front-end browser router history navigation javascript url-parsing url expressjs middleware spa-application spa | front_end |
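frontexpress itself is JavaScript; its `app.post('/login/:user', ...)` route above fills `req.params.user` from the matched URL. Purely to illustrate how such `:name` patterns map to a params object (this is a sketch of the general express-style idea, not frontexpress internals), a Python version of segment-by-segment matching:

```python
def match_route(pattern, path):
    # Compare segment by segment; ':name' segments capture into params
    p_parts = pattern.strip("/").split("/")
    u_parts = path.strip("/").split("/")
    if len(p_parts) != len(u_parts):
        return None  # different depth: no match
    params = {}
    for p, u in zip(p_parts, u_parts):
        if p.startswith(":"):
            params[p[1:]] = u  # capture the URL segment under the parameter name
        elif p != u:
            return None  # literal segment mismatch
    return params

print(match_route("/login/:user", "/login/alice"))   # -> {'user': 'alice'}
print(match_route("/login/:user", "/logout/alice"))  # -> None
```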
PDEBench | pdebench the code repository for the neurips 2022 paper pdebench an extensive benchmark for scientific machine learning https arxiv org abs 2210 07182 tada simtech best paper award 2023 https www simtech uni stuttgart de press simtech best paper award 2023 benchmark for ml for scientific simulations confetti ball pdebench provides a diverse and comprehensive set of benchmarks for scientific machine learning including challenging and realistic physical problems this repository consists of the code used to generate the datasets to upload and download the datasets from the data repository as well as to train and evaluate different machine learning models as baselines pdebench features a much wider range of pdes than existing benchmarks and includes realistic and difficult problems both forward and inverse larger ready to use datasets comprising various initial and boundary conditions and pde parameters moreover pdebench was created to make the source code extensible and we invite active participation from the sciml community to improve and extend the benchmark visualizations of some pde problems covered by the benchmark https github com pdebench pdebench blob main pdebench examples png created and maintained by makoto takamoto makoto takamoto neclab eu takamtmk gmail com timothy praditia timothy praditia iws uni stuttgart de raphael leiteritz dan mackinlay francesco alesiani dirk pfl ger and mathias niepert datasets and pretrained models we also provide datasets and pretrained machine learning models pdebench datasets https darus uni stuttgart de dataset xhtml persistentid doi 10 18419 darus 2986 pdebench pre trained models https darus uni stuttgart de dataset xhtml persistentid doi 10 18419 darus 2987 dois doi 10 18419 darus 2986 https img shields io badge doi doi 3a10 18419 2fdarus 2986 red https doi org 10 18419 darus 2986 doi 10 18419 darus 2987 https img shields io badge doi doi 3a10 18419 2fdarus 2987 red https doi org 10 18419 darus 2987 installation 
using pip locally bash pip install upgrade pip wheel pip install from pypi bash pip install pdebench to include dependencies for data generation bash pip install pdebench datagen310 pip install datagen310 locally or bash pip install pdebench datagen39 pip install datagen39 locally gpu support for gpu support there are additional platform specific instructions for pytorch the latest version we support is v1 13 1 see previous versions linux cuda 11 7 https pytorch org get started previous versions linux and windows 2 for jax which is approximately 6 times faster for simulations than pytorch in our tests see jax pip installation gpu cuda installed via pip https github com google jax pip installation gpu cuda installed via pip easier installation using conda if you like you can also install dependencies using anaconda we suggest to use mambaforge https github com conda forge miniforge mambaforge as a distribution otherwise you may have to enable the conda forge channel for the following commands starting from a fresh environment conda create n myenv python 3 9 conda activate myenv install dependencies for model training conda install deepxde hydra core h5py c conda forge according to your hardware availability either install pytorch with cuda support see previous versions linux cuda 11 7 https pytorch org get started previous versions linux and windows 2 conda install pytorch 1 13 1 torchvision 0 14 1 torchaudio 0 13 1 pytorch cuda 11 7 c pytorch c nvidia or cpu only binaries https pytorch org get started previous versions linux and windows 2 conda install pytorch 1 13 1 torchvision 0 14 1 torchaudio 0 13 1 cpuonly c pytorch optional dependencies for data generation conda install clawpack jax jaxlib python dotenv configuring deepxde in our tests we used pytorch as backend for deepxde please follow the documentation https deepxde readthedocs io en latest user installation html working with different backends to enable this data generation the data generation codes are 
contained in data gen pdebench data gen gen diff react py to generate the 2d diffusion reaction data gen diff sorp py to generate the 1d diffusion sorption data gen radial dam break py to generate the 2d shallow water data gen ns incomp py to generate the 2d incompressible inhomogenous navier stokes data plot py to plot the generated data uploader py to upload the generated data to the data repository env is the environment data to store dataverse url and api token to upload the generated data note that the filename should be strictly env i e remove the example from the filename configs directory contains the yaml files storing the configuration for the simulation arguments for the simulation are problem specific and detailed explanation can be found in the simulation scripts src directory contains the simulation scripts for different problems sim diff react py for 2d diffusion reaction sim diff sorp py for 1d diffusion sorption and swe for the shallow water equation data generation for 1d advection burgers reaction diffusion 2d darcyflow compressible navier stokes equations the data generation codes are contained in data gen nle pdebench data gen data gen nle utils py util file for data generation mainly boundary conditions and initial conditions advectioneq directory with the source codes to generate 1d advection equation training samples burgerseq directory with the source codes to generate 1d burgers equation training samples compressiblefluid directory with the source codes to generate compressible navier stokes equations training samples reactiondiffusioneq directory with the source codes to generate 1d reaction diffusion equation training samples note darcyflow data can be generated by run darcyflow2d sh pdebench data gen data gen nle readme md in this folder save directory saving the generated training samples a typical example to generate training samples 1d advection equation in data gen data gen nle advectioneq bash python3 advection multi solution hydra 
py multi beta1e0 yaml which is assumed to be performed in each directory examples for generating other pdes are provided in run trainset sh in each pde s directories the config files for hydra are stored in config directory in each pde s directory data transformaion and merge into hdf5 format 1d advection burgers reaction diffusion 2d darcyflow compressible navier stokes equations save data as a numpy array so to read those data via our dataloaders the data transformation merge should be performed this can be done using data gen nle data merge py whose config file is located at data gen data gen nle config config yaml after properly setting the parameters in the config file type name of pdes dim number of spatial dimension bd boundary condition the corresponding hdf5 file could be obtained as bash python3 data merge py configuration you can set the default values for data locations for this project by putting config vars like this in the env file working dir data working archive data dir data archive there is an example in example env data download the download scripts are provided in data download pdebench data download there are two options to download data 1 using download direct py recommended retrieves data shards directly using urls sample command for each pde is given in the readme file in the data download pdebench data download directory 2 using download easydataverse py might be slow and you could encounter errors issues hence not recommended use the config files from the config directory that contains the yaml files storing the configuration any files in the dataset matching args filename will be downloaded into args data folder baseline models in this work we provide three different ml models to be trained and evaluated against the benchmark datasets namely fno https arxiv org pdf 2010 08895 pdf u net https www sciencedirect com science article abs pii s0010482519301520 via 3dihub and pinn https www sciencedirect com science article pii 
s0021999118307125 the codes for the baseline model implementations are contained in models pdebench models train models forward py is the main script to train and evaluate the model it will call on model specific script based on the input argument train models inverse py is the main script to train and evaluate the model for inverse problems it will call on model specific script based on the input argument metrics py is the script to evaluate the trained models based on various evaluation metrics described in our paper additionally it also plots the prediction and target data analyse result forward py is the script to convert the saved pickle file from the metrics calculation script into pandas dataframe format and save it as a csv file additionally it also plots a bar chart to compare the results between different models analyse result inverse py is the script to convert the saved pickle file from the metrics calculation script into pandas dataframe format and save it as a csv file this script is used for the inverse problems additionally it also plots a bar chart to compare the results between different models fno contains the scripts of fno implementation these are partly adapted from the fno repository https github com zongyi li fourier neural operator unet contains the scripts of u net implementation these are partly adapted from the u net repository https github com mateuszbuda brain segmentation pytorch pinn contains the scripts of pinn implementation these utilize the deepxde library https github com lululxvi deepxde inverse contains the model for inverse model based on gradient config contains the yaml files for the model training input the default templates for different equations are provided in the args pdebench models config args directory user just needs to copy and paste them to the args keyword in the config yaml pdebench models config config yaml file an example to run the forward model training can be found in run forward 1d sh pdebench models run 
forward 1d sh and an example to run the inverse model training can be found in run inverse sh pdebench models run inverse sh short explanations on the config args model name string containing the baseline model name either fno unet or pinn if training bool set true for training or false for evaluation continue training bool set true to continute training from a checkpoint num workers int number of workers for the pytorch dataloader batch size int training batch size initial step int number of time steps used as input for fno and u net t train int number of the last time step used for training for extrapolation testing set this to be nt model update int number of epochs to save model filename str has to match the dataset filename single file bool set false for 2d diffusion reaction 1d diffusion sorption and the radial dam break scenarios and set true otherwise reduced resolution int factor to downsample spatial resolution reduced resolution t int factor to downsample temporal resolution reduced batch int factor to downsample sample size used for training epochs int total epochs used for training learning rate float learning rate of the optimizer scheduler step int number of epochs to update the learning rate scheduler scheduler gamma float decay rate of the learning rate u net specific args in channels int number of input channels out channels int number of output channels ar mode bool set true for fully autoregressive or pushforward training pushforward bool set true for pushforward training false otherwise ar mode also has to be set true unroll step int number of time steps to backpropagate in the pushforward training fno specific args num channels int number of channels variables modes int number of fourier modes to multiply width int number of channels for the fourier layer inverse specific args base path string location of the data directory training type string type of training autoregressive single mcmc num samples int number of generated samples mcmc warmup 
steps 10 mcmc num chains 1 num samples max 1000 in channels hid 64 inverse model type string type of inverse inference model probrasterlatent initialconditioninterp inverse epochs int number of epochs for the gradient based method inverse learning rate float learning rate for the gradient based method inverse verbose flag bool some printing plotting specific args plot bool set true to activate plotting channel plot int determines which channel variable to plot x min float left spatial domain x max float right spatial domain y min float lower spatial domain y max float upper spatial domain t min float start of temporal domain t max float end of temporal domain datasets and pretrained models we provide the benchmark datasets we used in the paper through our darus data repository https darus uni stuttgart de dataset xhtml persistentid doi 10 18419 darus 2986 the data generation configuration can be found in the paper additionally the pretrained models are also available to be downloaded from pdebench pretrained models https darus uni stuttgart de dataset xhtml persistentid doi 10 18419 darus 2987 darus repository to use the pretrained models users can specify the argument continue training true in the config file pdebench models config config yaml directory tour below is an illustration of the directory structure of pdebench pdebench models pinn model physics informed neural network train py utils py pde definitions py fno model fourier neural operator train py utils py fno py unet model u net train py utils py unet py inverse model gradient based inverse method train py utils py inverse py config config all config files reside here train models inverse py run forward 1d sh analyse result inverse py train models forward py run inverse sh metrics py analyse result forward py data download data scripts to download data from darus config download direct py download easydataverse py visualize pdes py readme md download metadata csv data gen data scripts to generate data 
configs data gen nle src notebooks gen diff sorp py plot py example env gen ns incomp py gen diff react py uploader py gen radial dam break py init py citations please cite the following papers if you use pdebench datasets and or source code in your research details summary a href https arxiv org abs 2210 07182 pdebench an extensive benchmark for scientific machine learning neurips 2022 a summary br inproceedings pdebench2022 author takamoto makoto and praditia timothy and leiteritz raphael and mackinlay dan and alesiani francesco and pfl ger dirk and niepert mathias title pdebench an extensive benchmark for scientific machine learning year 2022 booktitle 36th conference on neural information processing systems neurips 2022 track on datasets and benchmarks url https arxiv org abs 2210 07182 details details summary a href https doi org 10 18419 darus 2986 pdebench datasets neurips 2022 a summary br data darus 2986 2022 author takamoto makoto and praditia timothy and leiteritz raphael and mackinlay dan and alesiani francesco and pfl ger dirk and niepert mathias publisher darus title pdebench datasets year 2022 doi 10 18419 darus 2986 url https doi org 10 18419 darus 2986 details details summary a href https arxiv org abs 2304 14118 learning neural pde solvers with parameter guided channel attention icml 2023 a summary br article cape takamoto 2023 author makoto takamoto and francesco alesiani and mathias niepert title learning neural pde solvers with parameter guided channel attention journal corr volume abs 2304 14118 year 2023 url https doi org 10 48550 arxiv 2304 14118 doi 10 48550 arxiv 2304 14118 eprinttype arxiv eprint 2304 14118 details code contributors makato takamoto https github com mtakamoto d nec laboratories europe https www neclab eu timothy praditia https github com timothypraditia stuttgart center for simulation science university of stuttgart https www simtech uni stuttgart de raphael leiteritz https github com leiterrl stuttgart center for 
simulation science university of stuttgart https www simtech uni stuttgart de francesco alesiani https github com falesiani nec laboratories europe https www neclab eu dan mackinlay https danmackinlay name csiro s data61 https data61 csiro au mario kalimuthu https github com kmario23 stuttgart center for simulation science university of stuttgart https www simtech uni stuttgart de john kim https github com johnmjkim anu techlauncher https comp anu edu au techlauncher csiro s data61 https data61 csiro au gefei shan https github com davecatmeow anu techlauncher https comp anu edu au techlauncher csiro s data61 https data61 csiro au yizhou yang https github com verdantwynnd anu techlauncher https comp anu edu au techlauncher csiro s data61 https data61 csiro au ran zhang https github com maphyca anu techlauncher https comp anu edu au techlauncher csiro s data61 https data61 csiro au simon brown https github com simonsybrown anu techlauncher https comp anu edu au techlauncher csiro s data61 https data61 csiro au license mit licensed except where otherwise stated see license txt file | ai benchmark jax machine-learning pytorch scientific scientific-computing sciml simulation deep-learning fluid-dynamics navier-stokes-equations neural-networks neural-operators partial-differential-equations physics-informed-neural-networks autoregressive-models | ai |
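the configuration section above stores data locations as key value pairs in a .env file working dir archive data dir as a minimal illustration of the format here is a stdlib only sketch of reading such a file note this is an assumption based sketch the project itself presumably uses python-dotenv and the exact key casing WORKING_DIR ARCHIVE_DATA_DIR is inferred from the flattened example in the readme

```python
def load_env(path=".env"):
    """Minimal .env reader: one KEY=value per line, '#' starts a comment.

    Only a sketch of the format -- python-dotenv additionally handles
    quoting, variable interpolation and export prefixes.
    """
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments and malformed lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Hypothetical usage mirroring the example.env described above:
# cfg = load_env()
# print(cfg["WORKING_DIR"], cfg["ARCHIVE_DATA_DIR"])
```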
uwpcore.framework | div align center img src https github com bsautermeister uwpcore framework blob master assets uwpcore png alt uwpcore framework br div uwpcore framework for windows 10 mobile the uwpcore framework is a development acceleration library for the universal windows platform it is a collection of best practices and reusable services to simplify the development of windows 10 apps is the framework inspired by another one the framework s navigation system and application shell is mainly inspired by template 10 https github com windows xaml template10 by jerry nixon the application shell has been further improved with swipe gestures inspired by justin xin liu https github com justinxinliu swipeablesplitview from time to time the ui of the shell has been updated to be more similar to microsoft s default apps such as news weather sports or groove music how to use this framework 1 check out the repository 2 launch the uwpcore framework sln solution in visual studio 3 optional export a project template based on the uwpcore template project to get started even faster 1 select file export template 2 select project template as well as the template uwpcore template project 3 give it a handy name and use auto import to visual studio 4 create a new universal windows project in visual studio and use the previously exported template 5 add the framework project as an existing project to be able to debug within the code of the framework otherwise it is sufficient to reference the dlls of the framework 6 update the project reference to uwpcore framework 7 important clean and recompile the whole solution 8 you are now ready to get started 1 don t forget to modify the manifest as well as the string resources according to your personal needs the generated app includes a home page a settings page including the functionality to switch the app theme as well as an about page furthermore it uses app localization english and german and the mvvm pattern is there anything i should
know before i can hack down my app it might be helpful to read the following sections first before you start with coding app the app class is the root of your app it inherits from uwpcore framework common universalapp which extends and simplifies the application base class of a standard uwp project it provides for example direct access to the navigationservice to navigate to a page its constructor requires some important information that has to be defined csharp public app base typeof mainpage appbackbuttonbehaviour keepalive true new defaultmodule initializecomponent the parameters of the universalapp constructor define the starting page the behavior of the back button on the root page of the stack whether to use the appshell hamburger menu or not as well as multiple module definition instances for the di framework ninject http www ninject org within the oninitialize iactivatedeventargs method we can initialize the app such as defining the theme colors csharp public async override task oninitializeasync iactivatedeventargs args await base oninitializeasync args setup theme colors mainly for title bar colorpropertiesdark new appcolorproperties appconstants color accent colors white colors black colors white color fromargb 255 31 31 31 null null colorpropertieslight new appcolorproperties appconstants color accent colors black colors white colors black color fromargb 255 230 230 230 null null within the onstartasync startkind iactivatedeventargs method the app gets started or activated and we can select the page we would like to navigate to this method is launched even when the app is launched using cortana live tiles or toast notifications check the start kind and the event arguments to handle the app s startup properly csharp public override task onstartasync startkind startkind iactivatedeventargs args var pagetype defaultpage object parameter null start the user experience navigationservice navigate pagetype parameter return task fromresult object null in case we are
using the appshell by using true for the third parameter in the universalapp constructor it is required to override the createnavigationmenuitems and createbottomdockednavigationmenuitems methods which populate the navigation menu in the app xaml file make sure that you specify the proper theme colors here as well this is required so that even the default controls of the windows universal platform use these theme colors such as selected items of a listview control xml resourcedictionary themedictionaries resourcedictionary x key dark color x key systemaccentcolor 0063b1 color resourcedictionary resourcedictionary x key light color x key systemaccentcolor 0063b1 color resourcedictionary resourcedictionary themedictionaries pages every page is placed within the views folder since it is recommended to use the mvvm pattern the code behind of each page only assigns the proper view model to the data context all the magic of a page lives in the view model implementation located in the viewmodels folder viewmodels each view model inherits from uwpcore framework mvvm viewmodelbase and offers public properties where the view is able to bind to for the main logic we recommend encapsulating this functionality in services to be able to share these across multiple view models these service implementations can then be injected using ninject csharp private idialogservice dialogservice public mainviewmodel dialogservice injector get idialogservice feel free in case you prefer to inject the view model implementations as well to improve the testability of the view model in this case you have to create your own di module definition and add this to the last constructor parameter of the universalapp base class in app xaml cs services in our opinion the mvvm pattern is lacking the recommendation of using services to share functionality across view models and pages some people prefer the terminology mvvms pattern each service is just a simple class that offers some functionality a service
can be composed of other services which can be simply injected using the inject attribute of ninject csharp public class tilepinservice itilepinservice private itileservice tileservice private ideviceinfoservice deviceinfoservice inject public tilepinservice itileservice tileservice ideviceinfoservice deviceinfoservice tileservice tileservice deviceinfoservice deviceinfoservice to be able to inject our new service we have to include it in our custom module definition class csharp public class appmodule ninjectmodule public override void load services bind itilepinservice to tilepinservice insingletonscope make sure you add this custom module definition to the applications constructor csharp public app base typeof mainpage appbackbuttonbehaviour keepalive true new defaultmodule new appmodule | framework uwp windows-10 | front_end |
CSE-D | cse d full stack web development lab list of experiments 1 write code in html5 to develop simple webpage 2 write css5 html5 code to show dropdown menu 3 write html5 css and javascript code to create one page website having different menu items 4 write a program in css to show your city with buildings and moving cars 5 write a program to validate web form using javascript 6 write jquery code to show website slider 7 show version control in github 8 write a program in javascript to create a user login system 9 create a website showing jquery slider 10 write a program to show user details using html css ajax 11 write a program to display options in a search engine using ajax micro projects student must submit a report on one of the following micro projects 1 develop project mynote a html5 app 2 develop a bookstore application by using html5 css jquery in github 3 develop a shopping cart application by using html5 css jquery in github 4 develop an e learning system using html5 css jquery in github 5 build a personal portfolio webpage using html5 css jquery 6 develop google com search result page using html5 css jquery ajax 7 develop a webpage to display solar system using html5 css jquery ajax 8 build tajmahal using css 9 build a real time markdown editor with node js 10 develop a user model covering registration email verification send an email login with remember me 11 develop chess game using html5 css jquery ajax note take the code and modify it to create your own dream website and upload to your github repository | front_end
go-iost | iost a scalable developer friendly blockchain iost is a smart contract platform focusing on performance and developer friendliness features 1 the v8 javascript engine is integrated inside the blockchain so you can use javascript to write smart contracts 2 the blockchain is highly scalable with thousands of tps meanwhile it still has a more decentralized consensus than dpos 3 0 5 second block 0 5 minute finality 4 free transactions you can stake coins to get gas development environments os ubuntu 22 04 or later go 1 20 or later iost node uses cgo v8 javascript engine so only x64 is supported now deployment build local binary make build start a local devnet make debug build docker make image for documentation please visit iost developer https developers iost io welcome to our tech community at telegram https t me iostdev happy hacking | blockchain iost smart-contracts dapp | blockchain |
jigg | jigg build status https travis ci org mynlp jigg svg branch master https travis ci org mynlp jigg jigg is a natural language processing pipeline framework on jvm languages mainly for scala which is easy to use and extensible using jigg one can obtain several linguistic annotations on a given input from pos tagging parsing and coreference resolution from command lines the main features include easy to install basic components are included in a distributed single jar so no need to install similar interface to stanford corenlp http nlp stanford edu software corenlp shtml extensible easy to add a new component to a pipeline parallel processing sentence level annotation is automatically parallelized jigg is distributed under the apache license version 2 0 http www apache org licenses license 2 0 html the core ideas and software designs are described in detail in our paper http mynlp github io jigg data jigg acl2016 pdf install the easiest way to start jigg is to download the latest release package current version is 0 8 0 which includes the core jar file as well as several model files such as a stanford corenlp model file download jigg 0 8 0 https github com mynlp jigg releases download v 0 8 0 jigg 0 8 0 zip or you can get it in the command line bash wget https github com mynlp jigg releases download v 0 8 0 jigg 0 8 0 zip unzip jigg 0 8 0 zip enter the directory before running the following examples bash cd jigg 0 8 0 if you wish to build your own jar from sources please read here build your own jar advanced if you wish to use docker please read here use docker usage the following command launches jigg in a shell mode which parses a given input with berkeley parser after preprocessing tokenization and sentence splitting bash java cp jigg pipeline pipeline annotators corenlp tokenize ssplit berkeleyparser main info edu stanford nlp pipeline stanfordcorenlp adding annotator tokenize let s write some sentences in a line xml hello jigg this is the first sentence root
document id d0 sentences sentence id s0 characteroffsetbegin 0 characteroffsetend 11 hello jigg tokens annotators corenlp berkeleyparser token pos uh characteroffsetend 5 characteroffsetbegin 0 id t0 form hello token pos prp characteroffsetend 10 characteroffsetbegin 6 id t1 form jigg token pos characteroffsetend 11 characteroffsetbegin 10 id t2 form tokens parse annotators berkeleyparser root s0 berksp0 span id s0 berksp0 symbol intj children t0 t1 t2 parse sentence sentence id s1 characteroffsetbegin 12 characteroffsetend 39 this is the first sentence tokens annotators corenlp berkeleyparser token pos dt characteroffsetend 4 characteroffsetbegin 0 id t3 form this token pos vbz characteroffsetend 7 characteroffsetbegin 5 id t4 form is token pos dt characteroffsetend 11 characteroffsetbegin 8 id t5 form the token pos jj characteroffsetend 17 characteroffsetbegin 12 id t6 form first token pos nn characteroffsetend 26 characteroffsetbegin 18 id t7 form sentence token pos characteroffsetend 27 characteroffsetbegin 26 id t8 form tokens parse annotators berkeleyparser root s1 berksp0 span id s1 berksp0 symbol s children s1 berksp1 s1 berksp2 t8 span id s1 berksp1 symbol np children t3 span id s1 berksp2 symbol vp children t4 s1 berksp3 span id s1 berksp3 symbol np children t5 t6 t7 parse sentence sentences document root the default output format of jigg is xml but it also supports json check outputformat option below one can see that jigg automatically detects sentence boundaries there are two sentences and performs tokenization e g period is recognized as a single word on which parse tree parse is built in jigg each nlp tool such as corenlp stanford corenlp or berkeleyparser berkeley parser is called an annotator jigg helps to construct easily a nlp pipeline by combining several annotators in the example above the pipeline is constructed by combining stanford corenlp which performs tokenization and sentence splitting and berkeley parser which performs parsing on 
tokenized sentences command line usage basic usage is described in the help message bash java cp jigg pipeline pipeline help usage outputformat str output format xml json default value is xml xml annotators str list of annotator names e g corenlp tokenize ssplit berkeleyparser required ssplit kuromoji jaccg checkrequirement str check requirement true false warn default value is true true file str input file if omitted read from stdin props str property file nthreads int number of threads for parallel annotation use all if 0 1 output str output file if omitted file xml is used gzipped if suffix is gz if json mode is selected suffix is json customannotatorclass str you can add an abbreviation for a custom annotator class with customannotatorclass xxx path package help str print this message and descriptions of specified annotators e g help ssplit mecab true currently the annotators listed below are installed see the detail of each annotator with help annotator name mecab ssplit jaccg cabocha berkeleyparser spacetokenize kuromoji syntaxnetpos dsplit knp corenlp knpdoc juman syntaxnetparse syntaxnet some annotators such as mecab jaccg kuromoji etc are specific for japanese processing as shown here more specific description for each annotator is described by giving argument to help option bash java cp jigg pipeline pipeline help berkeleyparser berkeleyparser requires tokenize requirementssatisfied pos parse berkeleyparser variational bool use variational rule score approximation instead of max rule default false false berkeleyparser grfilename str grammar file berkeleyparser accurate bool set thresholds for accuracy default set thresholds for efficiency false berkeleyparser usepos bool use annotated pos by another annotator false berkeleyparser viterbi bool compute viterbi derivation instead of max rule tree default max rule false a wrapper for berkeley parser the feature is that this wrapper is implemented to be thread safe to do this the wrapper keeps many parser 
instances the number can be specified by customizing nthreads the path to the model file can be changed by setting berkeleyparser grfilename if berkeleyparser usepos is true the annotator assumes the pos annotation is already performed and the parser builds a tree based on the assigned pos tags otherwise the parser performs joint inference of pos tagging and parsing which is the default behavior python wrapper and server jigg can be directly used in a python other languages would be supported in future script see python directory https github com mynlp jigg tree master python for details of this usage inspired by the wrapper mechanism of stanford corenlp jigg s wrapper is based on the server which can be instantiated by bash java cp jigg pipeline pipelineserver this will launch the server on your local system currently the server only supports post request see more detail in the help message by bash java cp jigg pipeline pipelineserver help an example of the call via curl is bash curl data urlencode annotators corenlp tokenize ssplit data urlencode q please annotate me http localhost 8080 annotate outputformat json now using wget would need many special cares so i recommend to use curl instead requirements here requires and reqruiementssatisfied describe the role of this annotator berkeleyparser intuitively the above description says berkeleyparser requires that the input text is already tokenized tokenize and after the annotation part of speech tags pos and parse tree parse are annotated on each sentence jigg checks with these kinds of information whether the given pipeline can be performed safely for example the following command will be failed bash java cp jigg pipeline pipeline annotators berkeleyparser annotator berkeleyparser requires tokenize annotators str list of annotator names e g corenlp tokenize ssplit berkeleyparser required berkeleyparser the error message says tokenize should be performed before running berkeleyparser parallel processing in the help 
message above we can see that berkeleyparser is implemented to be thread safe this means we can run berkeley parser in parallel which is not supported in the original software most of the supported annotators in jigg are implemented as thread safe meaning that annotation can be very efficient in a multi core environment to perform parallel annotation first prepare an input document whatever you want to analyze bash head input txt john blair co is close to an agreement to sell its tv station advertising representation operation and program production unit to an investor group led by james h rosenfield a former cbs inc executive industry sources said industry sources put the value of the proposed acquisition at more than 100 million john blair was acquired last year by reliance capital group inc which has been divesting itself of john blair s major assets then run jigg as follows bash java cp jigg pipeline pipeline annotators corenlp tokenize ssplit berkeleyparser file input txt or you can run jigg in a pipe bash cat input txt java cp jigg pipeline pipeline annotators corenlp tokenize ssplit berkeleyparser output xml parallelization can be disabled by giving the nthreads 1 option bash cat input txt java cp jigg pipeline pipeline annotators corenlp tokenize ssplit berkeleyparser nthreads 1 output xml by default jigg tries to use as many threads as the machine can use on my laptop with 4 cores when annotating about 1000 sentences annotation with nthreads 1 takes about 154 seconds which is reduced to 79 seconds with parallel annotation you can also customize the number of threads for each annotator separately for example the following restricts the number of threads of berkeleyparser to 2 while allowing corenlp to use 4 threads bash cat input txt java cp jigg pipeline pipeline annotators corenlp tokenize ssplit berkeleyparser nthreads 4 berkeleyparser nthreads 2 output xml full pipeline for english currently the main components in jigg are stanford corenlp while many
components of stanford corenlp require a model file it is included in the directory of the latest jigg above link so there is no need to download a model by yourself the following pipeline is a full corenlp pipeline up to coreference resolution bash java cp jigg pipeline pipeline annotators corenlp tokenize ssplit parse lemma ner dcoref this is the usage of jigg just as a wrapper of stanford corenlp which may not be interesting a more interesting example is to insert the berkeley parser into a pipeline of stanford corenlp bash java cp jigg pipeline pipeline annotators corenlp tokenize ssplit berkeleyparser corenlp lemma ner dcoref this command replaces the parser component in a corenlp pipeline with berkeley parser jigg makes it easy to include an nlp tool into a pipeline like this the goal of jigg is to provide a platform on which a user can freely connect the tools to construct several nlp pipelines programmatic usage jigg pipeline can also be incorporated into another java or scala project the easiest way to do this is to add a dependency via maven in scala add the following line in the project build sbt scala librarydependencies com github mynlp jigg 0 8 0 in java add the following lines in pom xml xml dependencies dependency groupid com github mynlp groupid artifactid jigg artifactid version 0 8 0 version dependency dependencies jigg is written in scala so scala is the preferred choice for programmatic usage jigg provides a very similar interface to stanford corenlp scala import jigg pipeline pipeline import java util properties import scala xml node the behavior of pipeline can be customized with a properties object which consists of the same options used in command line usage val props new properties props setproperty annotators corenlp tokenize ssplit berkeleyparser corenlp lemma ner dcoref the path to the model of the berkeley parser may be necessary props setproperty berkeleyparser grfilename path to eng sm6 gr pipeline is the main class which takes the properties object
val pipeline new pipeline props set the input text to be analyzed here val text string get the annotation result in scala s xml object node val annotation node pipeline annotate text the annotation result is obtained as a scala xml object on which elements can be searched intuitively with expressions similar to xpath the following is an example scala val sentences seq node annotation sentence get all sentence elements for sentence sentences for each sentence val tokens sentence token get all tokens val nes sentence ne get all named entities for ne nes val tokenids ne tokens get the tokens attribute in a ne val netokens tokenids map id tokens find id id get form get surface form of each token constituting the ne println netokens mkstring print the detected ne on the result xml all annotated elements e g sentence token and ne are assigned unique ids so element search is basically based on these ids build your own jar advanced bash git clone git github com mynlp jigg git cd jigg bin sbt assembly the last command may take about 10 or 20 minutes including setup of scala and sbt this generates a self contained jar in target jigg assembly xxx jar where xxx is the current version number although this assembled jar is rather self contained several model files including the model files for berkeleyparser and jaccg are missing these can be obtained by bash wget https github com mynlp jigg models raw master jigg models jar and including it in the class path bash java cp target jigg assembly xxx jar jigg models jar jigg pipeline pipeline annotators note that in this usage the corenlp models should also be downloaded from the official homepage https stanfordnlp github io corenlp download html manually and included in your class path use docker to install docker follow the instructions https docs docker com install to build and run the pipelineserver container bash git clone depth 1 https github com mynlp jigg git cd jigg curl sl https github com mynlp jigg models raw master jigg
models jar o jar jigg models jar time docker compose build docker compose up d an example of the call via curl is bash curl data urlencode annotators ssplit kuromoji jaccg data urlencode q http localhost 8080 annotate outputformat xml supported annotators supported annotators in the current environment can be listed with the help command to see more details on each annotator try help annotator name see also command line usage command line usage citing in papers if you use jigg in research publications please cite hiroshi noji and yusuke miyao 2016 jigg a framework for an easy natural language processing pipeline http mynlp github io jigg data jigg acl2016 pdf in proceedings of the 54th annual meeting of the association for computational linguistics system demonstrations acknowledgements the following sample files of ssplitkerasannotator and bunsetsukerasannotator are generated using the bccwj corpus http pj ninjal ac jp corpus center bccwj trained model file src test resources data keras ssplit model h5 src test resources data keras bunsetsu model h5 lookup table file src test resources data keras jpnlookupcharacter json src test resources data keras jpnlookupwords json release note 0 8 0 many bug fixes kuromoji is modulated corenlp is upgraded to 3 9 1 support benepar and stanfordtypeddep combine them to obtain state of the art constituency and dependency parsers see help benepar and help stanfordtypeddep 0 7 2 many improvements around ccg parsers including k best outputs of depccg and easyccg support of udpipe 0 7 1 bug fixes docker for jigg server annotators for ccg parsers candc easyccg and depccg 0 7 0 support corenlp 3 7 0 server mode several improvements including support of xml json inputs 0 6 1 bug fixes 0 6 1 new annotators syntaxnet coref in corenlp etc json output still incomplete bug fixes 0 6 0 the initial official release | ai |
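The Scala XML traversal shown in the jigg README above can be mirrored in Python against the XML that the pipeline or server emits. The sketch below runs on a hand-made snippet imitating the layout the README describes (sentence, token, and ne elements, with a tokens id-list attribute on each ne); these element and attribute names are taken from the README's Scala example, so the exact schema of a real jigg response should be double-checked before relying on this.

```python
import xml.etree.ElementTree as ET

# Hand-made snippet imitating the jigg output layout described above;
# a real server response should be checked against this assumed schema.
SAMPLE = """
<document>
  <sentence id="s0">
    <token id="t0" form="John"/>
    <token id="t1" form="Blair"/>
    <token id="t2" form="said"/>
    <ne id="ne0" tokens="t0 t1" label="PERSON"/>
  </sentence>
</document>
"""

def ne_surface_forms(xml_text):
    """Collect the surface form of every named entity, sentence by sentence."""
    root = ET.fromstring(xml_text)
    results = []
    for sentence in root.iter("sentence"):
        # Index this sentence's tokens by their id, as the Scala example does
        tokens = {t.get("id"): t.get("form") for t in sentence.iter("token")}
        for ne in sentence.iter("ne"):
            ids = ne.get("tokens", "").split()
            results.append(" ".join(tokens[i] for i in ids))
    return results

print(ne_surface_forms(SAMPLE))  # -> ['John Blair']
```

The id-based lookup mirrors the `tokens find id id` step in the Scala snippet: every annotated element carries a unique id, so cross-references resolve through those ids.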
|
Kishanuka_HelloWorldLabs | kishanuka helloworldlabs i am doing this project for embedded system design module | os |
|
GAN_Review
this repo contains gans review for topics of computer vision and time series news 2021 07 11 our preprint generative adversarial networks in time series a survey and taxonomy eoin brophy and zhengwei wang and qi she and tomas e ward https arxiv org pdf 2107 11098 pdf is out this work is currently in progress 2021 02 14 our paper generative adversarial networks in computer vision a survey and taxonomy zhengwei wang and qi she and tomas e ward https dl acm org doi abs 10 1145 3439723 arxiv version https arxiv org pdf 1906 01529 pdf has been published at acm computing surveys and we will continue to polish this work into the 5th version details of selected papers and code can be found in the gan cv folder https github com sheqi gan review tree master gan cv 2020 11 24 our paper generative adversarial networks in computer vision a survey and taxonomy zhengwei wang and qi she and tomas e ward https arxiv org pdf 1906 01529 pdf was accepted into acm computing surveys and we will continue to polish this work into the 5th version 2020 06 20 we have updated the 4th version of our gan survey for computer vision paper it includes more recent gans proposed at cvpr iccv 2019 2020 and a more intuitive visualization of the gan taxonomy 2020 10 04 gans related to our latest paper will be updated shortly generative adversarial networks in computer vision p align center img src gan cv pic gans taxonomy png width 1000 img a survey and taxonomy of the recent gans development in computer vision please refer to the details in the recent review paper generative adversarial networks in computer vision a survey and taxonomy zhengwei wang and qi she and tomas e ward https dl acm org doi abs 10 1145 3439723 arxiv version https arxiv org pdf 1906 01529 pdf we also provide a list of papers related to gans on computer vision in the gan cv csv file if you find this useful in your research please consider citing article wang2021generative title generative adversarial networks in computer vision a survey
and taxonomy author wang zhengwei and she qi and ward tomas e journal acm computing surveys csur volume 54 number 2 pages 1 38 year 2021 publisher acm new york ny usa we have classified the two gan variant research lines based on recent gan developments below we provide a summary and the demo code of these models we have tested the code below and tried to summarize some b lightweight b and b easy to reuse b modules of state of the art gans architecture variant gans lapgan https github com jimfleming lapgan tensorflow https github com aaronyalai generative adversarial networks pytorch pytorch dcgan https github com carpedm20 dcgan tensorflow tensorflow https github com last one dcgan pytorch pytorch began https github com carpedm20 began tensorflow tensorflow https github com anantzoid began pytorch pytorch progan https github com tkarras progressive growing of gans tensorflow https github com nashory pggan pytorch pytorch sagan https github com brain research self attention gan tensorflow https github com heykeetae self attention gan pytorch biggan https github com taki0112 biggan tensorflow tensorflow https github com ajbrock biggan pytorch pytorch your local gan https github com giannisdaras ylg tensorflow https github com 188zzoon your local gan pytorch autogan https github com vita group autogan pytorch msg gan https github com akanimax msg stylegan tf tensorflow https github com akanimax msg gan v1 pytorch loss variant gans wgan https github com chengbinjin wgan tensorflow tensorflow https github com zeleni9 pytorch wgan pytorch wgan gp https github com changwoolee wgan gp tensorflow tensorflow https github com caogang wgan gp pytorch lsgan https github com xudonmao lsgan tensorflow https github com meliketoy lsgan pytorch pytorch f gan https github com lynnho f gan tensorflow tensorflow ugan https github com gokul uf tf unrolled gan tensorflow https github com andrewliao11 unrolled gans pytorch ls gan https github com maple research lab lsgan gp alt
tensorflow https github com maple research lab glsgan gp pytorch mrgan https github com wiseodd generative models tree master gan mode regularized gan tensorflow and pytorch geometric gan https github com lim0606 pytorch geometric gan pytorch rgan https github com alexiajm relativisticgan tensorflow and pytorch sn gan https github com taki0112 spectral normalization tensorflow tensorflow https github com christiancosgrove pytorch spectral normalization gan pytorch realnessgan https github com taki0112 realnessgan tensorflow tensorflow https github com kam1107 realnessgan pytorch sphere gan https github com taki0112 spheregan tensorflow tensorflow https github com dotori hj spheregan pytorch implementation pytorch self supervised gan https github com zhangqianhui self supervised gans tensorflow https github com vandit15 self supervised gans pytorch pytorch gan review for time series a survey and taxonomy of the recent gans development in time series please refer to the details in the recent review paper generative adversarial networks in time series a survey and taxonomy eoin brophy and zhengwei wang and qi she and tomas e ward https arxiv org pdf 2107 11098 pdf this work is currently in progress if you find this useful in your research please consider citing article brophy2021generative title generative adversarial networks in time series a survey and taxonomy author brophy eoin and wang zhengwei and she qi and ward tomas journal arxiv preprint arxiv 2107 11098 year 2021 datasets unlike computer vision which has lots of well known and large scale benchmarking datasets time series benchmarking datasets are limited due to generalization and some privacy issues especially for clinical data below we provide some resources of well known time series datasets hopefully they are useful feel free to suggest any well known time series datasets to this repo by opening a new issue we will review it and add it to the list we hope this can help push the time series research forward oxford
man institute realized library updated daily https realized oxford man ox ac uk real multivariate time series dataset contains 2 689 487 instances and 5 attributes eeg motor movement imagery dataset 2004 https physionet org content eegmmidb 1 0 0 real multivariate time series contains 1 500 instances and 64 attributes ecg 200 2001 http www timeseriesclassification com description php dataset ecg200 real univariate time series contains 200 instances and 1 attribute epileptic seizure recognition dataset 2001 https archive ics uci edu ml datasets epileptic seizure recognition real multivariate time series dataset contains 11 500 instances and 179 attributes twoleadecg 2015 http www timeseriesclassification com description php dataset twoleadecg real multivariate time series dataset contains 1 162 instances and 2 attributes mimic iii clinical database 2016 https physionet org content mimiciii 1 4 real integer categorical multivariate time series mimic iii clinical database demo 2019 https physionet org content mimiciii demo 1 4 real integer categorical multivariate time series epilepsiae project database http www epilepsiae eu project outputs european database on epilepsy real multivariate time series dataset contains 30 instances physionet cinc https physionet org news post 231 lots of clinical data for challenging competitions wrist ppg during exercise 2017 https physionet org content wrist 1 0 0 real multivariate time series dataset contains 19 instances and 14 attributes mit bih arrhythmia database 2001 https physionet org content mitdb 1 0 0 real multivariate time series dataset contains 201 instances and 2 attributes kdd cup dataset https kdd org kdd cup lots of real integer categorical multivariate time series datasets pems database updated daily https dot ca gov programs traffic operations mpr pems source real integer categorical multivariate time series datasets nottingham music database http abc sourceforge net nmd special text format time series discrete
variant gans seqgan https arxiv org pdf 1609 05473 pdf tensorflow https github com lantaoyu seqgan pytorch https github com suragnair seqgan quant gan https arxiv org pdf 1907 06673 pdf code to be added continuous variant gans c rnn gan https arxiv org pdf 1611 09904 pdf tensorflow https github com olofmogren c rnn gan pytorch https github com cjbayron c rnn gan pytorch rcgan https arxiv org pdf 1706 02633 pdf tensorflow https github com ratschlab rgan sc gan https www springerprofessional de en continuous patient centric sequence generation via sequentially 16671112 code to be added nr gan https dl acm org doi abs 10 1145 3366174 3366186 code to be added time gan https papers nips cc paper 2019 file c9efe5f26cd17ba6216bbe2a7d26d490 paper pdf tensorflow https github com jsyoon0823 timegan sigcwgan https arxiv org pdf 2006 05421 pdf pytorch https github com sigcgans conditional sig wasserstein gans dat cgan https arxiv org pdf 2009 12682 pdf code to be added synsiggan https www mdpi com 2079 7737 9 12 441 code to be added | gan generative-adversarial-network deep-learning gans tensorflow | ai |
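To make the loss-variant distinction listed above concrete, here is a tiny pure-Python sketch contrasting the standard non-saturating GAN discriminator loss with LSGAN's least-squares loss. The formulas follow the cited papers; the score values are invented purely for illustration.

```python
import math

def vanilla_d_loss(real_scores, fake_scores):
    """Standard GAN discriminator loss: -E[log D(x)] - E[log(1 - D(G(z)))].
    Scores are assumed to already be sigmoid outputs in (0, 1)."""
    n, m = len(real_scores), len(fake_scores)
    return (-sum(math.log(r) for r in real_scores) / n
            - sum(math.log(1.0 - f) for f in fake_scores) / m)

def lsgan_d_loss(real_scores, fake_scores, a=0.0, b=1.0):
    """LSGAN discriminator loss: 0.5 E[(D(x) - b)^2] + 0.5 E[(D(G(z)) - a)^2],
    with target label b for real and a for fake samples."""
    n, m = len(real_scores), len(fake_scores)
    return (0.5 * sum((r - b) ** 2 for r in real_scores) / n
            + 0.5 * sum((f - a) ** 2 for f in fake_scores) / m)

real = [0.9, 0.8]  # discriminator outputs on real samples (toy numbers)
fake = [0.3, 0.1]  # discriminator outputs on generated samples (toy numbers)

print(round(vanilla_d_loss(real, fake), 4))  # -> 0.3953
print(round(lsgan_d_loss(real, fake), 4))    # -> 0.0375
```

The least-squares objective penalizes samples by their distance from the target label rather than through a log term, which is the design change LSGAN argues gives smoother gradients for samples far from the decision boundary.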
Employee-SQL-Database | employee sql database given a dataset of employees at a company a data model was created after inspecting the files the dataset comprises and modelled with an erd diagram a table schema was then devised using the erd so that the data could be loaded into sql for further analysis a sample analysis of 8 questions on the dataset was done which were 1 list the following details of each employee employee number last name first name sex and salary 2 list first name last name and hire date for employees who were hired in 1986 3 list the manager of each department with the following information department number department name the manager s employee number last name first name 4 list the department of each employee with the following information employee number last name first name and department name 5 list first name last name and sex for employees whose first name is hercules and last names begin with b 6 list all employees in the sales department including their employee number last name first name and department name 7 list all employees in the sales and development departments including their employee number last name first name and department name 8 in descending order list the frequency count of employee last names i e how many employees share each last name table schemata can be found in schema sql queries constituting the sample analysis can be found in queries sql | server |
|
github-stats | github stats prettier ignore start all contributors badge start do not remove or modify this section all contributors https img shields io badge all contributors 11 orange svg style flat square contributors all contributors badge end prettier ignore end your github contributions smartly organized and visualized showcase meaningful metrics on your cv what s this before stating whether this tool is useful or not it might be let s disclose its primary goal improving our skills why our because this tool is open source and everyone is more than welcome to contribute to it you can grab an issue at any time or join the discord https discord gg bqwyea6we6 server to discuss the project and its future nothing is set in stone so feel free to share your ideas and suggestions learn more here s a video describing the project and its goals on youtube https www youtube com watch v zm92xpdrotk a href https www youtube com watch v zm92xpdrotk img src https i3 ytimg com vi zm92xpdrotk maxresdefault jpg style width 450px a technologies involved the app is currently based on next js https nextjs org with typescript and tailwind css actually with daisyui https daisyui com a tailwind css component library we manage some data specifically from the github apis https docs github com en graphql using the graphql https graphql org endpoint and react query https tanstack com query latest there s a login feature with nextauth https next auth js org using github as a provider coming soon the plan is to also add at some point some kind of user profile and settings stored where it s up to you to decide it could be on mongodb with an orm like prisma or something entirely different a first start could be using localstorage to validate the concept and then decide which database to use testing will also be involved in the process not sure if vitest or jest for component testing and either cypress or playwright for e2e testing how to contribute as mentioned in the beginning you can grab 
an issue write a comment first or join the discord https discord gg bqwyea6we6 server so we can have a chat about the project the goal of this project isn t the outcome itself but rather the process of building it together as a result we ll end up having a nice tool to showcase our github contributions and a project we can use as a reference when we need to implement something similar in other projects instructions on how to run the app locally can be found in contributing md contributing md thanks for reading and happy coding contributors all contributors list start do not remove or modify this section prettier ignore start markdownlint disable table tbody tr td align center valign top width 14 28 a href https leonardomontini dev img src https avatars githubusercontent com u 7253929 v 4 s 100 width 100px alt leonardo montini br sub b leonardo montini b sub a br a href projectmanagement balastrong title project management a a href https github com balastrong github stats commits author balastrong title code a td td align center valign top width 14 28 a href https bio link anantchoubey img src https avatars githubusercontent com u 91460022 v 4 s 100 width 100px alt anant choubey br sub b anant choubey b sub a br a href https github com balastrong github stats commits author theanantchoubey title documentation a a href https github com balastrong github stats issues q author 3atheanantchoubey title bug reports a a href https github com balastrong github stats commits author theanantchoubey title code a td td align center valign top width 14 28 a href http priyank live img src https avatars githubusercontent com u 88102392 v 4 s 100 width 100px alt priyankar pal br sub b priyankar pal b sub a br a href https github com balastrong github stats commits author priyankarpal title documentation a a href https github com balastrong github stats commits author priyankarpal title code a a href ideas priyankarpal title ideas planning feedback a td td align center valign top 
width 14 28 a href https github com piyushjha0409 img src https avatars githubusercontent com u 73685420 v 4 s 100 width 100px alt piyush jha br sub b piyush jha b sub a br a href https github com balastrong github stats commits author piyushjha0409 title code a td td align center valign top width 14 28 a href https www bassemdimassi tech img src https avatars githubusercontent com u 75867744 v 4 s 100 width 100px alt dimassi bassem br sub b dimassi bassem b sub a br a href design dimassibassem title design a a href https github com balastrong github stats commits author dimassibassem title code a td td align center valign top width 14 28 a href http jakubfronczyk com img src https avatars githubusercontent com u 71935020 v 4 s 100 width 100px alt jakub fronczyk br sub b jakub fronczyk b sub a br a href https github com balastrong github stats commits author jakubfronczyk title code a td td align center valign top width 14 28 a href https github com black arm img src https avatars githubusercontent com u 68558867 v 4 s 100 width 100px alt antonio basile br sub b antonio basile b sub a br a href https github com balastrong github stats commits author black arm title code a td tr tr td align center valign top width 14 28 a href https github com agrimaagrawal img src https avatars githubusercontent com u 84567933 v 4 s 100 width 100px alt agrima agrawal br sub b agrima agrawal b sub a br a href https github com balastrong github stats issues q author 3aagrimaagrawal title bug reports a td td align center valign top width 14 28 a href https www linkedin com in hicham essaidi 840b11288 img src https avatars githubusercontent com u 85809218 v 4 s 100 width 100px alt hicham essaidi br sub b hicham essaidi b sub a br a href https github com balastrong github stats commits author heshamsadi title code a td td align center valign top width 14 28 a href https www anupamac me img src https avatars githubusercontent com u 35479077 v 4 s 100 width 100px alt anupam br sub b 
anupam b sub a br a href https github com balastrong github stats commits author luckyklyist title code a td td align center valign top width 14 28 a href http thiti wcydtt co img src https avatars githubusercontent com u 55313215 v 4 s 100 width 100px alt thititongumpun br sub b thititongumpun b sub a br a href https github com balastrong github stats commits author thititongumpun title code a td tr tbody table markdownlint restore prettier ignore end all contributors list end prettier ignore start markdownlint disable markdownlint restore prettier ignore end all contributors list end | nextjs hacktoberfest react typescript good-first-issue tailwindcss hacktoberfest2023 | front_end |
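The GraphQL calls against the GitHub API mentioned in the README above can be sketched outside the app too. Below is a hedged Python example that only builds the request and sends nothing: the endpoint and the contributionsCollection fields come from GitHub's public GraphQL schema, but the exact query github-stats issues may differ, and YOUR_TOKEN is a placeholder.

```python
import json
import urllib.request

# Fields below follow GitHub's public GraphQL schema; the query the app
# actually sends may differ.
QUERY = """
query($login: String!) {
  user(login: $login) {
    contributionsCollection {
      totalCommitContributions
      totalPullRequestContributions
    }
  }
}
"""

def build_request(login, token):
    """Build (but do not send) a GitHub GraphQL POST request."""
    payload = json.dumps({"query": QUERY, "variables": {"login": login}}).encode()
    return urllib.request.Request(
        "https://api.github.com/graphql",
        data=payload,
        headers={"Authorization": f"bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_request("octocat", "YOUR_TOKEN")  # placeholder login and token
print(req.full_url)  # -> https://api.github.com/graphql
```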
AI_Conf_2019_DL_4_NLP | ai conf 2019 dl 4 nlp slides and code tutorials for aiconf ny tutorial on deep learning methodologies for natural language processing notes you can access the ulmfit notebook on google colab https colab research google com drive 1q5luftt3wij4k9vnimyefk82ggerzk05 for a code sample of rnns with attention check out taming recurrent neural networks for better summarization http www abigailsee com 2017 04 16 taming rnns for better summarization html with links to the accompanying tensorflow implementation setup download via git 1 go to your home directory by opening your terminal and entering cd 2 clone the repository by entering git clone https github com garretthoffman ai conf 2019 dl 4 nlp download twitter glove vectors download the pre trained twitter glove word vectors from here https nlp stanford edu projects glove and place the file glove twitter 27b 50d txt in the data directory setup virtual environment option 1 dockerfiles recommended 3 after cloning the repo to your machine navigate into the repo and enter docker build t ai conf nlp image type f dockerfiles dockerfile image type dockerfiles where image type is either gpu or cpu note that in order to run these files on your gpu you ll need to have a compatible gpu with drivers installed and configured properly as described in tensorflow s documentation https www tensorflow org install 4 run the docker image by entering docker run it p 8888 8888 v path to repo root ai conf nlp image type where image type is either gpu or cpu depending on the image you built in the last step 5 after building starting and attaching to the appropriate docker container run the provided jupyter notebooks by entering jupyter notebook ip 0 0 0 0 allow root and navigate to the specified url http 0 0 0 0 8888 token jupyter notebook access token in your browser 6 choose 0x notebook title ipynb to open the applicable notebook note the ulmfit notebook must be run on google colab see link above debugging docker if 
you receive an error of the form warning error loading config file home rp docker config json stat home rp docker config json permission denied got permission denied while trying to connect to the docker daemon socket at unix var run docker sock get http 2fvar 2frun 2fdocker sock v1 26 images json dial unix var run docker sock connect permission denied it s most likely because you installed docker using sudo permissions with a packet manager such as brew or apt get to solve this permission denied simply run docker with sudo ie run docker commands with sudo docker command and options instead of just docker command and options option 2 local setup using miniconda if you don t have or don t want to use docker you can follow these steps to setup the notebook 3 install miniconda using one of the installers and the miniconda installation instructions https conda io miniconda html use python3 6 4 after the installation create a new virtual environment using this command conda create n strata nlp source activate venv 5 you are now in a virtual environment next up install tensorflow by following the instructions https www tensorflow org install 6 to install the rest of the dependenies navigate into your repository and run pip install r dockerfiles requirements txt 7 now you can run jupyter notebook to finally start up the notebook a browser should open automatically if not navigate to http 127 0 0 1 8888 http 127 0 0 1 8888 in your browser 8 choose 0x notebook title ipynb to open the applicable notebook note the ulmfit notebook must be run on google colab see link above | ai |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.