names (stringlengths 1-95) | readmes (stringlengths 0-399k) | topics (stringlengths 0-421) | labels (stringclasses, 6 values) |
---|---|---|---|
iotedge-eflow | azure iot edge for linux on windows welcome to the home of azure iot edge for linux on windows a composite project enabling the abilty to run linux based edge modules on windows using a curated virtual machine based on cbl mariner linux https github com microsoft cbl mariner with azure iot edge https github com azure iotedge built in azure iot edge for linux on windows supports the following versions 1 4 lts using azure iot edge 1 4 lts generally available generally availability announcement https aka ms azeflow lts blog continuous release cr using azure iot edge 1 3 currently in public preview public preview announcement https azure microsoft com en us updates public preview azure iot edge for linux on windows eflow update azure iot edge for linux on windows unsupported versions 1 1 lts using azure iot edge 1 1 lts end of life as of december 13 2022 getting started get started now https docs microsoft com azure iot edge how to install iot edge on windows learn more https aka ms azeflow docs sample codes samples eflow auto deploy sample eflowautodeploy eflow remote deploy sample azure arc microsoft intune eflowremotedeploy build debug iot edge linux modules using eflow https docs microsoft com azure iot edge tutorial develop for linux on windows view iotedge 2018 06 issues issues can be filed in the issues section of either the iotedge https github com azure iotedge issues or iotedge eflow https github com azure iotedge eflow issues github repositories depending on the specific issue that you are experiencing if you are encountering a production level issue in which you require assistance we strongly suggest that you create an azure support request https docs microsoft com en us azure iot fundamentals iot support help view iotedge 2018 06 create an azure support request feature requests feature requests can be filed in our iotedge eflow isssues https github com azure iotedge eflow issues page microsoft open source code of conduct this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct resources microsoft open source code of conduct https opensource microsoft com codeofconduct microsoft code of conduct faq https opensource microsoft com codeofconduct faq contact opencode microsoft com mailto opencode microsoft com with questions or concerns | server |
|
CHP013-Unity-step-by-step- | chp013 unity step by step unity in embedded system design and robotics a step by step guide | os |
|
nlp-stand-up-comedy | stand up comedy and nlp this natural language processing python project started from this tutorial https www youtube com watch v xvqsftusomc this project started because the instructor alice zhao saw a stand up comedian for the first time and she really loved it but she wanted to figure out what makes ali wong s routine different than other comedians https cdn images 1 medium com max 2824 1 9ygp3lt62ryl61h 005zrg png this project uses several nlp libraries including nltk textblob gensim as well as standard machine learning libraries like pandas and scikit learn i ll be using the the jupyter notebook ide from the anaconda distribution https www anaconda com distribution i will post the entire jupyter notebook at the end of this blog as well as code snippets that you can follow along with as i go there are a few python packages i will be using that are not included in anaconda wordcloud textblob and gensim they can be installed with the following commands conda install c conda forge wordcloud conda install c conda forge textblob conda install c conda forge gensim in this end to end project i will 1 start with a question 1 get and clean the data 1 perform exploratory data analysis 1 apply nlp techniques 1 share insights 1 start with a question the goal is to look at transcripts of various comedians and note their similarities and differences specifically i m trying to determine how ali wong s comedy style is different than other comedians 2 data gathering and cleaning here is the full jupyter notebook https gist github com nwams da9290c4a21c1fddfc5cba9f82f8ba5a with the entire code for this section gathering the transcript i need to get the transcript from her netflix special called baby cobra but how where do we get the transcript doing a google search for her transcript shows that a site called scraps from the loft https scrapsfromtheloft com has full transcripts of several comedians routines as well as ali wong s http scrapsfromtheloft com 2017 09 19 ali wong baby cobra 2016 full transcript when i inspect the elements using chrome developer tools i can see that all of the transcript data is in the div class post content here s a screenshot https cdn images 1 medium com max 2734 1 tu55elv0s7kkgosggbvetq png defining the scope of a machine learning project is a very important step so in deciding how much data to gather the scope will be limited to the following criteria 1 comedy specials within the past 5 years 1 use only comedy specials that have an imdb https www imdb com ref nv home rating of at least 7 5 stars and 2000 votes 1 select the top 12 stand up comics this is where domain expertise is crucial if you re not sure you should do a sanity check with someone who knows the industry well in order to gather this data i will have to scrape the website i will use the requests https 3 python requests org package to get info from the website and the beautiful soup https pypi org project beautifulsoup4 package to extract portions of the site here is the code snippet for gathering the data https medium com media 80cc3cc9934422b18aad9abf3d115587 next i created a dictionary named data where every key is a comedian and every value is the transcript i need the output in a clean and organized format that i can use for further analysis so i will be creating 1 a corpus and 2 a document term matrix a corpus is a collection of text and a document term matrix is just word counts in a matrix format creating a corpus in the code i go through a few steps to create a dictionary with 
the format key comedian value string and then i convert it to a pandas dataframe all of this can be found in the jupyter notebook now on to the next step cleaning the data why perform these cleaning steps because i want the computer to try to understand only the most important parts of the text here are some common data cleaning steps that are usually done on all text make text lower case remove punctuation remove stop words remove numerical values remove common non sensical text like newline characters n tokenize the text below is a code snippet that shows how to clean the data https medium com media fa6ea06607d1f0eb2d8f2676ff9b968d here is my corpus https cdn images 1 medium com max 2818 1 xl hvq5hlyxxu1vadwbxxw png creating a document term matrix let s walkthrough this concept using a line from john mulaney s routine http scrapsfromtheloft com 2017 08 02 john mulaney comeback kid 2015 full transcript all right petunia wish me luck out there you will die on august 7th 2037 how can a computer read this text i will need to 1 clean it 2 tokenize it and 3 put it into a matrix remember in the section above i did some initial cleaning by removing punctuation remove numbers and making all letters lowercase after the first pass at cleaning it now looks like this all right petunia wish me luck out there you will die on august now i will tokenize the text tokenization means splitting the text into smaller parts tokenization can be done as sentences bi grams or words i will tokenize by words so that every word will be its own item https cdn images 1 medium com max 2512 1 hf 7f8hjwv7r91jd7m8twa png now i will remove stop words words with very little meaning like a the and it after removing the stop words i m left with https cdn images 1 medium com max 2496 1 y vhyirlcebafe yrnq2a png more simply put i m left with just the following words right petunia wish luck die august this is called a bag of words model bag of words is just a group of text where the order doesn t matter it s a simple and powerful way to represent text data now using this i can create a document term matrix that counts the occurrence of words used by each comedian https cdn images 1 medium com max 2652 1 82rzspqlqudfazirxkendq png the numbers are word counts each row is a different document or transcript for our case each column is a different term or word for our case here s a code snippet below of the process of creating a document term matrix let s use scikit learn s countvectorizer to tokenize and remove stop words https medium com media c6130662eed986e19a168e4ea5098851 here s a screenshot of the actual document term matrix notice that is has over 7 000 columns so it s quite long and it would be even longer if we had included bi grams https cdn images 1 medium com max 2818 1 n5g0pav0emzjmculz2wo5a png if we wanted to further clean the text after tokenization there are more steps we could take stemming lemmatization which is grouping words like driving drive drives as the same we could do parts of speech tagging create bi grams like turning thank you into one term and also deal with typos etc 3 exploratory data analysis here is the full jupyter notebook https gist github com nwams d9219470fec8783e44c8062bd82ec99c with the entire code for this section here s the fun part the goal of eda is to summarize the main characteristics of the data which will help us to determine if the trends make sense it s often best to do this visually and it s best to do eda before applying machine learning algorithms i m going to start by looking 
at the most common words used most by each comedian i ll also look at the size of their vocabulary because it would be interesting to see if some comedians have a higher vocabulary than others and lastly i ll explore the amount of profanity this came after i saw the top words you ll notice that the comedians use a lot of cuss words here we go top words sort across the document term matrix to find the top words and visualize the data using word clouds http amueller github io word cloud as mentioned in the beginning of this blog the word cloud package is not included in anaconda so you ll have to download it using this command conda install c conda forge wordcloud i looked at the top 30 words said per each comedian along with a count of how many times it was said i noticed that there were a lot of non meaningful words such as like im and know amongst all comedians the picture below of the top 15 words said by each comedian shows that a lot of stop words are included top 15 words said by each comedian https cdn images 1 medium com max 2580 1 5hau71ybkftsmacojmww4q png top 15 words said by each comedian if these common words occur as top words by more than 50 of the comedians then i ll add those words to the list of stop words therefore i will be adding these to the list of stop words like im know just dont thats right people youre got time gonna think oh yeah said after recreating a new document term matrix that includes the additional stop words i made a word cloud to visualize the most common words for each comedian https cdn images 1 medium com max 2580 1 1l0dhhvz9e6nxirux8kelg png notice that ali wong says the s word a lot as well as ok and husband the instructor of this tutorial resonates with this because she says ok a lot too and thinks her routine is funny because she also talks about her husband a lot too you ll also notice that a lot of people say the f word a lot which we ll also explore in a bit number of words i want to see how big of a vocabulary they have so i ll count how many unique words they use but another thing that i ll look at is the number of words per minute based on how long their entire comedy special was here s a pandas dataframe that shows the unique words and the words per minute https cdn images 1 medium com max 2580 1 bh44oidigf9zecgvrvc1aa png i like to begin visualizing in the simplest ways possible first so i ll create a simple horizontal bar plot https cdn images 1 medium com max 2580 1 jz0wfv6wngohj72gaqoncw png i can see that ricky and bill have a the highest vocabulary while anthony has the lowest vocabulary additionally i can see that joe rogan talks the fastest while anthony talks the slowest but this visualization doesn t yield any particularly interesting insights regarding ali wong so let s move on profanity remember in the word cloud above i noted that people say the f word and the s word a lot so i m going to dive deeper into that by making a scatter plot that shows f word and s word usage https cdn images 1 medium com max 2446 1 8a4o8mekrd9vy9pbufszjg png notice that bill burr joe rogan and jim jefferies use the f word a lot the instructor doesn t like too much profanity so that might explain why she s never heard of these guys she likes clean humor so profanity could be a good indicator of the type of comedy she likes besides ali wong her other two favorite comedians are john mulaney and mike birbiglia notice how john and mike are also very low on the use of profanity mike actually doesn t use any s words or f words at all remember that the goal 
of exploratory data analysis is to take an initial look at the data to see if it makes sense there s always room for improvement like more intense clean up including more stop words bi grams etc however perfection can serve to your disadvantage the results including profanity are interesting and in general it makes sense since the science of machine learning is an iterative process it s best to get some reasonable results early on to determine whether your project is going in a successful direction or not delivering tangible results is key 4 apply nlp techniques sentiment analysis here s the full jupyter notebook https gist github com nwams e446596ce07b08386af2aedd3116f8a7 with the entire code for this section because order is important in sentiment analysis i will use the corpus not the document term matrix i will use textblob https textblob readthedocs io en dev a python library that provides rule based sentiment scores for each comedian i will assign a polarity score that tells how positive or negative they are and a subjectivity score that tells how opinionated they are but before we jump into using the textblob module it s important to understand what s happening in the code let s take a look under the hood a linguist named tom de smedt created a lexicon en sentiment xml https github com sloria textblob blob eb08c120d364e908646731d60b4e4c6c1712ff63 textblob en en sentiment xml where he manually assigned adjective words with polarity and subjectivity values subjectivity lexicon for english adjectives https cdn images 1 medium com max 2734 1 thkayfuevjfcakxlcsdokw png subjectivity lexicon for english adjectives from the image above note that wordnet https wordnet princeton edu is a large english dictionary created by princeton i will output a polarity score which tells how positive negative they are and a subjectivity score which tells how opinionated they are from the text py file in textblob s github https github com sloria textblob blob eb08c120d364e908646731d60b4e4c6c1712ff63 textblob text py there s a section that defines the following words have a polarity negative positive of 1 0 to 1 0 and a subjectivity objective subjective of 0 0 to 1 0 the part of speech tags pos tags nn noun and jj adjective and the reliability specifies if an adjective was hand tagged 1 0 or inferred 0 7 negation words e g not reverse the polarity of the following word https cdn images 1 medium com max 2734 1 6c4tuuk0kcjnc9qusmq 6g png i use the corpus during my sentiment analysis not the document term matrix because order matters for example great positive but not great negative textblob handles negation by multiplying the polarity by 0 5 and handles modifier words by multiplying the subjectivity of the following word by 1 3 since great has a subjectivity of 0 75 very great will have a subjectivity of 0 975 which is just 0 75 1 3 notice that the word great occurs in the lexicon https github com sloria textblob blob eb08c120d364e908646731d60b4e4c6c1712ff63 textblob en en sentiment xml 4 times in this situation textblob will just take the average of the 4 scores this is very basic and there s nothing fancy or advanced happening here behind the scenes so this is why it s important for us to know what s happening behind the scenes before we use a module overall textblob will find all of the words and phrases that it can assign a polarity and subjectivity score to and it averages all of them together so at the end of this task each comedian will be assigned one polarity score and one subjectivity score however 
be aware that this rules based approach that textblob uses is not the most sophisticated but it is a good starting point there are also other statistical methods out there like naive bayes after creating a simple sentiment analysis scatter plot of the polarity and the subjectivity i can see that dave chappelle s routine is the most negative out of them all and john mulaney is most similar to ali wong while ricky gervais isn t too far away either https cdn images 1 medium com max 2000 1 hhk l3fwzgemqktq mkxyg png in addition to the overall sentiment it would also be interesting to also look at the sentiment over time throughout each routine so i ll split each comedian s routine into 10 chunks and assess their polarity pattern to see if i can draw additional insights https cdn images 1 medium com max 3028 1 wo3wfgm jn350 qxieo pw png ali wong remained consistently positive throughout her routine louis c k and mike birbiglia are similar to her topic modeling here s the full jupyter notebook https gist github com nwams 9a44423d8e787e678a7fe233fa051265 with the entire code for this section now i d like to find themes across various comedy routines and see which comedians tend to talk about which themes in other words i want to see what topics are being said in this document i will use the document term matrix because order doesn t matter thus the bag of words model is a good starting point i will be using the gensim https github com rare technologies gensim library a python toolkit built specifically for topic modeling to apply a popular topic modeling technique called latent dirichlet allocation lda lda is one of many topic modeling techniques also i will be using nltk for parts of speech tagging what is lda and how does it work latent means hidden and dirichlet is a type of probability distribution so i m looking at the probability distribution in the text in order to find hidden topics lda is an unsupervised algorithm you ll find that lda is really useful when you have really large documents and have no idea what they re about here are two simple but important definitions every document consists of a mix of topics and every topic consists of a mix of words https cdn images 1 medium com max 3028 1 fz7e amdbje6owjkl3vhrq png the goal is for lda to learn what all of the topics in the document and what are all of the words in each topic every topic is a probability distribution of words something like this topic a 40 banana 30 kale 10 breakfast topic b 30 kitten 20 puppy 10 frog 5 cute here s a visual summary of what lda is about https cdn images 1 medium com max 3028 1 zgqpcbuxf5rtx1fpf5fh q png here s how lda works 1 you choose the number of topics you think are in your corpus example k 2 1 lda randomly and temporarily assigns each word in each document to one of the 2 topics the word banana in document 1 is randomly assigned to topic b the animal like topic 1 lda will go through every word its assigned topic and it will update the topic assignments so let s say banana is assigned to the animal topic it will decide whether it should re assign it to the food topic by first checking how often the animal topic occurs in the document then secondly it will check how often the word banana occurs in the animal topic both of those probabilities are low so it will re assign banana to the other food topic here s the math behind it proportion of words in document d that are currently assigned to topic t p topic t document d and proportion of assignments to topic t over all documents that come from this 
word w p word w topic t multiply those two proportions and assign w a new topic based on that probability p topic t document d p word w topic t 1 you have to go through multiple iterations of previous step go through a few dozen iterations yourself and eventually the topics should start making sense if the topics don t make sense then more data cleaning is needed the gensim package will do steps 2 and 3 for you it is up to you to set the number of topics you want step 1 and how many iterations you want to go through step 4 gensim will output the top words in each topic it s your job as a human to interpret the results to figure out what the topics are if not try altering the parameters either terms in the document term matrix number of topics iterations etc stop when you re able to come up with topics that make sense topic modeling with all of the text in my first attempt i set the number of topics to 2 and the number of times the algorithm will pass over the whole matrix to 10 i then trained the lda model using all of the words from my term document matrix i ll start by setting the number of topics to 2 assess and then increment if needed these are the words output from the lda model when number of topics is 2 https cdn images 1 medium com max 3028 1 fcqnfkrjltyrb5zt lq6sw png it s not making sense yet and there s overlap in the words here s the output when number of topics is 3 num topics 3 https cdn images 1 medium com max 3028 1 v1afx7kqjdjod phg5doew png num topics 3 it s not quite making sense yet let s increment again by setting the number of topics to 4 to see if it improves https cdn images 1 medium com max 3028 1 tdkrqbtf2ad4q reyufnxq png these topics aren t too meaningful topic modeling with nouns one popular technique is to only look at terms that are from one part of speech so i ll try modifying the bag of words to include only nouns the results below are from trying 2 3 and 4 topics respectively https cdn images 1 medium com max 2440 1 lpnmpb55smwf7we9bxye7w png again there isn t much improvement and there s still overlap topic modeling with nouns and adjectives so in this next attempt i will add adjectives my lda model will now assess nouns and adjectives the results are below https cdn images 1 medium com max 2430 1 q9jda0kuwpnvnrbvbc ubq png as you can see there still isn t much improvement so i experimented with tuning the hyper parameters increasing the number of passes from 10 to 100 and setting alpha and eta values for alpha i experimented with really low values as well as symmetric and auto for eta i experimented with low values in an attempt to get less overlap between topics alpha and eta are hyper parameters in gensims ldamodel that i can tune a high alpha value means that every document is likely to contain a mixture of most of the topics not just any single topic specifically while a low alpha value means that a document is more likely to be represented by just a few of the topics a high eta value means that each topic is likely to contain a mixture of most of the words not just any specific word while a low eta value means a topic is more likely to contain a mixture of just a few of the words simply put a high alpha will make documents appear more similar to each other and a high eta will make topics appear more similar to each other unfortunately tuning the hyper parameters did not yield any meaningful topics i also tried including verbs and retraining the model with nouns adjectives and verbs but that didn t help it either why my data isn t ideal for topic 
modeling the model assumes that every chunk of text that we feed into it contains words that are somehow related so starting with the right corpus is crucial however comedy specials are inherently dynamic in nature with no fixed topics in most streams since the subject matter is constantly switching throughout a comedian s routine there usually isn t one centralized topic whereas in contrast if we trained our lda model with wikipedia articles each article document is already highly contextualized as it usually talks about a single topic which is a good thing it s also good to note that the number of documents per topic is also important lda is extremely dependent on the words used in a corpus and how frequently they show up wrapping up in this project i did text pre processing exploratory data analysis sentiment analysis and topic modeling https cdn images 1 medium com max 3004 1 ukrgmp8vzrurg5ylmnypuw png nlp libraries here are some popular nltk libraries to be aware of and some brief details nltk this is the library that everyone starts with it has a lot of text pre processing capabilities like tokenization stemming parts of speech tagging etc textblob this was built on top of nltk is easy to use and includes some additional functionality like sentiment analysis and spell check gensim this library was built specifically for topic modeling and include multiple techniques including lda and lsi it can also calculate document similarity spacy this is the newest of the bunch and is known for its fast performance since it was written in cython https cython org it can do a lot of things that nltk can do 5 summary of insights remember the original question the instructor wanted to answer was what makes ali wong s comedy routine stand out ali talks about her husband a lot and the instructor also talks about her husband a lot during her lectures in terms of profanity ali had the highest s word to f word ratio the instructor doesn t mind the s word but does not like hearing the f word at all ali wong tends to be more positive and less opinionated which is similar to the instructors personality as well based on these findings who are some other comedians the instructor might like comedians who don t say the f word that often mike birbiglia no curse words at all and john mulaney comedians with a similar sentiment pattern louis c k and mike birbiglia mike birbiglia https www imdb com title tt2937390 ref fn al tt 1 is a comedian that she d probably like as well | ai |
|
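
The nlp-stand-up-comedy readme above walks through cleaning transcripts and building a document-term matrix with scikit-learn's CountVectorizer. A minimal sketch of those two steps, assuming a pandas DataFrame of transcripts; the cleaning regexes and the sample sentence are illustrative, not the notebook's exact code:

```python
import re
import string

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

def clean_text(text):
    """Lower-case the text, strip punctuation, digit-words and newlines."""
    text = text.lower()
    text = re.sub(r"[%s]" % re.escape(string.punctuation), "", text)
    text = re.sub(r"\w*\d\w*", "", text)   # drop words that contain numbers
    text = re.sub(r"\n", " ", text)
    return text

# corpus: one row per comedian, one column holding the full transcript
corpus = pd.DataFrame(
    {"transcript": ["All right, Petunia. Wish me luck out there. "
                    "You will die on August 7th, 2037."]},
    index=["john"],
)
corpus["transcript"] = corpus["transcript"].apply(clean_text)

# document-term matrix: rows are documents (comedians), columns are terms
cv = CountVectorizer(stop_words="english")
dtm = pd.DataFrame(
    cv.fit_transform(corpus["transcript"]).toarray(),
    columns=cv.get_feature_names_out(),   # scikit-learn >= 1.0
    index=corpus.index,
)
print(dtm)
```

On the sample line this leaves exactly the terms the readme lists (right, petunia, wish, luck, die, august) once stop words and numbers are gone.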
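The sentiment-analysis step in the same readme relies on TextBlob's lexicon-based polarity and subjectivity scores. A small sketch, assuming the textblob package; the toy transcripts and the example scores in the comments are illustrative:

```python
from textblob import TextBlob

transcripts = {
    "ali": "Being pregnant is great. I love my husband.",
    "bill": "This is not great. Everything is terrible.",
}

# TextBlob averages lexicon scores over the words and phrases it recognizes:
# polarity falls in [-1.0, 1.0], subjectivity in [0.0, 1.0]
for comedian, text in transcripts.items():
    blob = TextBlob(text)
    print(comedian,
          round(blob.sentiment.polarity, 3),
          round(blob.sentiment.subjectivity, 3))

# negation reverses and dampens polarity, e.g. "great" (about 0.8)
# versus "not great" (about -0.4)
print(TextBlob("great").sentiment.polarity,
      TextBlob("not great").sentiment.polarity)
```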
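For the topic-modeling step, the readme trains gensim's LDA on a bag-of-words corpus and, in one variant, keeps only nouns via NLTK part-of-speech tags. A compact sketch of that variant, assuming nltk and gensim are installed (NLTK resource names can vary slightly by version); the two toy documents stand in for the real transcripts:

```python
import nltk
from gensim import corpora, models

nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger data

def nouns_only(text):
    """Keep only noun tokens (NN* part-of-speech tags)."""
    tokens = text.split()
    return [word for word, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]

docs = [
    "my husband and my baby love breakfast and bananas",
    "the kitten and the puppy chased a frog in the yard",
]
tokenized = [nouns_only(d) for d in docs]

dictionary = corpora.Dictionary(tokenized)
bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized]

# num_topics and passes are the two knobs discussed above (k topics, n passes)
lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```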
Einstein | einstein machine learning deep learning predictive analytics natural language processing and smart data discovery official einstein analytics for partners https partners salesforce com ui core chatter groups groupprofilepage g 0f9300000009ogm einstein analytics discovery cert fp https partners salesforce com ui core chatter groups groupprofilepage g 0f93a00000024uo einstein platform for partners https trailhead salesforce com users nmoscaritolo trailmixes einstein platform for partners fy20 amer einstein ai partner summit seat request form https docs google com forms d e 1faipqlsdkzr3veq4d ikrsplaqoq4gtpq vpfxhdxsj6xctjpnvaaw viewform https metamind readme io v1 docs https github com metamind apex utils https github com salesforceidentity jwt learn einstein lead scoring on trailhead https developer salesforce com promotions orgs einsteinleadscoring einstein platform how to webinar get started with einstein discovery replay http salesforce vidyard com watch ozcijf3m5yajzk9qj9a5pg einstein platform how to webinar get started with einstein discovery slides https success salesforce com 0693a000007saduqas workbook deep learning and natural language processing https colab research google com drive 1dttxahcnxf1idendtoncbpyy42rnequq scrollto gvh mlh0fttj salesforce event log file browser https salesforce elf herokuapp com apex trigger event type https developer salesforce com docs atlas en us api meta api sforce api objects eventlogfile apextrigger htm einstein ai learning path https partnernavigator salesforce com s einsteinai step 1 einstein analytics learning path https partnernavigator salesforce com s einsteinanalytics step 1 certified einstein analytics and discovery consultant salesforce certified einstein analytics and discovery consultant https trailhead salesforce com help article salesforce certified einstein analytics and discovery consultant exam guide learn einstein analytics plus trailmix https trailhead salesforce com users ea trails trailmixes learn einstein analytics plus db total time is the time in nanoseconds for a database round trip cpu time is the cpu time in milliseconds used to complete the request this measure indicates the amount of activity taking place in the app server layer compare the db total time to cpu time to determine whether performance issues are occurring in the database layer or in your own code services data v46 0 query q select id eventtype logdate logfilelength logfile from eventlogfile where eventtype reportexport https instance salesforce com services data v34 0 query q select id eventtype logdate from eventlogfile where logdate day einstein platform a model is a machine learning construct used to solve a classification problem the model learns from data instead of from explicit rules sales cloud einstein einstein activity capture sales reps just connect their email and calendar to salesforce then their activities are automatically added to related salesforce records plus emails sent from salesforce go through their regular email account einstein lead scoring gives each lead a score based on how well it matches your company s particular lead conversion patterns einstein lead scoring shows you exactly which details about each lead have the greatest effect on its score predictive lead scoring constantly adjusts its analysis in order to discover any new patterns that emerge opportunity insights uses machine learning and sentiment analysis to help sales reps close more deals the same data collection that s used to log customer data and identify 
the best leads can also move the sales needle insights are tailored to the patterns and data specific to your organization three types of insights deal predictions your reps see predictions based on recent activity and existing opportunity data for example whether a deal is more or less likely to close or if a deal seems unlikely to close in time follow up reminders reps get reminders to follow up when a contact hasn t responded in a while they also get reminders if there hasn t been any communication related to an important opportunity for a significant period of time key moments reps are notified at key moments related to a deal such as when a contact mentions a competitor or is leaving their company einstein account insights helps your sales team maintain their relationships with customers by keeping the team informed about key business developments that affect customers einstein language language contains two nlp services einstein intent and einstein sentiment the einstein language apis support data in these file formats csv comma separated values tsv tab separated values json einstein intent the einstein intent api categorizes unstructured text into user defined labels to better understand what users are trying to accomplish use this api to analyze text from emails chats or web forms to determine which products prospects are interested in and send customer inquiries to the appropriate sales person route service cases to the correct agents or departments or provide self service options understand customer posts to provide personalized self service in your communities einstein sentiment the einstein sentiment api classifies text into positive negative and neutral classes to understand what the words people use can tell us about how they re feeling use this api to analyze emails social media and text from chat to identify the sentiment or emotion in a prospect s emails to trend a lead or opportunity up or down provide proactive service by helping dissatisfied customers first or extending promotional offers to satisfied customers monitor how people perceive your brand across social media channels identify brand evangelists and note customer satisfaction resources einstein api signup https api einstein ai signup einstein platform developer guide https metamind readme io docs intro to einstein language | ai |
|
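
The Einstein readme above includes a SOQL query against EventLogFile through the REST `/services/data/vXX.0/query` endpoint. A hedged Python sketch of issuing that query with requests; the instance URL and access token are placeholders you would replace with your org's values:

```python
import requests

# placeholders: a real call needs your org's instance URL and a valid OAuth token
INSTANCE = "https://yourInstance.salesforce.com"
ACCESS_TOKEN = "<access token>"

soql = (
    "SELECT Id, EventType, LogDate, LogFileLength, LogFile "
    "FROM EventLogFile WHERE EventType = 'ReportExport'"
)

resp = requests.get(
    f"{INSTANCE}/services/data/v46.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": soql},
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["EventType"], record["LogDate"], record["LogFileLength"])
```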
MD | md sehatyuk mobile app | front_end |
|
geoseg | geoseg a computer vision package for automatic building segmentation and outline extraction table of contents a href requirements requirements a a href organization organization a a href models models a a href usage usage a a href performance performance a a href visualization visualization a a href todo todo a a href citation citation a requirements pytorch 0 4 1 python 3 organization geoseg data original image tiles dataset image mask slices from data checkpoint pre trained models logs curve raw snapshot speed csv result quantitative qualitative result src init py models network archs fcns unet etc estrain py losses py metrics py runner py test py train py vision py models details src models archs md usage download repo git clone https github com huster wgm geoseg git download data nz32km2 google drive https drive google com open id 1pnkglrt8j9h4cx9iys0bh9vamqs kotz or del baidu yun https pan baidu com s 1ujpzi8cgh h5kszhr1 bza del download data vaihingen isprs http www2 isprs org commissions comm3 wg4 2d sem label vaihingen html details about the datasets can be found at a href citation citation a download pre trainded models fcns on nz32km2 binary building segmentation google drive https drive google com open id 1mh8tspqcrj9anezlyixjzzrggg8dfrl on isprs vaihingen 6 class segmentation google drive https drive google com open id 1esdecmv5mx1vqsl0jyfeqvdyfm g 1k only fcn8s 16s and 32s others here checkpoint step by step tutorial jupyter notebook link how to ipynb performance patch performance result patchperforms csv area performance result areaperforms csv computational efficiency logs speed csv visualization snapshots logs snapshot learning curve logs curve br net on nz32km2 br net result single br net canny segmap edge 0 png todo update training testing data add support for more dataset citation nz32km2 dataset the location scale resolution and preprocessing of the nz32km2 dataset please refer to paper link https www mdpi com 2072 4292 10 8 1195 htm article wu2018boundary title a boundary regulated network for accurate roof segmentation and outline extraction author wu guangming and guo zhiling and shi xiaodan and chen qi and xu yongwei and shibasaki ryosuke and shao xiaowei journal remote sensing volume 10 number 8 pages 1195 year 2018 publisher multidisciplinary digital publishing institute isprs vaihingen dataset the location scale resolution and preprocessingof the isprs vaihingen dataset please refer to paper link https www mdpi com 2072 4292 11 9 1051 htm article wu2019stacked title a stacked fully convolutional networks with feature alignment framework for multi label land cover segmentation author wu guangming and guo yimin and song xiaoya and guo zhiling and zhang haoran and shi xiaodan and shibasaki ryosuke and shao xiaowei journal remote sensing volume 11 number 9 pages 1051 year 2019 publisher multidisciplinary digital publishing institute source code if you use the code for your research please cite the paper link https arxiv org pdf 1809 03175 pdf article wu2018geoseg title geoseg a computer vision package for automatic building segmentation and outline extraction author wu guangming and guo zhiling journal arxiv preprint arxiv 1809 03175 year 2018 | ai |
|
Amazon-Alexa-Reviews | amazon alexa reviews using natural language processing data visualizations and classification algorithms of machine learning about the data this dataset consists of nearly 3000 amazon customer reviews input text star ratings date of review variant and feedback of various amazon alexa products like alexa echo echo dots alexa firesticks etc for learning how to train a machine for sentiment analysis what you can do with this data you can use this data to analyze amazon s alexa products discover insights into consumer reviews and assist with machine learning models you can also train your machine learning models for sentiment analysis and analyze customer reviews how many positive reviews and how many negative reviews source extracted from amazon s website inspiration your data will be in front of the world s largest data science community what questions do you want to see answered | nlp sentiment-analysis data-analysis data-visualization bag-of-words feature-extraction modelling evaluation-metrics hyperparameter-tuning eda data-science | ai |
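
The Amazon-Alexa-Reviews readme above suggests counting positive versus negative feedback and training a sentiment model. A minimal sketch with pandas and scikit-learn; the file name and the `verified_reviews` / `feedback` column names follow the usual layout of the Kaggle TSV and are assumptions, not guaranteed by the readme:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# column names assume the Kaggle "Amazon Alexa Reviews" TSV layout
df = pd.read_csv("amazon_alexa.tsv", sep="\t")
print(df["feedback"].value_counts())   # 1 = positive, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    df["verified_reviews"].fillna(""), df["feedback"],
    test_size=0.2, random_state=42, stratify=df["feedback"])

# bag-of-words features plus a simple linear classifier
vec = CountVectorizer(stop_words="english")
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(vec.transform(X_test))))
```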
Zynq-Design-using-Vivado | embedded system design flow on zynq labs outline the purpose of the lab exercises of embedded system design flow on zynq is to walk you through a complete hardware and software processor system design each lab will build upon the previous lab the following diagram represents the completed design of all the labs in this workshop shown below p align center img src pics readme completed design jpg p p align center i completed design i p source files setup to use the source files for each of the labs in this workshop you have to download or clone this repository from github on the main github webpage for a repository you can select clone or download and select download zip to download an archive of the repository you can then extract this to a folder on your local machine if you prefer to use git you can clone this repository git clone https github com xupgit zynq design using vivado git in the instructions for the labs sources refers to the sources directory in this respoitory once you have copied or cloned it to a local directory labs refers to the location which you will use as your workspace for the labs in the workshop note board support for the pynq z1 and pynq z2 are not included in vivado by default the relevant files need to be extracted and saved to vivado installation data boards board files these files can be downloaded from pynq z1 board files https www xilinx com support documentation university vivado workshops vivado adv embedded design zynq materials 2018x pynqz1 pynq z1 zip pynq z2 board files https www xilinx com support documentation university vivado workshops vivado adv embedded design zynq materials 2018x pynqz2 pynq z2 zip hardware setup pynq z1 pynq z2 connect a micro usb from the board to the pc make sure that a jumper is connected to jtag between jp1 1 and jp1 2 and another one of them should be connected across the usb pins between j9 2 and j9 3 zybo make sure that the jp7 is set to select usb power and jp5 is set to jtag make sure that a micro usb cable is connected to the jtag prog connector next to the power supply connector zedboard make sure that two micro usb cables are used between the pc and the prog and the uart connectors of the board and that the board is placed in the jtag mode mio6 mio2 jumpers are in the dn position labs overview lab 1 in this lab you will use ip integrator to create a processing system based design consisting of the following arm cortex a9 core ps uart for serial communication ddr3 controller for external ddr3 sdram memory p align center img src pics readme l1view jpg width 40 height 80 p p align center i processor design of this lab i p lab 2 this lab guides you through the process of extending the processing system you created in the previous lab by adding two gpio general purpose input output ips p align center img src pics readme l2view jpg width 80 height 80 p p align center i extend the system from the previous lab i p lab 3 this lab guides you through the process of creating and adding a custom peripheral to a processor system by using the vivado ip packager you will create an axi4lite interface peripheral you will extend the lab 2 hardware design by creating and adding an axi peripheral to the system and connecting it to the leds on the zynq board you are using you will use the ip packager to generate the custom ip next you will connect the peripheral to the system and add pin location constraints to connect the led display controller peripheral to the on board led display finally you will add bram 
controller and bram before generating the bitstream p align center img src pics readme l3view jpg width 80 height 80 p p align center i design updated from the previous lab i p lab 4 this lab guides you through the process of writing a basic software application the software you will develop will write to the leds on the zynq board an axi bram controller and associated 8kb bram were added in the last lab the application will be run from the bram by modifying the linker script for the project to place the text section of the application in the bram you will verify that the design operates as expected by testing in hardware the design was extended at the end of the previous lab to include a memory controller and the bitstream should now be available a basic software application will be developed to access the leds on the zynq boards lab 5 this lab guides you through the process of writing a software application that utilizes the private timer of the cpu you will refer to the timer s api in the sdk to create and debug the software application the application you will develop will monitor the dip switch values and increment a count on the leds the application will exit when the center push button is pressed you will use the hardware design created in lab 4 to use cpu s private timer see figure you will develop the code to use it p align center img src pics readme l5view jpg width 80 height 80 p p align center i final design i p | os |
|
wine_food_pairing | wine food pairing introduction in this repository we build an nlp based method for pairing food with wine this is spread over two notebooks wine food pairing data prep ipynb this is the notebook file in which we create wine and food embeddings and prepare all the components we need to produce wine suggestions wine food pairings ipynb this is the notebook file in which we lay out the rules for pairing wine with food and visualize the results the raw data for these notebooks comes from two different sources the corpus of wine reviews was too large to be added to this repository you can find it at https www kaggle com roaldschuring wine reviews if you are interested in scraping more wine reviews please see the scraper used to mine the data from www winemag com in this github respository https github com roaldschuring studying grape styles the corpus of food reviews can be downloaded at https www kaggle com snap amazon fine food reviews other files in this repository include varieties all geos normalized csv a table normalizing the various geography tags of the wines found on winemag com list of foods csv a list of foods used to establish boundaries for nonaroma values in our foods technologies python jupyter notebook project description pairing wine with food is somewhat of a dark art what ultimately makes for great pairings is a delicate balance between the body non aroma and aroma characteristics in the wine and in the food in this repo we use data science techniques and the prevailing theory on wine food pairing to build a wine pairing engine getting started 1 clone this repo 2 run the web scraper available in the other repository outlined above to get a full and fresh set of wine reviews 3 download the amazon fine foods dataset 4 run wine food pairing data prep ipynb to create all the data artefacts needed to produce wine pairings 5 run wine food pairings ipynb to produce and visualize wine pairings authors roald schuring | ai |
|
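
The wine_food_pairing readme above builds word embeddings from wine and food review text and pairs food with wine by similarity. A toy sketch of that idea with gensim's Word2Vec (gensim >= 4 API); the sentences and descriptor list are placeholders, not the repository's trained model:

```python
from gensim.models import Word2Vec

# toy corpus standing in for the tokenized wine-review and food-review text
sentences = [
    ["grilled", "salmon", "with", "lemon", "butter"],
    ["crisp", "citrus", "sauvignon", "blanc", "with", "lemon", "notes"],
    ["rich", "oaky", "chardonnay", "with", "butter", "and", "vanilla"],
    ["dark", "cherry", "tannic", "cabernet", "with", "grilled", "steak"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1,
                 epochs=200, seed=1)

# rank candidate wine descriptors by cosine similarity to a food term
food = "salmon"
descriptors = ["citrus", "oaky", "tannic"]
ranked = sorted(descriptors, key=lambda w: model.wv.similarity(food, w),
                reverse=True)
print(ranked)
```

With a corpus this small the ranking is meaningless; the point is only the shape of the pipeline (train embeddings, then score descriptor similarity).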
gcp-iot-core-examples | microchip examples for google cloud platform iot core summary this project contains examples and tools for setting up a secure iot node with iot core using the atecc508a or atecc608a and winc1500 with a selection of energy efficent microcontrollers security devices atecc508a http www microchip com wwwproducts en atecc508a atecc608a http www microchip com wwwproducts en atecc608a connectivity atwinc1500 http www microchip com wwwproducts en atwinc1500 802 11 b g n module with integrated tls 1 2 stack atsamw25 http www microchip com wwwproducts en atsamw25 integrated module featuring atwinc1500 atsamd21g18a atecc508a microcontrollers or socs atsamd21 arm cortex m0 atsamg55 arm cortex m4 atsamw25 arm cortex m0 raspberry pi project organization these examples were built with atmel studio 7 http www atmel com microsite atmel studio atsamd21 quick start boards gcp iot core samd21 atsln atsamg55 quick start boards gcp iot core samg55 atsln atsamw25 quick start boards gcp iot core samw25 atsln getting started the first step is ensuring that you have a google cloud iot core https console cloud google com account set up with iot core the tutorials will walk you through the initial configuration of your account and creation of your first device registry 1 iot core getting started guide https cloud google com iot docs how tos getting started 2 iot core quick start https cloud google com iot docs quickstart examples fan controller this example features a number of interconnected devices to demonstrate data acquisition control and integration into google iot core hardware featured atecc508a security cryptographic authentication atwinc1500 wifi connectivity emc1414 temperature sensor emc2301 5v pwm fan controller license for more details of the included software and licenses please reference license md the included software is covered by the microchip license with the exception of the following asf headers and winc1500 driver software are covered by the bsd 3 clause license eclipse paho mqtt client is covered by the eclipse distribution license v 1 0 bsd 3 clause parson json c library is covered by the mit free software license | iot microchip google-iot gcp-iot atecc508 atecc608 mqtt tls winc1500 security cryptography elliptic-curves | server |
PB-LLM | pb llm partially binarized large language models yuzhang shang https 42shawn github io zhihang yuan http hahnyuan com qiang wu zhen dong https dong zhen com equal contribution this work explores network binarization a radical form of quantization compressing model weights to a single bit specifically for large language models llms compression due to previous binarization methods collapsing llms we propose a novel approach partially binarized llm pb llm which can achieve extreme low bit quantization while maintaining the linguistic reasoning capacity of quantized llms specifically our exploration first uncovers the ineffectiveness of na ve applications of existing binarization algorithms and highlights the imperative role of salient weights in achieving low bit quantization thus pb llm filters a small ratio of salient weights during binarization allocating them to higher bit storage i e partially binarization pb llm is extended to recover the capacities of quantized lmms by analyzing from the perspective of post training quantization ptq and quantization aware training qat under ptq combining the concepts from gptq we reconstruct the binarized weight matrix guided by the hessian matrix and successfully recover the reasoning capacity of pb llm in low bit under qat we freeze the salient weights during training explore the derivation of optimal scaling factors crucial for minimizing the quantization error and propose a scaling mechanism based on this derived scaling strategy for residual binarized weights those explorations and the developed methodologies significantly contribute to rejuvenating the performance of low bit quantized llms and present substantial advancements in the field of network binarization for llms the paper is available at arxiv https arxiv org abs 2310 00034 tested models huggingface models facebook opt 125m facebook opt 1 3b facebook opt 6 7b huggyllama llama 7b huggyllama llama 13b usage environment setting if you use conda you can create a new environment and install the dependencies with the following commands shell conda create n binary llm python 3 10 pip install the python dependencies shell pip install torch transformers lm eval accelerate tensorboardx bitsandbytes sentencepiece note python version must 3 10 ptq gptq pb the gptq pb is implemented in the gptq pb gptq pb folder please go to the folder and run the script with the desired arguments usage run py h plot load quantized seed seed nsamples nsamples percdamp percdamp low frac low frac blocksize blocksize groupsize groupsize salient metric magnitude hessian high bit high bit minlayer minlayer maxlayer maxlayer quant only quant only invert save disable gptq log wandb model wikitext2 ptb c4 xnor sign no 2bit 4bit prune positional arguments model model to load for example huggyllama llama 7b wikitext2 ptb c4 where to extract calibration data from xnor sign no 2bit 4bit prune quantization method xnor is the method used in paper prune is the method used in sparsegptq low frac low frac fraction of binarized weight salient metric magnitude hessian metric to measure salient weights for example shell cd gptq pb for llama 7b cuda visible devices 1 python run py huggyllama llama 7b c4 xnor low frac 0 5 high bit 8 salient metric hessian cuda visible devices 2 python run py huggyllama llama 7b c4 xnor low frac 0 8 high bit 8 salient metric hessian cuda visible devices 3 python run py huggyllama llama 7b c4 xnor low frac 0 9 high bit 8 salient metric hessian cuda visible devices 0 python run py huggyllama llama 
7b c4 xnor low frac 0 95 high bit 8 salient metric hessian qat the qat for pb llm is implemented in the experiments experiments folder for example shell for opt 1 3b cuda visible devices 4 5 xdg cache home data shangyuzhang python experiments column quant frozen outliers py binarization method xnor outlier model save dir checkpoints opt1 3b granularity whole model model id facebook opt 1 3b train step 2000 dataset red pajama it will automatically evaluated on 7 zero shot qa tasks | neural-networks quantization | ai |
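
The PB-LLM readme above describes partially binarized quantization: a small fraction of salient weights stays at higher precision while the rest is binarized with a scaling factor. A conceptual PyTorch sketch of that idea, using weight magnitude as the salience metric; this is an illustration of the concept, not the repository's implementation:

```python
import torch

def partially_binarize(weight: torch.Tensor, salient_frac: float = 0.05):
    """Binarize a weight matrix except for the top `salient_frac` of weights
    (by magnitude), which are kept at full precision."""
    flat = weight.abs().flatten()
    k = max(1, int(salient_frac * flat.numel()))
    threshold = flat.topk(k).values.min()
    salient_mask = weight.abs() >= threshold

    # scaling factor for the binarized part: mean |w| of the non-salient weights
    alpha = weight[~salient_mask].abs().mean()
    binarized = alpha * torch.sign(weight)

    # salient weights pass through untouched; the rest become +/- alpha
    return torch.where(salient_mask, weight, binarized)

w = torch.randn(4, 8)
w_q = partially_binarize(w, salient_frac=0.1)
print((w != w_q).float().mean())  # fraction of weights that were binarized
```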
Vuejs-Vuetify-UI-Design-Sprint-Report-System | sprint project setup npm install compiles and hot reloads for development npm run serve compiles and minifies for production npm run build lints and fixes files npm run lint customize configuration see configuration reference https cli vuejs org config | os |
|
WeChat-MiniProgram-WebAR | 1 chinese readme https zhuanlan zhihu com p 72617098 2 chinese source code analysis https zhuanlan zhihu com p 74438078 updated date update 2022 07 23 new use an experimental worker on ios please see the image ar and 3d mask fixed issue image ar cannot run on ios for a while 2021 12 11 new added a image tracker using opencv webassembly please see the image ar using opencv 2021 09 07 new added a color tracker mode please see the color ar 2021 08 15 new added a video mask mode for image ar this is a css 3d transformation which does not require three js please see the image ar and video mask update replace the spirit geometry with a plane geometry 2021 04 03 update the access a camera mode of image ar is recoverd 2021 03 15 new the display of the ar mask is changed from 2d to 3d by three js update because the access a camera mode is slow it is removed 2019 08 16 update the project structure has been modified the color tracker and object tracker are removed fix access a camera mode that does not work properly on android 2019 08 06 fix issue when function wx canvastotempfilepath is called frequently on android wechat wechat will be crashed 2019 08 01 update the perspective transform is achieved 2019 07 15 update the nft natural feature tracking is achieved 2019 07 08 new the affine transform is achieved introduction on wechat web ar this is a wechat web ar on july 5 2019 wechat miniprogram supported ar it was added a new api named cameraframelistener cameraframelistener api https developers weixin qq com miniprogram dev api media camera cameracontext oncameraframe html we can create ar effects with the new api this demo demonstrates a ar tracker effect using tracking js and jsfeat library the tracking js brings computer vision algorithms and techniques into browser environment the jsfeat is also a javascript computer vision library we can do real time image and face detection tracking js https trackingjs com and jsfeat https inspirit github io jsfeat index page of the wechat mini program avatar screenshot index jpg avatar screenshot index 2 jpg image ar and 3d mask ios wechat version number 8 0 24 or above it will use an experimental worker android and other it will not use an experimental worker use the demo to scan a pattern image below avatar face pattern jpg a cat beard is on the pattern image avatar screenshot 1 1 jpg a effect of translating and scaling avatar screenshot 1 2 jpg a effect of rotating avatar screenshot 1 3 jpg image ar using opencv this is the same as above supports image rotation the image is rotated by 30 degrees avatar screenshot 5 1 jpg the image is rotated by 90 degrees avatar screenshot 5 2 jpg image ar and video mask use the demo to scan a rotating image below avatar screenshot 4 1 jpg a video player is on and aligned with the image avatar screenshot 4 2 jpg color ar use the demo to scan a yellow color expect a effect below avatar screenshot 5 color jpg a effect of hiddening avatar screenshot 5 mask jpg custom the color of the highlighted area avatar screenshot 5 setting jpg face ar use the demo to scan a face expect a effect below avatar screenshot 2 1 jpg a effect of translating and scaling avatar screenshot 2 2 jpg because the landmarks of the demo are simple and basic only a effect of translating and scaling is on a rotating image avatar screenshot 2 3 jpg how to replace the cat beard image you may replace the default url of a image for 2d mask file package image tracker pages photo photo js and package face tracker pages photo photo js 
javascript a url of sprite image const modelurl utils cat beard png the width and height of the modelurl image should be 256 x 256 512 x 512 and 1024 x 1024 etc how to replace the pattern image for image ar file package face tracker utils imagebusiness js javascript const patternimageurl face pattern jpg a pattern image avatar face pattern jpg how to put a image on other positions for image ar select a track point on a pattern image the point is used to set the cat beard image file package image tracker utils modelbusiness js javascript a index of a track point on a pattern image const trackpoint x 185 the width of the pattern image is 375 y 224 the height of the pattern image is 375 how to put a image on other positions for face ar this is a map of the 31 keypoints of a face landmarks avatar screenshot 3 jpg for example a number 27 and number 29 are the sides of the mouth file package face tracker utils modelbusiness js javascript index of the track points of the face const trackpointa index of a landmark id 27 x coordinate x 155 69898111309 the width of the face image is 375 const trackpointb index of a landmark id 29 x coordinate x 216 53075265284997 the width of the face image is 375 known issues the ar demo is very slow on ios wechat | wechat wechat-mini-program webar ar natural-feature-tracking markless-augmented-reality augmented-reality | ai |
Home-Cloud-Storage | home cloud storage the system is outdated and will be updated soon | cloud |
|
teddit | this is only a mirror repo main repository at codeberg https codeberg org teddit teddit please submit issues and prs on codeberg teddit teddit net https teddit net a free and open source alternative reddit front end focused on privacy inspired by the nitter https github com zedeus nitter project no javascript or ads all requests go through the backend client never talks to reddit prevents reddit from tracking your ip or javascript fingerprint unofficial api https codeberg org teddit teddit wiki teddit api rss json support no rate limits or reddit account required lightweight teddit frontpage 30 http requests with 270 kb of data downloaded vs reddit frontpage 190 requests with 24 mb self hostable anyone can setup an instance an instance can either use reddit s api with or without oauth so reddit api key is not necessarily needed join the teddit discussion room on matrix teddit matrix org https matrix to teddit matrix org xmr 832ogrwuoss2jgyg7wjtqshidk7dergndfpenq9dzmghnxqtjrby1xgbqc3gw3gaifrm9e84j91vdmzrjosj32nkaznacej instances https teddit net https teddit net official instance community instances https teddit ggc project de https teddit ggc project de https teddit kavin rocks https teddit kavin rocks https teddit zaggy nl https teddit zaggy nl https teddit namazso eu https teddit namazso eu https teddit nautolan racing https teddit nautolan racing https teddit tinfoil hat net https teddit tinfoil hat net https teddit domain glass https teddit domain glass ibarajztopxnuhabfu7f onion http ibarajztopxnuhabfu7fg6gbudynxofbnmvis3ltj6lfx47b6fhrd5qd onion xugoqcf2pftm76vbznx4 i2p http xugoqcf2pftm76vbznx4xuhrzyb5b6zwpizpnw2hysexjdn5l2tq b32 i2p changelog see changelog md installation docker compose method console wget https codeberg org teddit teddit raw branch main docker compose yml docker compose build docker compose up teddit should now be running at http localhost 8080 docker image is available at https hub docker com r teddit teddit https hub docker com r teddit teddit environment variables the following variables may be set to customize your deployment at runtime variable description domain defines url for teddit to use i e teddit domain com defaults to 127 0 0 1 use reddit oauth boolean if true reddit app id must be set with your own reddit app id if false teddit uses reddit s public api defaults to false cert dir defines location of certificates if using https i e home teddit le live teddit net no trailing slash theme automatically theme the user s browser experience options are auto dark sepia or you can set white by setting the variable to empty defaults to auto flairs enabled enables the rendering of user and link flairs on teddit defaults to true highlight controversial enables controversial comments to be indicated by a typographical dagger defaults to true api enabled teddit api feature might increase loads significantly on your instance defaults to true video enabled enables video playback within teddit defaults to true redis enabled enables redis caching if disabled does not allow for any caching of reddit api calls defaults to true redis db sets the redis db name if required redis host sets the redis host location if required defaults to 127 0 0 1 redis password sets the redis password if required redis port sets the redis port if required defaults to 6379 ssl port sets the ssl port teddit listens on defaults to 8088 nonssl port sets the non ssl port teddit listens on defaults to 8080 listen address sets the address teddit listens for requests on defaults to 0 0 0 0 
https enabled boolean sets whether or not to enable https for teddit defaults to false redirect http to https boolean sets whether to force redirection from http to https defaults to false redirect www boolean redirects from www to non www url for example if true teddit will redirect https www teddit com to https teddit com defaults to false use compression boolean if set to true teddit will use the https github com expressjs compression node js compression middleware to compress http requests with deflate gzip defaults to true use view cache boolean if this is set to true view template compilation caching is enabled defaults to false use helmet boolean recommended to be true when using https defaults to false use helmet hsts boolean recommended to be true when using https defaults to false trust proxy boolean enable trust proxy if you are using a reverse proxy like nginx or traefik defaults to false trust proxy address location of trust proxy defaults to 127 0 0 1 nsfw enabled boolean enable nsfw over 18 content if false a warning is shown to the user before opening any nsfw post when the nfsw content is disabled nsfw posts are hidden from subreddits and from user page feeds note users can set this to true or false from their preferences defaults to true post comments sort defines default sort preference options are confidence default sorting option in reddit top new controversal old random qa live defaults to confidence reddit app id if use reddit oauth config key is set to true you have to obtain your reddit app id for testing purposes it s okay to use this project s default app id create your reddit app here https old reddit com prefs apps make sure to create an installed app type of app default is abfyqddc9qph1w manual 1 install node js https nodejs org 1 optional install redis server https redis io highly recommended it works as a cache for reddit api calls 1 optional install ffmpeg https ffmpeg org it s needed if you want to support videos console linux apt install redis server ffmpeg macos brew install redis 1 clone and set up the repository console git clone https codeberg org teddit teddit cd teddit npm install no optional cp config js template config js edit the file to suit your environment redis server npm start teddit should now be running at http localhost 8080 | front_end |
|
FreeRTOS-Cpp | freertos cpp the purpose of this project is to create a modern c 17 header only interface to the freertos kernel api goals of this project include expose freertos modules as classes that allow an object oriented programming interface use templates to simplify queues and statically allocated modules with varying memory requirements make use of the c type system provide a minimal c interface that enables the user to create higher level abstractions the api documentation and examples https jonenz github io freertos cpp index html are intended to mimic the kernel documentation and examples changes to references and examples are intended to reflect how one would use the c interface instead of the original interface repository structure cmake examples config eventgroups kernel messagebuffer mutex queue semaphore streambuffer task timer freertos cpp cmakelists txt include freertos eventgroups hpp kernel hpp messagebuffer hpp mutex hpp queue hpp semaphore hpp streambuffer hpp task hpp timer hpp freertos kernel cmake directory that contains auxiliary cmake modules this is used to provide a cmake configuration for the freertos kernel examples directory that contains all of the examples in the api documentation there is a generic config file that s needed to ensure all of the examples correctly compile freertos cpp directory that contains the interface library this is the only directory that is needed to make use of this library freertos kernel directory where the freertos kernel is cloned as a submodule from the official git repo this version of the kernel is not required to use the project but it is the version that is tested for compilation of examples usage the recommended way of using this project is to add it or your fork of it as a submodule in the desired project then simply add freertos cpp include as an include path in the project a simple cmake configuration file is also provided this project makes use of c 17 language features so be sure to enable c 17 features for your compiler | c-plus-plus c-plus-plus-17 freertos freertos-kernel embedded | os
IMPROVED-VEHICLE-COUNTING-AND-CLASSIFICATION-ON-MONOCLUAR-TRAFFIC-VIDEO-SEQUENCES | improved vehicle counting and classification on monocular traffic video sequences this project is a computer vision based vehicle counting system the system is capable of performing vehicle detection tracking counting classification into light medium and heavy speed estimation and traffic flow estimation it harnesses the power of computer vision to get deeper insights into our traffic the focus is on development of vehicle counting techniques and their comparative analysis on datasets getting started download a copy of the project onto the system using a web browser or terminal commands unzip the project and you are good to go prerequisites python v3 5 br opencv open source computer vision v3 4 br anaconda create a separate environment for your project br use the following commands to install the packages into your environment br conda env create f environment yaml br source activate cv br or you can install everything globally search for step by step guides to install opencv the dependencies will be installed on the way br files in the box why do i see so many files and what are their roles br here s an overview of the files br feature based detection br vehicle detection kmeans py vehicle detection using k means clustering on fast features br vehicle detetcion hclustering py vehicle detection using hierarchical clustering on fast features br blob based detection and tracking br single ref line py vehicle counting using single reference line br multiple reference lines py vehicle counting using multiple reference lines br region based py region based vehicle counting system includes a bonus speed estimation module as well br run py the final system multiple reference lines based counting coupled with speed estimation and traffic flow estimation br all the files have the module for vehicle classification and video writing let s run this thing activate your environment change the directory to the project folder create an input and results folder place the input video in the input folder run the file using python br example br python run py br python single ref line py br | ai
|
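a minimal opencv sketch of the reference line counting idea described in the row above, assuming background subtraction and a fixed horizontal line; the video path, line position and area threshold are placeholder values, and the repository's own scripts (feature clustering, tracking, classification, speed estimation) do considerably more, including avoiding the double counting this naive version suffers from.

```python
# minimal sketch: count blobs crossing a horizontal reference line with opencv
# placeholder path, line position and area threshold; not the repository's actual pipeline
import cv2

cap = cv2.VideoCapture("input/traffic.mp4")          # assumed input video path
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=50)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
line_y = 300        # assumed y position of the reference line
min_area = 1500     # assumed minimum blob area to count as a vehicle
count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # [-2] keeps this working on both opencv 3.x and 4.x return signatures
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cy = y + h // 2
        if abs(cy - line_y) < 5:   # blob centroid inside a narrow band around the line
            count += 1             # real counting needs tracking so one vehicle is not counted twice
    cv2.line(frame, (0, line_y), (frame.shape[1], line_y), (0, 0, 255), 2)
    cv2.putText(frame, "count " + str(count), (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("counting", frame)
    if cv2.waitKey(30) & 0xff == 27:   # esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```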
dataengineering-db | data engineering information relating to topics on data engineering data infrastructure data storing etc also an in depth look at data analytics data pipelines and business intelligence through the eyes of big data technologies and frameworks such as apache spark apache cassandra and embulk references data warehouse toolkit third edition ralph kimball margy ross https www amazon com data warehouse toolkit definitive dimensional dp 1118530802 ref pd lpo sbs 14 t 0 encoding utf8 psc 1 refrid 594qz596bptx3yc1bpy0 essential sql alchemy 2nd edition jason myers rick copeland https www amazon com essential sqlalchemy rick copeland dp 0596516142 technologies at a glance apache spark https spark apache org apache cassandra http cassandra apache org embulk http www embulk org docs | data-engineer data-infrastructure data-analysis data-warehouse sqlalchemy-database sqlalchemy sqlite3 | server |
tkeel | h1 align center tkeel h1 h5 align center next generation iot open source platform h5 h6 align center high performance high security and easy to use h6 div align center go report card https goreportcard com badge github com tkeel io tkeel https goreportcard com report github com tkeel io tkeel github release latest semver https img shields io github v release tkeel io tkeel github https img shields io github license tkeel io tkeel style plastic godoc https godoc org github com tkeel io tkeel status png http godoc org github com tkeel io tkeel div div align center img png docs images img system png div tkeel is a strong and reusable iot platform that helps you build solutions quickly the architecture is based on a microservices model providing a pluggable architecture and a data plane that is stable and quick responsive solving the difficult problems of building an application with high performance from device to application modular access etc readme zh md let s start quick installation of the tkeel platform via the cli https github com tkeel io cli tool example https github com tkeel io tkeel tree main example will help you quickly understand how to use our tkeel iot open platform the official website documentation will have details from installation to usage details architecture perhaps you are interested in the tkeel iot platform let me give you a brief introduction div align center img png docs images img layer png i data selectable paragraph architecture i div resource something related to data storage which can be any database you use core like the name suggests this is the data core of the entire platform providing some form of data organisation as well as processing methods provides different forms in which data can be organised such as time series attribute relationships etc making data into easily understood objects that can be easily constructed and developed data interaction is resolved by means of snapshots and subscriptions event data service provides the pluggable capability that applications need plus some core functionality message routing tenant management rights control interface easy tools and friendly interfaces to application through encapsulation application applications of different volumes can be everything your existing platform has to offer existing platforms can simply use the data they need by calling the api provided by interface reasons to trust us keep it simple the tkeel platform summarises the generic problems encountered in iot development over the years as well as some tough challenges finally there is this open platform that can solve the pain points in iot development tkeel is good at dealing with data flow and abstraction in distributed systems shielding the underlying complexity and providing simpler more developer oriented abstractions outwards to help users quickly build iot solutions lightweight but powerful the tkeel iot open platform is based on microservices architecture is not vendor bound and offers an efficient development approach that is scalable reliable and high performance with the power of dapr https dapr io a simpler abstraction is provided to the user in the form of sidecar https docs dapr io concepts dapr services sidecar interacting via http and grpc the tkeel platform allows your code to ignore the hosted environment allowing device docked applications plugins highly portable and not restricted by programming language allowing developers to develop to their heart s content using their preferred technology stack pluginisation 
the plugin implementation is based on the openapi convention making deployment simple and lightweight through a cloud native approach the plugin mechanism makes it easy to reuse plugins that you or others have made public we provide an official plugin repository where developers can pick and choose plugins for their own scenarios of course if you can make your plugins publicly available developers with similar needs would appreciate it with plugin guide you will find that implementing plugins is a very simple task focus on data the tkeel iot open platform defines data entities through a data centre tkeel io core https github com tkeel io core and simulates and abstracts real world objects things you can define relational mappings for more faster and easier data refinement through the platform s powerful capabilities with the design of data entities we can adapt this abstract design to messages measurement points objects and relationships and the platform provides multi level and multi latitude data services configuring relational mapping eliminates the need to remember complex message topics and message formats as we provide a high performance data processing solution roadmap we have planned a roadmap https github com tkeel io tkeel issues 30 to do more support for the project shall we talk if you have any suggestions or ideas you are welcome to file an issue https github com tkeel io keel issues at any time we ll look forward to sharing them together to make the world a better place thank you very much for your feedback and suggestions community documents docs development readme md will give you an idea of how you can start contributing to tkeel contributing the development guide docs development developing tkeel md explains how to configure your development environment we have this code of conduct that we expect project participants to follow please read it in full so that you know what will and will not be tolerated find us you may have many questions and we will ensure that they are answered as soon as possible social platforms links email tkeel yunify com weibo tkeel repos repo descriptions tkeel https github com tkeel io tkeel as what you see the code for the platform and an overview of the platform are included cli https github com tkeel io cli the tkeel cli is the main tool for various tkeel related tasks helm https github com tkeel io helm charts helm charts corresponding to tkeel core https github com tkeel io core tkeel s data centre | iot | server |
mobile-policy | category management policy 16 2 improving the acquisition and management of common information technology mobile devices and services the office of management and budget omb is accepting public comment on draft guidance to improve the acquisition and management of mobile services and devices this policy is the third in a series of category management policies to drive greater performance efficiencies and savings in commonly purchased information technology goods and services the public comment period has ended thank you for your comments omb will analyze all feedback submitted during the public comment period and revise the policy as necessary the proposed guidance is now open for public comment on this page the public feedback period will be 30 days closing on april 28 2016 following the public comment period feedback received will be analyzed to help inform the development of any final policy public domain this project is in the worldwide public domain license md as stated in contributing contributing md this project is in the public domain within the united states and copyright and related rights in the work worldwide are waived through the cc0 1 0 universal public domain dedication https creativecommons org publicdomain zero 1 0 all contributions to this project will be released under the cc0 dedication by submitting a pull request you are agreeing to comply with this waiver of copyright interest privacy all comments messages pull requests and other submissions received through official white house pages including this github page may be subject to archiving requirements see the https www whitehouse gov privacy for more information developing on the site locally this site uses jekyll http jekyllrb com sass http sass lang com bourbon http bourbon io neat http neat bourbon io and requires ruby 2 x install dependencies with bundler bundle install and run the site with jekyll bundle exec jekyll serve watch if all goes well visit the site at http localhost 4000 | server |
|
PostAR | what does your app do mobile app allows user to login and out using firebase user name and password authentication footer navigation tabs for navigating to list view that displays all data points camera view and settings view settings view demonstrates gps and gyroscope access and allows for user sign out camera view displays markers of posts within a certain radius of the user these posts are clickable and will show the full message once clicked backend it sets up the api flow that will allow the app to authenticate with the server post location messages to save in database and retrieve messages in a 200 meter radius it is deployed on heroku uses mongodb for the database and jwt for authentication the landing page is hosted at the heroku deployment who worked on it ernesto l cortez tomas hernandez michael fernandez jessica vega what were you able to complete for this handin mobile app implemented facebook login form was created to allow user to input a message to send messages sent using the app post to the database posts within certain radius are represented by a marker on the camera view not done yet but hopefully before class message pops up when user submits a post backend various bug fixes and cleaned up code what are known problems if any with your project mobile app camera view still stretches beyond header and footer and is scrollable this is undesired behavior backend jwt is unencrypted security concerns how would you improve it if you had more time mobile app for this milestone we would have liked to include picture and video messages it would have been nice to implement a more clean and simplified design backend make more security considerations as far as encryption is concerned finish up the landing page with app images and team information pictures links | os |
|
Stm32-FatFs-FreeRTOS | freertos fatfs in stm32 arm cortex m0 this project is designed as an example of a stm32cubeide generated system with freertos multitask features and fatfs file system for controlling an spi connected mmc sd memory card for more information you can take a look here freertos https www freertos org fatfs http elm chan org fsw ff 00index e html features multitask mutexes thread safety cmsis rtos api version 1 fatfs application interface memory state manipulation of folders files error messages overview i created 3 tasks with different priorities and sizes with safe access via mutex to sd card blink simple task for blinking led sdinfo gives information about sd card and files usage memory free space list of files folders with details date type path name sdmanager create read write find folders files note this example was built based on a tutorial created by kiwih https github com kiwih for more information tutorial an sd card over spi https 01001000 xyz 2020 08 09 tutorial stm32cubeide sd card original code fatfs without freertos cubeide sd card https github com kiwih cubeide sd card installation this example was created using stm32f072 discovery kit waveshare sd card module in addition i used a usb uart converter with pl2303hx to read uart messages images image jpg connections since the spi2 is connected to st mems motion sensor one of the properties of the 32f072bdiscovery https www st com en evaluation tools 32f072bdiscovery html overview so i used spi1 and defined sd spi handle to spi1 sh define sd spi handle hspi1 finally in table form the connections are as follows sd adapter side stm32f072 side miso pb4 mosi pb5 sclk pa5 cs pb3 3v 3 3v gnd gnd testing images test png license creative commons zero v1 0 universal | fatfs freertos stm32 spi sd | os
MLBox | mlbox machine learning algorithms implementations blogs decision tree http pytlab github io 2017 07 09 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 e5 86 b3 e7 ad 96 e6 a0 91 naive bayes http pytlab github io 2017 07 11 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e5 ae 9e e8 b7 b5 e6 9c b4 e7 b4 a0 e8 b4 9d e5 8f b6 e6 96 af naive bayes logistic http pytlab github io 2017 07 13 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 logistic e5 9b 9e e5 bd 92 e4 b8 8e e6 a2 af e5 ba a6 e4 b8 8a e5 8d 87 e7 ae 97 e6 b3 95 e4 b8 8a logistic http pytlab github io 2017 07 15 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 logistic e5 9b 9e e5 bd 92 e4 b8 8e e6 a2 af e5 ba a6 e4 b8 8a e5 8d 87 e7 ae 97 e6 b3 95 e4 b8 8b svm http pytlab github io 2017 08 15 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 e6 94 af e6 8c 81 e5 90 91 e9 87 8f e6 9c ba svm e7 ae 97 e6 b3 95 e5 8e 9f e7 90 86 svm http pytlab github io 2017 08 30 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 svm e6 a0 b8 e5 87 bd e6 95 b0 e5 92 8c e8 bd af e9 97 b4 e9 9a 94 svm smo http pytlab github io 2017 09 01 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 svm e4 b8 ad e7 9a 84smo e7 ae 97 e6 b3 95 platt smo svm http pytlab github io 2017 10 15 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 platt smo e5 92 8c e9 81 97 e4 bc a0 e7 ae 97 e6 b3 95 e4 bc 98 e5 8c 96svm http pytlab github io 2017 10 24 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 e6 a0 87 e5 87 86 e4 b8 8e e5 b1 80 e9 83 a8 e5 8a a0 e6 9d 83 e7 ba bf e6 80 a7 e5 9b 9e e5 bd 92 lasso http pytlab github io 2017 10 27 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e5 ae 9e e8 b7 b5 e5 b2 ad e5 9b 9e e5 bd 92 e5 92 8classo e5 9b 9e e5 bd 92 http pytlab github io 2017 11 03 e6 9c ba e5 99 a8 e5 ad a6 e4 b9 a0 e7 ae 97 e6 b3 95 e5 ae 9e e8 b7 b5 e6 a0 91 e5 9b 9e e5 bd 92 | ai |
|
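a compact numpy sketch of the batch gradient ascent update that the logistic regression posts linked above walk through; the toy data, learning rate and iteration count are arbitrary illustration values.

```python
# compact numpy sketch of logistic regression trained with batch gradient ascent
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_ascent(X, y, lr=0.01, iters=500):
    # X: (n_samples, n_features) with a bias column appended, y: (n_samples,) of 0/1 labels
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        error = y - sigmoid(X @ w)   # residual between labels and predicted probabilities
        w += lr * (X.T @ error)      # step up the log-likelihood gradient
    return w

# toy usage with made-up, linearly separable points (last column is the bias term)
X = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 1.0], [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = gradient_ascent(X, y)
print(np.round(sigmoid(X @ w)))   # should recover the labels 1 1 0 0
```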
Powerpointer-For-Local-LLMs | powerpoint generator using python pptx and local large language models this is a powerpoint generator that uses python pptx and local llm s using the oobabooga text generation webui api to generate beautiful and informative presentations powerpointer doesn t use marp it directly creates the powerpoints so you can easily make changes to them or finish it within powerpoint it also makes placeholders for images you can even select between 7 designs to make the powerpoints more beautiful this is a port from my powerpointer which uses the gpt 3 5 openai api powerpointer https github com cybertimon powerpointer the goal was to have this running completely local with no costs using for example a llama based model you can support this by giving this repo a star i optimized the prompts to work with the vicuna and alpaca like models you can select the model type in the powerpointer py file or can create a new prompt format in the prompts py file how it works it asks the user about the information of the powerpoint then it generates the text for the powerpoint using some hacky prompts and the text generation webui api the python pptx library converts the generated text using my powerpoint format into a powerpoint presentation how to use this to make this work clone the repository and install the following packages pip install python pptx regex collection after this start your oobabooga text generation webui instance with an instruct finetuned model and the api extension extensions api 13b models and upwards work the best but you sometimes also receive good output with 7b models if you run oobabooga on a remote machine or on a different port ip you have to open powerpointer py and change the host or url variable when you are there also make sure that the model type for the prompt format is set correctly vicuna or alpaca finally start the powerpoint generator by running python3 powerpointer py known issues because of the limitation of small and sometimes dumb local models the generator easily hallucinates things the generator sometimes ignores the selected slide count the generator sometimes forgets to include the additional info because of my pro code it s complicated to add new templates i m searching for an easier way please report any issues and feel free to make a pull request to fix my code i wrote at night made by cybertimon timon cybertimon ch demo screenshots here are some screenshots from the local generated powerpoints alt text https raw githubusercontent com cybertimon powerpointer for local llms main examples ai sample png alt text https raw githubusercontent com cybertimon powerpointer for local llms main examples ai sample2 png alt text https raw githubusercontent com cybertimon powerpointer for local llms main examples example run png | ai
|
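a minimal python pptx sketch of the last step described above, turning generated title and bullet text into one slide; the layout index, strings and output file name are illustrative assumptions rather than the generator's actual slide format.

```python
# minimal python-pptx sketch: one title-and-content slide from generated text
from pptx import Presentation

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[1])   # layout 1 is the built-in title and content layout
slide.shapes.title.text = "Generated slide title"    # placeholder standing in for the model output
body = slide.placeholders[1].text_frame
body.text = "first generated bullet"
for line in ["second generated bullet", "third generated bullet"]:
    paragraph = body.add_paragraph()
    paragraph.text = line
prs.save("generated.pptx")                           # assumed output file name
```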
awesome-llm-engineering | img src icon png align right awesome llm engineering awesome https cdn jsdelivr net gh sindresorhus awesome d7305f38d29fed78fa85652e3a63e154dd8e8829 media badge svg https github com sindresorhus awesome readme a curated list for large language model engineering on the application layer includes but is not limited to articles documentation papers etc nlp books natural language processing with transformers revised edition https www amazon com natural language processing transformers revised dp 1098136799 gpt 3 building innovative nlp products using large language models https www amazon com gpt 3 building innovative products language dp 1098113624 tools chatgpt https openai com blog chatgpt chatbot developed by openai gpt 3 a i coding assistant https marketplace visualstudio com items itemname arrendy gpt3 vscode extension vscode extension utilizing gpt3 chatgpt helper https marketplace visualstudio com items itemname kiranshah chatgpt helper vscode extension utilizing gpt3 fine tuning fine tuning docs huggingface fine tuning docs https huggingface co docs transformers training openai fine tuning docs https beta openai com docs guides fine tuning fine tuning articles sam altman on ai for the next era https greylock com greymatter sam altman ai for the next era security related fine tuning papers fine mixing mitigating backdoors in fine tuned language models https arxiv org pdf 2210 09545 pdf adversarial fine tuning for backdoor defense https arxiv org pdf 2202 06312v1 pdf fine tuning mitigate backdoor attacks https www researchgate net publication 366423678 fine tuning is all you need to mitigate backdoor attacks prompt engineering articles openai best practices for prompt engineering https help openai com en articles 6654000 best practices for prompt engineering with openai api how to give clear and effective instructions to gpt 3 and codex 3 principles for prompt engineering https www linkedin com pulse 3 principles prompt engineering gpt 3 ben whately learnings from using and working with language models prompt engineering tips and tricks https blog andrewcantino com blog 2021 04 21 prompt engineering tips and tricks what prompt engineering is why it matters and some tips and tricks to help you do it well prompt engineering in gpt3 https www analyticsvidhya com blog 2022 05 prompt engineering in gpt 3 prompting gpt3 openai gpt3 and prompt engineering https medium com swlh openai gpt 3 and prompt engineering dcdc2c5fcd29 deep dive into the prompts prompt engineering approaches https arize com blog course llmops operationalizing llms at scale prompt engineering covers few shot instructor based chain of thought automatic prompting and prompt templates papers promptsource https arxiv org pdf 2202 01279 pdf survey of prompting methods https arxiv org pdf 2107 13586 pdf prefix tuning optimizing continuous prompts for generation https aclanthology org 2021 acl long 353 pdf contribute contributions are always welcome please read the contribution guidelines contributing md first | gpt-3 machine-learning nlp prompt-engineering | ai |
prompt-engine | prompt engine this repo contains an npm utility library for creating and maintaining prompts for large language models llms background llms like gpt 3 and codex have continued to push the bounds of what ai is capable of they can capably generate language and code but are also capable of emergent behavior like question answering summarization classification and dialog one of the best techniques for enabling specific behavior out of llms is called prompt engineering crafting inputs that coax the model to produce certain kinds of outputs few shot prompting is the discipline of giving examples of inputs and outputs such that the model has a reference for the type of output you re looking for prompt engineering can be as simple as formatting a question and passing it to the model but it can also get quite complex requiring substantial code to manipulate and update strings this library aims to make that easier it also aims to codify patterns and practices around prompt engineering see how to get codex to produce the code you want https microsoft github io prompt engineering article for an example of the prompt engineering patterns this library codifies installation npm install prompt engine usage the library currently supports a generic promptengine a codeengine and a chatengine all three facilitate a pattern of prompt engineering where the prompt is composed of a description examples of inputs and outputs and an ongoing dialog representing the ongoing input output pairs as the user and model communicate the dialog ensures that the model which is stateless has the context about what s happened in the conversation so far see architecture diagram representation img src https user images githubusercontent com 17247257 178334939 65e0e3ce 39b3 4abc a889 7f2c0fb75f60 png width 500 code engine code engine creates prompts for natural language to code scenarios see typescript syntax for importing codeengine js import codeengine from prompt engine nl code prompts should generally have a description which should give context about the programming language the model should generate and libraries it should be using the description should also give information about the task at hand js const description natural language commands to javascript math code the code should log the result of the command to the console nl code prompts should also have examples of nl code interactions exemplifying the kind of code you expect the model to produce in this case the inputs are math queries e g what is 2 2 and code that console logs the result of the query js const examples input what s 10 plus 18 response console log 10 18 input what s 10 times 18 response console log 10 18 by default codeengine uses javascript as the programming language but you can create prompts for different languages by passing a different codepromptconfig into the constructor if for example we wanted to produce python prompts we could have passed codeengine a pythonconfig specifying the comment operator it should be using js const pythonconfig commentoperator const codeengine new codeengine description examples flowresettext pythonconfig with our description and our examples we can go ahead and create our codeengine js const codeengine new codeengine description examples now that we have our codeengine we can use it to create prompts js const query what s 1018 times the ninth power of four const prompt codeengine buildprompt query the resulting prompt will be a string with the description examples and the latest query formatted with 
comment operators and line breaks js natural language commands to javascript math code the code should log the result of the command to the console what s 10 plus 18 console log 10 18 what s 10 times 18 console log 10 18 what s 1018 times the ninth power of four given the context a capable code generation model can take the above prompt and guess the next line console log 1018 math pow 4 9 for multi turn scenarios where past conversations influences the next turn code engine enables us to persist interactions in a prompt js assumes existence of code generation model let code model generatecode prompt adds interaction codeengine addinteraction query code now new prompts will include the latest nl code interaction js codeengine buildprompt how about the 8th power produces a prompt identical to the one above but with the nl code dialog history js what s 1018 times the ninth power of four console log 1018 math pow 4 9 how about the 8th power with this context the code generation model has the dialog context needed to understand what we mean by the query in this case the model would correctly generate console log 1018 math pow 4 8 chat engine just like code engine chat engine creates prompts with descriptions and examples the difference is that chat engine creates prompts for dialog scenarios where both the user and the model use natural language the chatengine constructor takes an optional chatconfig argument which allows you to define the name of a user and chatbot in a multi turn dialog js const chatengineconfig user ryan bot gordon chat prompts also benefit from a description that gives context this description helps the model determine how the bot should respond js const description a conversation with gordon the anxious robot gordon tends to reply nervously and asks a lot of follow up questions similarly chat engine prompts can have examples interactions js const examples input who made you response i don t know man that s an awfully existential question how would you answer it input good point do you at least know what you were made for response i m ok at riveting but that s not how i should answer a meaning of life question is it these examples help set the tone of the bot in this case gordon the anxious robot now we can create our chatengine and use it to create prompts js const chatengine new chatengine description examples flowresettext chatengineconfig const userquery what are you made of const prompt chatengine buildprompt userquery when passed to a large language model e g gpt 3 the context of the above prompt will help coax a good answer from the model like subatomic particles at some level but somehow i don t think that s what you were asking as with code engine we can persist this answer and continue the dialog such that the model is aware of the conversation context js chatengine addinteraction userquery subatomic particles at some level but somehow i don t think that s what you were asking managing prompt overflow prompts for large language models generally have limited size depending on the language model being used given that prompt engine can persist dialog history it is possible for dialogs to get so long that the prompt overflows the prompt engine pattern handles this situation by removing the oldest dialog interaction from the prompt effectively only remembering the most recent interactions you can specify the maximum tokens allowed in your prompt by passing a maxtokens parameter when constructing the config for any prompt engine js let promptengine new promptengine 
description examples flowresettext modelconfig maxtokens 1000 available functions the following are the functions available on the promptengine class and those that inherit from it command parameters description returns buildcontext none constructs and return the context with parameters provided to the prompt engine context string buildprompt prompt string combines the context from buildcontext with a query to create a prompt prompt string builddialog none builds a dialog based on all the past interactions added to the prompt engine dialog string addexample interaction interaction input string response string adds the given example to the examples none addinteraction interaction interaction input string response string adds the given interaction to the dialog none removefirstinteraction none removes and returns the first interaction in the dialog interaction string removelastinteraction none removes and returns the last interaction added to the dialog interaction string resetcontext none removes all interactions from the dialog returning the reset context context string for more examples and insights into using the prompt engine library have a look at the examples https github com microsoft prompt engine tree main examples folder yaml representation it can be useful to represent prompts as standalone files versus code this can allow easy swapping between different prompts prompt versioning and other advanced capabiliites with this in mind prompt engine offers a way to represent prompts as yaml and to load that yaml into a prompt engine class see examples yaml examples for examples of yaml prompts and how they re loaded into prompt engine contributing this project welcomes contributions and suggestions most contributions require you to agree to a contributor license agreement cla declaring that you have the right to and actually do grant us the rights to use your contribution for details visit https cla opensource microsoft com when you submit a pull request a cla bot will automatically determine whether you need to provide a cla and decorate the pr appropriately e g status check comment simply follow the instructions provided by the bot you will only need to do this once across all repos using our cla this project has adopted the microsoft open source code of conduct https opensource microsoft com codeofconduct for more information see the code of conduct faq https opensource microsoft com codeofconduct faq or contact opencode microsoft com mailto opencode microsoft com with any additional questions or comments statement of purpose this library aims to simplify use of large language models and to make it easy for developers to take advantage of existing patterns the package is released in conjunction with the build 2022 ai examples https github com microsoft build2022 ai examples as the first three use a multi turn llm pattern that this library simplifies this package works independently of any specific llm prompt generated by the package should be useable with various language and code generating models trademarks this project may contain trademarks or logos for projects products or services authorized use of microsoft trademarks or logos is subject to and must follow microsoft s trademark brand guidelines https www microsoft com en us legal intellectualproperty trademarks usage general use of microsoft trademarks or logos in modified versions of this project must not cause confusion or imply microsoft sponsorship any use of third party trademarks or logos are subject to those third party 
s policies | ai |
|
Arabic_tweets_NLP | arabic nlp this is an arabic tweets analysis data description the data is tweets collected through the twitter api using a developer account with a size of 29 000 tweets the goal is to build a model that can classify tweets into groups that we defined sports politics religions economy visualization whatsapp image 2021 12 28 at 3 22 30 pm https user images githubusercontent com 47735276 147579637 8a9387f3 e776 4994 ac78 952a346e1bbe jpeg the words above are the 30 most repeated words in the dataset whatsapp image 2021 12 28 at 2 59 06 pm https user images githubusercontent com 47735276 147579981 0c446f9e b923 4d92 9f3f c4e2ed31b1fb jpeg | ai
|
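a minimal scikit-learn sketch of the stated goal, classifying tweets into the defined groups with tf idf features and a linear model; the tiny example tweets and the choice of classifier are assumptions for illustration, not the project's actual pipeline.

```python
# minimal sketch: tf-idf features + logistic regression for tweet topic classification
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# tiny stand-in corpus; the real dataset has about 29,000 collected tweets
tweets = [
    "الفريق فاز بالمباراة أمس",      # sports
    "الحكومة تعلن ميزانية جديدة",    # politics
    "ارتفاع أسعار النفط اليوم",      # economy
    "خطبة الجمعة في المسجد",         # religion
]
labels = ["sports", "politics", "economy", "religions"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tweets, labels)

# the test tweet shares words with the economy example, so it should map to economy
print(model.predict(["ارتفاع أسعار الذهب"]))
```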
web-dev-2022 | web development 2022 looking to hone your skills as a web developer become a stronger software engineer and implement modern industry practices we are a group of software engineers and teaching fellows fullstack academy https www fullstackacademy com passionate about education this repository contains source code assets and videos for our live workshops held on december 2021 our lineup consists of react hooks https github com iseykim react hooks portfolio building with gatsby js https github com amberabreu codyportfolio boilerplate git workflow project management wireframing and prototyping in figma firebase https github com margaritadanshina todo firebase ethics in engineering patterns in algorithm datastructure setup this repository uses git subtree to pull from remote repositories into a centralized origin to pull latest changes simply run the git pull script bash give execute permission chmod x pull repos run the command pull repos usage we will be doing live demos and pushing changes from our respective workshop and remote repository as such it is recommended to sandbox your own modification and exploration in a separate branch to keep the merge strategy straightforward for instance if you want to follow along with the react hooks demo make a branch first at the end of the workshop switch to main and run pull repos who we are sey kim www linkedin com in sey kim danny lahamar www linkedin com in daniellahamar amberabreu https www linkedin com in amber abreu | front_end |
|
pocket-newton | pocket newton | os |
|
Bunhead-sweets | bunhead sweets web and mobile applications development course project 1 in mysql workbench run the database sql file 2 in mysql workbench run the mock data sql file 3 in mongosh create the bunhead sweets db database 4 in the server directory run the command npm install 5 in the server directory run the command npm run devstart 6 check the application at http localhost 3001 http localhost 3001 7 check the instructions in the charts and reports js file 8 in the client directory run the command npm install 9 in the client directory run the command npm start | front_end
|
vanilla-js-projects | vanilla javascript projects coffee p align center img src http i imgur com 14kw76s jpg p introduction during the first four weeks of the bootcamp we learned how to code with javascript using the nodejs platform but as you probably know javascript s beginnings were in the browser in 1995 https en wikipedia org wiki javascript beginnings at netscape at that time personal computers were starting to become more powerful and there was a realization that the web needed to become more dynamic before that time web browsers were quite dumber than they are now a browser would receive an html page from a web server either dynamically created or as a static file and render the page http www pathinteractive com blog design development rendering a webpage with google webmaster tools making additional requests for every img element therein after rendering a page the possible interactions were limited the user clicks on an a anchor element on the page making the browser issue a get request to the url specified in the href attribute of the anchor effectively loading another page the user fills out a form and submits it making the browser issue either a get or post request depending on the method attribute of the form to the url specified in the action attribute of the form this would also result in another page load this method of creating web sites and rendering them is exactly how you built your reddit clone project https github com decodemtl node express reddit clone using nodejs express and pug on the server side in this model the browser did very little this was fine for a while but with the advent of javascript in browsers in 1995 things would dramatically change browser javascript is the same language you learned while studying nodejs but it offers dynamic functionality that is meant for the browser while nodejs gives you modules to access the computer s file system https nodejs org api fs html create http servers https nodejs org api http html lower level socket servers https nodejs org api net html and a huge library of modules http npmjs com to connect to databases https www npmjs com package mysql read the keyboard https www npmjs com package prompt and even control robots https www npmjs com package johnny five browser javascript gives you the dom http htmldog com guides javascript intermediate thedom the dom document object model is accessible using the globally predefined document variable it is a representation of the html document that your browser has rendered as a tree of javascript objects with the dom you can change any part of an already rendered web page by writing code inside a script tag using the javascript language you already know events and callbacks http htmldog com guides javascript intermediate events through the dom you can make a page dynamic by listening to various events you can execute callback functions on page load page scroll mouse move mouse click keyboard input and many more ajax http htmldog com guides javascript intermediate ajax ajax asynchronous javascript and xml has become a buzzword over the years it refers to a functionality that allows you to dynamically make http requests using javascript code in a script tag without reloading the page throughout the years various apis were added to browser javascript allowing us to do things like drawing on a web page http htmldog com guides javascript advanced canvas store user information in the browser http htmldog com guides javascript advanced localstorage and manipulate the browser history 
https developer mozilla org en us docs web api history api to name a few together this set of functionalities allows us to build things like gmail netflix google maps and pretty much any application we can imagine all running inside a web browser it is this set of functionalities that allows facebook and twitter to show you notifications without having to refresh the page same thing for upvoting things on reddit while staying in place they are collectively referred to as vanilla js http vanilla js com in contrast with the many https angularjs org frameworks http emberjs com and libraries https jquery com that are built on top of them in this course we will be looking at two libraries built on top of browser javascript jquery https jquery com jquery was created as a response to a huge problem at the time namely the discrepancy between javascript implementations in browsers it provided a unified api to access the dom events ajax and more without having to worry about browser differences react https facebook github io react react is a user interface library that was created as a response to the difficulty of managing big browser based applications using the primitive vanilla js functions it turns out that manipulating the dom directly using the document object is error prone at scale and becomes quickly unmanageable react tries to solve this problem in a declarative manner similar to sql rather than calling dom functions to manipulate the content of a page react allows us to define user interface components in a declarative way and automatically takes care of calling the appropriate dom functions the similarity to sql lies in the fact that we never have to tell an sql database how we want things to be computed but simply what to compute think of it as the difference between order by createdat desc and data sort function a b projects before learning any of these libraries we will work on getting familiar with vanilla javascript since everything is built on top of it it will pay off to know how it works in order to do this we will be building three front end projects that run completely in the browser in these projects our web server s only role will be to produce a static index html file that contains a script tag where everything will happen one implication of this is that search engine crawlers like google bot will not be able to view the content of our applications since everything will happen after the page has loaded long after the http request response cycle has ended weather application p align center img src weather app demo gif p we will start by building a first project together it will be a simple weather application that asks the user for their city and dynamically displays a weather box with the current temperature basic weather conditions as well as an icon representing the current state of things flickr browser p align center img src flickr api project gif p even though flickr already has an excellent web app to browse photos we will use the flickr api to create an infinitely scrolling browser based on a search word blackjack game p align center img src blackjack gif p this one is a bit extreme but we re giving you quite a bit of code to help you make sure to personalize it so it doesn t look too generic baby steps before starting to work on the projects we ll have to learn about the building blocks of browser based javascript to follow along create a file called index html with the following structure html doctype html html head head body div id app div script src app js script 
body html then create an empty app js file in which you ll be able to test the functions you will learn about the browser will load app js as soon as it encounters the script tag and will execute the contained javascript in the context of the current page creating and manipulating elements the dom api can be accessed through the document global variable this variable is made available to you in javascript running in the context of the current page and it represents the currently displayed document let s start from the javascript console open your developer tools move to the console tab and write the following javascript document getelementbyid app you ll get something like this getelementbyid document getelementbyid png this is the dom in action we queried the document for an element by its id and since we have a div id app in our page we got that back as a return value now try this javascript document getelementsbytagname body this time notice you get something that looks like an array expand it and hover over the first element you ll see it light up on the page modern browsers have a way to query for elements using a css selector try this javascript document queryselector app you ll get back exactly the same element let s move to our app js and write the following javascript var app document queryselector app var thetitle document createelement h1 thetitle innertext hello world then refresh the page hmm it looks like something is missing when we create a new element using the dom we have to add it to an already existing element or else it won t appear on the page add the following line to complete this javascript app appendchild thetitle refresh the page one more time to see your dynamically created h1 appear on the page another way to accomplish the same thing is using the innerhtml property of an element javascript var app document queryselector app app innerhtml h1 hello world h1 in this case the browser will parse the html string and create the elements on the fly the flipside is that it will remove anything that was previously in app so it s not as useful as appendchild now modify your index html to add a link to a style css file in style css write the following css red color red this says that any element with the class red will have its text color in red then open your console and write the following javascript var h1 document queryselector h1 there s only one right now h1 classlist add red as you do this watch the text of the h1 change from its black default to red sometimes we need to manipulate the styling of an element without adding removing a class name we can do this using the inline style attribute of the element instead of taking a string it s an object with one property for each style try the following in the console after refreshing the page javascript document queryselector h1 style color red it will have the same effect it turns out that often manipulating the classes of an element is cleaner than changing its inline style because it s easier to remove the class by using classlist remove handling events handling events is done by calling the addeventlistener method of a dom element many types of events exist as an example there are mouse events form events keyboard events let s add a button to our page and do something when it is clicked change the code of app js to the following javascript var app document queryselector app add a title var thetitle document createelement h1 thetitle innertext hello world app appendchild thetitle add a button var thebutton document 
createelement button thebutton innertext click me app appendchild thebutton here s the new part thebutton addeventlistener click function thetitle classlist add red then reload the page and try clicking on the button notice the title text change to red change the classlist add to classlist toggle and try clicking the button many times move to your html file and put the following inside the app div html input type text id textbox button click me button then change the whole code of app js to the following javascript var textbox document queryselector textbox var thebutton document queryselector app button css for the element with tag name button inside the element with id app we could also have given the button an id textbox addeventlistener input function console log this value thebutton addeventlistener click function alert the value of the input box is textbox value then reload the page and go to the console tab of your developer tools they are open right type some text in the input box and see the console log s coming up at any point click on the button to get an alert notice the use of this inside the callback to the input event listener when an event handler is called this will always refer to the element on which addeventlistener was called this allows us to re use the same event handler for many elements stopping to listen to an event if we want to stop listening to an event on an element we have to call the removeeventlistener method on that element but to do this we need to have a reference to the function that was passed to addeventlistener if we used an inline function expression then we can t do that let s try with the following code javascript var textbox document queryselector textbox var thebutton document queryselector app button css for the element with tag name button inside the element with id app we could also have given the button an id function printvalue console log this value textbox addeventlistener input printvalue thebutton addeventlistener click function when the button is clicked stop listening to input events on the input box textbox removeeventlistener input printvalue try running this code in the browser and after pressing the button the event listening should stop default event behaviors some events have a default behavior attached to them for example when you click on an a element the browser will load its href property effectively leaving the current page this default behavior can be prevented by accessing the event object and calling its preventdefault method when an event handler gets called it receives the event object as its first argument let s see how that works clear the content of the app div in your html and change app js to the following code javascript var app document queryselector app var thelink document createelement a thelink innertext a link to decodemtl thelink setattribute href http www decodemtl com this is how we set html attributes app appendchild thelink thelink addeventlistener click function event event preventdefault console log prevented browsing to this href by using preventdefault try refreshing the page and clicking on the link and nothing should happen event bubbling the dom tree is a nested structure if you click on a nested element it seems logical that you also clicked on its parent and that parent s parent and so on let s visualize this behavior with an example change the content of the app div to the following html p id theparagraph this is a a id thelink href http www decodemtl com link a p then change app js to the 
following code javascript this gives some height to the app div so we can click it document queryselector app style height 400px this makes the app div visible note that the css background color is written backgroundcolor in the dom document queryselector app style backgroundcolor ccc document body addeventlistener click function console log the body was clicked document queryselector app addeventlistener click function console log app was clicked document queryselector theparagraph addeventlistener click function console log theparagraph was clicked document queryselector thelink addeventlistener click function event event preventdefault we need to do this otherwise we will leave the page if we click the link console log thelink was clicked then refresh the page in your browser and try clicking in different places click on the link then on the text inside the paragraph and then anywhere else on the page with one click up to four event handlers are getting called starting from the element that was clicked and bubbling up to the body at any point in time we can call the event object s stoppropagation method to stop this bubbling try adding event stoppropagation in the different event handlers and observe the result in your browser p align center img src event bubbling png p event delegation warning this topic is extremely important and you should understand it well before moving on let s look at an example by writing the following code inside the app div of our html html h1 event delegation h1 ul id thelist li first list item li li second list item li li third list item li li fourth list item li li fifth list item li ul then let s try to attach an event handler to each list item and print its inner text when it is clicked javascript var listitems document queryselectorall thelist li select all the lis inside thelist it turns out that queryselectorall does not return an array but an array like object called a nodelist here s one way we can iterate over the nodelist items using the array prototype array prototype foreach call listitems function listitem add one event listener per list item listitem addeventlistener click function console log you clicked on this innertext nodelist actually has a foreach method but it s not the same as arrays it doesn t however have map or filter or any of the other useful array methods try running your code in the browser to show yourself that it works even though this code works there are mainly two things that are wrong with it 1 if we dynamically add new lis to the ul after setting up the event listeners clicking them will do nothing ask yourself why 2 setting up multiple event handlers can be extremely resource intensive and we can do much better imagine an image gallery with 100s of images one event handler per image will simply not cut it it s not by accident that we looked at event propagation in the last section using the propagation we can solve these two issues in one shot first off though write the following code in your developer tools console javascript var thelist document queryselector thelist var newitem document createelement li newitem innertext sixth list item thelist appendchild newitem then try clicking on the sixth li that was added dynamically and prove to yourself that nothing happens now let s fix it change the code of app js to the following javascript var thelist document queryselector thelist thelist addeventlistener click function event while this represents the thelist element event has a property called target which represents 
the actual originator of the event we can use this to our advantage first check if the target is an li that is a direct child of the list if event target parentnode thelist we definitely clicked on an li console log you clicked on event target innertext with this code in place we should get the same initial behavior as the previous code one advantage is that we only have a single event handler no matter how many lis now try running the following code in your console javascript var thelist document queryselector thelist var newitem document createelement li newitem innertext sixth list item thelist appendchild newitem then try clicking on the sixth li and notice the difference it gets console logged this is event delegation in a nutshell and you should use it wherever you can this is also a great transition for the next section where we will learn about ajax making http requests in the context of a web page will often result in adding new elements dynamically setting up event delegation will enable us to listen to clicks on those new elements without having to manually attach event handlers to each one of them making http requests browser javascript would be pretty boring if all you could do was manipulate the page what makes things really interesting is the ability to make http requests in the context of an already loaded web page this enables us to load external information then use what we already learned to make that information appear on the page originally this was accomplished with a contraption called xmlhttprequest and the api for it was less than stellar modern browsers offer a much better alternative using the globally available fetch function as a bonus fetch uses promises to give back its results allowing us to write much nicer code the process of fetching external information and updating the page as a result is often called ajax which stands for asynchronous javascript and xml after four weeks of bootcamping we now understand what asynchronous means we also know what javascript means we haven t really looked at xml it turns out to be quite a heavy data format both in terms of its size and its parsing many new apis prefer to use the json format instead since json is native to javascript this is great for us let s look at how we can use fetch at first only to make an http request and then eventually to build something with it let s start by changing the code of app js to the following javascript fetch http www rbcroyalbank com then function response return response text parsing the response as text returns another promise so we chain it then function textresponse console log textresponse try refreshing the browser and look at your console tab cors error cors error png what s that an error what the browser is telling us here is that we re not allowed to see the response from the royal bank site since this request is running in the context of our web browser this makes a lot of sense because such a request would have access to our cookies seeing the response could allow us to retrieve some sensitive information from anyone who loads up our web page in their browser imagine that we were allowed to see the content of this 3rd party site here s a hand waving example of what we could do javascript fetch https www onlinebank com accounts then function response return response text then function textresponse we now have an html page with the user s bank accounts let s send it to ourselves fetch http www evil domain com bank accounts method post body textresponse having setup such a 
page we can then send the link to unsuspecting people and harvest all their banking information browsers prevent this behavior by default and only allow us to look at the response of fetch requests when they are made to our own domain name called the origin a system called cors cross origin resource sharing allows web servers to cooperate with web application builders by setting appropriate headers in the http response a web server can tell the browser that the response can be allowed to be seen by the originator of the ajax request your online bank would never do such a thing but a lot of apis offering publicly available information will enable cors on some of their endpoints one example of such an api is the reddit json api this is probably the last time we will look at reddit in this bootcamp because quite frankly we re starting to be fed up with it let s look at an example of making an ajax call to the reddit json api parsing the result and doing something with it javascript fetch https www reddit com r montreal json then function response return response json parsing as json returns a promise let s chain it then function jsonresponse var posts jsonresponse data children posts foreach function post i console log post i 1 post data title here we are retrieving the front page of r montreal and prining each title on the console try it for yourself from then on the possibilities are endless rather than printing the result in the console we can construct some dom elements and display them on the page let s try to do that write the following code in app js javascript fetch https www reddit com r montreal json then function response return response json parsing as json returns a promise let s chain it then function jsonresponse jsonresponse data children map function post post post data reddit has a weird format create a box for each post var linkbox document createelement p create a link element for each post var link document createelement a link setattribute href post url link setattribute target blank make the link open in a new tab link innertext post title add the link to the paragraph linkbox appendchild link return the paragraph from the map callback return linkbox foreach function linkparagraph document body appendchild linkparagraph try it again by refreshing the browser as you can imagine this ajax code could be executed as a result of an event a timer or anything else that you can implement using javascript conclusion by marrying events ajax calls and dom manipulation we can build fully functioning web applications that run in the browser we ll do that in the next section by building three small web applications one of the things that we ll see while doing this is that directly using the dom can be quite cumbersome and managing the state of our application can quickly get out of hand next week we ll start looking at how to solve some of those problems in a declarative way using the react ui library built by facebook https facebook github io react project 1 weather app introduction for this first project we will be holding your hand throughout we will build a basic version of a weather application that shows the current temperature in a city here s what the final basic version of the project will look like p align center img src weather app demo gif p as discussed in class if you want to use this project as part of your portfolio it would be interesting for you to do more than the basic version some suggestions will be given to you at the end of the workshop instuctions preparing 
to prepare coding for the project we will get ourselves some keys for the two apis we will use dark sky weather api let s get an api key for the dark sky api 1 go to https darksky net dev and signup 2 confirm your email 3 login and find your api key here https darksky net dev account 4 copy your api key somewhere and we ll add it to the code later p align center img src dark sky api key png p google maps geocoding api let s get an api key for the google maps geocoding api 1 go to https developers google com maps documentation geocoding get api key 2 click on get a key 3 enter a name for your application 4 copy your api key somewhere and we ll add it to the code later p align center img src google maps api key png p let s start to code our app will be composed of a basic index html file that has the main user interface and an app js where all the interesting stuff will happen create an index html file with the following content html doctype html html head meta charset utf 8 title weather app title link rel stylesheet href style css head body div id app h1 weather app h1 div class main app form class city form input type text class city input button type button class get weather button get weather button form div class city weather the content diplayed here will be generated by dom operations div div div script src app js script body html this html file lays out the structure of our application as will always be the case everything will happen inside the app div this is better than throwing everything directly in the body think of it as a namespace then let s create the app js that our html is referring to the first thing we ll do is add our api keys to the javascript file before that though let s make sure our api keys are working go to your browser and open the index html file that you just created open the developer tools and move to the console tab then try running the following command making sure to replace the part that says your api key here with your api key javascript fetch https api darksky net forecast your api key here 37 8267 122 4233 then response response json then data console log data oops what happened dark sky is preventing us to make http requests to it from a browser in a real application this would make sense because your api key should be kept secret if we were building a real application we would have to create our own web server that acts as a proxy to the dark sky api this way we could keep our api key hidden on the server and even add our own logic to do rate limiting caching and so on since we are in development mode here we are going to use a shortcut a service called cors anywhere https cors anywhere herokuapp com will allow us to proxy our requests through it and that service will automatically add the appropriate access control allow origin header in its response but wait how can such a service exist isn t it unsafe to allow us to make ajax calls to a site that doesn t allow it in the first place well not really if our browser is making request to the cors anywhere domain then the cookies of the original website will not be passed along so we will get a pretty generic response in our case we re using this to bypass the dark sky protection of our api key we simply have to know that what we re doing would be bad in a real application where our secret api key would become exposed let s try that same request but make it go thru the cors anywhere proxy javascript fetch https cors anywhere herokuapp com https api darksky net forecast your api key here 37 8267 122 
4233 then response response json then data console log data finally we are getting some data the url looks weird but basically everything after the herokuapp com will be passed as the path to the proxy and the proxy will make the request on our behalf from the server side safe of any cookies the google maps api does not have this limitation so we can geocode directly from the browser try it anyway just to be sure that your api key works javascript fetch https maps googleapis com maps api geocode json address montreal key your api key here then response response json then data console log data now that we ve tested our apis let s add the necessary information to our app js file create the file and add the following code in it javascript var darksky api url https api darksky net forecast var darksky api key your api key here var cors proxy https cors anywhere herokuapp com var google maps api key your api key here var google maps api url https maps googleapis com maps api geocode json next we ll create some utility functions that will allow us to access the two apis and return only the data that we need let s start by creating a function called getcoordinatesforcity the function will take a city string as parameter and return a promise for a coordinates object here is the code of the function add it to your app js javascript this function returns a promise that will resolve with an object of lat lng coordinates function getcoordinatesforcity cityname this is an es6 template string much better than verbose string concatenation var url google maps api url address cityname key google maps api key return fetch url returns a promise for a response then response response json returns a promise for the parsed json then data data results 0 geometry location transform the response to only take what we need test your function by going in the browser and refreshing your page since the app js code we are writing is in the global scope the function we just created is available call it like so javascript getcoordinatesforcity montreal then console log and make sure that you see an object with lat lng properties printed out then we ll add our second utility function called getcurrentweather it will take an object with lat lng and query the dark sky api for the current weather we only care about the current weather so we can optimize our api call let s add this to our app js javascript function getcurrentweather coords template string again i hope you can see how nicer this is var url cors proxy darksky api url darksky api key coords lat coords lng units si exclude minutely hourly daily alerts flags return fetch url then response response json then data data currently same idea different api the units si exclude minutely hourly daily alerts flags part in the query string of the url is explained in the dark sky api documentation https darksky net dev docs forecast basically units si means we ll get things back in celcius and kilometers and the exclude says to only send us the currently data it makes the api response smaller and therefore faster to transfer across the wire again let s test our function by refreshing the browser and trying the following code javascript getcurrentweather lat 45 5 lng 73 5 then console log this should print an object in the console that contains a bunch of weather properties for the basic version we will only be using the temperature the two functions we created will make up our basic flow of data we can test them together in the browser s console this way javascript 
getcoordinatesforcity montreal then getcurrentweather then data console log the current temperature is data temperature this should print out the current temperature for the city you asked for warning warning warning before moving on the the next section make sure that you understand everything that we did so far otherwise if you simply copied and pasted what we gave you without understanding it you will not have learned anything if you cannot explain exactly what is going on then ask one of your classmates or a ta to be sure wiring it up to the dom we are building this project following a logical progression if this was a console based project we would be done the last section gave us the data that we needed and we were able to print it on the console however this project also has a user interface component in the browser the user interface is created by using html and css along with javascript for dom manipulations and events this is exactly what we ll do in this section we already wrote the html for the user interface and the css will be left to you as an optional but strongly suggested exercise let s write the javascript code for the ui now first let s create one variable for each dom element we will need to target add the following code to app js after the two functions you wrote in the last section javascript var app document queryselector app var cityform app queryselector city form var cityinput cityform queryselector city input var getweatherbutton cityform queryselector get weather button var cityweather app queryselector city weather notice that we are introducing a new thing here the queryselector method does not only exist on the document object but also on all the elements inside the document if we call queryselector on an element we will only get back elements that are its descendants this is much more robust than querying the whole document now that we have a reference to all the needed dom elements let s wire up an event handler we ll do it a first time the wrong way then we ll see why it s wrong and we ll fix it after what we want to do is setup the app so that when the user clicks on the get weather button we start the process of fetching the data let s add the following code at the end of our app js javascript getweatherbutton addeventlistener click function var city cityinput value grab the current value of the input getcoordinatesforcity city get the coordinates for the input city then getcurrentweather get the weather for those coordinates then function weather cityweather innertext current temperature weather temperature refresh your browser enter a city name and click the get weather button wait a few seconds and you should see a message displayed in the page with the current temperature we just ajaxed now let s see why this is not the best way to setup our event refresh your browser type a city name in the input field and press enter on your keyboard what s happening well by default the browser tries to submit the form using the standard browser mechanism except our form is not meant to be submitted let s fix this first go back to your html and change the button type button to button type submit the button now becomes a submit button for the form this means that now whether you click the button or press enter in the input field the form will first fire off a submit event let s change our event code a little bit to this javascript cityform addeventlistener submit function event this line changes event preventdefault prevent the form from submitting this code doesn 
t change var city cityinput value getcoordinatesforcity city then getcurrentweather then function weather cityweather innertext current temperature weather temperature now refresh the browser and try again you should get a consistent behavior whether you press enter in the input field or click the get weather button we are done with the basics congrats cleaning things up a bit let s clean up our code a bit in two ways first off notice that we re not using the getweatherbutton variable anymore so let s remove it from the code then let s clean up all the global variables that we just polluted our scope with whenver javascript runs in the context of your web page it runs in the global scope as you already know one way to create a new scope in javascript is to write code inside a function it turns out that all the code that we wrote in app js doesn t need to expose any variables to the outside world we only used those variables to help run our logic in order to fix this we will wrap all the code that we wrote inside a contraption called iife or immediately invoked function expression http benalman com news 2010 11 immediately invoked function expression the code of app js will look like this javascript function var darksky api url https api darksky net forecast var darksky api key your api key here var cors proxy https cors anywhere herokuapp com var google maps api key your api key here var google maps api url https maps googleapis com maps api geocode json function getcurrentweather coords var url cors proxy darksky api url darksky api key coords lat coords lng return fetch url then response response json then data data currently function getcoordinatesforcity cityname var url google maps api url address cityname key google maps api key return fetch url then response response json then data data results 0 geometry location var app document queryselector app var cityform app queryselector city form var cityinput cityform queryselector city input var getweatherbutton cityform queryselector get weather button var cityweather app queryselector city weather cityform addeventlistener submit function event event preventdefault prevent the form from submitting var city cityinput value getcoordinatesforcity city then getcurrentweather then function weather cityweather innertext current temperature weather temperature this says here s an anonymous function now run it the advantage is that all the variables and functions we declared are now scoped to this function and will not pollute the global scope since we don t need those variables outside of that scope this is perfectly fine it is a good practice to wrap your code in an iife to prevent it from polluting the global scope do it whenever possible the reason for the extra parentheses is to prevent a syntaxerror as explained in the article linked above possible improvements as discussed in class if you want to use this project for your portfolio you can make a lot of enhancements to it here are some suggestions 1 when we click get weather right now there is no indication that the browser is doing anything useful if the two api calls take more than a few milliseconds to run we will not see what is going on this is bad for the user experience if you look at the gif at the beginning of this seciton you ll notice that after entering a city name the word loading appears below to be replaced by the result when it arrives implement this in your own app 2 add css to make it look nice here the sky is the limit bigger fonts custom fonts custom colors and 
backgrounds you can use the skycons http darkskyapp github io skycons icons since the api returns an icon property in the data you could also look for another set of icons that is more original or in line with the style of your app 3 add more weather information the currently section contains the wind speed wind direction and many other interesting bits of information about the weather find a nice way to display them and do it 4 add a five day forecast to your application with its own styling and icons 5 when geocoding we are taking the first result that google maps api gives us inside this result there is the full name of the location that we requested for example if you query for address montreal the response will contain the string montreal qc canada use this to display it along with your weather information 6 use google places autocomplete https developers google com maps documentation javascript places autocomplete to provide an input box that will suggest options to the user this is better than guessing what the user wanted to type and will make you learn about a new api project 2 flickr api photo browser it s your turn now based on what you learned while doing the previous project you will build a flickr photo browser here s an example of what a super basic version of the browser will look if you want to use this as part of your portfolio you should definitely follow some of the improvements that we suggest p align center img src flickr api project gif p 1 get an api key here https www flickr com services api misc api keys html you ll have to get a yahoo account yes we know for this app you only need the key and not the secret flickr api key flickr api key png 2 read the documentation for the flickr search api https www flickr com services api flickr photos search html even though it mentions xml results you can get json back by using this url format https api flickr com services rest method flickr photos search format json nojsoncallback 1 api key your api key text the search text 3 read the documentation on how to build urls for flickr images https www flickr com services api misc urls html 4 write a function called getphotosforsearch that takes a search term and returns an array of photo objects you ll have to transform the flickr response quite a bit ideally you will return an array of objects with each object having thumb large and title properties these properties should be urls built using the documentation in step 3 5 wire up a search form submit event to start the search using the word s in the form input when receiving the results clear a pre existing container div and put the results in there each result should have this shape html a href url of the large image target blank img src url of the thumbnail alt title of the image a in order to create such elements it would help to have a helper function called createflickrthumb that returns an a element like this javascript function createflickrthumb photodata var link document createelement a link setattribute href photodata large link setattribute target blank var image document createelement img image setattribute src photodata thumb image setattribute alt photodata title link appendchild image return link 6 once the basics are working it s time to add some improvements make the gallery look nice with css make the gallery responsive using a block grid instead of linking to each image prevent the click and display a popup image with an x infinite scroll using window addeventlistener scroll try to figure out when the 
scrolling has reached the bottom of the page and start loading the next page of results project 3 vanilla blackjack introduction in this workshop you will be using the elegant deckofcardsapi http deckofcardsapi com in order to build your own game of blackjack once complete your game will look something like the picture below at which point you will be able to customize and skin it to your own liking by adding onto index html and styles css p align center img src blackjack gif p you are provided with 3 files index html app js styles css while index html and styles css are sufficiently complete for the basic game of blackjack app js is fairly empty only the function names and state variables are provided for you you re job is to complete app js in order to create a functioning game of blackjack getting starting create an index html file with the following content html doctype html html head meta charset utf 8 title vanilla blackjack title link rel stylesheet type text css href styles css head body h1 vanilla blackjack h1 button id new game shuffle new deck button div class game container div id game area div id dealer area h2 dealer span id dealer number span h2 div id dealer cards cards will appear here div div div id player area h2 player span id player number span span id announcement span h2 div id player cards cards will appear here div div div div id action area button id next hand style display none next hand button button id hit me style display none hit me button button id stay style display none i ll stay button div div script type text javascript src app js script body html create a styles css with the following content css body height 100vh background darkslategray h1 margin 2rem text align center color white font family fantasy h2 font family fantasy font size 26px color navy game container display flex flex flow row nowrap align items center perspective 1000px game area z index 1 width 85 transform rotatex 35deg padding 0 2rem 2rem 2rem background forestgreen dealer area margin bottom 2rem dealer cards transform translatey 40px display flex justify content center height 168px player area announcement margin left 25 player cards display flex justify content center height 168px action area z index 10 width 15 display flex flex direction column next hand hit me margin bottom 3rem button background steelblue padding 1rem font size 1rem border radius 1rem outline none img width 130px height 190px media min width 1200px body padding 0 12 testing the deck of cards api visit the deckofcardsapi http deckofcardsapi com and familiarize yourself with the first two api calls shuffle the cards draw a card open a new tab in chrome and open your console in developer tools paste the following into the console in order to observe the parsed response from deckofcardsapi javascript fetch https deckofcardsapi com api deck new shuffle deck count 6 then response response json then data console log data inside the response find the deck id and replace the value in the following api call in order to draw 4 cards from the deck you just shuffled javascript fetch https deckofcardsapi com api deck deck id draw count 4 then response response json then data console log data writing the game logic create an app js with the following content instructions are given inside each of the hallowed out functions your job is to fill out these functions to get the game running smoothly javascript app state these variables represent the state of our application they tell us at any given moment the state of our blackjack 
game you might find it useful to use these to debug issues by console logging them in the functions below var deckid var dealercards var playercards var playerscore 0 var dealerscore 0 var roundlost false var roundwon false var roundtied false game play nodes these nodes will be used often to update the ui of the game assign this variable to the dom node which has id dealer number var dealerscorenode select the dom node which has id player number var playerscorenode select the dom node which has id dealer cards var dealercardsnode select the dom node which has id player cards var playercardsnode selec the dom node which has id announcement var announcementnode selec the dom node which has id new game var newdecknode selec the dom node which has id next hand var nexthandnode selec the dom node which has id hit me var hitmenode selec the dom node which has id stay var staynode on click events these events define the actions to occur when a button is clicked these are provided for you and serve as examples for creating further possible actions of your own choosing newdecknode onclick getnewdeck nexthandnode onclick newhand hitmenode onclick hitme player staynode onclick settimeout dealerplays 600 game mechanics functions function getnewdeck this function needs to 1 call the resetplayingarea function 2 make a call to deckofcardsapi in order to retrieve a new deck id 3 set the value of our state variable deckid to the retrieved deck id 4 change the display property of style on the nexthandnode element in order to provide the player with the next hand button 5 hide the hit me and stay buttons by changing their style display to none 6 catch any errors that may occur on the fetch and log them function computescore cards this function receives an array of cards and returns the total score function newhand this function needs to 1 call the resetplayingarea function 2 make a call to deckofcardsapi using the deckid state variale in order to retrieve draw 4 cards from the deck 3 once 4 cards have been drawn push 2 of them to our dealercards state array and 2 to our playercards state array 4 set our dealerscore state variable to and then set the textcontent value of the dealerscorenode to dealerscore 5 foreach card in playercards and dealercards create an img element and assign the src of these to their respective card images don t forget to append these newly created img elements to the respective dealer cards and player cards dom elements in order to have them show up in the html 6 finally compute the player s score by calling computescore and update the playerscorenode to reflect this 7 if player score is 21 announce immediate victory by setting roundwon true announcementnode textcontent blackjack you win 8 catch and log possible error from the fetch function resetplayingarea this function needs to 1 reset all state variables to their defaults 2 reset the gameplay ui by updating textcontent of all nodes which may be displaying data from a previous round in the game ex dealerscorenode 3 remove all img elements inside dealercardsnode and playercardsnode function hitme target this function needs to 1 if any of roundlost or roundwon or roundtied is true return immediately 2 using the same deckid fetch to draw 1 card 3 depending on wether target is player or dealer push the card to the appropriate state array playercards or dealercards 4 create an img and set it s src to the card image and append it to the appropriate dom element for it to appear on the game play ui 5 if target player compute score and 
immediately announce loss if score 21 by setting roundlost true and updating announcementnode to display a message delivering the bad news 6 if target dealer just call the dealerplays function immediately after having appended the img to the game play ui 7 catch error and log function dealerplays this function needs to 1 if any of roundlost or roundwon or roundtied is true return immediately 2 compute the dealer s score by calling the computescore function and update the ui to reflect this if dealerscore 17 a delay here makes for nicer game play because of suspence settimeout hitme dealer 900 else if dealerscore 21 roundwon true update the ui to reflect this else if dealerscore playerscore roundlost true update the ui to reflect this else if dealerscore playerscore roundtied true update the ui to reflect this else roundwon true update the ui to reflect this hiding the dealer s first card now that the game is running smoothly we need to hide the dealer s first card in order for this to be real blackjack use this image and modify your app js to hide the dealer s first card until it is his turn to play p align center img src card png p challenge now that your game is running smoothly here are your options for challenges on this project 1 add betting to the game 2 make the app look professional | front_end |
|
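The blackjack walkthrough above leaves `computeScore(cards)` as an exercise. Here is one possible sketch, not the official solution: it assumes each card object carries the string `value` field returned by deckofcardsapi.com ("ACE", "KING", "QUEEN", "JACK", "10", ..., "2") and counts aces as 11 until the hand would bust.

```javascript
// A possible computeScore for the blackjack project above (sketch only).
// Assumes deckofcardsapi.com card objects: { value: "ACE" | "KING" | ... | "2", ... }
function computeScore(cards) {
  var score = 0;
  var aces = 0;
  cards.forEach(function (card) {
    if (card.value === 'ACE') {
      aces += 1;
      score += 11; // count aces as 11 first
    } else if (card.value === 'KING' || card.value === 'QUEEN' || card.value === 'JACK') {
      score += 10; // face cards are worth 10
    } else {
      score += parseInt(card.value, 10); // "2" through "10"
    }
  });
  // downgrade aces from 11 to 1 while the hand is over 21
  while (score > 21 && aces > 0) {
    score -= 10;
    aces -= 1;
  }
  return score;
}
```

Similarly, the Flickr project above asks for a `getPhotosForSearch` function without spelling it out. The sketch below follows the same fetch-and-transform pattern as the weather app; the photo URL format (farm/server/id/secret with the `_q` and `_b` size suffixes) is taken from the Flickr URL documentation linked in the project steps, so double-check it against the current docs before relying on it.

```javascript
// Sketch of step 4 of the Flickr project above -- not a definitive implementation.
var FLICKR_API_KEY = 'YOUR API KEY HERE'; // placeholder, use your own key

function getPhotosForSearch(searchTerm) {
  var url = 'https://api.flickr.com/services/rest/?method=flickr.photos.search' +
            '&format=json&nojsoncallback=1' +
            '&api_key=' + FLICKR_API_KEY +
            '&text=' + encodeURIComponent(searchTerm);

  return fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      return data.photos.photo.map(function (p) {
        var base = 'https://farm' + p.farm + '.staticflickr.com/' +
                   p.server + '/' + p.id + '_' + p.secret;
        return {
          thumb: base + '_q.jpg', // 150x150 square thumbnail
          large: base + '_b.jpg', // large size, 1024px on the longest side
          title: p.title
        };
      });
    });
}
```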
jurassicIT | jurassicit jurassic it jurassic information technology | xhtml | server |
eufemia | div align center a href https eufemia dnb no img src logo png height 100 a h1 dnb s design system h1 downloads https img shields io npm dt dnb eufemia style flat square npm version https img shields io npm v dnb eufemia style flat square last commit https img shields io github last commit dnbexperience eufemia style flat square eufemia actions https github com dnbexperience eufemia actions workflows actions yml badge svg https github com dnbexperience eufemia actions workflows actions yml codeql https github com dnbexperience eufemia actions workflows codeql analysis yml badge svg https github com dnbexperience eufemia actions workflows codeql analysis yml div this is a monorepo and uses yarn workspaces to manage the sub packages workspaces workspaces dnb eufemia https github com dnbexperience eufemia tree main packages dnb eufemia repo for the npm package dnb eufemia https www npmjs com package dnb eufemia design system portal https github com dnbexperience eufemia tree main packages dnb design system portal source code for the portal website eufemia dnb no https eufemia dnb no quick start bash yarn add dnb eufemia contribution find more information about how to contribute in eufemia portal contribute https eufemia dnb no contribute our contributors a href https github com dnbexperience eufemia graphs contributors img src https contrib rocks image repo dnbexperience eufemia a licence go to license https github com dnbexperience eufemia blob main license | dnb ux design-system react web accessibility a11y wcag uu universell-utforming | os |
MiniSQL | minisql minisql python uwp minisql https github com alanshaw github minisql tree master minisql https github com alanshaw github minisql tree master bookmanagement | python uwp windows database sql | os |
emr-vis-web | nlpreviz emr vis web screenshot https github com nlpreviz emr vis web raw master screenshot png emr vis web provides the frontend view for emr nlp server https github com nlpreviz emr nlp server getting started to get started install the pre requisites and then clone emr vis web as described below prerequisites 1 you need git to clone the emr vis web repository you can get it from http git scm com http git scm com 2 you must have node js and its package manager npm installed you can download them from http nodejs org http nodejs org or get them using your favourite package manager for example if you are on a mac and have homebrew homebrew installed run brew install node 3 we use the apache tomcat http tomcat apache org server to deploy the app on a mac with homebrew homebrew you may use brew install tomcat to get it 4 we have separate repository for our backend service visit emr nlp server https github com nlpreviz emr nlp server for more clone emr vis web 1 navigate to the home directory of your tomcat server you can use catalina version and find out what catalina home is set to 2 cd to the webapps directory if you are using the default tomcat setup your present working directory would be something like usr local cellar tomcat 7 0 54 libexec webapps 3 clone the emr vis web repository into the webapps direcory using git git cd webapps git clone https github com nlpreviz emr vis web git cd emr vis web install dependencies 1 make sure you have node js node installed 2 we have preconfigured npm to automatically run bower and grunt so all you need to do is npm install this would run the following steps get the tools we depend upon via npm the node package manager npm download the angular code and javascript dependencies via bower a client side code package manager bower and set the config variables using grunt a javascript task runner grunt 3 skip this step to leave default settings as it is in case you need to change the backend service s path edit the config backend variable in package json or use the following commands npm config set emr vis web backend relative path to backend service npm start valid examples of this path include http localhost 9090 backendservice backendservice etc editing package json would be a permanent solution while using the npm config lets you include the config settings for the current terminal session run the application if you haven t built the backend project as yet please do so now refer to the readme on emr nlp server https github com nlpreviz emr nlp server for more information remember to go through step 3 to set the correct path to the backend service if you plan to modify the defaults now browse to the app at http localhost 8080 emr vis web app index html or your localhost root emr vis web app defining custom variables the tool is currently configured to make predictions for pre defined colonoscopy quality variables to define your own variables you will need to edit app js controllers js app js controllers js as follows rootscope config variables any adenoma biopsy rootscope config variablemapping any adenoma newvar display name biopsy newvar2 remember to follow the instructions in emr nlp server https github com nlpreviz emr nlp server as well notes the wordtree is adapted from the library by silverasm available at https github com silverasm wordtree our project depends on the javascript libraries listed in bower json bower json git http git scm com bower http bower io npm https www npmjs org node http nodejs org grunt http gruntjs com 
homebrew http brew sh license this project is released under the gpl 3 license take a look at the license license md file in the source for more information | natural-language-processing interactive-visualizations interactive-learning clinical-research clinical-notes | ai |
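The "defining custom variables" instructions above describe an edit to app/js/controllers.js; one plausible shape for that edit is sketched below. The exact property structure is an assumption reconstructed from the description (the repository's controllers.js is the authority), and "newVar"/"newVar2" are placeholders for your own variable identifiers.

```javascript
// Hypothetical sketch of the controllers.js edit described above -- verify the
// real structure in app/js/controllers.js before copying anything.
$rootScope.config.variables = ['any-adenoma', 'biopsy', 'newVar', 'newVar2'];

$rootScope.config.variableMapping = {
  'any-adenoma': 'Any Adenoma',
  'biopsy': 'Biopsy',
  'newVar': 'New Variable Display Name', // placeholder display name
  'newVar2': 'Second New Variable'       // placeholder display name
};
```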
libplanet | libplanet discord https img shields io discord 928926944937013338 svg color 7289da logo discord logocolor white discord build status circleci https circleci com gh planetarium libplanet tree main svg style shield circleci codecov https codecov io gh planetarium libplanet branch main graph badge svg codecov nuget https img shields io nuget v libplanet svg style flat nuget nuget prerelease https img shields io nuget vpre libplanet svg style flat nuget libplanet is a net library for creating multiplayer online game in decentralized fashion which means the whole gameplay occurs on a peer to peer network among equal nodes rather than an authorized central server under the hood it incorporates many features e g digital signature bft consensus data replication of a blockchain it has competitive advantages over other solutions for decentralized gaming embeddable a game app does not have to communicate with another running process hence it doesn t require extra marshaling or processes management to draw a parallel libplanet is closer to sqlite than mysql or postgresql isomorphic libplanet is a net library so every game logic can be written in the same language c and run on the blockchain no glue code or smart contracts are needed token independent unlike almost every blockchain system it does not force users to create and deal with yet another cryptocurrency your game can be free to play and enjoyed by regular gamers to learn more about why planetarium is creating technology for fully decentralized games please refer to our blog post discord https planetarium dev discord circleci https app circleci com pipelines github planetarium libplanet codecov https codecov io gh planetarium libplanet nuget https www nuget org packages libplanet digital signature https en wikipedia org wiki digital signature bft https en wikipedia org wiki byzantine fault tolerance blockchain https en wikipedia org wiki blockchain blog post https medium com planetarium introducing planetarium powering games with freedom 22ab1ab70e0e nuget for every stable release we pack libplanet into a nupkg and upload it to nuget and github releases page you can find the changelog for versions from releases page to use libplanet in your game your project needs to add a dependency to libplanet package on visual studio ide run the following command in package manager console install package libplanet if you prefer dotnet cli run the following command instead bash dotnet add package libplanet see also microsoft s docs on different ways to install nuget package 1 in addition to stable releases we also provide pre release packages for every day and every merge commit it is packed into a nupkg and uploaded to nuget with a hyphen suffixed version name for a merge commit build a version name looks like 0 1 0 dev 20181231235959 a0b1c2d where 20181231235959 is a utc timestamp of the build and a0b1c2d is the first 7 hexadecimals of the git commit hash for a daily build a version name is like 0 1 0 nightly 20181231 a0b1c2d unfortunately unity currently does not support nuget there are some unity plug ins to deal with nuget package system and these seem immature at present to use libplanet on unity you need to manually extract libplanet dll from libplanet nupkg file and place it inside of your unity project we are acknowledging the fact libplanet is currently not very usable together with unity and promise to make it better in the next few minor releases until then you could try msbuildforunity which is experimental as of january 2020 
releases https github com planetarium libplanet releases msbuildforunity https github com microsoft msbuildforunity 1 https docs microsoft com nuget consume packages ways to install a package build you could build libplanet dll and libplanet stun dll assemblies from the source code the following command installs dependencies required library packages and builds the whole libplanet solution bash dotnet build note that dotnet command is distributed together with net core sdk if you d like to contribute code to the libplanet project in earnest please read our contributor guide contributing md net core https dot net | libplanet planetarium game-development csharp dotnet p2p unity unity3d net blockchain hacktoberfest | blockchain |
StudyOfTime | studyoftime industrial engineering tool for ios | os |
|
healthcare-payments-blockchain | project is depreciated code provided for reference only healthcare payments on blockchain this is a prototype useful for exploring blockchain or as a basis for a project it is not intended for production use without further modification and testing in the instamed innovation lab we built a blockchain prototype focused on healthcare payments among providers payers and patients one of the prototype s purposes is to evaluate the value of blockchain in driving a better healthcare payments experience for all stakeholders learn more about the project https developers instamed com healthcare payments blockchain this is a hyperledger fabric https www hyperledger org projects fabric blockchain project that implements the fhir financial module it is built with convector https github com worldsibu convector and follows the fhir spec https www hl7 org fhir a vuejs demo frontend app is included in the project in packages frontend the live demo can be found at https blockchain demo instamed com the live network block browser can be found at https blockchain demo instamed com 8443 a video describing this flow can be found at https vimeo com 325931177 e21834462d prerequisites node https nodejs org en download 8 11 0 docker community edition https www docker com community edition npx https www npmjs com package npx if you are running in ubuntu make sure you meet all the prerequisites for fabric and convector prerequisites ubuntu https docs worldsibu com article 120 install on ubuntu how to run the project detailed instructions for installing on ubuntu can be found here https developers instamed com healthcare payments blockchain install blockchain on linux start from scratch bash install dependencies npm install start the blockchain and the server npm start create some mock data automatically to setup the network npm run mockdata start the server npm run server start you can now run transactions there s a postman file included to help you talk to the endpoints fhir financial postman collection json read first the section identities on the project of this readme you should send transactions all transactions from postman collection json in the order defined before install the remaining views views are associated to databases and fabric doesn t generate them until at least 1 value was saved there npm run views install this will install a development hyperledger fabric network and remove any previous one with hurley https github com worldsibu hurley install the chaincode with the name financial in the network start the nodejs server install couchdb views instantiate the chaincode servers create some mock data for you to get the front end to work properly you need to run the postman added in the repository by configuring the fingerprints correctly go to the postman collection settings and set the value to the variable patientfingerprint to use the same for every transaction after you run npm run user fingerprint home hyperledger fabric network hfc org1 user1 then go and set the value of consortiumadminfingerprint to npm run user fingerprint home hyperledger fabric network hfc org2 user1 and then value of providerfingerprint to npm run user fingerprint home hyperledger fabric network hfc org3 user1 individual tasks just start the server in dev mode npm run server start run this in case after the npm start you close the terminal this won t install the network again just the nodejs server enable the block browser capabilities the front end project makes it possible to visualize 
blocks in the network as well as its contents blocks images blocks png the current project uses the byzantine browser https github com in the keyhole byzantine browser s api to get the blocks from the transactions to the ledger in realtime for now it uses a fork from worldsibu that enables tls in the server https github com worldsibu byzantine browser make sure you already started the blockchain healthcare payments blockchain with npm start so a blockchain network is running on your computer with hurley https github com worldsibu hurley you have to run npm install twice for the backend and the frontend bash go outside this folder and clone the repo git clone https github com worldsibu byzantine browser git cd byzantine browser npm install cd ui npm install npm run build cd copy the keys from the hyperledger fabric network directory we re assuming here you have installed the byzantine browser in that same parent directory as the blockchain cp home hyperledger fabric network hfc org1 hfc key store replace the env in the root of the byzantine browser folder or create it if it doesn t exist with the information below bash userid user1 network url grpc localhost 7051 event url grpc localhost 7052 use your favorite text editor or use nano nano env copy text from above and right click to paste into terminal control o control x run the byzantine server runapiserver sh explore the project code structure packages financial cc contains the whole smart contract with all its models and controllers packages server contains the server calling the blockchain chaincode config json links the controllers and packages the config for the smart contract dev env a folder containing development environment needed files like the couchdb views and the installation script fhir financial postman collection json import this file into postman to see the queries to the database follow the numbers in the tasks to create a full flow identities on the project payer organizations provider organizations and consumer participants are identified in the blockchain through a fingerprint of a certificate generated from the certificate authority the logic goes as follows a identity user is created in the certificate authority that user is enrolled in the blockchain network in the case of the development environment the identity is registered and then enrolled by default extract the fingerprint from the cert by calling bash i e npm run user fingerprint home hyperledger fabric network hfc org1 user1 npm run user fingerprint home hyperledger fabric network hfc org user the result fingerprint looks like a5 eb e4 1e 8e 86 03 72 00 3f ea ca d2 9d 98 08 ca 70 24 f6 that same fingerprint will be validated when a transaction is signed by a identity from the blockchain be sure to pass it throught postman when registering a new payer organization or consumer participant as a param called fingerprint transactions will validate that the right identity is trying to perform requests for example to create a consumer participant the following json is valid json participant id consumer bob fingerprint a5 eb e4 1e 8e 86 03 72 00 3f ea ca d2 9d 98 08 ca 70 24 f6 you will need two different identities one can be shared between the payer and instamed working on behalf of the patients and the other one for a provider the reason for this is that some data is stored only accessible to some identities look for private collections later in this document therefore a switch in the identity is made go to the postman collection settings and set the value to the 
variable patientfingerprint to use the same for every transaction after you run npm run user fingerprint home hyperledger fabric network hfc org1 user1 then go and set the value of consortiumadminfingerprint to npm run user fingerprint home hyperledger fabric network hfc org2 user1 and then value of providerfingerprint to npm run user fingerprint home hyperledger fabric network hfc org3 user1 private collections for this project running locally organizations are related to hurley organizations in the followin order organization hurley org abc healthcare org1msp instamed patient org2msp xyz provider org3msp routing the server to query the different collections to query the private collections from the nodejs server you can pass the id of the user s nodes you d like to access any value of packages server src config identities json and the server will route the read query to those nodes i e get https user payer will send a transaction and look for data inside of the payer s nodes i e get https user provider will send a transaction and look for data inside of the provider s nodes i e get https user patient will send a transaction and look for data inside of the patient s nodes a practical example get the fingerprint of the user1 in the org1 bash npm run user fingerprint home hyperledger fabric network hfc org1 user1 a5 eb e4 1e 8e 86 03 72 00 3f ea ca d2 9d 98 08 ca 70 24 f6 be sure that your server is using the identity of user1 in org1 defined in packages server src config identities json every transaction sent from the server will be signed with the user1 in org1 identity so the chaincode can safely check for the fingerprint through the this sender except for the transaction made by the provider mark the payment as made through another certificate in org2 running local environment call the server located in http localhost 8080 check the couchdb server provisioned at http localhost 5084 utils database ch1 financial all docs architecture development environment images devenv png raw true development environment production environment images prodenv png raw true production environment tests run unit tests optional debugging by default the project will run unit tests in debug mode to explore the code go to a new chrome window and put the address to chrome inspect add the server as a connection in the tab top of the screen connection then click the button add connection and add localhost 9229 write debugger in the code line you d like the debugger to stop and run the tests start unit tests bash include npx if you use npx for package management npx lerna run test scope financial cc stream install in the blockchain bash be sure you started the blockchain before with npm run env restart npm run cc start upgrade your chaincode to the blockchain bash i e npm run cc upgrade 1 2 npm run cc upgrade version edits are done by instamed development user | fhir hyperledger-fabric convector typescript healthcare blockchain nodejs payment insttamed | blockchain |
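The private-collection routing described above (GET requests carrying the user id payer, provider or patient) can be exercised from a browser or Node script along these lines. The base URL comes from the local environment section above; the route shape and the resource path are assumptions added for illustration, so adjust them to the actual routes defined in packages/server.

```javascript
// Illustration of the read-routing idea described above. The '/user/<id>' route
// shape and the 'claim/CLAIM1' resource path are assumptions, not the documented API.
var BASE_URL = 'http://localhost:8080'; // local dev server mentioned above

function getAsUser(resourcePath, userId) {
  // userId is one of the ids in packages/server/src/config/identities.json,
  // e.g. 'payer', 'provider' or 'patient'
  return fetch(BASE_URL + '/' + resourcePath + '/user/' + userId)
    .then(function (response) { return response.json(); });
}

// e.g. read a claim as seen from the provider's peers
getAsUser('claim/CLAIM1', 'provider').then(console.log);
```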
nbn-service-check note the launtel api has been disabled and as a result this wrapper will not function and has been archived refer to this map for latest nbn information https github com lukeprior nbn upgrade map nbn service check api this is an unofficial api that can return the nbn availability information for properties including technology type maximum line speed and co existence status if available using the api the main api has been limited to extension users to remain within free plan limits bots and other applications can access this mirror hosted on deta without restrictions the api can be accessed at https nbn service check deta dev check address and will automatically attempt to match any input to a valid address powered by vercel https raw githubusercontent com lukeprior nbn availability extension main powered by vercel svg https vercel com | server |
|
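For reference, calling the mirror endpoint mentioned above would look roughly like this. The query parameter name is an assumption based on the "/check?address=" wording, and since the Launtel API behind the service has been disabled, treat this purely as an illustration of how the wrapper was meant to be used.

```javascript
// Illustrative only -- the service above is archived and the parameter name is assumed.
var address = '1 Example Street, Sydney NSW';

fetch('https://nbn-service-check.deta.dev/check?address=' + encodeURIComponent(address))
  .then(function (response) { return response.json(); })
  .then(function (result) {
    // per the description above, the response includes technology type,
    // maximum line speed and co-existence status where available
    console.log(result);
  });
```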
Hands-On-RTOS-with-Microcontrollers | hands on rtos with microcontrollers hands on rtos with microcontrollers published by packt download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781838826734 https packt link free ebook 9781838826734 a p | os |
|
Machine-Learning-for-Cybersecurity-Cookbook | machine learning for cybersecurity cookbook a href https www packtpub com security machine learning for cybersecurity cookbook utm source github utm medium repository utm campaign 9781789614671 img src https www packtpub com media catalog product cache e4d64343b1bc593f1c5348fe05efa4a6 9 7 9781789614671 original jpeg alt machine learning for cybersecurity cookbook height 256px align right a this is the code repository for machine learning for cybersecurity cookbook https www packtpub com security machine learning for cybersecurity cookbook utm source github utm medium repository utm campaign 9781789614671 published by packt implement smart ai systems for preventing cyber attacks and detecting threats and network anomalies what is this book about organizations today face a major threat in terms of cybersecurity from malicious urls to credential reuse and having robust security systems can make all the difference with this book you ll learn how to use python libraries such as tensorflow and scikit learn to implement the latest artificial intelligence ai techniques and handle challenges faced by cybersecurity researchers this book covers the following exciting features learn how to build malware classifiers to detect suspicious activities apply ml to generate custom malware to pentest your security use ml algorithms with complex datasets to implement cybersecurity concepts create neural networks to identify fake videos and images secure your organization from one of the most popular threats insider threats defend against zero day threats by constructing an anomaly detection system detect web vulnerabilities effectively by combining metasploit and ml understand how to train a model without exposing the training data if you feel this book is for you get your copy https www amazon com dp 1789614678 today a href https www packtpub com utm source github utm medium banner utm campaign githubbanner img src https raw githubusercontent com packtpublishing github master github png alt https www packtpub com border 5 a instructions and navigations all of the code is organized into folders for example chapter02 the code will look like the following from sklearn model selection import train test split import pandas as pd following is what you need for this book if you re a cybersecurity professional or ethical hacker who wants to build intelligent systems using the power of machine learning and ai you ll find this book useful familiarity with cybersecurity concepts and knowledge of python programming is essential to get the most out of this book with the following software and hardware list you can run all code files present in the book chapter 1 8 software and hardware list chapter software required os required 1 python environment version depends on recipe windows mac os x and linux any 2 cuckoo sandbox latest windows mac os x and linux any 3 upx packer 3 95 windows mac os x and linux any 5 kali linux 2019 3 windows mac os x and linux any 6 wireshark 3 0 6 windows mac os x and linux any 7 octave latest windows mac os x and linux any appendix virtualbox latest windows mac os x and linux any we also provide a pdf file that has color images of the screenshots diagrams used in this book click here to download it https static packt cdn com downloads 9781789614671 colorimages pdf related products hands on machine learning for cybersecurity packt https www packtpub com in big data and business intelligence hands machine learning cybersecurity utm source github utm 
medium repository utm campaign 9781788992282 amazon https www amazon com dp 1788992288 hands on artificial intelligence for cybersecurity packt https www packtpub com in data hands on artificial intelligence for cybersecurity utm source github utm medium repository utm campaign 9781789804027 amazon https www amazon com dp 1789804027 get to know the author emmanuel tsukerman graduated from stanford university and obtained his ph d from uc berkeley in 2017 dr tsukerman s anti ransomware product was listed in the top 10 ransomware products of 2018 by pc magazine in 2018 he designed an ml based instant verdict malware detection system for palo alto networks wildfire service of over 30 000 customers in 2019 dr tsukerman launched the first cybersecurity data science course suggestions and feedback click here https docs google com forms d e 1faipqlsdy7datc6qmel81fiuuymz0wy9vh1jhkvpy57oimekgqib ow viewform if you have any feedback or suggestions download a free pdf i if you have already purchased a print or kindle version of this book you can get a drm free pdf version at no cost br simply click on the link to claim your free pdf i p align center a href https packt link free ebook 9781789614671 https packt link free ebook 9781789614671 a p | ai |
|
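The cookbook readme above mentions building malware classifiers with scikit-learn but only shows an import snippet. A minimal sketch of that kind of workflow, assuming a hypothetical `malware_features.csv` with numeric feature columns and a binary `label` column — the file name, its columns, and the random-forest choice are illustrative stand-ins, not the book's actual recipes:

```python
# Illustrative malware-classification sketch; dataset and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("malware_features.csv")      # hypothetical table: one row per sample
X = df.drop(columns=["label"])                # numeric static/dynamic features
y = df["label"]                               # 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```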
MLJ.jl | div align center img src material mljlogo2 svg alt mlj width 200 div h2 align center a machine learning framework for julia p align center a href https github com alan turing institute mlj jl actions img src https github com alan turing institute mlj jl workflows ci badge svg alt build status a a href https alan turing institute github io mlj jl dev img src https img shields io badge docs stable blue svg alt documentation a a href https opensource org licenses mit img src https img shields io badge license mit yelllow alt bibtex a a href bibliography md img src https img shields io badge cite bibtex blue alt bibtex a p h2 mlj machine learning in julia is a toolbox written in julia providing a common interface and meta algorithms for selecting tuning evaluating composing and comparing about 200 machine learning models https alan turing institute github io mlj jl dev model browser model browser written in julia and other languages new to mlj start here https alan turing institute github io mlj jl dev integrating an existing machine learning model into the mlj framework start here https alan turing institute github io mlj jl dev quick start guide to adding models wanting to contribute start here contributing md phd and postdoc opportunies see here https sebastian vollmer ms jobs mlj was initially created as a tools practices and systems project at the alan turing institute https www turing ac uk in 2019 current funding is provided by a new zealand strategic science investment fund https www mbie govt nz science and technology science and innovation funding information and opportunities investment funds strategic science investment fund ssif funded programmes university of auckland awarded to the university of auckland mlj has been developed with the support of the following organizations div align center img src material turing logo png width 100 img src material uoa logo png width 100 img src material iqvia logo png width 100 img src material warwick png width 100 img src material julia png width 100 div the mlj universe the functionality of mlj is distributed over several repositories illustrated in the dependency chart below these repositories live at the juliaai https github com juliaai umbrella organization div align center img src material mlj stack svg alt dependency chart div dependency chart for mlj repositories repositories with dashed connections do not currently exist but are planned proposed br p align center a href contributing md contributing a nbsp nbsp a href organization md code organization a nbsp nbsp a href roadmap md road map a br contributors core design a blaom f kiraly s vollmer lead contributor a blaom active maintainers a blaom s okon t lienart d aluthge | machine-learning julia pipelines tuning data-science tuning-parameters predictive-modeling classification regression statistics clustering stacking ensemble-learning pipeline | ai |
querying-with-sql | querying with sql a mini project using example datasets to demonstrate data engineering data modeling and data analysis data engineering i designed the table schemas prior to loading the data files into a newly created database for all the 6 csv files i defined data columns data types primary keys and foreign keys data analysis once the database is complete i answered some sample questions using various query syntax bonus introducing python using pandas and sqlalchemy packages from python to import sql database in order to create the below visualizations from the data histogram img ex1 png bar plot img ex2 png | server |
|
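The bonus step above pulls the finished SQL database into Python with pandas and SQLAlchemy to produce the histogram and bar plot. A minimal sketch of that step, assuming a local PostgreSQL database and a `salaries` table — the connection string, database, table, and column names are illustrative guesses, not the project's actual schema:

```python
# Sketch of the pandas + SQLAlchemy bonus step; connection details and schema are assumptions.
import matplotlib.pyplot as plt
import pandas as pd
from sqlalchemy import create_engine

# Placeholder credentials/database name; a driver such as psycopg2 must be installed.
engine = create_engine("postgresql://user:password@localhost:5432/employees_db")

# Pull a result set into a DataFrame and plot it, similar in spirit to the
# histogram/bar plot screenshots referenced above.
df = pd.read_sql("SELECT salary FROM salaries", engine)
df["salary"].plot(kind="hist", bins=20, title="Salary distribution")
plt.xlabel("salary")
plt.show()
```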
SchoolLibrary | schoollibrary olympiad in information technologies | server |
|
m2-devtools | magento 2 devtools circleci https circleci com gh magento m2 devtools svg style svg https circleci com gh magento m2 devtools an extension for google chrome and likely mozilla firefox https developer mozilla org en us docs mozilla add ons webextensions that exposes helpful debugging utilities for magento 2 front ends early release this is a very new project with little to no documentation published to solicit feedback from early adopters the extension is currently only available through manual installation of the development build and will be published to the chrome web store at a future time documentation docs readme md usage whenever you navigate to a page running magento 2 a new tab should appear in devtools from this extension p align center img src screenshot png p in progress features requirejs optimizer https requirejs org docs optimization html configuration generator including magento module for quick install requirejs module registry inspector possible future features uicomponents explorer inspector think react angular devtools m2 front end best practices checks running development build google chrome prerequisites node js 8 x npm 6 x setup 1 clone the repository 2 run npm install 3 run npm start 4 navigate to chrome extensions 5 enable developer mode 6 click load unpacked 7 select the extension folder in the root of this repository notes to run a single build use npm run build instead of npm start if you have chrome devtools open when you make a change in src you ll need to close and re open devtools to see the changes if you need to debug the devtools page react app in src open the magento 2 tab in devtools then right click inspect element this will open a new instance of the devtools pointed at the react application | front_end |
|
ai-research-assistant | a r i a aria your ai research assistant license https img shields io github license lifan0127 ai research assistant https github com lifan0127 ai research assistant blob master license using zotero plugin template https img shields io badge using zotero 20plugin 20template blue style flat square logo github https github com windingwind zotero plugin template latest release https img shields io github v release lifan0127 ai research assistant https github com lifan0127 ai research assistant releases downloads latest release https img shields io github downloads lifan0127 ai research assistant latest total if you have previously installed aria formerly zotero ra version 0 0 10 or below please manually remove it under tools add ons remove because the legacy versions have incompetible plugin id and cannot be auto updated aria is a zotero plugin powered by large language models llms a r i a is the acronym of ai research assistant in reverse order aria assets images aria png zotero and gpt requirements currently only zotero 6 is supported compatibility with zotero 7 has not been tested since v0 1 0 aria requires the openai gpt 4 model how can i access gpt 4 https help openai com en articles 7102672 how can i access gpt 4 installation download the latest release xpi file from github https github com lifan0127 ai research assistant releases latest in zotero select em tools em from the top menu bar and then click on em addons em on the add ons manager panel click the gear icon at the top right corner and select em install add on from file em select the xpi file you just downloaded and click em open em which will start the installation process quickstart aria can be activated through the shift r shortcut before using aria you need to provide an openai api key https platform openai com account api keys follow the in app instruction to add a key and b restart zotero b screenshots docs configuration md after restart you should see the activated aria window as shown above and can start using it through conversations update aria can perform automatic update when internet access is available to check for available update select em tools em from the top menu bar and then click on em addons em to manually update aria click em more em under aria and then click the gear icon at the top right corner select em check for updates em screenshots docs update md limitations the following are known limitations based on user feedback currently aria can query your zotero library through the zotero search api the ability to query the zotero sqlite database for document count and other metrics will be delivered in a future release aria has no awareness of your zotero application state selected item current tab highlighted text and therefore cannot answer the related questions this capability will be added over time troubleshooting interaction with zotero in an open conversational manner and through a probabilistic model can lead to many different often unexpected outcomes if you experience any error please create an github issue with a screenshot of the error message from your aria chat window thank you agent stopped due to max iterations for certain questions the bot will make multiple api calls iteratively for response synthesis sometimes it may fail to produce an answer before reaching the max iterations aria tab not in preferences panel you may choose the advanced tab in preferences and open the configuration editor under advanced configuration from there please search for aria and then double 
click on the extensions zotero aria openai api key entry to add your openai api key development refer to the zotero plugin development https www zotero org support dev client coding plugin development guide to find instructions on how to set up the plugin in your local environment | ai-assistant research-paper zotero | ai |
iot_projects | esp8266 esp8266 related projects and reference material follow my channel for more iot videos https www youtube com c stechiezdiy support my channel https www buymeacoffee com stechiezdiy | server |
|
blockchain | ibp docs this repo holds the source for the ibp documentation live links saas prod https cloud ibm com docs blockchain saas staging https test cloud ibm com docs blockchain sw prod https www ibm com docs en blockchain platform 2 5 4 sw staging https wwwstage ibm com docs en blockchain platform 2 5 4 push changes to prod steps saas go to https github ibm com cloud docs blockchain merge the next prod push pr sw go to wfm https wfm dcs ibm com product ssvkz7 2 5 4 run the production build push changes to staging steps saas code committed to master will automatically push out to test staging sw go to wfm https wfm dcs ibm com product ssvkz7 2 5 4 run the staging build | blockchain |
|
Olympia | olympia | cloud |
|
course-java-web-development | ipt course java web development course program 1 java fundamentals stack and heap quick review literals assignments and variables scope garbage collection handling exceptions common exceptions and errors 4 h 2 oop principles encapsulation inheritance and polymorphism overriding overloading 4 h 3 string processing data formatting resource bundles regular expressions java util and java math stringtokenizer date calendar locale random optional observable observable interface bigdecimal 6 h 4 generics and collections tostring hashcode and equals collections overview collection interfaces sorted collections comparators using collections generic types 6 h 5 java i o files streams i o basics autocloseable closeable and flushable interfaces i o exceptions serialization java io and nio 8 h 6 threads concurrency defining instantiating and starting threads synchronizing code thread problems immutable classes 8 h 7 functional programming and lambda expressions fundamentals functional interfaces method references constructor references 4 h 8 the stream api stream basics reduction operations mapping collecting iterators 4 h 9 build tools basics ant vs maven vs gradle practical examples 4 h 10 www www introduction ip addresses ports dns proxy hosts file cookies http ajax 4 h 11 servlet container servlets jsps intro web xml servlets session management and object scope filters listeners jsps expression language el tags jstl 8 h 12 serialization deserialization jaxb 4 h 13 web services soap rest xml json popular java libraries axis2 jackson 14 h 14 introduction to spring di aop and mvc 14 h 15 popular patterns singleton adapter proxy builder factory command strategy observer decorator solid principles https en wikipedia org wiki solid object oriented design 4 h 16 relational databases fundamentals acid relations transactions indexes triggers views relational algebra and sql queries 8 h 17 unit testing with junit object mocking 8 h open source learning resources textbooks tij4 thinking in java 4th edition bruce eckel https archive org details tij4ccr1 tij4 thinking in java 4th edition code examples bruce eckel https github com bruceeckel tij4 code ipj java http www introprogramming info intro java book ods open data structures in java pat morin http opendatastructures org ods java pdf jd java data particle http www theparticle com javadata2 html ojt oracle java tutorials https docs oracle com javase tutorial index html | front_end |
|
WordTokenizers.jl | wordtokenizers colprac contributor s guide on collaborative practices for community packages https img shields io badge colprac contributor s 20guide blueviolet https github com sciml colprac github release https img shields io github release juliatext wordtokenizers jl svg https github com juliatext wordtokenizers jl releases build status https travis ci org juliatext wordtokenizers jl svg branch master https travis ci org juliatext wordtokenizers jl codecov https codecov io gh juliatext wordtokenizers jl branch master graph badge svg https codecov io gh juliatext wordtokenizers jl build status https ci appveyor com api projects status github juliatext wordtokenizers jl branch master svg true https ci appveyor com project oxinabox wordtokenizers jl history hitcount http hits dwyl io juliatext wordtokenizers svg http hits dwyl io juliatext wordtokenizers some basic tokenizers for natural language processing installation as per standard julia package installation https julialang github io pkg jl dev managing packages adding registered packages 1 pkg add wordtokenizers usage the normal way to use this package is to call tokenize str to split up a string into words or split sentences str to split up a string into sentences maybe even tokenize split sentences str to do both tokenize and split sentences are configurable functions that call one of the tokenizers or sentence splitters defined below they have sensible defaults set but you can override the method used by calling set tokenizer func or set sentence splitter func passing in your preferred function func from the list below or from elsewhere configuring them this way will throw up a method overwritten warning and trigger recompilation of any methods that use them this means if you are using a package that uses wordtokenizers jl to do tokenization sentence splitting via the default methods changing the tokenizer splitter will change the behavior of that package this is a feature of corpusloaders jl https github com juliatext corpusloaders jl if as a package author you don t want to allow the user to change the tokenizer in this way you should use the tokenizer you want explicitly rather than using the tokenize method example setting tokenizer tinysegmenter jl you might like to for example use tinysegmenter jl s tokenizer https github com juliastrings tinysegmenter jl for japanese text we do not include tinysegmenter in this package because making use of it within wordtokenizers jl is trivial just import tinysegmenter set tokenizer tinysegmenter tokenize full example julia julia using wordtokenizers julia text julia tokenize text print default tokenizer julia import tinysegmenter julia set tokenizer tinysegmenter tokenize julia tokenize text print tinysegmenter s tokenizer substring string word tokenizers the word tokenizers basically assume sentence splitting has already been done poorman s tokenizer poormans tokenize deletes all punctuation and splits on spaces in some ways worse than just using split punctuation space tokenize punctuation space tokenize marginally improved version of the poorman s tokenizer only deletes punctuation occurring outside words penn tokenizer penn tokenize this is robert macintyre s original tokenizer used for the penn treebank splits contractions improved penn tokenizer improved penn tokenize nltk s improved penn treebank tokenizer very similar to the original some improvements on punctuation and contractions this matches to nltk s nltk tokenize treebankwordtokenizer tokenize nltk word 
tokenizer nltk word tokenize nltk s even more improved version of the penn tokenizer this version has better unicode handling and some other changes this matches to the most commonly used nltk word tokenize minus the sentence tokenizing step to me it seems like a weird historical thing that nltk has 2 successive variations on improving the penn tokenizer but for now i am matching it and having both see nltk 2005 https github com nltk nltk issues 2005 reversible tokenizer rev tokenize and rev detokenize this tokenizer splits on punctuations space and special symbols the generated tokens can be de tokenized by using the rev detokenizer function into the state before tokenization toktok tokenizer toktok tokenize this tokenizer is a simple general tokenizer where the input has one sentence per line thus only final period is tokenized this is an enhanced version of the original toktok tokenizer https github com jonsafari tok tok it has been tested on and gives reasonably good results for english persian russian czech french german vietnamese tajik and a few others default tokenizer tweet tokenizer tweet tokenizer nltk s casual tokenizer for that is solely designed for tweets apart from being twitter specific this tokenizer has good handling for emoticons and other web aspects like support for html entities this closely matches nltk s nltk tokenize tweettokenizer sentence splitters we currently only have one sentence splitter rule based sentence spitter rulebased split sentences uses a rule that periods question marks and exclamation marks followed by white space end sentences with a large list of exceptions split sentences is exported as an alias for the most useful sentence splitter currently implemented which atm is the only sentence splitter rulebased split sentences default sentence splitter example julia julia tokenize the package s tokenizers range from simple e g poorman s to complex e g penn print substring string the package s tokenizers range from simple e g poorman s to complex e g penn julia julia text the leatherback sea turtle is the largest measuring six or seven feet 2 m in length at maturity and three to five feet 1 to 1 5 m in width weighing up to 2000 pounds about 900 kg most other species are smaller being two to four feet in length 0 5 to 1 m and proportionally less wide the flatback turtle is found solely on the northerncoast of australia julia split sentences text 3 element array substring string 1 the leatherback sea turtle is the largest measuring six or seven feet 2 m in length at maturity and three to five feet 1 to 1 5 m in width weighing up to 2000 pounds about900 kg most other species are smaller being two to four feet in length 0 5 to 1 m and proportionally less wide the flatback turtle is found solely on the northern coast of australia julia tokenize split sentences text 3 element array array substring string 1 1 substring string the leatherback sea turtle is the largest measuring six up to 2000 pounds about 900 kg substring string most other species are smaller being two to four 0 5 to 1 m and proportionally less wide substring string the flatback turtle is found solely on the northern coast of australia experimental api i am trying out an experimental api where these are added as dispatches to base split so split foo words is the same as tokenize foo and split foo sentences is the same as split sentences foo using tokenbuffer api for custom tokenizers we offer a tokenbuffer api and supporting utility lexers for high speed tokenization writing your own 
tokenbuffer tokenizers tokenbuffer turns a string into a readable stream used for building tokenizers utility lexers such as spaces and span class x x first x last number span read characters from the stream and into an array of tokens lexers return true or false to indicate whether they matched in the input stream they can therefore be combined easily e g spacesornumber ts spaces ts number ts either skips whitespace or parses a number token if possible the simplest useful tokenizer splits on spaces using wordtokenizers tokenbuffer isdone spaces character function tokenise input ts tokenbuffer input while isdone ts spaces ts character ts end return ts tokens end tokenise foo bar baz foo bar baz many prewritten components for building custom tokenizers can be found in src words fast jl and src words tweet tokenizer jl these components can be mixed and matched to create more complex tokenizers here is a more complex example julia julia using wordtokenizers tokenbuffer isdone character spaces present in fast jl julia using wordtokenizers nltk url1 nltk url2 nltk phonenumbers present in tweet tokenizer jl julia function tokeinze input urls ts nltk url1 ts nltk url2 ts ts tokenbuffer input while isdone ts spaces ts continue urls ts nltk phonenumbers ts character ts end return ts tokens end tokeinze generic function with 1 method julia tokeinze a url https github com juliatext wordtokenizers jl and phonenumber 0 987 2344321 6 element array string 1 a url https github com juliatext wordtokenizers jl url detected and phonenumber 0 987 2344321 phone number detected tips for writing custom tokenizers and your own tokenbuffer lexer 1 the order in which the lexers are written needs to be taken care of in some cases for example 987 654 3210 matches as a phone number as well as numbers but number will only match up to 987 and split after it julia julia using wordtokenizers tokenbuffer isdone character spaces nltk phonenumbers number julia order1 ts number ts nltk phonenumbers ts order1 generic function with 1 method julia order2 ts nltk phonenumbers ts number ts order2 generic function with 1 method julia function tokenize1 input ts tokenbuffer input while isdone ts order1 ts character ts end return ts tokens end tokenize1 generic function with 1 method julia function tokenize2 input ts tokenbuffer input while isdone ts order2 ts character ts end return ts tokens end tokenize2 generic function with 1 method julia tokenize1 987 654 3210 number ts nltk phonenumbers ts 5 element array string 1 987 654 3210 julia tokenize2 987 654 3210 nltk phonenumbers ts number ts 1 element array string 1 987 654 3210 2 boundserror and errors while handling edge cases are most common and need to be taken of while writing the tokenbuffer lexers 3 for some tokenbuffer ts use flush ts over push ts tokens input i j to make sure that characters in the buffer i e ts buffer also gets flushed out as separate tokens julia julia using wordtokenizers tokenbuffer flush spaces character isdone julia function tokenize input ts tokenbuffer input while isdone ts spaces ts continue my pattern ts character ts end return ts tokens end julia function my pattern ts matches the pattern for 2 continuous ts idx 1 length ts input return false if ts ts idx ts ts idx 1 flush ts using flush ts idx 2 return true end return false end my pattern generic function with 1 method julia tokenize hi hello 3 element array string 1 hi hello julia function my pattern ts matches the pattern for 2 continuous ts idx 1 length ts input return false if ts ts idx ts 
ts idx 1 push ts tokens without using flush ts idx 2 return true end return false end my pattern generic function with 1 method julia tokenize hi hello 2 element array string 1 hihello statistical tokenizer sentencepiece unigram encoder is basically the sentencepiece processor s re implementation in julia it can used vocab file generated by sentencepiece library containing both vocab and log probability for more detail about implementation refer the blog post here https tejasvaidhyadev github io blog sentencepiece note sentencepiece escapes the whitespace with a meta symbol u 2581 pretrained wordtokenizer provides pretrained vocab file of albert both version 1 and version 2 julia julia subtypes pretrainedtokenizer 2 element array any 1 albert v1 albert v2 julia tokenizerfiles albert v1 4 element array string 1 albert base v1 30k clean vocab albert large v1 30k clean vocab albert xlarge v1 30k clean vocab albert xxlarge v1 30k clean vocab datadeps will handle all the downloading part for us you can also create an issue or pr for other pretrained models or directly load by providing path in load function julia julia spm load albert version1 loading default albert base vocab in sentencepiece wordtokenizers sentencepiecemodel dict shots 11 2373 7281 ordered 9 84973 1906 dev 12 0915 14439 silv 12 6564 21065 doubtful 12 7799 22569 without 8 34227 367 pol 10 7694 4828 chem 12 3713 17661 1947 11 7544 11199 disrespect 13 13 26682 2 julia tk tokenizer spm i love the julia language or tk spm i love the julia language 4 element array string 1 i love the julia language julia subword tokenizer spm unfriendly 2 element array string 1 un friendly julia para spm julia is a high level high performance dynamic language for technical computing 17 element array string 1 j ulia is a high level high performance dynamic language for technical computing indices is usually used for deep learning models index of special tokens in albert are given below 1 pad 2 unk 3 cls 4 sep 5 mask julia julia ids from tokens spm tk 4 element array int64 1 32 340 15 5424 817 we can also get sentences back from tokens julia sentence from tokens tk i love the julia language julia sentence from token subword unfriendly julia sentence from tokens para julia is a high level high performance dynamic language for technical computing contributing contributions in the form of bug reports pull requests additional documentation are encouraged they can be made to the github repository we follow the colprac guide for collaborative practices https github com sciml colprac new contributor should make sure to read that guide all contributions and communications should abide by the julia community standards https julialang org community standards software contributions should follow the prevailing style within the code base if your pull request or issues are not getting responses within a few days do not hesitate to bump them by posting a comment such as any update on the status of this sometimes github notifications get lost support feel free to ask for help on the julia discourse forum https discourse julialang org or in the natural language channel on julia slack which you can join here https slackinvite julialang org you can also raise issues in this repository to request improvements to the documentation | nlp tokenization lexer information-retrieval data-mining | ai |
langkit | langkit langkit graphic static img langkit graphic png langkit is an open source text metrics toolkit for monitoring language models it offers an array of methods for extracting relevant signals from the input and or output text which are compatible with the open source data logging library whylogs https whylogs readthedocs io en latest want to experience langkit go to this notebook https github com whylabs langkit blob main langkit examples intro to langkit ipynb table of contents motivation motivation features features installation installation usage usage modules modules motivation productionizing language models including llms comes with a range of risks due to the infinite amount of input combinations which can elicit an infinite amount of outputs the unstructured nature of text poses a challenge in the ml observability space a challenge worth solving since the lack of visibility on the model s behavior can have serious consequences features the currently supported metrics include text quality https github com whylabs langkit blob main langkit docs features quality md readability score complexity and grade scores text relevance https github com whylabs langkit blob main langkit docs features relevance md similarity scores between prompt responses similarity scores against user defined themes security and privacy https github com whylabs langkit blob main langkit docs features security md patterns count of strings matching a user defined regex pattern group jailbreaks similarity scores with respect to known jailbreak attempts prompt injection similarity scores with respect to known prompt injection attacks refusals similarity scores with respect to known llm refusal of service responses sentiment and toxicity https github com whylabs langkit blob main langkit docs features sentiment md sentiment analysis toxicity analysis installation to install langkit use the python package index pypi as follows pip install langkit all usage langkit modules contain udfs that automatically wire into the collection of udfs on string features provided by whylogs by default all we have to do is import the langkit modules and then instantiate a custom schema as shown in the example below python import whylogs as why from langkit import llm metrics results why log prompt hello response world schema llm metrics init the code above will produce a set of metrics comprised of the default whylogs metrics for text features and all the metrics defined in the imported modules this profile can be visualized and monitored in the whylabs platform https whylabs ai safeguard large language models utm source github utm medium referral utm campaign langkit or they can be further analyzed by the user on their own accord more examples are available here https github com whylabs langkit tree main langkit examples modules you can have more information about the different modules and their metrics here https github com whylabs langkit blob main langkit docs modules md frequently asked questions you can check some frequently asked questions on our faqs section https github com whylabs langkit blob main langkit docs faq md | large-language-models machine-learning nlg nlp observability prompt-engineering prompt-injection | ai |
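The langkit example above logs a prompt/response pair but leaves implicit how to look at the resulting metric values. A small follow-up sketch, assuming the same default `llm_metrics` schema; the `.view().to_pandas()` call is standard whylogs usage rather than anything LangKit-specific:

```python
# Follow-up sketch: inspect the metrics LangKit computed for one prompt/response pair.
# Assumes the default llm_metrics schema shown in the example above.
import whylogs as why
from langkit import llm_metrics

schema = llm_metrics.init()
results = why.log({"prompt": "hello", "response": "world"}, schema=schema)

# whylogs profiles can be converted to a pandas DataFrame for a quick look;
# rows correspond to the logged columns/metrics with their summary statistics.
metrics_df = results.view().to_pandas()
print(metrics_df.head())
```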
expressNoteTakerNinthEdition | github repo size https img shields io github repo size stevensjones expressnotetakerninthedition project title expressnotetakerninthedition description an application that can be used to write save and delete notes equiped with an express backend and saves and retrieves note data from a json file table of contents title title description description table of contents tableofcontents prerequisites prerequisites tests tests contributing contributing usage usage license license further contact furthercontact prerequisites none tests none contributing steven jones usage anyone in need of an application that creates and deletes notes perhaps to organize one s thoughts and keep track of tasks one may need to complete license 2020 steven jones all rights reserved further contact feel free to reach out to me with questions involving this project on github at stevensjones https github com stevensjones | server |
|
polish-nlp-resources | polish nlp resources this repository contains pre trained models and language resources for natural language processing in polish created during my research some of the models are also available on huggingface hub https huggingface co sdadas if you d like to use any of those resources in your research please cite bibtex misc polish nlp resources author s l awomir dadas title a repository of polish nlp resources howpublished github year 2019 url https github com sdadas polish nlp resources contents word embeddings word embeddings word2vec word2vec fasttext fasttext glove glove high dimensional word vectors high dimensional word vectors compressed word2vec compressed word2vec wikipedia2vec wikipedia2vec language models language models elmo elmo roberta roberta bart bart gpt 2 gpt 2 longformer longformer text encoders text encoders machine translation models machine translation models convolutional models for fairseq convolutional models for fairseq t5 based models t5 based models fine tuned models fine tuned models dictionaries and lexicons dictionaries and lexicons links to external resources links to external resources repositories of linguistic tools and resources repositories of linguistic tools and resources publicly available large polish text corpora 1gb publicly available large polish text corpora 1gb models supporting polish language models supporting polish language word embeddings the following section includes pre trained word embeddings for polish each model was trained on a corpus consisting of polish wikipedia dump polish books and articles 1 5 billion tokens at total word2vec word2vec trained with gensim https radimrehurek com gensim 100 dimensions negative sampling contains lemmatized words with 3 or more ocurrences in the corpus and additionally a set of pre defined punctuation symbols all numbers from 0 to 10 000 polish forenames and lastnames the archive contains embedding in gensim binary format example of usage python from gensim models import keyedvectors if name main word2vec keyedvectors load word2vec polish bin print word2vec similar by word bierut cyrankiewicz 0 818274736404419 gomu ka 0 7967918515205383 raczkiewicz 0 7757788896560669 jaruzelski 0 7737460732460022 pu ak 0 7667238712310791 download github https github com sdadas polish nlp resources releases download v1 0 word2vec zip fasttext fasttext trained with gensim https radimrehurek com gensim vocabulary and dimensionality is identical to word2vec model the archive contains embedding in gensim binary format example of usage python from gensim models import keyedvectors if name main word2vec keyedvectors load fasttext 100 3 polish bin print word2vec similar by word bierut bieruty 0 9290274381637573 gierut 0 8921363353729248 bieruta 0 8906412124633789 bierutow 0 8795544505119324 bierutowsko 0 839280366897583 download onedrive https witedupl my sharepoint com u g personal dadass wit edu pl eeodv cq0ktaupma0e9iilebmtvvw4ozabbpuaxumfd8ea e 5naf5z glove global vectors for word representation glove trained using the reference implementation from stanford nlp 100 dimensions contains lemmatized words with 3 or more ocurrences in the corpus example of usage python from gensim models import keyedvectors if name main word2vec keyedvectors load word2vec format glove 100 3 polish txt print word2vec similar by word bierut cyrankiewicz 0 8335597515106201 gomu ka 0 7793121337890625 bieruta 0 7118682861328125 jaruzelski 0 6743760108947754 minc 0 6692837476730347 download github https github com sdadas 
polish nlp resources releases download v1 0 glove zip high dimensional word vectors pre trained vectors using the same vocabulary as above but with higher dimensionality these vectors are more suitable for representing larger chunks of text such as sentences or documents using simple word aggregation methods averaging max pooling etc as more semantic information is preserved this way glove 300d part 1 github https github com sdadas polish nlp resources releases download glove hd glove 300 3 polish zip 001 500d part 1 github https github com sdadas polish nlp resources releases download glove hd glove 500 3 polish zip 001 part 2 github https github com sdadas polish nlp resources releases download glove hd glove 500 3 polish zip 002 800d part 1 github https github com sdadas polish nlp resources releases download glove hd glove 800 3 polish zip 001 part 2 github https github com sdadas polish nlp resources releases download glove hd glove 800 3 polish zip 002 part 3 github https github com sdadas polish nlp resources releases download glove hd glove 800 3 polish zip 003 word2vec 300d onedrive https witedupl my sharepoint com u g personal dadass wit edu pl eq7qa6pkpupbtzyyp8kaafmb0z9fdhfme7kxm tcrwh9ha e rgekmu 500d onedrive https witedupl my sharepoint com u g personal dadass wit edu pl efbt7wvy7evhuqziuupnjzsbxtn2l896ldvvhrbicmuh a e f0lgvc 800d onedrive https witedupl my sharepoint com u g personal dadass wit edu pl eda0vukicnpnk4omf2eolzkbtjbmtmymkqqz yoexw98ta e rku4pp fasttext 300d onedrive https witedupl my sharepoint com u g personal dadass wit edu pl esj0xtxmtk5jhiocp5oxt7ibuumaejczfwvqn17c2qngcg e 9aory9 500d onedrive https witedupl my sharepoint com u g personal dadass wit edu pl evivrrf38fjmv1ihx2ardnebffoe mlsdhccmg49iqeccq e g36nj7 800d onedrive https witedupl my sharepoint com u g personal dadass wit edu pl eshkej7jlglhoaikiydl0nkb z8vjyfcehx3tpe7l1knfg e fkobga compressed word2vec this is a compressed version of the word2vec embedding model described above for compression we used the method described in compressing word embeddings via deep compositional code learning https arxiv org abs 1711 01068 by shu and nakayama compressed embeddings are suited for deployment on storage poor devices such as mobile phones the model weights 38mb only 4 4 size of the original word2vec embeddings although the authors of the article claimed that compressing with their method doesn t hurt model performance we noticed a slight but acceptable drop of accuracy when using compressed version of embeddings sample decoder class with usage python import gzip from typing import dict callable import numpy as np class compressedembedding object def init self vocab path str embedding path str to lowercase bool true self vocab path str vocab path self embedding path str embedding path self to lower bool to lowercase self vocab dict str int self load vocab vocab path embedding np load embedding path self codes np ndarray embedding embedding files 0 self codebook np ndarray embedding embedding files 1 self m self codes shape 1 self k int self codebook shape 0 self m self dim int self codebook shape 1 def load vocab self vocab path str dict str int open func callable gzip open if vocab path endswith gz else open with open func vocab path rt encoding utf 8 as input file return line strip idx for idx line in enumerate input file def vocab vector self word str if word pad return np zeros self dim val str word lower if self to lower else word index int self vocab get val self vocab unk codes self codes index code 
indices np array idx self k offset for idx offset in enumerate np nditer codes return np sum self codebook code indices axis 0 if name main word2vec compressedembedding word2vec 100 3 vocab gz word2vec 100 3 compressed npz print word2vec vocab vector bierut download github https github com sdadas polish nlp resources releases download v1 0 compressed zip wikipedia2vec wikipedia2vec https wikipedia2vec github io is a toolkit for learning joint representations of words and wikipedia entities we share polish embeddings learned using a modified version of the library in which we added lemmatization and fixed some issues regarding parsing wiki dumps for languages other than english embedding models are available in sizes from 100 to 800 dimensions a simple example python from wikipedia2vec import wikipedia2vec wiki2vec wikipedia2vec load wiki2vec plwiki 100 bin print wiki2vec most similar wiki2vec get entity boles aw bierut entity boles aw bierut 1 0 word bierut 0 75790733 word gomu ka 0 7276504 entity krajowa rada narodowa 0 7081445 entity w adys aw gomu ka 0 7043667 download embeddings 100d https witedupl my sharepoint com u g personal dadass wit edu pl ee dfnilujxcihmfujrsqzubbpst44eyctmanpb tq ykw e zzwiuf 300d https witedupl my sharepoint com u g personal dadass wit edu pl ewbzb1a89yjjku3vzfpobtub5wtnaqisznkt2aaksp6xdq e hhxsf0 500d https witedupl my sharepoint com u g personal dadass wit edu pl erysjueo dlkpubbv a86 0brhdb88tjgr wtzbkxhfjg e bpjh80 800d https witedupl my sharepoint com u g personal dadass wit edu pl eqjt8qyrmlfeqtc 1zdoi54bzoqxilvoqibhra9euiov7w e slfqri language models elmo embeddings from language models elmo is a contextual embedding presented in deep contextualized word representations https arxiv org abs 1802 05365 by peters et al sample usage with pytorch below for a more detailed instructions for integrating elmo with your model please refer to the official repositories github com allenai bilm tf https github com allenai bilm tf tensorflow and github com allenai allennlp https github com allenai allennlp pytorch python from allennlp commands elmo import elmoembedder elmo elmoembedder options json weights hdf5 print elmo embed sentence za ci g l ja download github https github com sdadas polish nlp resources releases download v1 0 elmo zip roberta language model for polish based on popular transformer architecture we provide weights for improved bert language model introduced in roberta a robustly optimized bert pretraining approach https arxiv org pdf 1907 11692 pdf we provide two roberta models for polish base and large model a summary of pre training parameters for each model is shown in the table below we release two version of the each model one in the fairseq https github com pytorch fairseq format and the other in the huggingface transformers https github com huggingface transformers format more information about the models can be found in a separate repository https github com sdadas polish roberta table thead th model th th l h a th th batch size th th update steps th th corpus size th th fairseq th th transformers th thead tr td roberta nbsp base td td 12 nbsp nbsp 768 nbsp nbsp 12 td td 8k td td 125k td td 20gb td td a href https github com sdadas polish roberta releases download models roberta base fairseq zip v0 9 0 a td td a href https github com sdadas polish roberta releases download models transformers v3 4 0 roberta base transformers zip v3 4 a td tr tr td roberta 8209 v2 nbsp base td td 12 nbsp nbsp 768 nbsp nbsp 12 td td 8k td td 400k td td 20gb 
td td a href https github com sdadas polish roberta releases download models v2 roberta base fairseq zip v0 10 1 a td td a href https github com sdadas polish roberta releases download models v2 roberta base transformers zip v4 4 a td tr tr td roberta nbsp large td td 24 nbsp nbsp 1024 nbsp nbsp 16 td td 30k td td 50k td td 135gb td td a href https github com sdadas polish roberta releases download models roberta large fairseq zip v0 9 0 a td td a href https github com sdadas polish roberta releases download models transformers v3 4 0 roberta large transformers zip v3 4 a td tr tr td roberta 8209 v2 nbsp large td td 24 nbsp nbsp 1024 nbsp nbsp 16 td td 2k td td 400k td td 200gb td td a href https github com sdadas polish roberta releases download models v2 roberta large fairseq zip v0 10 2 a td td a href https github com sdadas polish roberta releases download models v2 roberta large transformers zip v4 14 a td tr tr tr td distilroberta td td 6 nbsp nbsp 768 nbsp nbsp 12 td td 1k td td 10ep td td 20gb td td n a td td a href https github com sdadas polish roberta releases download models v2 distilroberta transformers zip v4 13 a td tr table l the number of encoder blocks h hidden size a the number of attention heads br example in fairseq python import os from fairseq models roberta import robertamodel robertahubinterface from fairseq import hub utils model path roberta large fairseq loaded hub utils from pretrained model name or path model path data name or path model path bpe sentencepiece sentencepiece vocab os path join model path sentencepiece bpe model load checkpoint heads true archive map robertamodel hub models cpu true roberta robertahubinterface loaded args loaded task loaded models 0 roberta eval roberta fill mask druga wojna wiatowa zako czy a si w mask roku topk 1 roberta fill mask ludzie najbardziej boj si mask topk 1 druga wojna wiatowa zako czy a si w 1945 roku 0 9345270991325378 1945 ludzie najbardziej boj si mierci 0 14140743017196655 mierci it is recommended to use the above models but it is still possible to download our old model https github com sdadas polish nlp resources releases download roberta roberta zip trained on smaller batch size 2k and smaller corpus 15gb bart bart is a transformer based sequence to sequence model trained with a denoising objective can be used for fine tuning on prediction tasks just like regular bert as well as various text generation tasks such as machine translation summarization paraphrasing etc we provide a polish version of bart base model trained on a large corpus of texts extracted from common crawl 200 gb more information on the bart architecture can be found in bart denoising sequence to sequence pre training for natural language generation translation and comprehension https arxiv org abs 1910 13461 example in hugginface transformers python import os from transformers import bartforconditionalgeneration pretrainedtokenizerfast model dir bart base transformers tok pretrainedtokenizerfast tokenizer file os path join model dir tokenizer json model bartforconditionalgeneration from pretrained model dir sent druga mask wiatowa zako czy a si w mask roku kapitulacj hitlerowskich mask batch tok sent return tensors pt generated ids model generate batch input ids print tok batch decode generated ids skip special tokens true druga wojna wiatowa zako czy a si w 1945 roku kapitulacj hitlerowskich niemiec download for fairseq v0 10 https github com sdadas polish nlp resources releases download bart base bart base fairseq zip or huggingface 
transformers v4 0 https github com sdadas polish nlp resources releases download bart base bart base transformers zip gpt 2 gpt 2 is a unidirectional transformer based language model trained with an auto regressive objective originally introduced in the language models are unsupervised multitask learners https d4mucfpksywv cloudfront net better language models language models are unsupervised multitask learners pdf paper the original english gpt 2 was released in four sizes differing by the number of parameters small 112m medium 345m large 774m xl 1 5b models for huggingface transformers we provide polish gpt 2 models for huggingface transformers the models have been trained using megatron lm https github com nvidia megatron lm library and then converted to the huggingface format the released checkpoints support longer contexts than the original gpt 2 by openai small and medium models support up to 2048 tokens twice as many as gpt 2 models and the same as gpt 3 large and xl models support up to 1536 tokens example in transformers python from transformers import pipeline generator pipeline text generation model sdadas polish gpt2 medium results generator policja skontrolowa a trze wo kierowc w max new tokens 1024 do sample true repetition penalty 1 2 num return sequences 1 num beams 1 temperature 0 95 top k 50 top p 0 95 print results 0 get generated text policja skontrolowa a trze wo kierowc w teraz policjanci przypominaj kierowcom o zachowaniu bezpiecznej odleg o ci i rodkach ostro no ci zwi zanych z pandemi kieruj cy po spo yciu alkoholu s bardziej wyczuleni na innych uczestnik w ruchu drogowego oraz maj wi ksz sk onno do brawury i ryzykownego zachowania zw aszcza wobec pieszych dodatkowo nie zawsze pami taj oni zasady obowi zuj cych u nas przepis w prawa reguluj cych kwestie dotycz ce odpowiedzialno ci small https huggingface co sdadas polish gpt2 small medium https huggingface co sdadas polish gpt2 medium large https huggingface co sdadas polish gpt2 large and xl https huggingface co sdadas polish gpt2 xl models are available on the huggingface hub models for fairseq we provide polish versions of the medium and large gpt 2 models trained using fairseq library example in fairseq python import os from fairseq import hub utils from fairseq models transformer lm import transformerlanguagemodel model dir gpt2 medium fairseq loaded hub utils from pretrained model name or path model dir checkpoint file model pt data name or path model dir bpe hf byte bpe bpe merges os path join model dir merges txt bpe vocab os path join model dir vocab json load checkpoint heads true archive map transformerlanguagemodel hub models model hub utils generatorhubinterface loaded args loaded task loaded models model eval result model sample policja skontrolowa a trze wo kierowc w beam 5 sampling true sampling topk 50 sampling topp 0 95 temperature 0 95 max len a 1 max len b 100 no repeat ngram size 3 print result 0 policja skontrolowa a trze wo kierowc w pojazd w wszystko dzia o si na drodze gminnej mi dzy radwanowem a boguchowem oko o godziny 12 30 do naszego komisariatu zg osi si kierowca kt rego zaniepokoi o zachowanie kieruj cego w chwili wjazdu na t drog prawdopodobnie nie mia zapi tych pas w informuje st asp anna w grzyniak z policji w brzezinach okaza o si e kieruj cy by pod wp ywem alkoholu download medium https github com sdadas polish nlp resources releases download gpt 2 gpt2 medium fairseq 7z or large https github com sdadas polish nlp resources releases download gpt 2 gpt2 large fairseq 7z model for 
fairseq v0 10 longformer one of the main constraints of standard transformer architectures is the limitation on the number of input tokens there are several known models that allow processing of long documents one of the popular ones being longformer introduced in the paper longformer the long document transformer https arxiv org abs 2004 05150 we provide base and large versions of polish longformer model the models were initialized with polish roberta v2 weights and then fine tuned on a corpus of long documents ranging from 1024 to 4096 tokens example in huggingface transformers python from transformers import pipeline fill mask pipeline fill mask model sdadas polish longformer base 4096 fill mask stolica oraz najwi ksze miasto francji to mask base https huggingface co sdadas polish longformer base 4096 and large https huggingface co sdadas polish longformer large 4096 models are available on the huggingface hub text encoders the purpose of text encoders is to produce a fixed length vector representation for chunks of text such as sentences or paragraphs these models are used in semantic search question answering document clustering dataset augmentation plagiarism detection and other tasks which involve measuring semantic similarity or relatedness between text passages paraphrase mining and semantic textual similarity we share two models based on the sentence transformers https www sbert net library trained using distillation method described in the paper making monolingual sentence embeddings multilingual using knowledge distillation https arxiv org abs 2004 09813 a corpus of 100 million parallel polish english sentence pairs from the opus https opus nlpl eu project was used to train the models you can download them from the hugginface hub using the links below table thead th student model th th teacher model th th download th thead tr td polish roberta base v2 td td paraphrase distilroberta base v2 td td a href https huggingface co sdadas st polish paraphrase from distilroberta st polish paraphrase from distilroberta a td tr tr td polish roberta base v2 td td paraphrase mpnet base v2 td td a href https huggingface co sdadas st polish paraphrase from mpnet st polish paraphrase from mpnet a td tr table a simple example in sentence transformers library python from sentence transformers import sentencetransformer from sentence transformers util import cos sim sentences bardzo lubi je s odycze uwielbiam zajada si s odko ciami model sentencetransformer sdadas st polish paraphrase from mpnet results model encode sentences convert to tensor true show progress bar false print cos sim results 0 results 1 tensor 0 9794 device cuda 0 information retrieval mmlw musz mie lepsz wiadomo is a set of text encoders trained using multilingual knowledge distillation method https arxiv org abs 2004 09813 on a diverse corpus of 60 million polish english text pairs which included both sentence and paragraph aligned translations the encoders are available in sentence transformers https www sbert net format we provide five encoders optimized for text information retrieval tasks the models were trained using a two step process in the first step the encoders were initialized with polish roberta and multilingual e5 checkpoints and then distilled utilising english bge as a teacher model the second step involved fine tuning the obtained models on polish ms marco https huggingface co datasets clarin knext msmarco pl dataset with contrastrive loss in the table below we present the details of the released models table 
thead th student model th th teacher model th th pirb br ndcg 10 th th download th thead tr td colspan 4 strong encoders based on polish roberta strong td tr tr td a href https huggingface co sdadas polish roberta base v2 polish roberta base v2 a td td a href https huggingface co baai bge base en bge base en a td td 56 24 td td a href https huggingface co sdadas mmlw retrieval roberta base mmlw retrieval roberta base a td tr tr td a href https huggingface co sdadas polish roberta large v2 polish roberta large v2 a td td a href https huggingface co baai bge large en bge large en a td td 58 15 td td a href https huggingface co sdadas mmlw retrieval roberta large mmlw retrieval roberta large a td tr tr td colspan 4 strong encoders based on multilingual e5 strong td tr tr td a href https huggingface co intfloat multilingual e5 small multilingual e5 small a td td a href https huggingface co baai bge small en bge small en a td td 52 34 td td a href https huggingface co sdadas mmlw retrieval e5 small mmlw retrieval e5 small a td tr tr td a href https huggingface co intfloat multilingual e5 base multilingual e5 base a td td a href https huggingface co baai bge base en bge base en a td td 56 09 td td a href https huggingface co sdadas mmlw retrieval e5 base mmlw retrieval e5 base a td tr tr td a href https huggingface co intfloat multilingual e5 large multilingual e5 large a td td a href https huggingface co baai bge large en bge large en a td td 58 05 td td a href https huggingface co sdadas mmlw retrieval e5 large mmlw retrieval e5 large a td tr table please note that the developed models require the use of specific prefixes and suffixes when encoding texts for roberta based encoders each query should be preceded by the prefix zapytanie and no prefix is needed for passages for e5 based models queries should be prefixed with query and passages with passage an example of how to use the models python from sentence transformers import sentencetransformer from sentence transformers util import cos sim query prefix zapytanie zapytanie for roberta query for e5 answer prefix empty for roberta passage for e5 queries query prefix jak do y 100 lat answers answer prefix trzeba zdrowo si od ywia i uprawia sport answer prefix trzeba pi alkohol imprezowa i je dzi szybkimi autami answer prefix gdy trwa a kampania politycy zapewniali e rozprawi si z zakazem niedzielnego handlu model sentencetransformer sdadas mmlw retrieval roberta base queries emb model encode queries convert to tensor true show progress bar false answers emb model encode answers convert to tensor true show progress bar false best answer cos sim queries emb answers emb argmax item print answers best answer trzeba zdrowo si od ywia i uprawia sport machine translation models this section includes pre trained machine translation models convolutional models for fairseq we provide polish english and english polish convolutional neural machine translation models trained using fairseq https github com pytorch fairseq sequence modeling toolkit both models were trained on a parallel corpus of more than 40 million sentence pairs taken from opus http opus nlpl eu collection example of usage fairseq sacremoses and subword nmt python packages are required to run this example python from fairseq models import basefairseqmodel model path polish english model basefairseqmodel from pretrained model name or path model path checkpoint file checkpoint best pt data name or path model path tokenizer moses bpe subword nmt bpe codes code cpu true print model translate 
sentence zesp astronom w odkry w konstelacji panny niezwyk planet beam 5 a team of astronomers discovered an extraordinary planet in the constellation of virgo polish english convolutional model download github https github com sdadas polish nlp resources releases download nmt models conv polish english conv zip english polish convolutional model download github https github com sdadas polish nlp resources releases download nmt models conv english polish conv zip t5 based models we share mt5 and flan t5 models fine tuned for polish english and english polish translation the models were trained on 70 million sentence pairs from opus http opus nlpl eu you can download them from the hugginface hub using the links below an example of how to use the models python from transformers import pipeline generator pipeline translation model sdadas flan t5 base translator en pl sentence a team of astronomers discovered an extraordinary planet in the constellation of virgo print generator sentence max length 512 translation text zesp astronom w odkry niezwyk planet w gwiazdozbiorze panny the following models are available on the huggingface hub mt5 base translator en pl https huggingface co sdadas mt5 base translator en pl mt5 base translator pl en https huggingface co sdadas mt5 base translator pl en flan t5 base translator en pl https huggingface co sdadas flan t5 base translator en pl fine tuned models byt5 text correction a small multilingual utility model intended for simple text correction it is designed to improve the quality of texts from the web often lacking punctuation or proper word capitalization the model was trained to perform three types of corrections restoring punctuation in sentences restoring word capitalization and restoring diacritical marks for languages that include them the following languages are supported belarusian be danish da german de greek el english en spanish es french fr italian it dutch nl polish pl portuguese pt romanian ro russian ru slovak sk swedish sv ukrainian uk the model takes as input a sentence preceded by a language code prefix for example python from transformers import pipeline generator pipeline text2text generation model sdadas byt5 text correction sentences pl ciekaw jestem na co licza onuce stawiajace na sykulskiego w nadziei na zwrot ku rosji de die frage die sich die europ er stellen m ssen lautet ist es in unserem interesse die krise auf taiwan zu beschleunigen ru 26 1910 generator sentences max length 512 ciekaw jestem na co licz onuce stawiaj ce na sykulskiego w nadziei na zwrot ku rosji die frage die sich die europ er stellen m ssen lautet ist es in unserem interesse die krise auf taiwan zu beschleunigen 26 1910 the model is available on the huggingface hub byt5 text correction https huggingface co sdadas byt5 text correction dictionaries and lexicons polish english and foreign person names this lexicon contains 346 thousand forenames and lastnames labeled as polish english or foreign other crawled from multiple internet sources possible labels are p n polish forename p l polish lastname e n english forename e l english lastname f foreign other for each word there is an additional flag indicating whether this name is also used as a common word in polish c for common u for uncommon download github lexicons names named entities extracted from sjp pl this dictionary consists mostly of the names of settlements geographical regions countries continents and words derived from them relational adjectives and inhabitant names besides that it also contains 
names of popular brands companies and common abbreviations of institutions names this resource was created in a semi automatic way by extracting the words and their forms from sjp pl using a set of heuristic rules and then manually filtering out words that weren t named entities download github lexicons named sjp links to external resources repositories of linguistic tools and resources computational linguistics in poland ipi pan http clip ipipan waw pl lrt g4 19 research group wroclaw university of technology http nlp pwr wroc pl narzedzia i zasoby clarin repository of linguistic resources https clarin pl eu dspace gonito net evaluation platform with some challenges for polish https gonito net awesome nlp polish ksopyla https github com ksopyla awesome nlp polish publicly available large polish text corpora 1gb oscar corpus common crawl extract https oscar corpus com cc 100 web crawl data common crawl extract http data statmt org cc 100 the polish parliamentary corpus http clip ipipan waw pl ppc redistributable subcorpora of the national corpus of polish http zil ipipan waw pl distrnkjp polish wikipedia dumps https dumps wikimedia org plwiki opus parallel corpora https opus nlpl eu corpus from poleval 2018 language modeling task http 2018 poleval pl index php tasks c4 and mc4 corpora contains 180gb of compressed polish text https huggingface co datasets allenai c4 nllb parallel corpus 1613 language pairs of which 43 include polish https huggingface co datasets allenai nllb culturax a combination of mc4 and oscar corpora cleaned and deduplicated https huggingface co datasets uonlp culturax models supporting polish language sentence analysis tokenization lemmatization pos tagging etc spacy https spacy io models pl a popular library for nlp in python which includes polish models for sentence analysis stanza https stanfordnlp github io stanza a collection of neural nlp models for many languages from stndordnlp trankit https github com nlp uoregon trankit a light weight transformer based python toolkit for multilingual natural language processing by the university of oregon krnnt https github com kwrobel nlp krnnt and kftt https github com kwrobel nlp kftt neural morphosyntactic taggers for polish morfeusz http morfeusz sgjp pl a classic polish morphosyntactic tagger language tool https github com languagetool org languagetool java based open source proofreading software for many languages with sentence analysis tools included stempel https github com dzieciou pystempel algorythmic stemmer for polish polemma https huggingface co amu cai polemma large plt5 based lemmatizer of named entities and multi word expressions for polish available in small https huggingface co amu cai polemma small base https huggingface co amu cai polemma base and large https huggingface co amu cai polemma large sizes machine translation marian nmt https marian nmt github io an efficient c based implementation of neural translation models many pre trained models are available including those supporting polish pl de https huggingface co helsinki nlp opus mt pl de pl en https huggingface co helsinki nlp opus mt pl en pl es https huggingface co helsinki nlp opus mt pl es pl fr https huggingface co helsinki nlp opus mt pl fr pl sv https huggingface co helsinki nlp opus mt pl sv de pl https huggingface co helsinki nlp opus mt de pl es pl https huggingface co helsinki nlp opus mt es pl fr pl https huggingface co helsinki nlp opus mt fr pl m2m https github com pytorch fairseq tree master examples m2m 100 2021 a single massive 
machine translation architecture supporting direct translation for any pair from the list of 100 languages details in the paper beyond english centric multilingual machine translation https arxiv org pdf 2010 11125 pdf mbart 50 https huggingface co facebook mbart large 50 many to many mmt 2021 a multilingual bart model fine tuned for machine translation in 50 languages three machine translation models were published many to many https huggingface co facebook mbart large 50 many to many mmt english to many https huggingface co facebook mbart large 50 one to many mmt and many to english https huggingface co facebook mbart large 50 many to one mmt for more information see multilingual translation with extensible multilingual pretraining and finetuning https arxiv org abs 2008 00401 nllb https github com facebookresearch fairseq tree nllb 2022 nllb no language left behind is a project by meta ai aiming to provide machine translation models for over 200 languages a set of multilingual neural models ranging from 600m to 54 5b parameters is available for download for more details see no language left behind scaling human centered machine translation https research facebook com publications no language left behind language models multilingual bert https github com google research bert blob master multilingual md 2018 bert bidirectional encoder representations from transformers is a model for generating contextual word representations multilingual cased model provided by google supports 104 languages including polish xlm roberta https github com pytorch fairseq tree master examples xlmr 2019 cross lingual language model trained on 2 5 terabytes of data from commoncrawl and wikipedia supports 100 languages including polish see unsupervised cross lingual representation learning at scale https arxiv org pdf 1911 02116 pdf for details slavic bert https github com deepmipt slavic bert ner slavic bert 2019 multilingual bert model supporting bulgarian bg czech cs polish pl and russian ru languages mt5 https github com google research multilingual t5 2020 google s text to text transformer for 101 languages based on the t5 architecture details in the paper mt5 a massively multilingual pre trained text to text transformer https arxiv org abs 2010 11934 herbert https huggingface co allegro 2020 polish bert based language model trained by allegro for huggingface transformers in base https huggingface co allegro herbert base cased and large https huggingface co allegro herbert large cased variant plt5 https huggingface co allegro plt5 large 2021 polish version of the t5 model available in small https huggingface co allegro plt5 small base https huggingface co allegro plt5 base and large https huggingface co allegro plt5 large sizes byt5 https huggingface co docs transformers model doc byt5 2021 a multilignual sequence to sequence model similar to t5 but using raw byte sequences as inputs instead of subword tokens introduced in the paper byt5 towards a token free future with pre trained byte to byte models https arxiv org abs 2105 13626 xlm roberta xl and xxl https github com pytorch fairseq blob main examples xlmr readme md 2021 large scale versions of xlm roberta models with 3 5 and 10 7 billion parameters respectively for more information see larger scale transformers for multilingual masked language modeling https arxiv org pdf 2105 00572 pdf mluke https huggingface co docs transformers model doc mluke 2021 a multilingual version of luke transformer based language model enriched with entity metadata the 
model supports 24 languages including polish for more information see mluke the power of entity representations in multilingual pretrained language models https arxiv org pdf 2110 08151 pdf xglm https huggingface co facebook xglm 4 5b 2021 a gpt style autoregressive transformer language model trained on a large scale multilingual corpus the model was published in several sizes but only the 4 5b variant includes polish language for more information see few shot learning with multilingual language models https arxiv org abs 2112 10668 papugapt2 https huggingface co flax community papugapt2 2021 polish gpt like autoregressive models available in base https huggingface co flax community papugapt2 and large https huggingface co flax community papugapt2 large sizes mgpt https huggingface co sberbank ai mgpt 2022 another multilingual gpt style model with 1 3b parameters covering 60 languages the model has been trained by sberbank ai for more information see mgpt few shot learners go multilingual https arxiv org abs 2204 07580 flan t5 https huggingface co docs transformers model doc flan t5 2022 an improved version of t5 model fine tuned on a broad set of downstream tasks in multiple languages flan t5 models can be used in zero shot and few shot scenarios they can also be further fine tuned for specific task for more information see scaling instruction finetuned language models https arxiv org pdf 2210 11416 pdf xlm v https huggingface co facebook xlm v base 2023 a multilingual transformer based language model utilising large vocabulary of 1 million tokens which brings significant improvements on downstream tasks for some languages apart from a larger vocabulary the model s architecture is similar to previously published xlm r models for more information see xlm v overcoming the vocabulary bottleneck in multilingual masked language models https arxiv org abs 2301 10472 umt5 https console cloud google com storage browser t5 data pretrained models t5x umt5 xxl 2023 an improved mt5 model trained using a more uniform language distribution for more information see unimax fairer and more effective language sampling for large scale multilingual pretraining https arxiv org pdf 2304 09151 pdf mlongt5 https console cloud google com storage browser t5 data pretrained models t5x mlongt5 2023 a multilingual version of longt5 which is an extension of the t5 model that handles long inputs of up to 16k tokens supports 101 languages including polish for more information see mlongt5 a multilingual and efficient text to text transformer for longer sequences https arxiv org pdf 2305 11129 pdf sentence encoders universal sentence encoder https tfhub dev google universal sentence encoder multilingual large 1 2019 use universal sentence encoder generates sentence level langauge representations pre trained multilingual model supports 16 langauges arabic chinese simplified chinese traditional english french german italian japanese korean dutch polish portuguese spanish thai turkish russian laser language agnostic sentence representations https github com facebookresearch laser 2019 a multilingual sentence encoder by facebook research supporting 93 languages labse https tfhub dev google labse 1 2020 language agnostic bert sentence embedding model supporting 109 languages see language agnostic bert sentence embedding https arxiv org abs 2007 01852 for details sentence transformers https github com ukplab sentence transformers 2020 sentence level models based on the transformer architecture the library includes multilingual 
models supporting polish more information on multilingual knowledge distillation method used by the authors can be found in making monolingual sentence embeddings multilingual using knowledge distillation https arxiv org abs 2004 09813 laser2 and laser3 https github com facebookresearch laser blob main nllb readme md 2022 new versions of the laser sentence encoder by meta ai developed as a part of the nllb no language left behind project laser2 supports the same set of languages as the first version of the encoder which includes polish laser3 adds support to less common languages mostly low resource african languages see bitext mining using distilled sentence representations for low resource languages https arxiv org pdf 2205 12654 pdf for more details e5 https huggingface co intfloat multilingual e5 large 2022 a general purpose text encoder which can be applied to a variety of tasks such as information retrieval semantic textual similarity text reranking or clustering the models were trained using a large dataset of text pairs extracted from commoncrawl recently multilingual e5 models supporting polish have been published in small https huggingface co intfloat multilingual e5 small base https huggingface co intfloat multilingual e5 base and large https huggingface co intfloat multilingual e5 large versions see text embeddings by weakly supervised contrastive pre training https arxiv org abs 2212 03533 for more details optical character recognition ocr easy ocr https github com jaidedai easyocr optical character recognition toolkit with pre trained models for over 40 languages including polish tesseract https github com tesseract ocr tesseract popular ocr software developed since 1980s supporting over 100 languages for integration with python wrappers such as pytesseract https github com madmaze pytesseract or ocrmypdf https github com ocrmypdf ocrmypdf can be used speech processing speech recognition text to speech voice cloning etc quartznet nvidia nemo https catalog ngc nvidia com orgs nvidia teams nemo models stt pl quartznet15x5 2021 nvidia nemo is a toolkit for building conversational ai models apart from the framework itself nvidia also published many models trained using their code which includes a speech recognition model for polish based on quartznet architecture xls r https huggingface co facebook wav2vec2 xls r 300m 2021 xls r is a multilingual version of wav2vec 2 0 model by meta ai which is a large scale pre trained model for speech processing the model is trained in a self supervised way so it needs to be fine tuned for solving specific tasks such as asr several fine tuned checkpoints for polish speech recognition exist on the huggingface hub e g wav2vec2 large xlsr 53 polish https huggingface co jonatasgrosman wav2vec2 large xlsr 53 polish m ctc t https huggingface co speechbrain m ctc t large 2021 a speech recognition model from meta ai supporting 60 languages including polish for more information see pseudo labeling for massively multilingual speech recognition https arxiv org abs 2111 00161 whisper https github com openai whisper 2022 whisper is a model released by openai for asr and other speech related tasks supporting 82 languages the model is available in five sizes tiny 39m params base 74m small 244m medium 769m and large 1 5b more information can be found in the paper robust speech recognition via large scale weak supervision https cdn openai com papers whisper pdf mms https github com facebookresearch fairseq blob main examples mms readme md 2023 massively 
multilingual speech mms is a large scale multilingual speech foundation model published by meta ai along with the pre trained models they also released checkpoints fine tuned for specific tasks such as speech recognition https github com facebookresearch fairseq blob main examples mms readme md asr text to speech https github com facebookresearch fairseq blob main examples mms readme md tts and language identification https github com facebookresearch fairseq blob main examples mms readme md lid for more information see scaling speech technology to 1 000 languages https research facebook com publications scaling speech technology to 1000 languages seamlessm4t https github com facebookresearch seamless communication 2023 multilingual and multitask model trained on text and speech data it covers almost 100 languages including polish and can perform automatic speech recognition asr as well as multimodal translation tasks across languages speech to text text to speech speech to speech text to text see seamlessm4t massively multilingual multimodal machine translation https dl fbaipublicfiles com seamless seamless m4t paper pdf for more details sonar https github com facebookresearch sonar supported languages and download links 2023 multilingual embeddings for speech and text with a set of additional models fine tuned for specific tasks such as text translation speech to text translation or cross lingual semantic similarity see sonar sentence level multimodal and language agnostic representations https ai meta com research publications sonar sentence level multimodal and language agnostic representations for more details xtts https huggingface co coqui xtts v1 2023 a text to speech model that allows voice cloning by using just a 3 second audio sample of the target voice supports 13 languages including polish multimodal models multilingual clip sbert https huggingface co sentence transformers clip vit b 32 multilingual v1 2021 clip contrastive language image pre training is a neural network introducted by openai https github com openai clip which enables joint vector representations for images and text it can be used for building image search engines this is a multilingual version of clip trained by the authors of the sentence transformers https www sbert net library multilingual clip m clip https huggingface co m clip m bert base vit b 2021 this is yet another multilingual version of clip supporting polish language trained by the swedish institute of computer science sics layoutxlm https huggingface co microsoft layoutxlm base 2021 a multilingual version of layoutlmv2 https huggingface co docs transformers model doc layoutlmv2 model pre trained on 30 million documents in 53 languages the model combines visual spatial and textual modalities to solve prediction problems on visually rich documents such as pdfs or docs see layoutlmv2 multi modal pre training for visually rich document understanding https arxiv org abs 2012 14740 and layoutxlm multimodal pre training for multilingual visually rich document understanding https arxiv org abs 2104 08836 for details | natural-language-processing polish-language polish word-embedding lexicons machine-learning language-models | ai |
computer-vision-tasks | computer vision tasks | stereo-calibration motion-tracking background-subtraction | ai |
IoTagent-LoRaWAN | fiware iot agent for the lorawan protocol fiware iot agents https nexus lab fiware org static badges chapters iot agents svg https www fiware org developers catalogue license apgl https img shields io github license atos research and innovation iotagent lorawan svg https opensource org licenses agpl 3 0 docker https img shields io docker pulls fiware iotagent lorawan svg https hub docker com r fiware iotagent lorawan https img shields io badge tag fiware iot orange svg logo stackoverflow https stackoverflow com questions tagged fiware iot br documentation badge https img shields io readthedocs fiware lorawan svg http fiware lorawan readthedocs io en latest badge latest build status https github com atos research and innovation iotagent lorawan actions workflows test yml badge svg https github com atos research and innovation iotagent lorawan actions query branch 3amaster coverage status https coveralls io repos github atos research and innovation iotagent lorawan badge svg branch master https coveralls io github atos research and innovation iotagent lorawan branch master status https nexus lab fiware org static badges statuses iot lorawan svg cii best practices https bestpractices coreinfrastructure org projects 4827 badge https bestpractices coreinfrastructure org projects 4827 the internet of things agent for lorawan protocol enables data and commands to be exchanged between iot devices and the ngsi https swagger lab fiware org url https raw githubusercontent com fiware specifications master openapi ngsiv2 ngsiv2 openapi json interface of a context broker using the lorawan https lora alliance org about lorawan protocol it is based on the iot agent node js library https github com telefonicaid iotagent node lib further general information about the fiware iot agents framework its architecture and the common interaction model can be found in the library s github repository this project is part of fiware https www fiware org for more information check the fiware catalogue entry for the iot agents https github com fiware catalogue tree master iot agents books documentation https fiware lorawan readthedocs io mortar board academy https fiware academy readthedocs io en latest iot agents idas whale docker hub https hub docker com r fiware iotagent lorawan dart roadmap docs roadmap md contents background background install install usage usage api api roadmap roadmap quality assurance quality assurance license license background architecture as explained in what is lorawan https lora alliance org about lorawan the proposed network architecture for a lorawan based system relies on a mesh network architecture composed of end nodes concentrators network servers and application servers this iota is fully compliant with this architecture providing interoperability between fiware ngsi context brokers and lorawan devices general https raw githubusercontent com atos research and innovation iotagent lorawan master docs img iotagent lorawan arch png supported stacks the things network https www thethingsnetwork org v3 api chirpstack https www chirpstack io 3 11 0 data models cayennelpp https developers mydevices com cayenne docs lora lora cayenne low power payload cbor https datatracker ietf org doc html rfc7049 proprietary format decoded by lorawan application server install information about how to install the lorawan iot agent can be found at the corresponding section of the installation administration guide docs installationguide md a dockerfile is also available for your use further 
information can be found here docker readme md docker images docker images are available on docker hub https hub docker com r fiware iotagent lorawan latest refers to the last release this behaviour will be introduced since 1 2 5 release edge refers to the version in master this behaviour is introduced since 1 2 5 next activities refers to specific releases additionally usage is not recommended we release images for each branch on this repository https hub docker com r ioeari iotagent lora usage information about how to use the iot agent can be found in the user programmers manual https fiware lorawan readthedocs io en latest users manual index html api apiary reference for the configuration api can be found here https telefonicaiotiotagents docs apiary io reference configuration api more information about iot agents and their apis can be found in the iot agent library documentation https iotagent node lib readthedocs io roadmap the roadmap of this fiware ge is described here docs roadmap md quality assurance this project is part of fiware https fiware org and has been rated as follows version tested https img shields io badge dynamic json svg label version url https fiware github io catalogue json iotagent lora json query version colorb blue documentation https img shields io badge dynamic json svg label completeness url https fiware github io catalogue json iotagent lora json query doccompleteness colorb blue https img shields io badge dynamic json svg label usability url https fiware github io catalogue json iotagent lora json query docsoundness colorb blue responsiveness https img shields io badge dynamic json svg label time 20to 20respond url https fiware github io catalogue json iotagent lora json query timetocharge colorb blue https img shields io badge dynamic json svg label time 20to 20fix url https fiware github io catalogue json iotagent lora json query timetofix colorb blue fiware testing https img shields io badge dynamic json svg label tests 20passed url https fiware github io catalogue json iotagent lora json query failurerate colorb blue https img shields io badge dynamic json svg label scalability url https fiware github io catalogue json iotagent lora json query scalability colorb blue https img shields io badge dynamic json svg label performance url https fiware github io catalogue json iotagent lora json query performance colorb blue https img shields io badge dynamic json svg label stability url https fiware github io catalogue json iotagent lora json query stability colorb blue license fiware iot agent for lorawan protocol is licensed under affero general public license gpl version 3 license 2019 atos spain s a the following third party library is used under license 1 iotagent node lib https github com telefonicaid iotagent node lib agpl 2014 2019 telefonica investigaci n y desarrollo are there any legal issues with agpl 3 0 is it safe for me to use there is absolutely no problem in using a product licensed under agpl 3 0 issues with gpl or agpl licenses are mostly related with the fact that different people assign different interpretations on the meaning of the term derivate work used in these licenses due to this some people believe that there is a risk in just using software under gpl or agpl licenses even without modifying it for the avoidance of doubt the owners of this software licensed under an agpl 3 0 license wish to make a clarifying public statement as follows please note that software derived as a result of modifying the source code of this software in 
order to fix a bug or incorporate enhancements is considered a derivative work of the product software that merely uses or aggregates i e links to an otherwise unmodified version of existing software is not considered a derivative work and therefore it does not need to be released as under the same license or even released as open source | iot iota fiware-iot-agents lorawan fiware the-things-network cbor cayennelpp iot-agent | server |
HomoScriptor | div align center homoscriptor a community driven human written dataset for language model fine tuning place this tag where you want the button to render stargazers stars shield stars url forks forks shield forks url disscussion discuss shield discuss url together let s create a remarkable dataset that fuels innovation and drives the progress of language models homoscriptor is a vibrant and collaborative project that thrives on community contributions contributing md br it serves as a curated collection of human written datasets specifically designed for fine tuning large language models llms br with its diverse range of categories and organized json files div file structure contributing md contributing md guidelines for contributing to the dataset data data language tasks json data language tasks json json file of language tasks including rhyming poetry tongue twisters summarising and some of the differences between uk and us spelling logic tasks json data logic tasks json json file containing logic related tasks including puzzles riddles and brainteasers license license license information for the dataset readme md readme md contains information about the dataset key features categorized json files our dataset is thoughtfully organized with each category having its own json file this structured approach makes it effortless to explore specific linguistic domains and seamlessly incorporate them into your llm training pipeline short and long variant outputs every task in the json files includes both short and long variant outputs this versatility allows you to tailor the dataset to your specific needs accommodating a wide range of applications and use cases open source and collaborative homoscriptor embraces the power of community collaboration we actively encourage and welcome contributors to join our project and contribute to its growth your input and expertise can help enhance the dataset s overall quality and ensure its relevance to the broader language model research community contributing we firmly believe that the strength of homoscriptor lies in its community of contributors we invite language model enthusiasts researchers and data scientists from all backgrounds to join us in shaping the future of language models contributing to homoscriptor is a rewarding experience as it allows you to leave your mark on this dynamic dataset if you are interested in contributing please refer to the guidelines outlined in the contribute file contributing md we look forward to your valuable contributions and appreciate your dedication to advancing the field of language modeling together let s create a remarkable dataset that fuels innovation and drives the progress of language models stars shield https img shields io github stars homoscriptor project homoscriptor svg style for the badge stars url https github com homoscriptor project homoscriptor stargazers forks shield https img shields io github forks homoscriptor project homoscriptor svg style for the badge forks url https github com homoscriptor project homoscriptor network members discuss shield https img shields io github discussions homoscriptor project homoscriptor svg style for the badge discuss url https github com homoscriptor project homoscriptor discussions | dataset datasets fine-tuning finetuning llm | ai |
Mootor | mootor html5 library for mobile application development mootor is a minimalist html5 library for mobile application development demo http emi420 github io mootor demo img width 457 alt screenshot 2023 07 06 at 00 20 40 src https github com emi420 mootor assets 1226194 27b39076 f39c 4fb1 8e77 bd41d011efca documentation getting started https github com emi420 mootor blob master getting started md ui components https github com emi420 mootor blob master doc ui md javascript https github com emi420 mootor blob master doc js md about we received public funding from mincyt mincyt gob ar http mincyt gob ar license you may use any mootor project under the terms of either the mit license or the gnu general public license gpl version 3 c 2012 emilio mariscal want to contribute or use this software fork the project or write me to emi420 at gmail com | front_end |
|
Kunafa | awesome kotlin badge https kotlin link awesome kotlin svg https github com kotlinby awesome kotlin maven central https img shields io maven central v com narbase kunafa core svg https mvnrepository com artifact com narbase kunafa core div align center img alt kunafa logo src https github com narbase kunafa raw master logo png height 86 div div align center h1 kunafa h1 p easy to use high level framework in kotlin for front end web development p br div create web apps without using html css or javascript documentation find kunafa documentation here https docs kunafa narbase com work in progress philosophy web apps framework without using html css or javascript problem web technologies are pain html is verbose css is unexpected and javascript is javascript the no of technologies a developer needs to learn to write web apps is big that is html css and javascript at the least then there are javascript frameworks react angular vue less scss haml there are also packaging tools gulp webpack modern javascript frameworks solve the javascript problem not the front end problem react use jsx embedded html in javascript while angular requires css and html proposal an easy to use high level framework in kotlin for web development you do not need to learn the web stack only the framework to be able to write web apps developer experience developers only need to use kotlin for development you write the view similar to android xml layouts in kotlin dsl e g kotlin verticallayout style width matchparent height matchparent backgroundcolor color 240 240 240 button text click me the view component similar to android activity or ios viewcontroller implements certain life cycle functions the framework views contains easy to understand and familiar components and layouts managers i e button textview textinput horizontallayout verticallayout and so on the framework makes laying out objects easy e g match parent wrap content you can wrap any html css and js into a framework component to use it inside the framework features intuitive dsl for creating views type safe css dsl for complete control of views appearance automatic css rule sets caching flexible components to abstract any logic full routing support links url params redirecting navigation control very easy to wrap any 3rd party library as kunafa component implementation kotlin transpiles to javascript and is well designed to support dsls code is turned to javascript at compile time and a basic html file loads the generated js file at runtime the js file will generate the required html and css files containing the whole application getting started hello kunafa getting started guide https github com narbase kunafa wiki hello kunafa to add kunafa to your project first you need to add it to your build gradle file as a dependency groovy compile com narbase kunafa core latest version if you have kotlin js plugin configured then you can directly use it the code and webpack will include kunafa in the generated bundle now you are ready to use kunafa in any kotlin js project for a complete example check the kunafa todo repository https github com kabbura kunafa todo your feedback is most welcomed let us know how is your experience with kunafa | kotlin javascript javascript-framework kotlin-js css javascript-frameworks html frontend front-end single-page-app single-page-applications | front_end |
FreeRTOS-raspi3 | freertos ported to raspberry pi 3 64bit i have not tested on real hardware yet i test with qemu 6 1 0 how to build install aarch64 toolchain make how to run with qemu make run make run qemu system aarch64 m raspi3 m 1024 serial null serial mon stdio nographic kernel kernel8 elf hello world 0000000000000001 00000000000001f6 this port is based on the xilinx cortex a53 port | aarch64 freertos raspberrypi | os |
MDEN | mden mobile development engineer notes | front_end |
|
ci-experiments | ci experiments coverity scan build status coverity scan https scan coverity com projects 13021 badge svg flat 1 https scan coverity com projects zebastian ci experiments linux osx build status https travis ci org zebastian ci experiments svg branch master https travis ci org zebastian ci experiments windows build status https ci appveyor com api projects status cb0woxak6orvynsl svg true https ci appveyor com project zebastian ci experiments analysis of various cloud testing providers as part of the seminar paper for the software engineering course project the actual project is an executable dummy file in src main c it is described by the cmake build configuration in cmakelists txt testing providers travis the file travis yml contains the project configuration for building on osx and linux appveyor the file appveyor yml contains the project configuration for building on windows | cloud |
|
Demand-Skills-for-Data-Scientists | demand skills for data scientists this project is a part of the subject problem solving in information technology psit faculty of information technology king mongkut s institute of technology ladkrabang kmitl this project is about doing data analysis we ve chosen the title demand skills for data scientists for this project our main target is to how should data scientists who want to be in demand by employers spend their learning budget information project site https bit ly 2cgk9ao project present video https youtu be qcdugkqtylu project detail 1 talk about what is data scientists 2 talk about employment opportunity from web job search 3 talk about salary 4 talk about general skill in data scientist job listing 5 talk about what did the most data scientist graduate in 6 talk about popular languge for data scientist 7 talk about what is data scientist do overall operation if you are concerned about being a data scientist you should studying business computers analyzing statistics computer science machine learning and more and you should study python r and sql languages and others for to be a good data scientist and have a good job img src img screenshot 8 png statistics project started 2 november 2018 completed 17 december 2018 project status completed main language python python module pygal pandas numpy group members img src img member 1 png width 120px height 120px img src img member 4 jpg width 120px height 120px img src img member 3 png width 120px height 120px img src img member 2 png width 120px height 120px kuroishi https github com kuroishi1221 toplordsaito https github com toplordsaito chokcolate https github com chokcolate tanjry https github com tanjry ratchanon br chumbunyeanyong waruwat br chaidit terawat br kanjanapanwong jharinya br jaipakdee 61070182 61070214 61070093 61070021 credits https www emolument com salary reports jobs data science eic analysis economic intelligence center eic https www kaggle com kaggle kaggle survey 2018 https www kaggle com discdiver | server |
|
engapp-mdp | engapp mobile development pattern preface welcome to this documentation in it we will show how we the mobile developers at engapp organize create and assemble our software using flutter http flutter dev flutter we will present our guide of good practices to standardize our development and consequently share it with those who still do not have a direction or are just starting in this area and want some guidance so let s go introduction if you got to this document it is because you develop mobile software specifically using flutter which is our official stack for that so let s begin we will go through the following topics project structure difference between functions and classes and how to write them declaration of variables state management note this documentation will be our initial base nothing prevents it from being changed as we discover easier and better ways of doing what we currently do note 2 in case you have another pattern for organizing do not feel bound to this content here you will find how engapp organizes and ultimately standardizes its projects project structure when we talk about project structure we mean how to organize your files your folders sub folders sub files in short how your project should be organized from now on example follows folders folder https i imgur com mbv52nu png we always start inside the lib folder with the src folder inside it we continue with the creation of our components creating the components folder inside it we will have for example the login component and inside each component we will have its utils folder with the file main component name utils dart as well as our file for managing our api requests in our utils file we will separate our widgets components from the main component example follows components componente https i imgur com tv23fjw png componente with this we keep our code clean and concise and in the utils component related to the main one will be our developed widgets in our network file will be all the connections needed by that component for example in network login dart we will have only the request to perform the login and so on with the other components of the project function class or local variable as you probably already know there are several ways to declare a component in flutter these 3 types are the most used function class and local variable none of the 3 is wrong all of them correctly execute what the user typed however several ways of declaring the same thing can become confusing and even hard to maintain in the future both for you and for another programmer who comes to pick up your code let s look at some examples function widget functionwidget widget child return container child child class class classwidget extends statelesswidget final widget child const classwidget key key this child super key key override widget build buildcontext context return container child child we notice here that both do what was requested they create a container and call a child as the child widget on paper that is exactly what happens however the framework does not recognize functions but it does recognize classes our widget tree would look like this in the first example container container and in the second classwidget container classwidget container and this makes a huge difference in how the framework behaves when we have to update a component here is a list of items comparing the use of functions with classes 1 classes performance optimization
const constructor operator override more granular rebuild they have hot reload we can see them in the widget inspector debugfillproperties we can assign keys we can use their own context they guarantee that all widgets are used in the same way they guarantee that if we switch between two different layouts the dispose method will work correctly and will free the resources functions usually reuse the previous state 2 functions have less code we can see clearly and in more detail now we are not forbidding the use of functions however to declare visual components we have a preference and it will always be our number 1 choice the use of classes we use functions to declare methods such as requests some verification among others note we do not recommend declaring local variables because it is not as readable as a class when someone goes to look at the code variables so far we have seen our recommendation for structuring your project and how to best create your components widgets another very important subject concerns how we should declare our variables whether simple ones like declaring an integer or a string or when naming your functions and even your classes as robert c martin author of the book codigo limpo clean code https www amazon com br c digo limpo habilidades pr ticas software dp 8576082675 codigo limpo says we must think carefully about the name of our variables it does not matter whether it takes a long time or not to choose a name but that name has to be as clear as possible and preferably give meaning to what you want it to execute following of course the single responsibility principle in which that class or that function will perform only a single task and nothing beyond that let s look at some examples along our journey we have already seen many examples of variables like this one var x so far for whoever is creating the code it may seem like there is no real problem with this variable named x after all they themselves know how to use it whether to store some integer or a string however as time passes you start developing your project further and end up forgetting that variable x later on when maintenance is needed you will probably be lost trying to figure out what that variable x is doing there and what its use is this is something robert c martin and we at engapp disapprove of we consider it a bad practice never under any circumstance will we allow the use of variables of this kind the code we write is our communication with other professionals if a future programmer or even you yourself cannot understand what is happening in your code how will you be able to take care of it or even understand it it is impossible that way there is always a however this however applies when your application is just meant to show something quickly to the client or even for you to see how your application is looking if your project is taking on larger proportions and keeps growing take a free day to refactor those variables you may even think doing all this is a waste of time fine that is your opinion but then hope you never have to maintain that code in the future | front_end |
|
FreeRTOS-port-for-LinkIt-SDK | freertos port for linkit sdk freertos port for mediatek arm cm4 mcu in linkit sdk for the complete linkit sdk that contains the freertos port please visit https labs mediatek com en support resources | os |
|
mECU | a miata ecu mecu overview i plan to develop an open source ecu which runs on an stm32 much like some of the existing ecu solutions available rusefi openecu i m pulling some design ideas and concepts from them but hopefully develop something of my own tailored to my own requirements a 1992 mazda mx 5 with a bp swap design goals i hope to support cop for at least 4 cylinders and with wasted spark up to 8 i don t want to develop a device that can do everything and the kitchen sink that s more suited to other more entrenched developers like haltech aem etc i hope to build a kit that someone can throw together really fast or buy preassembled much like megasquirt is was q a q why not megasquirt i don t really agree with their design decisions and the price is a bit ridiculous for what you get and i believe that having more custom solutions for different car families is more what communities tend towards regardless on different forums you hear suggestions for this or that ems anyways so why not roll my own learn something and hopefully not grenade my car | os |
|
Interview-Preparation-WAY | way to improvement knowledges and experiences 200 books and materials what we expected and recommended happy to learn again and again no computer science subjects resources status 1 algorithm algorithm https github com urunov interview preparation way tree master books algorithm book 2 database database https github com urunov interview preparation way tree master books database book 3 design pattern design pattern https github com urunov interview preparation way tree master books designpattern book 4 java java https github com urunov interview preparation way tree master books java book 5 ocp ocp https github com urunov interview preparation way tree master books ocp book 6 operation system operation system https github com urunov interview preparation way tree master books operationsystem book 7 spring spring https github com urunov interview preparation way tree master books spring book 8 system design system design https github com urunov interview preparation way tree master books systemdesign book 9 dev ops dev https github com urunov interview preparation way tree master books dev ops book 10 interview guides interview https github com urunov interview preparation way tree master interview book | design-patterns java ocp | os |
jbc | jbc jack s blockchain simple blockchain to learn and talk about how blockchains work getting started these instructions will get you a copy of the project up and running on your local machine for development and testing purposes see deployment for notes on how to deploy the project on a live system prerequisites as always you ll need to install the python libraries in requirements txt pip install r requirements txt running a node to run the node on the command line there are a few options m tells the node to not only receive nodes but also mine p port num will tell node py which port to run on this is important when running multiple nodes locally as described below hard linking directory for multiple nodes in order to run a different node we want to hard link the main jbc directory into another directory to do this use the linknodes sh https gist github com jackschultz 5bdc628739c9ceae9ec96fadf9ed8557 script in the directory above jbc for example linknodes sh 5001 will create a directory named jbc5001 then in that directory you ll be able to run a node on a different port to gather blocks or mine as well contributing feel free to clone run and give feedback and pull requests finding bugs is a great help to the project as well as a great way for everyone to learn and feel free to help update this readme file to better describe how to run this locally license this project is licensed under the mit license see the license md license md file for details | blockchain |
|
ZocSec.SecurityAsCode.GitHub | zocsec securityascode github p img src zocsecshieldblue png align right welcome to the zocdoc information security team zocsec securityascode repository for github we use aws s in built technologies to automate the remediation of common security problems in this repository zocsec presents code configuration used to lock down our github environment p project list these are the projects we re currently ready share github inventory tool github inventory tool this github python script collects all repositories private and public from authenticated github account github automated security github automated security an automated means to secure private github repositories from unintentionally becomes public and enable scan for vulnerable dependencies github enable vuln scan github enable vuln scan a simple python script that enable scan for vulnerable dependencies on all repos under any organizational github we will be sharing more of our projects in the future contributions we welcome contributions and pull requests to this repo give us feedback the primary contributors to this effort are jay ball veggiespam https github com veggiespam and gary tsai garymalaysia https github com garymalaysia this project was released to the public as part of the zocdoc s zocsec securityascode initiative copyright 2018 2019 zocdoc inc www zocdoc com vim spell expandtab | open-source | server |
adaptnlp | welcome to adaptnlp a high level framework and library for running training and deploying state of the art natural language processing nlp models for end to end tasks p align center a href https github com novetta adaptnlp img src https raw githubusercontent com novetta adaptnlp master docs assets images company logo png width 400 a p ci https github com novetta adaptnlp workflows ci badge svg pypi https img shields io pypi v adaptnlp color blue label pypi 20version https pypi org project adaptnlp description what is adaptnlp adaptnlp is a python package that allows users ranging from beginner python coders to experienced machine learning engineers to leverage state of the art natural language processing nlp models and training techniques in one easy to use python package utilizing fastai https docs fast ai with huggingface s transformers https github com huggingface transformers library and humboldt university of berlin s flair https github com flairnlp flair library adaptnlp provides machine learning researchers and scientists a modular and adaptive approach to a variety of nlp tasks simplifying what it takes to train perform inference and deploy nlp based models and microservices what is the benefit of adaptnlp rather than just using transformers despite quick inference functionalities such as the pipeline api in transformers it still is not quite as flexible nor fast enough with adaptnlp s easy inference modules these tend to be slightly faster than the pipeline interface bare minimum the same speed while also providing the user with simple intuitive returns to alleviate any unneeded junk that may be returned along with this with the integration of the fastai library the code needed to train or run inference on your models has a completely modular api through the fastai callback https docs fast ai callbacks core system rather than needing to write your entire torch loop if there is anything special needed for a model a callback can be written in less than 10 lines of code to achieve your specific functionalities finally when training your model fastai is on the forefront of beign a library constantly bringing in the best practices for achiving state of the art training with new research methodologies heavily tested before integration as such adaptnlp fully supports training with the one cycle policy and using new optimizer combinations such as the ranger optimizer with cosine annealing training through simple one line fitting functions fit one cycle and fit flat cos installation directions pypi to install with pypi please use bash pip install adaptnlp or if you have pip3 bash pip3 install adaptnlp conda coming soon developmental builds to install any developmental style builds please follow the below directions to install directly from git stable master branch the master branch generally is not updated much except for hotfixes and new releases to install please use bash pip install git https github com novetta adaptnlp developmental branch include note html content generally this branch can become unstable and it is only recommended for contributors or those that really want to test out new technology please make sure to see if the latest tests are passing a green checkmark on the commit message before trying this branch out you can install the developmental builds with bash pip install git https github com novetta adaptnlp dev docker images there are actively updated docker images hosted on novetta s dockerhub https hub docker com r novetta adaptnlp the guide to each tag is 
as follows latest this is the latest pypi release and installs a complete package that is cuda capable dev these are occasionally built developmental builds at certain stages they are built by the dev branch and are generally stable api the api builds are for the rest api https novetta github io adaptnlp rest to pull and run any adaptnlp image immediatly you can run bash docker run itp 8888 8888 novetta adaptnlp tag replacing tag with any of the afformentioned tags earlier afterwards check localhost 8888 or localhost 888 lab to access the notebook containers navigating the documentation the adaptnlp library is built with nbdev https nbdev fast ai so any documentation page you find including this one can be directly run as a jupyter notebook each page at the top includes an open in colab button as well that will open the notebook in google colaboratory to allow for immediate access to the code the documentation is split into six sections each with a specific purpose getting started https novetta github io adaptnlp this group contains quick access to the homepage what are the adaptnlp cookbooks and how to contribute models and model hubs https novetta github io adaptnlp model html these contain any relevant documentation for the adaptivemodel class the huggingface hub model search integration and the result class that various inference api s return class api this section contains the module documentation for the inference framework the tuning framework as well as the utilities and foundations for the adaptnlp library inference and training cookbooks https novetta github io adaptnlp cookbook html these two sections provide quick access to single use recipies for starting any adaptnlp project for a particular task with easy to use code designed for that specific use case there are currently over 13 different tutorials available with more coming soon nlp services with fastapi https novetta github io adaptnlp rest this section provides directions on how to use the adaptnlp rest api for deploying your models quickly with fastapi contributing there is a contribution guide available here https novetta github io adaptnlp contributing testing adaptnlp is run on the nbdev framework to run all tests please do the following 1 pip install nbverbose 2 git clone https github com novetta adaptnlp 3 cd adaptnlp 4 pip install e 5 nbdev test nbs this will run every notebook and ensure that all tests have passed please see the nbdev documentation https nbdev fast ai for more information about it contact please contact zachary mueller at zmueller novetta com with questions or comments regarding adaptnlp follow us on twitter at thezachmueller https twitter com thezachmueller and adaptnlp https twitter com adaptnlp for updates and nlp dialogue license this project is licensed under the terms of the apache 2 0 license | nlp pytorch transformers natural-language-processing machine-learning deep-learning deep-learning-tutorial docker fine-tuning language-models easy api-rest bert gpt xlnet gpu ulmfit | ai |
Design-and-Development-of-Embedded-Systems | design and development of embedded systems the assignments for dit 165 v17 design and development of embedded systems course gothenburg university pdf file of the assignments https drive google com open id 0b4ap5pig0ciasm10s3fqlvdkowc click to open | os |
turnip_cli | turnip cli an extensible c cli command line interface designed for embedded systems this example is built with freertos libopencm3 stm32f1 however it s straightforward to rip out the cli part please note the assets inside the lib folder are not covered under mit they have their own licences creating commands each command is a separate h cpp pair or just h please see inc hellocmd h for an example each command class should implement c virtual const char parse char input the parse function is what gets called when the user enters the command the class should also inherit from the cmd class just look at the code and you should be able to figure it out registering commands commands are dynamically registered like so c cmd menu new hellocmd hello uart1 0 running the cli just call present it will not return the first argument is the menu shown above which is built at runtime c cli present menu building bash git clone https github com machine hum turnip cli cd turnip cli git submodule update init recursive cd lib libopencm3 make cd make make flash | os |
chromelens | alt text https raw githubusercontent com jin chromelens master images logo png chromelens logo a href https chrome google com webstore detail chrome lens idikgljglpfilbhaboonnpnnincjhjkd chrome web store a chromelens is a google chrome extension that provides a suite of tools to help with web accessibility development lens vision simulator interact with a website as a completely partially blind or a colorblind person accessibility audit run a website through an series of accessibility rules and easily discover elements in your website that do not comply with them tab tracker key website features should be navigable solely via the keyboard tab button while not making the user jump through hoops to get to a feature with the tab tracker you can visually track the flow of navigation through a website website find out more about chromelens why we made it and more about the extension at a href http chromelens xyz chromelens xyz a credits https github com googlechrome accessibility developer tools https github com niklasvh html2canvas https github com altreus colourblind reviews for chromelens http briancoords com tech analyzing accessibility chromelens http www jaredrigby co uk 2016 07 11 accessibility testing with chromelens html | accessibility chrome-plugin color-blindness web-accessibility google-chrome devtools-extension devtools ux | front_end |
D.Eco | this repository contains the ios android design and code base for the d eco application this is the senior project for drury students enrolled in csci 371 spring 2017 | os |
IPT_zju | ipt zju information processing technology zju assignment01 assignment02 assignment03 assignment04 assignment05 k means assignment06 assignment07 project | server |
EdgarAnalytics | table of contents 1 understanding the challenge readme md understanding the challenge 2 introduction readme md introduction 3 challenge summary readme md challenge summary 4 details of challenge readme md details of challenge 5 implementation details readme md implementation details 6 input files readme md input files 7 output file readme md output file 8 example readme md example 9 writing clean scalable and well tested code readme md writing clean scalable and well tested code 10 repo directory structure readme md repo directory structure 11 testing your directory structure and output format readme md testing your directory structure and output format 11 instructions to submit your solution readme md instructions to submit your solution 13 faq readme md faq understanding the challenge we highly recommend that you take a few dedicated minutes to read this readme in its entirety before starting to think about potential solutions you ll probably find it useful to review the examples and understand the problem at a high level before digging into the specific details many of which are covered in the faq introduction many investors researchers journalists and others use the securities and exchange commission s electronic data gathering analysis and retrieval edgar system to retrieve financial documents whether they are doing a deep dive into a particular company s financials or learning new information that a company has revealed through their filings the sec maintains edgar weblogs showing which ip addresses have accessed which documents for what company and at what day and time this occurred imagine the sec has asked you to take the data and produce a dashboard that would provide a real time view into how users are accessing edgar including how long they stay and the number of documents they access during the visit while the sec usually makes its edgar weblogs publicly available after a six month delay imagine that for this challenge the government entity has promised it would stream the data into your program in real time and with no delay your job as a data engineer is to build a pipeline to ingest that stream of data and calculate how long a particular user spends on edgar during a visit and how many documents that user requests during the session challenge summary for this challenge we re asking you to take existing publicly available edgar weblogs and assume that each line represents a single web request for an edgar document that would be streamed into your program in real time using the data identify when a user visits calculate the duration of and number of documents requested during that visit and then write the output to a file your role on the project is to work on the data pipeline to hand off the information to the front end as the backend data engineer you do not need to display the data or work on the dashboard but you do need to provide the information you can assume there is another process that takes what is written to the output file and sends it to the front end if we were building this pipeline in real life we d probably have another mechanism to send the output to the gui rather than writing to a file however for the purposes of grading this challenge we just want you to write the output to files details of challenge for the purposes of this challenge an ip address uniquely identifies a single user a user is defined to have visited the edgar system if during the visit the ip address requested one or more documents also for the purposes of this challenge 
the amount of time that elapses between document requests should be used to determine when a visit also referred to as a session begins and ends a single user session is defined to have started when the ip address first requests a document from the edgar system and continues as long as the same user continues to make requests the session is over after a certain period of time has elapsed we ll provide you that value and the user makes no requests for documents in other words this period of inactivity helps to determine when the session is over and the user is assumed to have left the system the duration of any particular session is defined to be the time between the ip address first request and the last one in the same session prior to the period of inactivity if the user returns later to access another document requests that subsequent request would be considered the start of a new session implementation details your program should expect two input files be sure to read the section repo directory structure for details on where these files should be located log csv edgar weblog data inactivity period txt holds a single value denoting the period of inactivity that should be used to identify when a user session is over as you process the edgar weblogs line by line the moment you detect a user session has ended your program should write a line to an output file sessionization txt listing the ip address duration of the session and number of documents accessed the value found in inactivity period txt should be used to determine when a session has ended and when a new session has possibly started however once you reach the end of the log csv that last timestamp should signal the end of all current sessions regardless of whether the period of inactivity has been met input files log csv the sec provides weblogs stretching back years and is regularly updated although with a six month delay https www sec gov dera data edgar log file data set html for the purposes of this challenge you can assume that the data is being streamed into your program in the same order that it appears in the file with the first line after the header being the first request and the last line being the latest you also can assume the data is listed in chronological order for the purposes of this challenge while you re welcome to run your program using a subset of the data files found at the sec s website you should not assume that we ll be testing your program on any of those data files also while we won t expect your program to be able to process all of the sec s weblogs there is over 1tb of data you should be prepared to talk about how you might design or redesign your program should the challenge be changed to require you to process hundreds of gigabytes or even a terabyte for the purposes of this challenge below are the data fields you ll want to pay attention to from the sec weblogs ip identifies the ip address of the device requesting the data while the sec anonymizes the last three digits it uses a consistent formula that allows you to assume that any two ip fields with the duplicate values are referring to the same ip address date date of the request yyyy mm dd time time of the request hh mm ss cik sec central index key accession sec document accession number extention value that helps determine the document being requested there are other fields that can be found in the weblogs for the purposes of this challenge your program can ignore those other fields unlike other weblogs that contain the actual http web request 
the sec s files use a different but deterministic convention for the purposes of this challenge you can assume the combination of cik accession and extention fields uniquely identifies a single web page document request don t assume any particular format for any of those three fields e g the fields can consist of numbers letters hyphens periods and other characters the first line of log csv will be a header denoting the names of the fields in each web request each field is separated by a comma your program should only use this header to determine the order in which the fields will appear in the rest of the other lines in the same file inactivity period txt this file will hold a single integer value denoting the period of inactivity in seconds that your program should use to identify a user session the value will range from 1 to 86 400 i e one second to 24 hours output file once your program identifies the start and end of a session it should gather the following fields and write them out to a line in the output file sessionization txt the fields on each line must be separated by a ip address of the user exactly as found in log csv date and time of the first webpage request in the session yyyy mm dd hh mm ss date and time of the last webpage request in the session yyyy mm dd hh mm ss duration of the session in seconds count of webpage requests during the session unlike the input weblog data file and for the purposes of this challenge your program should not write a header line to the output file but instead write just the results each line should have the fields in the exact order detailed above fields must be separated by a comma if your program is able to detect multiple user sessions ending at the same time it should write the results to the sessionization txt output file in the same order as the user s first request for that session appeared in the input log csv file example suppose your input files contained only the following few lines note that the fields we are interested in are in bold below but will not be like that in the input file there s also an extra newline between records below but the input file won t have that inactivity period txt 2 log csv ip date time zone cik accession extention code size idx norefer noagent find crawler browser 101 81 133 jja 2017 06 30 00 00 00 0 0 1608552 0 0001047469 17 004337 index htm 200 0 80251 0 1 0 0 0 0 0 9 0 0 0 107 23 85 jfd 2017 06 30 00 00 00 0 0 1027281 0 0000898430 02 001167 index htm 200 0 2825 0 1 0 0 0 0 0 10 0 0 0 107 23 85 jfd 2017 06 30 00 00 00 0 0 1136894 0 0000905148 07 003827 index htm 200 0 3021 0 1 0 0 0 0 0 10 0 0 0 107 23 85 jfd 2017 06 30 00 00 01 0 0 841535 0 0000841535 98 000002 index html 200 0 2699 0 1 0 0 0 0 0 10 0 0 0 108 91 91 hbc 2017 06 30 00 00 01 0 0 1295391 0 0001209784 17 000052 txt 200 0 19884 0 0 0 0 0 0 0 10 0 0 0 106 120 173 jie 2017 06 30 00 00 02 0 0 1470683 0 0001144204 14 046448 v385454 20fa htm 301 0 663 0 0 0 0 0 0 0 10 0 0 0 107 178 195 aag 2017 06 30 00 00 02 0 0 1068124 0 0000350001 15 000854 xbrl zip 404 0 784 0 0 0 0 0 0 0 10 0 1 0 107 23 85 jfd 2017 06 30 00 00 03 0 0 842814 0 0000842814 98 000001 index html 200 0 2690 0 1 0 0 0 0 0 10 0 0 0 107 178 195 aag 2017 06 30 00 00 04 0 0 1068124 0 0000350001 15 000731 xbrl zip 404 0 784 0 0 0 0 0 0 0 10 0 1 0 108 91 91 hbc 2017 06 30 00 00 04 0 0 1618174 0 0001140361 17 026711 txt 301 0 674 0 0 0 0 0 0 0 10 0 0 0 the single line on inactivity period txt tells us that once two seconds have elapsed since a user made a document request we can assume 
that user s particular visit has ended any subsequent requests would be considered a new session the first day and time listed in the input file is 2017 06 30 and the time is 00 00 00 that means at that date and time the following ip addresses initiated a visit to edgar 101 81 133 jja made a request for cik 1608552 0 accession 0001047469 17 004337 and extention index htm 107 23 85 jfd made a request for cik 1027281 0 accession 0000898430 02 001167 and extention index htm 107 23 85 jfd made a request for cik 1136894 0 accession 0000905148 07 003827 and extention index htm so for the first second of data that your program has encountered it knows one user has accessed one document and a second user has requested two first second illustration images first second png when your program reads in the input file s fourth line it should detect that the day and time has advanced by one second so now this is what we know second second illustration images second second png then when it reaches the sixth and seventh line third second illustration images third second png when it first reads the eighth line it should detect that the time is now 2017 06 30 00 00 03 for one user 101 8 33 jja its session has ended because two seconds of inactivity have passed for that user because there was only one request only one web page document was accessed end of third second illustration images end of third png at that point the output file sessionization txt should contain the following line 101 81 133 jja 2017 06 30 00 00 00 2017 06 30 00 00 00 1 1 after processing the eighth line of the input file and as we examine the timestamp in the ninth line of the input file we detect that the time has progressed to 2017 06 30 00 00 04 for a second user 108 91 91 hbc we now see that two seconds of inactivity has elapsed and we can identify a second session fourth second illustration images fourth second png the output file sessionization txt should now consist of the following data 101 81 133 jja 2017 06 30 00 00 00 2017 06 30 00 00 00 1 1 108 91 91 hbc 2017 06 30 00 00 01 2017 06 30 00 00 01 1 1 finally after your program processes the ninth and 10th line it should detect that the end of file has been reached and there are no more requests for any users at this point it should identify all sessions regardless of the period of inactivity end of file illustration images end of file png at that point it should write the results to the output file and the entire content of sessionization txt should be 101 81 133 jja 2017 06 30 00 00 00 2017 06 30 00 00 00 1 1 108 91 91 hbc 2017 06 30 00 00 01 2017 06 30 00 00 01 1 1 107 23 85 jfd 2017 06 30 00 00 00 2017 06 30 00 00 03 4 4 106 120 173 jie 2017 06 30 00 00 02 2017 06 30 00 00 02 1 1 107 178 195 aag 2017 06 30 00 00 02 2017 06 30 00 00 04 3 2 108 91 91 hbc 2017 06 30 00 00 04 2017 06 30 00 00 04 1 1 notice from the above output that the first two lines were the ones we had already written the third line details the session for 107 23 85 jfd next because its first document request came at 2017 06 30 00 00 00 which is earlier than any of the other remaining sessions the fourth line belongs to ip address 106 120 173 jie because that user s first document request came at 2017 06 30 00 00 02 the first document request from 107 178 195 aag also comes at the same time but it is listed after 106 120 173 jie in the input file so that is why it is listed on the fifth line the second session detected for 108 91 91 hbc concludes the sessionization txt file writing clean scalable and well 
tested code as a data engineer it s important that you write clean well documented code that scales for large amounts of data for this reason it s important to ensure that your solution works well for a large number of records rather than just the above example it s also important to use software engineering best practices like unit tests especially since data is not always clean and predictable for more details about the implementation please refer to the faq below if further clarification is necessary email us at cc insightdataengineering com but please do so only after you have read through the readme and faq one more time and cannot find the answer to your question before submitting your solution you should summarize your approach dependencies and run instructions if any in your readme you may write your solution in any mainstream programming language such as c c c clojure erlang go haskell java python ruby or scala once completed submit a link to a github repo with your source code in addition to the source code the top most directory of your repo must include the input and output directories and a shell script named run sh that compiles and runs the program s that implement the required features if your solution requires additional libraries environments or dependencies you must specify these in your readme documentation see the figure below for the required structure of the top most directory in your repo or simply clone this repo repo directory structure the directory structure for your repo should look like this readme md run sh src sessionization py input inactivity period txt log csv output sessionization txt insight testsuite run tests sh tests test 1 input inactivity period txt log csv output sessionization txt your own test 1 input your own inputs output sessionization txt don t fork this repo and don t use this readme instead of your own the content of src does not need to be a single file called sessionization py which is only an example instead you should include your own source files and give them expressive names testing your directory structure and output format to make sure that your code has the correct directory structure and the format of the output files are correct we have included a test script called run tests sh in the insight testsuite folder the tests are stored simply as text files under the insight testsuite tests folder each test should have a separate folder with an input folder for inactivity period txt and log csv and an output folder for sessionization txt you can run the test with the following command from within the insight testsuite folder insight testsuite run tests sh on a failed test the output of run tests sh should look like fail test 1 thu mar 30 16 28 01 pdt 2017 0 of 1 tests passed on success pass test 1 thu mar 30 16 25 57 pdt 2017 1 of 1 tests passed one test has been provided as a way to check your formatting and simulate how we will be running tests when you submit your solution we urge you to write your own additional tests test 1 is only intended to alert you if the directory structure or the output for this test is incorrect your submission must pass at least the provided test in order to pass the coding challenge instructions to submit your solution to submit your entry please use the link you received in your coding challenge invite email you will only be able to submit through the link one time do not attach a file we will not admit solutions which are attached files use the submission box to enter the link to your github repo or 
bitbucket only link to the specific repo for this project not your general profile put any comments in the readme inside your project repo not in the submission box we are unable to accept coding challenges that are emailed to us faq here are some common questions we ve received if you have additional questions please email us at cc insightdataengineering com and we ll answer your questions as quickly as we can during pst business hours and update this faq again only contact us after you have read through the readme and faq one more time and cannot find the answer to your question which github link should i submit you should submit the url for the top level root of your repository for example this repo would be submitted by copying the url https github com insightdatascience edgar analytics into the appropriate field on the application do not try to submit your coding challenge using a pull request which would make your source code publicly available do i need a private github repo no you may use a public repo there is no need to purchase a private repo you may also submit a link to a bitbucket repo if you prefer are the session durations inclusive or exclusive as shown in the above example the duration is inclusive in other words if the timestamps for the session start is 00 00 01 and session end is 00 00 03 the duration is 3 seconds what if there is a single request in a session as shown in the above example the minimum duration for a session is 1 second if a user requests the same document more than once during a session how many webpage requests is that every time a user accesses an edgar document that request should be counted even if the user is requesting the same document multiple times for instance if within a session there are two requests once for cik 1608552 0 accession 0001047469 17 004337 and extention index htm and then a second time for the same exact combination the count of webpage requests for that session would be 2 how do you know when a session is over as shown in the above example the session is over when the end of the file is reached or after a period of inactivity has elapsed with no requests from that user for example if the inactivity period is 2 seconds and the session start is 00 00 01 and there are no further requests from that user by 00 00 04 then the session is considered over at 00 00 01 where can i get obtain the input file log csv we ve provided one example as shown above in this readme for you to better understand the challenge but you should create your own data to test your program you can obtain other data directly from the sec https www sec gov dera data edgar log file data set html but be aware that the weblog files are quite large and you also may have problems decompressing the archive file unzip may not work on the edgar zip file and you may have to use open source software such as 7zip if you are unable to decompress the zip file revert to creating your own data for the challenge do not spend too long on trying to decompress the archive file may i use r matlab or other analytics programming languages to solve the challenge it s important that your implementation scales to handle large amounts of data while many of our fellows have experience with r and matlab applicants have found that these languages are unable to process data in a scalable fashion so you must consider another language may i use distributed technologies like hadoop or spark your code will be tested on a single machine so using these technologies will negatively impact your 
solution we re not testing your knowledge on distributed computing but rather on computer science fundamentals and software engineering best practices what sort of system should i use to run my program on windows linux mac you may write your solution on any system but your source code should be portable and work on all systems additionally your run sh must be able to run on either unix or linux as that s the system that will be used for testing linux machines are the industry standard for most data engineering teams so it is helpful to be familiar with this if you re currently using windows we recommend installing a virtual unix environment such as virtualbox or vmware and using that to develop your code otherwise you also could use tools such as cygwin or docker or a free online ide such as cloud9 how fast should my program run while there are no strict performance guidelines to this coding challenge we will consider the amount of time your program takes when grading the challenge therefore you should design and develop your program in the optimal way i e think about time and space complexity instead of trying to hit a specific run time value can i use pre built packages modules or libraries this coding challenge can be completed without any exotic packages while you may use publicly available packages modules or libraries you must document any dependencies in your accompanying readme file when we review your submission we will download these libraries and attempt to run your program if you do use a package you should always ensure that the module you re using works efficiently for the specific use case in the challenge since many libraries are not designed for large amounts of data should i use the pandas library for python while the pandas library is useful for many problems related to small batches of data it is not scalable at dealing with streaming data problems like this challenge as a result you should strongly consider alternative algorithms and data structus that scale with larger streaming data will you email me if my code doesn t run unfortunately we receive hundreds of submissions in a very short time and are unable to email individuals if their code doesn t compile or run this is why it s so important to document any dependencies you have as described in the previous question we will do everything we can to properly test your code but this requires good documentation more so we have provided a test suite so you can confirm that your directory structure and format are correct can i use a database engine this coding challenge can be completed without the use of a database however if you use one it must be a publicly available one that can be easily installed with minimal configuration do i need to use multi threading no your solution doesn t necessarily need to include multi threading there are many solutions that don t require multiple threads cores or any distributed systems but instead use efficient data structures what should the format of the output be in order to be tested correctly you must use the format described above you can ensure that you have the correct format by using the testing suite we ve included should i check if the files in the input directory are text files or non text files binary no for simplicity you may assume that all of the files in the input directory are text files with the format as described above can i use an ide like eclipse or intellij to write my program yes you can use whatever tools you want as long as your run sh script correctly runs 
the relevant target files and creates the sessionization txt file in the output directory what should be in the input directory you can put any text file you want in the directory since our testing suite will replace it indeed using your own input files would be quite useful for testing the file size limit on github is 100 mb so you won t be able to include the larger sample input files in your input directory how will the coding challenge be evaluated generally we will evaluate your coding challenge with a testing suite that provides a variety of inputs and checks the corresponding output this suite will attempt to use your run sh and is fairly tolerant of different runtime environments of course there are many aspects e g clean code documentation that cannot be tested by our suite so each submission will also be reviewed manually by a data engineer how long will it take for me to hear back from you about my submission we receive hundreds of submissions and try to evaluate them all in a timely manner we try to get back to all applicants within two or three weeks of submission but if you have a specific deadline that requires expedited review please email us at cc insightdataengineering com | server |
design-system | bc government design system img https img shields io badge lifecycle maturing 007ec6 https github com bcgov repomountie blob master doc lifecycle badges md the design system helps developers and designers build better digital products and services it s a collection of digital resources and tools including a library of reusable ui interface components and design patterns the system makes it easier and faster to build custom b c government websites and applications components are collectively built by the government community meet accessibility standards and are open for input and improvement documentation https developer gov bc ca design system about the design system files in this repository docs project documentation images icons openshift openshift specific files scripts helper scripts templates application templates deployment local development developer workstation requirements setup application specific setup deployment openshift see openshift readme md getting help or reporting an issue to report bugs issues feature requests please file an issue https github com bcdevops opendev template issues how to propose a component if you would like to propose a component to the design system please see our propose a component github issue template propose a new component md guideline please note that this project is released with a contributor code of conduct code of conduct md by participating in this project you agree to abide by its terms license copyright 2016 province of british columbia licensed under the apache license version 2 0 the license you may not use this file except in compliance with the license you may obtain a copy of the license at http www apache org licenses license 2 0 unless required by applicable law or agreed to in writing software distributed under the license is distributed on an as is basis without warranties or conditions of any kind either express or implied see the license for the specific language governing permissions and limitations under the license | citz design-system design-patterns | os |
Employee-Database | employee database utilized data engineering and data analysis to build a sql database of employees of a corporation called pewlett hackard from the 1980s and 1990s there are six csv files holding the data of employees the sql tables were designed and the data in the csvs were successfully imported into a sql database data engineering inspected the csvs and sketched out an erd of the tables employee erd https user images githubusercontent com 119978382 222480968 89546f2c 2299 405a a618 b1da8f131d92 png used the information from the erd to create a table schema for each of the six csv files and specify data types primary keys foreign keys and other constraints imported each csv file into the corresponding sql table making sure to import the data in the same order that the tables were created data analysis list the employee number last name first name sex and salary of each employee list the first name last name and hire date for the employees who were hired in 1986 list the manager of each department along with their department number department name employee number last name and first name list the department number for each employee along with that employee s employee number last name first name and department name list first name last name and sex of each employee whose first name is hercules and whose last name begins with the letter b list each employee in the sales department including their employee number last name and first name list each employee in the sales and development departments including their employee number last name first name and department name list the frequency counts in descending order of all the employee last names that is how many employees share each last name data visualization histogram visualization of the most common salary ranges commonsalary https user images githubusercontent com 119978382 222600913 eccf7ff9 fb48 4994 93e8 e899efa53f85 png bar chart for average salary by title average salary by title https user images githubusercontent com 119978382 222600920 f3fed6d6 ecc2 45d9 8124 2fe37fc9f2c3 png | server |
ZemberekDotNet | zemberekdotnet test status https img shields io azure devops tests jnrmnt zemberekdotnet 13 https img shields io azure devops tests jnrmnt zemberekdotnet 13 code coverage https img shields io azure devops coverage jnrmnt zemberekdotnet 13 https img shields io azure devops coverage jnrmnt zemberekdotnet 13 build status https dev azure com jnrmnt zemberekdotnet apis build status zemberekdotnet branchname master https dev azure com jnrmnt zemberekdotnet build latest definitionid 13 branchname master release status https vsrm dev azure com jnrmnt apis public release badge dbf777b3 aa03 4952 92dc 55f20eba6724 1 1 https vsrm dev azure com jnrmnt apis public release badge dbf777b3 aa03 4952 92dc 55f20eba6724 1 1 zemberekdotnet is the c net port of zemberek nlp https github com ahmetaa zemberek nlp natural language processing tools for turkish this library will be kept in sync with zemberek nlp and same module structure will be maintained in net platform using nuget packages under seperate projects modules module package name description status all zemberekdotnet all zemberekdotnet all wrapper package that includes all the modules nuget https img shields io nuget v zemberekdotnet all https www nuget org packages zemberekdotnet all nuget https img shields io nuget dt zemberekdotnet all https www nuget org packages zemberekdotnet all core zemberekdotnet core zemberekdotnet core special collections hash functions and helpers nuget https img shields io nuget v zemberekdotnet core https www nuget org packages zemberekdotnet core nuget https img shields io nuget dt zemberekdotnet core https www nuget org packages zemberekdotnet core morphology zemberekdotnet morphology zemberekdotnet morphology turkish morphological analysis disambiguation and word generation nuget https img shields io nuget v zemberekdotnet morphology https www nuget org packages zemberekdotnet morphology nuget https img shields io nuget dt zemberekdotnet morphology https www nuget org packages zemberekdotnet morphology tokenization zemberekdotnet tokenization zemberekdotnet tokenization turkish tokenization and sentence boundary detection nuget https img shields io nuget v zemberekdotnet tokenization https www nuget org packages zemberekdotnet tokenization nuget https img shields io nuget dt zemberekdotnet tokenization https www nuget org packages zemberekdotnet tokenization normalization zemberekdotnet normalization zemberekdotnet normalization basic spell checker word suggestion noisy text normalization nuget https img shields io nuget v zemberekdotnet normalization https www nuget org packages zemberekdotnet normalization nuget https img shields io nuget dt zemberekdotnet normalization https www nuget org packages zemberekdotnet normalization ner ner zemberekdotnet ner turkish named entity recognition nuget https img shields io nuget v zemberekdotnet ner https www nuget org packages zemberekdotnet ner nuget https img shields io nuget dt zemberekdotnet ner https www nuget org packages zemberekdotnet ner classification zemberekdotnet classification zemberekdotnet classification text classification based on java port of fasttext project nuget https img shields io nuget v zemberekdotnet classification https www nuget org packages zemberekdotnet classification nuget https img shields io nuget dt zemberekdotnet classification https www nuget org packages zemberekdotnet classification language identification zemberekdotnet langid zemberekdotnet langid fast identification of text language nuget https img shields io nuget v 
zemberekdotnet langid https www nuget org packages zemberekdotnet langid nuget https img shields io nuget dt zemberekdotnet langid https www nuget org packages zemberekdotnet langid language modeling zemberekdotnet lm zemberekdotnet lm provides a language model compression algorithm nuget https img shields io nuget v zemberekdotnet lm https www nuget org packages zemberekdotnet lm nuget https img shields io nuget dt zemberekdotnet lm https www nuget org packages zemberekdotnet lm applications zemberekdotnet apps zemberekdotnet apps console applications pending grpc server zemberekdotnet grpc zemberekdotnet grpc grpc server for access from other languages pending examples zemberekdotnet examples zemberekdotnet examples usage examples pending target platforms packages are targeting net standart 2 1 framework so that it can be used within net core and net framework projects examples console applications will also be prepared with net core aiming that the whole library can be used cross platform ci cd repository is configured to continuously trigger a build test and release cycle using azure devops at the end of a successful release it automatically publishes the artifacts to nuget org | nlp machine-learning natural-language-processing turkish morphology language zemberek-nlp zemberek csharp nuget | ai |
sql-challenge | sql challenge repository of data modeling data engineering and data analysis for the employee database sql challenge instructions this assignment is divided into three parts data modeling data engineering and data analysis data modeling inspect the csvs and sketch out an erd of the tables feel free to use a tool like http www quickdatabasediagrams com http www quickdatabasediagrams com data engineering use the provided information to create a table schema for each of the six csv files remember to specify data types primary keys foreign keys and other constraints for the primary keys verify that the column is unique otherwise create a composite key https en wikipedia org wiki compound key which takes two primary keys to uniquely identify a row be sure to create tables in the correct order to handle foreign keys import each csv file into the corresponding sql table hint to avoid errors be sure to import the data in the same order that the tables were created also remember to account for the headers when importing data analysis once you have a complete database perform these steps 1 list the following details of each employee employee number last name first name sex and salary 2 list first name last name and hire date for employees who were hired in 1986 3 list the manager of each department with the following information department number department name the manager s employee number last name first name 4 list the department of each employee with the following information employee number last name first name and department name 5 list first name last name and sex for employees whose first name is hercules and last names begin with b 6 list all employees in the sales department including their employee number last name first name and department name 7 list all employees in the sales and development departments including their employee number last name first name and department name 8 list the frequency count of employee last names i e how many employees share each last name in descending order bonus optional as you examine the data you begin to suspect that the dataset is fake maybe your boss gave you spurious data in order to test the data engineering skills of a new employee to confirm your hunch you decide to create a visualization of the data to present to your boss follow these steps 1 import the sql database into pandas yes you could read the csvs directly in pandas but you are after all trying to prove your technical mettle this step may require some research feel free to use the following code to get started be sure to make any necessary modifications for your username password host port and database name sql from sqlalchemy import create engine engine create engine postgresql localhost 5432 your db name connection engine connect consult the sqlalchemy documentation https docs sqlalchemy org en latest core engines html postgresql for more information if you re using a password do not upload your password to your github repository review this video https www youtube com watch v 2uatpmnvh0i and the github website https help github com en github using git ignoring files for more information 2 create a histogram to visualize the most common salary ranges for employees 3 create a bar chart of average salary by title submission create an image file of your erd create a sql file of your table schemata create a sql file of your queries optional create a jupyter notebook of the bonus analysis create and upload a repository with the above files to github and post a link on 
bootcamp spot ensure your repository has regular commits and a thorough readme md file | server |
PROJECT-UDACITY- | project udacity my submission to the project specification engineering full stack apps in the cloud i present my project this github repository and this link have been submitted for review this is the link to my running elastic beanstalk deployment app http imagefiltredapp env eba vcqrbxfu us east 1 elasticbeanstalk com filteredimage image url https upload wikimedia org wikipedia commons b bd golden tabby and white kitten n01 jpg a screenshot of the elastic beanstalk application dashboard is included in a deployment screenshot directory thank you | cloud |
ml-workshop-4-of-4 | advanced machine learning with scikit learn imbalanced classification and text data part 4 of 4 other parts part 1 https github com amueller ml workshop 1 of 4 part 2 https github com amueller ml workshop 2 of 4 part 3 https github com amueller ml workshop 3 of 4 content working with imbalanced data https amueller github io ml workshop 4 of 4 slides 01 imbalanced data html feature selection https amueller github io ml workshop 4 of 4 slides 02 feature selection html working with text data https amueller github io ml workshop 4 of 4 slides 03 working with text data html building custom estimators and extending scikit learn https github com amueller ml workshop 4 of 4 blob master notebooks 04 20custom 20estimators ipynb instructor andreas mueller http amuller github io amuellerml https twitter com amuellerml columbia university book introduction to machine learning with python http shop oreilly com product 0636920030515 do this repository will contain the teaching material and other info associated with the workshop advanced machine learning with scikit learn part ii ii please download the large movie review dataset from http ai stanford edu amaas data sentiment before coming to the workshop about the workshop scikit learn is a machine learning library in python that has become a valuable tool for many data science practitioners this training will cover some advanced topics in using scikit learn and how to build your own models or feature extraction methods that are compatible with scikit learn we will also discuss different approaches to feature selection and resampling methods for imbalanced data finally we ll discuss how to do classification of text data using the bag of words model and its variants prerequisites this workshop assumes familiarity with jupyter notebooks and basics of pandas matplotlib and numpy it also assumes experience using scikit learn and familiarity with the api obtaining the tutorial material if you are familiar with git it is most convenient if you clone the github repository this is highly encouraged as it allows you to easily synchronize any changes to the material git clone https github com amueller ml workshop 4 of 4 if you are not familiar with git you can download the repository as a zip file by heading over to the github repository https github com amueller ml workshop 4 of 4 in your browser and click the green download button in the upper right images download repo png please note that i may add and improve the material until shortly before the tutorial session and we recommend you to update your copy of the materials one day before the tutorials if you have an github account and forked cloned the repository via github you can sync your existing fork with via the following commands git pull origin master installation notes this tutorial will require recent installations of numpy http www numpy org scipy http www scipy org matplotlib http matplotlib org pillow https python pillow org pandas http pandas pydata org scikit learn http scikit learn org stable 0 22 1 ipython http ipython readthedocs org en stable jupyter notebook http jupyter org mlxtend imbalance learn the last one is important you should be able to type jupyter notebook in your terminal window and see the notebook panel load in your web browser try opening and running a notebook from the material to see check that it works for users who do not yet have these packages installed a relatively painless way to install all the requirements is to use a python distribution such as 
anaconda https www continuum io downloads which includes the most relevant python packages for science math engineering and data analysis anaconda can be downloaded and installed for free including commercial use and redistribution the code examples in this tutorial requires python 3 5 or later after obtaining the material we strongly recommend you to open and execute a jupyter notebook jupter notebook check env ipynb that is located at the top level of this repository inside the repository you can open the notebook by executing bash jupyter notebook check env ipynb inside this repository inside the notebook you can run the code cell by clicking on the run cells button as illustrated in the figure below images check env 1 png finally if your environment satisfies the requirements for the tutorials the executed code cell will produce an output message as shown below images check env 2 png | ai |
computer_vision | computer vision build status https travis ci org agv iit kgp computer vision svg branch master https travis ci org agv iit kgp computer vision code climate https codeclimate com github agv iit kgp computer vision badges gpa svg https codeclimate com github agv iit kgp computer vision coverage status https coveralls io repos agv iit kgp computer vision badge svg https coveralls io r agv iit kgp computer vision this is the computer vision stack for the team agv iit kgp http www agv iitkgp ac in currently we are in the learning phase and are trying to implement the algorithms of the book feature extraction and image processing all our work takes place through github pull requests https github com agv iit kgp computer vision pulls you can find the list of relevant resources at awesome computer vision https github com agv iit kgp awesome computer vision and if you want to ask questions then we have a public chat room at gitter gitter https badges gitter im join chat svg https gitter im agv iit kgp computer vision utm source badge utm medium badge utm campaign pr badge utm content badge if you are an iit kgp student and you are interested in working with us then go through the ipython notebook tutorial 1 ipynb and then get in touch with us through email at gupta harsh96 at gmail dot com using the tutorial the tutorial is an ipython notebook ipython notebooks are interactive computational environments with formatted text instructions and executable code cells install ipython it can be very easily installed through the anaconda package available at the continuum analytics web site http continuum io downloads clone the repo git clone https github com agv iit kgp computer vision git open the tutorial ipython notebook tutorial 1 ipynb you can also view the tutorial online at nbviewer http nbviewer ipython org github agv iit kgp computer vision blob master tutorial 201 ipynb though the code cells at nbviewer won t be executable | ai |
NLP | nlp road 1 math foundation 1 math foundation tangyudi study 163 machine learning tangyudi net163 0 math 2 machine learning 1 machine learning andrew ng coursera machine learning andrew ng coursera 2 machine learning tangyudi study 163 machine learning tangyudi net163 3 deep learning 1 deep learning specialization andrew ng coursera deep learning andrew ng coursera course link https www coursera org specializations deep learning 2 deep learning limu deep leaning limu mxnet course link https space bilibili com 209599371 channel detail cid 23541 gur lstm 3 tools tensorflow examples tensorflow examples tensorflow2 in deeplearning tensorflow2 and deep learning net163 tensorflow in practice specialization coursera tensorflow in practice specialization andrew ng coursera tutorial 4 nlp courses 1 nlp course kaikeba artificial intelligence for nlp 2 first intro to nlp fastai first intro to nlp fastai 3 nlp daniel jurafsky stanford natural language processing daniel jurafsky stanford 4 nlp with deep learning cs224n stanford natural language processing with deep learning cs224n stanford 5 algorithm 1 algorithms design and analysis stanford algorithms desing and analysis stanford 2 design of computer program cs212 udacity design of computer program cs212 udacity udacity advanced 6 references references 1 references 2 hands on tensorflow references hands on tensorflow 3 references 4 fluent python references fluent python 5 references 7 projects and competitions kaggle readme md | machine-learning nlp math | ai |
IISSI-Web-Project | iissi web project web project made for the iissi subject introduction to software engineering and information systems at the university of seville using php html css and a sql database | server |
AirSENSE-ESP32-NonOS-Demo | airsense esp32 rtos we are using freertos rather than the nonos sdk | os |
Skeleton-Stylus | skeleton stylus http getskeleton com skeleton is a simple responsive boilerplate to kickstart any responsive project check out http getskeleton com for documentation and details getting started install global dependancies node js http nodejs org bower http bower io sudo npm install bower g grunt js http grunt js sudo npm install g grunt cli install local dependancies download zip or clone the repo cd to project folder run sudo npm install first time users run grunt to watch and compile stylus files what s in the download the download includes skeleton s css normalize css as a reset a sample favicon and an index html as a starting point skeleton index html styl skeleton styl images favicon png package json gruntfile js readme md why it s awesome skeleton is lightweight and simple it styles only raw html elements with a few exceptions and provides a responsive grid nothing more minified it s less than a kb it s a starting point not a ui framework no compiling or installing just vanilla css browser support chrome latest firefox latest opera latest safari latest ie latest the above list is non exhaustive skeleton works perfectly with almost all older versions of the browsers above though ie certainly has large degradation prior to ie9 license all parts of skeleton are free to use and abuse under the open source mit license http opensource org licenses mit license php colophon skeleton was built using sublime text 3 http www sublimetext com 3 and designed with sketch http bohemiancoding com sketch the typeface raleway http www google com fonts specimen raleway was created by matt mcinerney http matt cc and pablo impallari http www impallari com code highlighting by google s prettify library https code google com p google code prettify icons in the header of the documentation are all derivative work of icons from the noun project thenounproject com feather http thenounproject com term feather 22073 by zach vandehey pen http thenounproject com term pen 21163 with cap by ed harrison pen http thenounproject com term pen 32847 with clicker by matthew hall and watch http thenounproject com term watch 48015 by julien deveaux acknowledgement skeleton was created by dave gamache https twitter com dhg for a better web | front_end |
duke-coursera-dennis | duke coursera dennis repo for everything related to the cloud engineering specialization at duke university | cloud |
personal-projects | personal projects welcome to my personal repo in here i d like to practice git ansible elk docker and anything else that catches my interest i aim to get better at cloud automation and increase my knowledge of devops tools and methodologies through hands on use of said tools and methodologies | cloud |
Open3D-ML | p align center img src https raw githubusercontent com isl org open3d master docs static open3d logo horizontal png width 320 span style font size 220 b ml b span p ubuntu ci https github com isl org open3d ml workflows ubuntu 20ci badge svg style check https github com isl org open3d ml workflows style 20check badge svg pytorch badge https img shields io badge pytorch supported brightgreen style flat logo pytorch tensorflow badge https img shields io badge tensorflow supported brightgreen style flat logo tensorflow installation installation get started getting started structure repository structure tasks algorithms tasks and algorithms model zoo model zoo md datasets datasets how tos how tos contribute contribute open3d ml is an extension of open3d for 3d machine learning tasks it builds on top of the open3d core library and extends it with machine learning tools for 3d data processing this repo focuses on applications such as semantic point cloud segmentation and provides pretrained models that can be applied to common tasks as well as pipelines for training open3d ml works with tensorflow and pytorch to integrate easily into existing projects and also provides general functionality independent of ml frameworks such as data visualization installation users open3d ml is integrated in the open3d v0 11 python distribution and is compatible with the following versions of ml frameworks pytorch 1 8 2 tensorflow 2 5 2 cuda 10 1 11 on gnu linux x86 64 optional you can install open3d with bash make sure you have the latest pip version pip install upgrade pip install open3d pip install open3d to install a compatible version of pytorch or tensorflow you can use the respective requirements files bash to install a compatible version of tensorflow pip install r requirements tensorflow txt to install a compatible version of pytorch pip install r requirements torch txt to install a compatible version of pytorch with cuda on linux pip install r requirements torch cuda txt to test the installation use bash with pytorch python c import open3d ml torch as ml3d or with tensorflow python c import open3d ml tf as ml3d if you need to use different versions of the ml frameworks or cuda we recommend to build open3d from source http www open3d org docs release compilation html getting started reading a dataset the dataset namespace contains classes for reading common datasets here we read the semantickitti dataset and visualize it python import open3d ml torch as ml3d or open3d ml tf as ml3d construct a dataset by specifying dataset path dataset ml3d datasets semantickitti dataset path path to semantickitti get the all split that combines training validation and test set all split dataset get split all print the attributes of the first datum print all split get attr 0 print the shape of the first point cloud print all split get data 0 point shape show the first 100 frames using the visualizer vis ml3d vis visualizer vis visualize dataset dataset all indices range 100 visualizer gif docs images getting started ml visualizer gif loading a config file configs of models datasets and pipelines are stored in ml3d configs users can also construct their own yaml files to keep record of their customized configurations here is an example of reading a config file and constructing modules from it python import open3d ml as ml3d import open3d ml torch as ml3d or open3d ml tf as ml3d framework torch or tf cfg file ml3d configs randlanet semantickitti yml cfg ml3d utils config load from file cfg file fetch the 
classes by the name pipeline ml3d utils get module pipeline cfg pipeline name framework model ml3d utils get module model cfg model name framework dataset ml3d utils get module dataset cfg dataset name use the arguments in the config file to construct the instances cfg dataset dataset path path to your dataset dataset dataset cfg dataset pop dataset path none cfg dataset model model cfg model pipeline pipeline model dataset cfg pipeline semantic segmentation running a pretrained model for semantic segmentation building on the previous example we can instantiate a pipeline with a pretrained model for semantic segmentation and run it on a point cloud of our dataset see the model zoo model zoo for obtaining the weights of the pretrained model python import os import open3d ml as ml3d import open3d ml torch as ml3d cfg file ml3d configs randlanet semantickitti yml cfg ml3d utils config load from file cfg file model ml3d models randlanet cfg model cfg dataset dataset path path to your dataset dataset ml3d datasets semantickitti cfg dataset pop dataset path none cfg dataset pipeline ml3d pipelines semanticsegmentation model dataset dataset device gpu cfg pipeline download the weights ckpt folder logs os makedirs ckpt folder exist ok true ckpt path ckpt folder randlanet semantickitti 202201071330utc pth randlanet url https storage googleapis com open3d releases model zoo randlanet semantickitti 202201071330utc pth if not os path exists ckpt path cmd wget o format randlanet url ckpt path os system cmd load the parameters pipeline load ckpt ckpt path ckpt path test split dataset get split test data test split get data 0 run inference on a single example returns dict with predict labels and predict scores result pipeline run inference data evaluate performance on the test set this will write logs to logs pipeline run test users can also use predefined scripts readme md using predefined scripts to load pretrained weights and run testing training a model for semantic segmentation similar as for inference pipelines provide an interface for training a model on a dataset python use a cache for storing the results of the preprocessing default path is logs cache dataset ml3d datasets semantickitti dataset path path to semantickitti use cache true create the model with random initialization model randlanet pipeline semanticsegmentation model model dataset dataset max epoch 100 prints training progress in the console pipeline run train for more examples see examples https github com isl org open3d ml tree master examples and the scripts https github com isl org open3d ml tree master scripts directories you can also enable saving training summaries in the config file and visualize ground truth and results with tensorboard see this tutorial docs tensorboard md 3dml models training and inference for details img width 640 src https user images githubusercontent com 41028320 146465032 30696948 54f7 48df bc48 add8d2e38421 jpg 3d object detection running a pretrained model for 3d object detection the 3d object detection model is similar to a semantic segmentation model we can instantiate a pipeline with a pretrained model for object detection and run it on a point cloud of our dataset see the model zoo model zoo for obtaining the weights of the pretrained model python import os import open3d ml as ml3d import open3d ml torch as ml3d cfg file ml3d configs pointpillars kitti yml cfg ml3d utils config load from file cfg file model ml3d models pointpillars cfg model cfg dataset dataset path path to your dataset dataset 
ml3d datasets kitti cfg dataset pop dataset path none cfg dataset pipeline ml3d pipelines objectdetection model dataset dataset device gpu cfg pipeline download the weights ckpt folder logs os makedirs ckpt folder exist ok true ckpt path ckpt folder pointpillars kitti 202012221652utc pth pointpillar url https storage googleapis com open3d releases model zoo pointpillars kitti 202012221652utc pth if not os path exists ckpt path cmd wget o format pointpillar url ckpt path os system cmd load the parameters pipeline load ckpt ckpt path ckpt path test split dataset get split test data test split get data 0 run inference on a single example returns dict with predict labels and predict scores result pipeline run inference data evaluate performance on the test set this will write logs to logs pipeline run test users can also use predefined scripts readme md using predefined scripts to load pretrained weights and run testing training a model for 3d object detection similar as for inference pipelines provide an interface for training a model on a dataset python use a cache for storing the results of the preprocessing default path is logs cache dataset ml3d datasets kitti dataset path path to kitti use cache true create the model with random initialization model pointpillars pipeline objectdetection model model dataset dataset max epoch 100 prints training progress in the console pipeline run train below is an example of visualization using kitti the example shows the use of bounding boxes for the kitti dataset img width 640 src https github com isl org open3d ml blob master docs images visualizer boundingboxes png raw true for more examples see examples https github com isl org open3d ml tree master examples and the scripts https github com isl org open3d ml tree master scripts directories you can also enable saving training summaries in the config file and visualize ground truth and results with tensorboard see this tutorial docs tensorboard md 3dml models training and inference for details img width 640 src https user images githubusercontent com 41028320 146465084 bc397e4c 494a 4464 a73d 525e82a9b6ce jpg using predefined scripts scripts run pipeline py https github com isl org open3d ml blob master scripts run pipeline py provides an easy interface for training and evaluating a model on a dataset it saves the trouble of defining specific model and passing exact configuration python scripts run pipeline py tf torch c path to config pipeline semanticsegmentation objectdetection extra args you can use script for both semantic segmentation and object detection you must specify either semanticsegmentation or objectdetection in the pipeline parameter note that extra args will be prioritized over the same parameter present in the configuration file so instead of changing param in config file you may pass the same as a command line argument while launching the script for eg launch training for randlanet on semantickitti with torch python scripts run pipeline py torch c ml3d configs randlanet semantickitti yml dataset dataset path path to dataset pipeline semanticsegmentation dataset use cache true launch testing for pointpillars on kitti with torch python scripts run pipeline py torch c ml3d configs pointpillars kitti yml split test dataset dataset path path to dataset pipeline objectdetection dataset use cache true for further help run python scripts run pipeline py help repository structure the core part of open3d ml lives in the ml3d subfolder which is integrated into open3d in the ml namespace in 
addition to the core part the directories examples and scripts provide supporting scripts for getting started with setting up a training pipeline or running a network on a dataset docs markdown and rst files for documentation examples place for example scripts and notebooks ml3d package root dir that is integrated in open3d configs model configuration files datasets generic dataset code will be integratede as open3d ml tf torch datasets metrics metrics available for evaluating ml models utils framework independent utilities available as open3d ml tf torch utils vis ml specific visualization functions tf directory for tensorflow specific code same structure as ml3d torch this will be available as open3d ml tf torch directory for pytorch specific code available as open3d ml torch dataloaders framework specific dataset code e g wrappers that can make use of the generic dataset code models code for models modules smaller modules e g metrics and losses pipelines pipelines for tasks like semantic segmentation utils utilities for scripts demo scripts for training and dataset download scripts tasks and algorithms semantic segmentation for the task of semantic segmentation we measure the performance of different methods using the mean intersection over union miou over all classes the table shows the available models and datasets for the segmentation task and the respective scores each score links to the respective weight file model dataset semantickitti toronto 3d s3dis semantic3d paris lille3d scannet randla net tf 53 7 https storage googleapis com open3d releases model zoo randlanet semantickitti 202201071330utc zip 73 7 https storage googleapis com open3d releases model zoo randlanet toronto3d 202201071330utc zip 70 9 https storage googleapis com open3d releases model zoo randlanet s3dis 202201071330utc zip 76 0 https storage googleapis com open3d releases model zoo randlanet semantic3d 202201071330utc zip 70 0 https storage googleapis com open3d releases model zoo randlanet parislille3d 202201071330utc zip randla net torch 52 8 https storage googleapis com open3d releases model zoo randlanet semantickitti 202201071330utc pth 74 0 https storage googleapis com open3d releases model zoo randlanet toronto3d 202201071330utc pth 70 9 https storage googleapis com open3d releases model zoo randlanet s3dis 202201071330utc pth 76 0 https storage googleapis com open3d releases model zoo randlanet semantic3d 202201071330utc pth 70 0 https storage googleapis com open3d releases model zoo randlanet parislille3d 202201071330utc pth kpconv tf 58 7 https storage googleapis com open3d releases model zoo kpconv semantickitti 202010021102utc zip 65 6 https storage googleapis com open3d releases model zoo kpconv toronto3d 202012221551utc zip 65 0 https storage googleapis com open3d releases model zoo kpconv s3dis 202010091238 zip 76 7 https storage googleapis com open3d releases model zoo kpconv parislille3d 202011241550utc zip kpconv torch 58 0 https storage googleapis com open3d releases model zoo kpconv semantickitti 202009090354utc pth 65 6 https storage googleapis com open3d releases model zoo kpconv toronto3d 202012221551utc pth 60 0 https storage googleapis com open3d releases model zoo kpconv s3dis 202010091238 pth 76 7 https storage googleapis com open3d releases model zoo kpconv parislille3d 202011241550utc pth sparseconvunet torch 68 https storage googleapis com open3d releases model zoo sparseconvunet scannet 202105031316utc pth sparseconvunet tf 68 2 https storage googleapis com open3d releases model 
zoo sparseconvunet scannet 202105031316utc zip pointtransformer torch 69 2 https storage googleapis com open3d releases model zoo pointtransformer s3dis 202109241350utc pth pointtransformer tf 69 2 https storage googleapis com open3d releases model zoo pointtransformer s3dis 202109241350utc zip using weights from original author object detection for the task of object detection we measure the performance of different methods using the mean average precision map for bird s eye view bev and 3d the table shows the available models and datasets for the object detection task and the respective scores each score links to the respective weight file for the evaluation the models were evaluated using the validation subset according to kitti s validation criteria the models were trained for three classes car pedestrian and cyclist the calculated values are the mean value over the map of all classes for all difficulty levels for the waymo dataset the models were trained on three classes pedestrian vehicle cyclist model dataset kitti bev 3d 0 70 waymo bev 3d 0 50 pointpillars tf 61 6 55 2 https storage googleapis com open3d releases model zoo pointpillars kitti 202012221652utc zip pointpillars torch 61 2 52 8 https storage googleapis com open3d releases model zoo pointpillars kitti 202012221652utc pth avg 61 01 48 30 best 61 47 57 55 https storage googleapis com open3d releases model zoo pointpillars waymo 202211200158utc seed2 gpu16 pth wpp train pointrcnn tf 78 2 65 9 https storage googleapis com open3d releases model zoo pointrcnn kitti 202105071146utc zip pointrcnn torch 78 2 65 9 https storage googleapis com open3d releases model zoo pointrcnn kitti 202105071146utc pth wpp train the avg metrics are the average of three sets of training runs with 4 8 16 and 32 gpus training was for halted after 30 epochs model checkpoint is available for the best training run training pointrcnn to use ground truth sampling data augmentation for training we can generate the ground truth database as follows python scripts collect bboxes py dataset path path to data root this will generate a database consisting of objects from the train split it is recommended to use this augmentation for dataset like kitti where objects are sparse the two stages of pointrcnn are trained separately to train the proposal generation stage of pointrcnn with pytorch run the following command train rpn for 100 epochs python scripts run pipeline py torch c ml3d configs pointrcnn kitti yml dataset dataset path path to dataset mode rpn epochs 100 after getting a well trained rpn network we can train rcnn network with frozen rpn weights train rcnn for 70 epochs python scripts run pipeline py torch c ml3d configs pointrcnn kitti yml dataset dataset path path to dataset mode rcnn model ckpt path path to checkpoint epochs 100 model zoo for a full list of all weight files see model weights txt https storage googleapis com open3d releases model zoo model weights txt and the md5 checksum file model weights md5 https storage googleapis com open3d releases model zoo integrity txt datasets the following is a list of datasets for which we provide dataset reader classes semantickitti project page http semantic kitti org toronto 3d github https github com weikaitan toronto 3d semantic 3d project page http www semantic3d net s3dis project page http buildingparser stanford edu dataset html paris lille 3d project page https npm3d fr paris lille 3d argoverse project page https www argoverse org kitti project page http www cvlibs net datasets kitti eval 
object php obj benchmark 3d lyft project page https level 5 global data nuscenes project page https www nuscenes org waymo project page https waymo com open scannet project page http www scan net org for downloading these datasets visit the respective webpages and have a look at the scripts in scripts download datasets https github com isl org open3d ml tree master scripts download datasets how tos visualize network predictions docs howtos md visualize network predictions visualize custom data docs howtos md visualize custom data adding a new model docs howtos md adding a new model adding a new dataset docs howtos md adding a new dataset distributed training docs howtos md distributed training visualize and compare input data ground truth and results in tensorboard docs tensorboard md inference with intel openvino docs openvino md contribute there are many ways to contribute to this project you can implement a new model add code for reading a new dataset share parameters and weights for an existing model report problems and bugs please make your pull requests to the dev https github com isl org open3d ml tree dev branch open3d is a community effort we welcome and celebrate contributions from the community if you want to share weights for a model you trained please attach or link the weights file in the pull request for bugs and problems open an issue https github com isl org open3d ml issues please also check out our communication channels to get in contact with the community communication channels github issue https github com isl org open3d issues bug reports feature requests etc forum https github com isl org open3d discussions discussion on the usage of open3d discord chat https discord com invite d35bgvn online chats discussions and collaboration with other users and developers citation please cite our work pdf https arxiv org abs 1801 09847 if you use open3d bib article zhou2018 author qian yi zhou and jaesik park and vladlen koltun title open3d a modern library for 3d data processing journal arxiv 1801 09847 year 2018 | 3d-perception datasets pretrained-models lidar rgbd tensorflow pytorch visualization semantic-segmentation object-detection 3d-object-detection | ai |
llm | llm interfacing with large language models remote and local from lean | ai |
|
awesome-IoT-hybrid | awesome iot hybrid awesome https cdn rawgit com sindresorhus awesome d7305f38d29fed78fa85652e3a63e154dd8e8829 media badge svg https github com sindresorhus awesome the missing awesome list collection of awesome iot and hybrid apps frameworks tools resources videos and shiny things iot iot os os frameworks tools frameworks tools resources websites projects resources websites projects iiot iiot hybrid desktop hybrid desktop hybrid mobile hybrid mobile tools plugins tools plugins miscellaneous miscellaneous iot tessel https tessel io arduino http www arduino cc beagleboard http beagleboard org bone hue http www developers meethue com raspberry pi https www raspberrypi org onion omega https www kickstarter com projects onion onion omega invention platform for the internet of video share particle https www particle io os riot os http www riot os org node os https node os com contiki os http www contiki os org raspbian http raspbian org project brillo https developers google com brillo balenaos https www balena io os frameworks tools cylonjs http cylonjs com node red http nodered org iot eclipse http iot eclipse org gladys project http gladysproject com lelylan https github com lelylan lelylan balenacloud https www balena io resources websites projects hackday https hackaday io projects instructables tech http www instructables com tag type id category technology hackster http www hackster io my controller https www mycontroller org home kaa project https www kaaproject org iiot industrial iot opc router https www opc router com iiot gateway workflow engine with various plug ins mqtt bridge opc ua bridge sql bridge rest bridge sap bridge hybrid desktop nw js https github com nwjs nw js electron https github com atom electron chromium embedded framework https bitbucket org chromiumembedded cef appjs http appjs com macgap https github com macgapproject hybrid mobile react native http facebook github io react native nativescript https www nativescript org phonegap http phonegap com corona http coronalabs com ionic http ionicframework com appcelerator http www appcelerator com intel xdk https software intel com en us html5 tools trigger io https trigger io crosswalk https crosswalk project org telerik platform http www telerik com platform meteor https www meteor com tabris js https tabrisjs com tools plugins cordova phonegap ibeacon plugin https github com petermetz cordova plugin ibeacon miscellaneous firefox os https www mozilla org en us firefox os leap motion https www leapmotion com contributing 1 fork it 2 create your branch git checkout b my new branch 3 commit your changes git commit am fix stuff 4 push to the branch git push origin my new branch 5 submit a pull request license the mit license mit copyright c 2014 michael lancaster permission is hereby granted free of charge to any person obtaining a copy of this software and associated documentation files the software to deal in the software without restriction including without limitation the rights to use copy modify merge publish distribute sublicense and or sell copies of the software and to permit persons to whom the software is furnished to do so subject to the following conditions the above copyright notice and this permission notice shall be included in all copies or substantial portions of the software the software is provided as is without warranty of any kind express or implied including but not limited to the warranties of merchantability fitness for a particular purpose and noninfringement in no event 
shall the authors or copyright holders be liable for any claim damages or other liability whether in an action of contract tort or otherwise arising from out of or in connection with the software or the use or other dealings in the software | server |
|
Spring-Cloud-in-Python | spring cloud in python pre commit https github com my sweet home 2020 a cat workflows pre commit badge svg before starting to commit tbd poetry or pipenv or other package manager cmt here i use poetry as an example first dev python version 3 7 9 1 download poetry by using bash for linux and unix like users curl ssl https raw githubusercontent com python poetry poetry master get poetry py python for windows user you will need your powershell or just use the subsystem if you wish to use powershell the follow cmd is for you invoke webrequest uri https raw githubusercontent com python poetry poetry master get poetry py usebasicparsing content python 2 install existing dependencies bash running locally for dev test env etc poetry install on pp prod poetry install no dev 3 add new package never ever pip install bash example poetry add django for dev only poetry add dev ipython if you wish to test something on you local but not add it to the lock poetry add no root xxx 4 to run python or some other packages that offer cli bash boot django server in local poetry run python manage py runserver or a lazy way just enable the shell poetry shell python manage py runserver 5 after running poetry install on local for the very first time please run bash poetry run pre commit install 6 start to commit test 1 show test coverage report on terminal poetry run pytest cov report term cov spring cloud tests | cloud |
|
LLM_AssemblyLine | llm assemblyline llm assemblyline is a tool designed to empower users with little to no programming experience to rapidly build customized ai applications this intelligent tool focuses on creating both personal and enterprise level workflows based on large language model llm prompts attempting to streamline the enterprise productivity innovation process to the individual level by simplifying the ai development process llm assemblyline aims to democratize ai and make it more accessible to everyone and enterprises of all sizes demo https ai wonderbricks com about us https www wonderbricks com https user images githubusercontent com 89369032 232462002 491b87f9 0795 4d1d bb94 f8430118e398 mp4 getting started 1 installation git clone https github com wonderbricks tech llm assemblyline git npm install 2 replace the params in env template in the project directory openai api key your api key model name model name 3 rename the env template to env local available scripts in the project directory you can run yarn start local it runs the app in the development mode open http localhost 3000 http localhost 3000 to view it in your browser contributing we welcome contributions from the community if you d like to contribute to llm assemblyline please feel free to reach us via info wonderbricks com license llm assemblyline is released under mit license license md | ai |
|
microservices | udagram image filtering application udagram is a simple cloud application developed alongside the udacity cloud engineering nanodegree it allows users to register and log into a web client post photos to the feed and process photos using an image filtering microservice the project is split into two parts 1 frontend angular web application built with ionic framework 2 backend restful api node express application getting started tip it s recommended that you start with getting the backend api running since the frontend web application depends on the api prerequisite 1 the depends on the node package manager npm you will need to download and install node from https nodejs com en download https nodejs org en download this will allow you to be able to run npm commands 2 environment variables will need to be set these environment variables include database connection details that should not be hard coded into the application code environment script a file named set env sh has been prepared as an optional tool to help you configure these variables on your local development environment we do not want your credentials to be stored in git after pulling this starter project run the following command to tell git to stop tracking the script in git but keep it stored locally this way you can use the script for your convenience and reduce risk of exposing your credentials git rm cached set env sh afterwards we can prevent the file from being included in your solution by adding the file to our gitignore file 1 database create a postgresql database either locally or on aws rds the database is used to store the application s metadata we will need to use password authentication for this project this means that a username and password is needed to authenticate and access the database the port number will need to be set as 5432 this is the typical port that is used by postgresql so it is usually set to this port by default once your database is set up set the config values for environment variables prefixed with postgres in set env sh if you set up a local database your postgres host is most likely localhost if you set up an rds database your postgres host is most likely in the following format us west 1 rds amazonaws com you can find this value in the aws console s rds dashboard 2 s3 create an aws s3 bucket the s3 bucket is used to store images that are displayed in udagram set the config values for environment variables prefixed with aws in set env sh 3 backend api launch the backend api locally the api is the application s interface to s3 and the database to download all the package dependencies run the command from the directory udagram api bash npm install to run the application locally run bash npm run dev you can visit http localhost 8080 api v0 feed in your web browser to verify that the application is running you should see a json payload feel free to play around with postman to test the api s 4 frontend app launch the frontend app locally to download all the package dependencies run the command from the directory udagram frontend bash npm install install ionic framework s command line tools for us to build and run the application bash npm install g ionic prepare your application by compiling them into static files bash ionic build run the application locally using files created from the ionic build command bash ionic serve you can visit http localhost 8100 in your web browser to verify that the application is running you should see a web interface tips 1 take a look at udagram api 
does it look like we can divide it into two modules to be deployed as separate microservices 2 the dockerignore file is included for your convenience to not copy node modules copying this over into a docker container might cause issues if your local environment is a different operating system than the docker image ex windows or macos vs linux 3 it s useful to lint your code so that changes in the codebase adhere to a coding standard this helps alleviate issues when developers use different styles of coding eslint has been set up for typescript in the codebase for you to lint your code run the following bash npx eslint ext js ts src to have your code fixed automatically run bash npx eslint ext js ts src fix 4 set env sh is really for your backend application frontend applications have a different notion of how to store configurations configurations for the application endpoints can be configured inside of the environments environment ts files 5 in set env sh environment variables are set with export var value setting it this way is not permanent every time you open a new terminal you will have to run set env sh to reconfigure your environment variables to verify if your environment variable is set you can check the variable with a command like echo postgres username | cloud |
|
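the udagram notes above say set env sh must export postgres connection details (port 5432) and aws/s3 settings, and that the backend fails without them. a small pre-flight check can catch a missing variable before the api starts; the sketch below is written in python only for brevity (the project itself is node), and the variable names are guesses based on the postgres/aws prefixes mentioned above, so adjust them to whatever set env sh actually exports.

```python
import os
import sys

# Variable names are assumptions inferred from the POSTGRES_/AWS_ prefixes described above.
REQUIRED_VARS = [
    "POSTGRES_HOST",
    "POSTGRES_USERNAME",
    "POSTGRES_PASSWORD",
    "POSTGRES_DB",
    "AWS_REGION",
    "AWS_BUCKET",
]

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    sys.exit("Missing environment variables: " + ", ".join(missing) + " (source set_env.sh first)")
print("All required environment variables are set.")
```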
decentralized_AI | decentralized ai final project for siraj s school of ai authors benoit courty matthew mcateer alexandre moreau and jeddi mees for more background info read our whitepaper https github com trancept decentralized ai blob master whitepaper whitepaper md this is our try at building a decentralized ai well it is just a semantic segmentation task that run in a decentralized fashion the task is done on a machine in the internet like in a proprietary cloud but on a decentralized cloud you do not have to create an account with the computer owner all is handle by iexec the semantic segmentation is done by the mask rcnn https github com matterport mask rcnn project trained on the coco dataset http cocodataset org submit an image frontui https raw githubusercontent com trancept decentralized ai master img front ai2 jpg get the result in the work tab frontuiwork https raw githubusercontent com trancept decentralized ai master img front work jpg and you are done sampleresult https raw githubusercontent com trancept decentralized ai master img iexec team mrcnn png other sample mask r cnn https github com trancept decentralized ai blob master img 20180604 143926 png the docker image was based on the modern deep learning container from waleed abdulla https hub docker com r waleedka modern deep learning with the mask rcnn added into it along with a modified version of the demo packaged for iexec iexec is a whole ecosystem with a market place for dapps oracle mechanism scheduler workers https cdn images 1 medium com max 1200 1 iierfys1iqvvxncxfrghfa jpeg dedicated to off chain computing in a fully decentralized way the v2 is just out speaking from 1st of june 2018 iexec sdk https github com iexecblockchaincomputing iexec sdk is a nodejs application who allow to easily create and manage your application the result is that you can call it quite like an api to get your resulting image how to run iexec front side you could use it on the browser http nrxubuntu eastus2 cloudapp azure com get eth and rlc for kovan connect to metamask and switch to kovan ethereum test network ask for free eth on kovan faucet https gitter im kovan testnet and for free rlc on iexec marketplace https market iex ec then transfert rlc from your wallet to your account on top left of iexec marketplace https market iex ec build it from source cd frontend npm install npm run dev your browser will automatically go to localhost 8081 so you can access the frontend choose an image from your harddisk or copy past an url choose a worker in the list on the right click on iexec button openmined side in openmined https github com trancept decentralized ai tree master openmined folder you will find a demo of how to use open mined to train a model using decentralized grid computing capabilities how we make it building the docker image docker build docker keras cpu t trancept keras mrcnn v0 docker run v pwd iexec trancept keras mrcnn v0 http fr ubergizmo com wp content uploads 2017 11 nouvel algorithme correction panoramas google street view jpg docker push trancept keras mrcnn v0 iexec project init project get money iexec wallet getrlc for eth on kovan you have to go to ask for it on kovan faucet https gitter im kovan testnet check your wallet iexec wallet show you need to have eth and rlc send money to the iexec account marketplace to use it iexec account deposit 100 check money iexec account show deploy adding docker image to iexec edit iexec js run iexec app deploy iexec app show prepare order iexec order init buy important you 
have to edit iexec json at these step to edit the params string to match the parameters you want to send to the job how to execute iexec dapp easiest way the easiest way is to go to https market iex ec and place a buy order with an available sell order id dapp address 0xc790d024ec41a7649e7a0590e4ae05891fa61ef8 work params cmdline https storage canalblog com 78 32 802934 60160490 jpg command line way clone the repository change the image url in iexec json run you have to initiate an order to buy computing ressource then find one available then buy it show available computing ressource iexec orderbook show category 3 check a ressource iexec order show 170 buy the ressource iexec order fill 170 check the status iexec work show 0xfda65e0d09bf434ea1e52f4ec044a07d6e7d592d watch download | ai |
|
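the decentralized_ai notes above say to hand-edit the params string in iexec json so the submitted job receives the target image url. a small sketch of doing that edit programmatically is below; the nesting of keys inside iexec json is an assumption here (the real schema comes from the file generated by iexec project init), so treat it as illustrative only.

```python
import json

# Example image URL taken from the notes above.
IMAGE_URL = "https://storage.canalblog.com/78/32/802934/60160490.jpg"

with open("iexec.json") as f:
    config = json.load(f)

# The order -> buy -> params nesting is an assumption about the iExec SDK schema;
# check the actual file produced by `iexec project init` / `iexec order init --buy`.
buy_order = config.setdefault("order", {}).setdefault("buy", {})
buy_order["params"] = json.dumps({"cmdline": IMAGE_URL})

with open("iexec.json", "w") as f:
    json.dump(config, f, indent=2)

print("Updated iexec.json params with", IMAGE_URL)
```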
jku-ws20 | special topics cloud computing from an engineering perspective in this repository the content for the special topic cloud computing from an engineering perspective is maintained in this course we will cover the journey from code to a scalable application running in the cloud therefore we will learn how to build applications using continuous integration pipelines after building a state of the art cloud technology to operate applications namely kubernetes is explained then ways to deploy applications into a kubernetes environment are explored finally we will learn how to operate an application in a production environment table of contents this part of the course consists of five topic areas building a microservice 1 20building 20a 20microservice continuous integration 2 20continuous 20integration kubernetes 3 20kubernetes continuous deployment 4 20continuous 20deployment operations 5 20operations maintainers johannes br uer octocat johannes b andreas grimmer octocat agrimmer | cloud |
|
planeui | plane ui the modern html5 cross device responsive front end ui framework plane ui ui p aper p lane ui html5 https pandao github io planeui poster jpg lisence lisence html5 css3 web google material design scss jquery jquery web components commonjs amd cmd css ie8 ie8 github https github com pandao planeui archive master zip https github com pandao planeui archive master zip bower bower install planeui html link rel stylesheet type text css href dist css planeui min css script type text javascript src js jquery 2 1 1 min js script script type text javascript src dist js planeui js script ie8 9 ie8 html link rel stylesheet type text css href dist css planeui min css if gte ie 9 ie script type text javascript src js jquery 2 1 1 min js script endif if lt ie 9 script type text javascript src js jquery 1 11 3 min js script script type text javascript src dist js planeui patch ie8 min js script endif if lt ie 10 script type text javascript src dist js planeui patch ie9 min js script endif script type text javascript src dist js planeui js script xs sm 640px md 768px ipad lg 992px ipad pc xl 1200px pc xxl 1400px pc html div class pui layout pui layout fixed div pui layout fixed 960px pui layout fixed 980 pui layout fixed 1000 pui layout fixed 1200 pui layout fixed 1360 pui layout fixed 1400 pui layout fixed 1500 pui layout fixed 1600 pui layout fixed 1700 pui layout fixed 1800 12 html div class pui grid div class pui row div class pui grid xs 3 div div class pui grid xs 3 div div class pui grid xs 3 div div class pui grid xs 3 div div div class pui row div class pui grid xs 4 div div class pui grid xs 4 div div class pui grid xs 4 div div div class pui row div class pui grid xs 3 div div class pui grid xs 6 div div class pui grid xs 3 div div div class pui row div class pui grid xs 5 div div class pui grid xs 7 div div div flexbox ie9 html div class pui flexbox pui flex column header header div class pui flex div footer footer div https pandao github io planeui https pandao github io planeui arrow article app layout animations basic badge label tag button button sheet breadcrumb card colors material design colors color picker material design color picker checkbox close button comment dialog date picker fonts font sizer file input fullpage flexbox layout forms form validator grid layout gallery icons font awesome image list listview loading menu menubar menu accordion mask notice pagination progress rating radio button ring progress search slider switch button scrollto anchor container sidenav side slide off canvas plus tab texts table top10 tooltip timeline time picker uploader z depth material design z depth 1 jquery https jquery org jquery jquery license normalize css http necolas github io normalize css normalize css license font awesome http fontawesome io font awesome gpl license cc by 3 0 license http www iconfont cn license html5 shiv https github com afarkas html5shiv html5 shiv mit and gpl2 licenses respond https github com scottjehl respond respond mit license selectivizr http selectivizr com selectivizr mit license modernizr http modernizr com modernizr mit license flexie http flexiejs com flexie js mit license prefixes scss https github com pandao prefixes scss prefixes scss mit license 2 bootstrap http getbootstrap com bootstrap foundation http foundation zurb com foundation semantic ui http semantic ui com semantic ui amaze ui http amazeui org amaze ui ui kit http www getuikit net ui kit google material design http www google com design 3 gulp js http gulpjs com sass scss 
http www sass lang com mit license plane ui gbs graded browser support html 5 css3 es5 6 a b c d a webkit chrome 31 safari 7 opera 29 android 4 2 uc qq chrome ios safari 7 1 firefox 31 ie10 wp 8 1 ie b ios 6 x android 2 3 x firefox opera chromium ie9 wp ie c ie8 android 2 2 x d ie6 7 ios 7 android 4 2 chrome latest firefox latest safari 6 opera latest internet explorer 9 ie 9 html5 flexbox ie 8 ie 7 window phone node webkit phonegap android ios bug https github com pandao planeui blob master change md license the mit license mit https github com pandao planeui blob master license plane ui mit https github com pandao planeui blob master license copyright c 2014 2015 pandao | front_end |
|
poseWrangler | posewrangler alt tag epic pose wrangler docs site html images v2 png overview posewrangler is a tool for interfacing with epic s mayauerbfplugin the plugin is distributed by epic games and installed via quixel bridge this is the same version distributed through quixel bridge with the maya plugin v6 9 2 or later supports scenes created with the uerbfsolvernode multiple driver support initial blendshape support wip supports maya 2018 2022 support for custom mirror mappings to allow for rigs with naming conventions that deviate from the default ue5 conventions fully automatable via python and mayapy serialization deserialization to dictionary or json file support for custom extensions and context menu actions contributors chris theodosius chris evans judd simantov david corral borna berc opening the tool to load the tool you can call it like so from epic pose wrangler import main pose wrangler main posewrangler | front_end |
|
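the posewrangler entry above ends with a flattened version of its load snippet ("from epic pose wrangler import main pose wrangler main posewrangler"). reconstructed as runnable python below; the underscored module path and class name are inferred from that flattened text, so verify them against the shipped package.

```python
# Run from Maya's script editor (or mayapy) with the PoseWrangler package on the Python path.
from epic_pose_wrangler import main  # module path inferred from the flattened snippet above

pose_wrangler = main.PoseWrangler()  # launches the tool's UI / API entry point
```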
martinchavez.dev | personal website | cloud |