diff --git "a/devpost_com/devpost_com_samples_markdown.json" "b/devpost_com/devpost_com_samples_markdown.json" new file mode 100644--- /dev/null +++ "b/devpost_com/devpost_com_samples_markdown.json" @@ -0,0 +1,702 @@ +[ + { + "url": "https://devpost.com/software/built-with/scss?page=2", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nSort by:\n\nTransform your imagination into unique digital postcards by crafting personalized greetings using AI tools and NFTs.\n\nNiya is a web application that connects AI-driven disease prediction, mental health journaling, and mood tracking. It gives users control over their physical and mental well-being.\n\nSecure, Scalable, and Innovative Authentication for Physical Therapy App\n\nA Twitch Extension for letting viewers apply and customize Filters to the stream.\n\nCombination of chatbot, spooky sounds and spooky timer. 
Simple website for simple Halloween effects.\n\nSanctuary is a virtual planner and wellness site, with beautiful wonderland-themed art.\n\nInput health data and get a lifestyle grade and suggestions based on it.\n\nNever present the same way again\n\nBestfreelancerscript offers a superior-quality and fully customizable Upwork clone script that enables startups and entrepreneurs to start an online freelancer business on a small budget.\n\nYour Personalized AI-Powered Technical Interviewer: Simulate real-world coding interviews with tailored feedback to help you prepare for success.\n\nSnap to Schedule: Your Personal AI Scheduling Assistant. For students, young professionals, parents, and more.\n\nAn approachable interactive web-app course on financial literacy\n\nAn interactive map using leaflet.js and information from BYU I-Belong.\n\nFeedback from potential users indicated a desire for a comprehensive health assistant that combines various health services in one platform.\n\nTo-Do. Today. Two-Done. Two-Done is more than just a productivity tool; it’s a comprehensive tool for enhancing productivity and diagnostic capabilities in online education.\n\nWho doesn't love free stuff? Especially if it's free education! That's where freeCodeCamp comes in. freeCodeCamp is my and many people's first mentor. This is a speedily done tribute to freeCodeCamp.\n\nEffortlessly embed and showcase your Bitbucket markdown files directly within Confluence pages for seamless documentation and collaboration.\n\nMaking sure your team's signal is covered? Checking for strong device connections? Find and manage weak network signals from devices using TFind.\n\nHomeConnect is here to simplify your search and connect you to the perfect home in Fairfax. 
Whether you're renting or buying, we make it easy to find options that fit your needs.\n\nMy portfolio\n\nA responsive website that provides information for people in need of food, shelter, or health services in the BC area, pointing them to the best nearby resources to find hands-on help.\n\nHurricaneGIS empowers communities to report real-time hazards like flooding and debris on an interactive map, providing vital visual data and information to aid disaster recovery efforts.\n\nTecDroid empowers youth, women, and marginalized communities through robotics, innovation, and social action, breaking barriers in STEM while impacting communities and fostering future generations.\n\nPersonalized Recommendations, Powered by Smart Data", "content_format": "markdown" }, { "url": "https://devpost.com/software/where-s-wally-cyq0d5", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nPeople with no sense of direction.\n\n## What it does\n\nAllows the user to point in a direction and then tells the user what is in that direction.\n\n## How we built it\n\nWe split up the work into front-end and back-end; the Python back-end tracks the user's GPS location and uses it to pull data from some map APIs. Then, the data is written to JSON files formatted to easily extract the GPS information for a large number of arbitrarily named points (because Google uses a unique textual identifier for every place it knows). Finally, the JSON files are uploaded to Firebase directly from the Python script when they are created. 
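The JSON layout described above (arbitrarily named points keyed by each place's unique textual identifier, with the coordinates in a fixed position) could look roughly like this sketch; the helper name, field names, and IDs are illustrative assumptions, not taken from the project:

```python
import json

# Illustrative sketch only: one plausible layout for the per-place JSON files.
# Each place is keyed by its unique textual identifier (as Google provides),
# so the app can extract coordinates without knowing place names in advance.
def build_places_json(places):
    """places: iterable of (place_id, name, lat, lng) tuples from a maps API."""
    return json.dumps(
        {pid: {"name": name, "lat": lat, "lng": lng}
         for pid, name, lat, lng in places},
        indent=2,
    )

# Hypothetical IDs and coordinates, for demonstration.
doc = build_places_json([
    ("id-001", "Walmart Supercenter", 45.50, -73.57),
    ("id-002", "Cafe Depot", 45.51, -73.58),
])
```

A file in this shape can then be uploaded as-is, and the reader only ever touches the fixed "lat"/"lng" slots.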
The app takes the data and sorts it to present it to the user: using the Mio armband's sensors and the phone's GPS, the app determines what direction the user points in and constructs a circular sector; since the location data is already filtered by radius, we only need to check that each point is counter-clockwise of the initial side and clockwise of the terminal side of the sector. Thus we can quickly sort the data cached from the APIs.\n\n## Challenges we ran into\n\nNone of us had ever developed an app before, and most of our experience is in hardware development and hacking. We also had trouble with Google's data, so we had to come up with a clever and serviceable fake.\n\n## Accomplishments that I'm proud of\n\n## What I learned\n\nHow to read and write JSON, basic Java and JavaScript, using Python with online APIs, and interacting with Firebase through Python.\n\n## What's next for Where's Wally?\n\nFinding all the closest Walmarts.", "content_format": "markdown" }, { "url": "https://devpost.com/software/faux-news", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nFake news is a problem that faces everyone: how do we tell what is real from what is not? Tim Cook, the CEO of Apple, recently said “we have to give the consumer tools to help with this and we’ve got to filter out part of it before it ever gets there without losing the openness of the internet. It is not something that has a simple solution.” Normally, to confirm an article’s validity, people have to read through citations and trace them back to reliable sources. It is a time-consuming and tedious process that no normal news consumer does. Hence, many people fall prey to believing very biased or flatly untrue news, leading to misinformed decision making. With the advent of software like BS detector, finding sources of untrue news has become easier. 
However, these tools often use a database of biased or untrusted sites, which is less versatile and allows novel sources of fake news to slip through the cracks. Other tools trace sources back to their own sources and evaluate the validity of those; this still requires manual upkeep. We wanted something different.\n\n## What it does and how we built it\n\nAs two programmers who have based a whole venture off of natural language processing, we knew the power of NLP. We had seen it identify tweet authors based on 10 past tweets from different authors. We had seen it predict protein-protein interactions. We wanted to apply this power to this problem. We hypothesized that there is a fundamental difference in how real and fake news are written, specifically in their headlines. Fake news headlines generally grab attention more immediately to get more clicks. We believed the article text itself would have more noise, so we targeted the headlines because of the stronger perceived differences. We wanted to differentiate between real and fake news based on their headlines alone. We used IBM Bluemix and IBM Watson to achieve this, and we retrieved fake and real news from the popular crowdsourced data platform Kaggle. The fake news dataset contained over 12,000 fake news headlines as found by BS detector, and the real dataset had over 420,000 news headlines. Due to the limitations of Bluemix’s Natural Language Classifier, we only trained the model using around 6,000 headlines from each set, trying to ensure a balanced dataset while maintaining a large enough testing set. We then tested on a set of 13,000 real and fake headlines, with around 6,000 fake ones and around 7,000 real ones. We achieved a surprisingly high accuracy of 92.4%. Each prediction also came with a corresponding confidence level. We were so surprised by this that we went back to check whether there were any duplicates between the training and testing sets, and we found there were none. 
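A duplicate check like the one just described can be sketched as follows (a hypothetical helper, assuming headlines are compared after simple normalization; not the project's actual code):

```python
# Sketch of the sanity check: confirm that no headline appears in both the
# training and testing sets. Headlines are normalized (lowercased,
# whitespace-collapsed) so trivial variants are caught too.
def overlapping_headlines(train_set, test_set):
    normalize = lambda s: " ".join(s.lower().split())
    return {normalize(h) for h in train_set} & {normalize(h) for h in test_set}

# Tiny made-up example: the second test headline is a near-duplicate.
dupes = overlapping_headlines(
    ["Scientists Discover New Planet", "Local Team Wins Finals"],
    ["Aliens Built The Pyramids", "local  team wins FINALS"],
)
```

An empty result means the reported accuracy was not inflated by train/test leakage.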
We have implemented this in a Python GUI application that takes a headline as input and outputs whether the algorithm believes it is real or fake.\n\n## Challenges we ran into\n\nThe classifier took a long time to train, and with Internet connectivity issues our Python programs would often crash, since they used the IBM Bluemix API to access the Natural Language Classifier service we created.\n\n## Accomplishments that we're proud of\n\nWe achieved a high accuracy of 92.4%.\n\n## What we learned\n\nMachine learning is surprisingly powerful.\n\n## What's next for Faux News\n\nWe want to implement this as a legitimate web app that users can quickly access to enter headlines of articles they come across. It will output whether the article contains real or fake news based on our classifier, providing simple and quick information for our users. This has the potential to transform consumers' mindsets and empower them.", "content_format": "markdown" }, { "url": "https://devpost.com/Simawn", "domain": "devpost.com", "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "neko atsume x lootboxes\n\nSimple and fun game where you and your friends try to guess the prices of some Amazon items, ranging from normal to cute to \"wait, is that legal?\". 
Obviously, the one closest to the actual price wins.\n\nNew in the neighborhood? Find amenities and other useful services that are a few minutes' walk away!\n\nMeowney has financial advice, if you have coin.\n\nAn appointment system for easier interaction between healthcare professionals and their patients\n\nnever go to boring parties anymore\n\nA web application that searches and aggregates posts for a specific term from Facebook, Instagram, and Twitter\n\nThe tweet that leads to ultimate discoveries from your local community.", "content_format": "markdown" }, { "url": "https://devpost.com/software/candles-and-line", "domain": "devpost.com", "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nThis data visualization is a very common representation of stock prices, and you can easily see it on any website or application showing live stock prices. Thus, I have tried to reproduce it.\n\n## What it does\n\nIt takes the name of the company whose stock prices you are interested in and creates a chart for it. The company should be given by the ticker code it is referred to by on the site. The visualization has a line chart, which represents the Relative Strength Index (RSI) of the stock, and candles, which represent the actual price ranges of the stock.\n\n## How we built it\n\nThis project is written only in Python. We have used the following Python packages to build the project:\n\n* yfinance (stock prices collection)\n* Plotly (data visualization)\n\nWe have made a function to calculate the RSI of the stock concerned.\n\n## Challenges we ran into\n\nIt was difficult to combine the two charts into a single chart while also taking both y-axes into consideration.\n\n## Accomplishments that we're proud of\n\nThe ability to scroll or zoom into a specific part of the timeline is a really nice feature of my implementation. 
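For reference, an RSI function like the one mentioned in the build notes above could be sketched as below (a simple-average variant of the textbook formula; the project's own implementation may differ):

```python
def rsi(prices, period=14):
    """Simple-average RSI over a sequence of closing prices.

    Sketch of the standard formula: RSI = 100 - 100 / (1 + RS), where RS is
    the ratio of average gain to average loss over the first `period` moves.
    """
    if len(prices) <= period:
        raise ValueError("need more prices than the RSI period")
    gains, losses = [], []
    for prev, curr in zip(prices, prices[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))   # upward moves only
        losses.append(max(-change, 0.0))  # downward moves, as positive values
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    if avg_loss == 0:
        return 100.0  # no losses at all: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A steadily rising series pins the value at 100 and a steadily falling one at 0, which is the sanity check usually applied to RSI code.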
Also, the close relationship between RSI and stock prices helps our data visualization tell a good story for the user.\n\n## What we learned\n\nRSI is a good parameter for stock markets and can be used to time purchases. A sudden drop in the RSI of a reputable stock indicates a good time to buy it, and we should sell stocks when the RSI starts declining after a long rise.\n\n## What's next for Candles and Line\n\n* Presenting both y-scales on a single chart\n* Creating a dashboard showing multiple stocks\n* Implementing sub-plots in the dashboard", "content_format": "markdown" }, { "url": "https://devpost.com/software/tru-jr0iml", "domain": "devpost.com", "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nPredatory journals are the scientific equivalent of fake news. They fast-track the publishing of research articles through low editorial standards and non-rigorous science. These non-peer-reviewed articles are damaging for researchers, the open access scientific community, and scientists looking to publish their work.\n\n## What it does\n\nOur solution is a browser extension that works with Google Chrome. 
The user receives a pop-up warning when they are viewing an article that comes from a potentially predatory journal.\n\n## How we built it\n\nWe used Python to extract journal names from two separate lists of predatory journals: Beall's list (https://beallslist.weebly.com/) and Scholarly Open Access (https://web.archive.org/web/20170111172309/https://scholarlyoa.com/individual-journals/)\n\n## Challenges we ran into\n\nIntegrating JavaScript into an HTML extension, and formatting the predatory journal names to be searchable.\n\n## Accomplishments that we're proud of\n\nIncorporating jQuery into a Google Chrome extension to allow real-time search of our predatory journal database.\n\n## What we learned\n\nWe had very little previous experience with JavaScript and learned how to search Google for coding solutions. This was also our first experience using GitHub for a group coding project.\n\n## What's next for Tru\n\nA better user interface that provides animated warnings, and machine learning integration to suggest similar articles for the user that aren't from predatory journals.\n\n## Built With\n\n* javascript\n* jupyter\n* python", "content_format": "markdown" }, { "url": "https://devpost.com/software/issue-dependency-tracker", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nWhen working on a big project with cross-department dependencies, we usually tracked all our work in Jira. If some of the issues were dependent or blocked, we linked the issues together and waited until the blockers were addressed.\n\nThis works great when the relationships aren't complex and you need to track only first-level links: you can simply see them in the Linked issues section of the issue detail view.\n\nBut it often happened that the issue blocking the first one was itself blocked by another. 
And this is how the idea of the Issue Dependency map was born.\n\n## What it does\n\nIssue Dependency Tracker visualizes nested issue dependencies on a simple tree map.\n\nUsing this map you’ll:\n\nunderstand the full context of the relationships between the issues,\n\nsee the status and link type of the issues (including nested issues).\n\n## How I built it\n\nWe pulled all the dependencies of the subject issue and structured them accordingly in preparation for sending to a third-party service, https://quickchart.io/, which exposes an API that accepts data and returns a processed diagram image. This dependency diagram then gets embedded into the issue fragment using the Image component from Forge UI.\n\n## Challenges I ran into\n\nDue to Forge limitations and restrictions on rendering any sort of pure HTML and CSS elements, we had to come up with an alternative way of generating the diagram.\n\nTranslating the dependencies from the current structure into a new one was a bit of a hassle.\n\n## Accomplishments that I'm proud of\n\nThe initially planned implementation was going to use an SVG canvas, but we quickly decided against that due to the level of complexity required to build a dependency system based on SVG shapes. It was a great moment when one of our team members suggested this third-party service, which enabled us to build the Forge app quickly.\n\n## What I learned\n\nThat when there are limitations, you get creative :)\n\n## What's next for Issue Dependency Tracker\n\nThe initial step will be to convert it into a Connect app to publish on the Marketplace. 
Eventually, when Forge is ready, we will port it back to Forge.", "content_format": "markdown" }, { "url": "https://devpost.com/software/documenting-with-docusaurus-version-2-for-beginners", "domain": "devpost.com", "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nLearning and sharing are two important ways to grow professionally. This is what motivated me to participate in this competition: it inspired me to share my knowledge with other people who are passionate about technology and to make a small contribution to society. I was also inspired by all those people who are starting out in the world of programming, developing algorithms to show their creativity and what they are capable of.\n\nAnother inspiration was to build an example site using Docusaurus. For that I chose one of my favorite topics, OpenXR. I found it very cool that while users learn to use Docusaurus, they also learn about virtual reality.\n\n## What it does\n\nTo help generate new knowledge, I have created a tutorial based on Docusaurus Version 2. All the documentation is available in my published GitHub repository. The tutorial shows step by step what must be done to install, configure, and run our website locally. Once we are satisfied with the work, the tutorial guides you through putting the project in a GitHub repository and seeing what our website would look like on the internet. The images I have provided are very intuitive, and I am sure they are very useful.\n\nBeyond the steps in the tutorial, two videos were also created, available on YouTube, that explain what the user must do to work with Docusaurus.\n\n## How I built it\n\nThe tutorial is designed for people who have no experience at all working with Docusaurus version 2. For that reason, it is divided into sections. 
Each section shows the steps and requirements that must be followed for correct execution. With the help of images, it expresses ideas and steps in a clearer, more precise, and more concise way, making the tutorial effective and efficient.\n\nThe tutorial was constructed by reviewing technical manuals and using simple language to convey clear ideas in each paragraph. In addition, it is often very useful to provide images that show what was just explained; the user will therefore find images illustrating how to modify content directly in the files and how those modifications are then shown on the web.\n\nAfter explaining all the theory in the tutorial, I decided to create two videos, one in Spanish and one in English. They were created in two languages with the aim of expanding the audience of people interested in learning about Docusaurus.\n\nAnd finally, I built a site using Docusaurus to demonstrate what can be done. In this case, the topic I decided on was OpenXR. Since I work creating applications for VR, it seemed like a good topic to reference and build a Docusaurus site around.\n\nThe English version of the video can be consulted at this link: https://www.youtube.com/watch?v=OkMr8jmLyKU\n\nAnd the Spanish version of the video can be consulted at this link: https://www.youtube.com/watch?v=yYkPs3Q9UQE&lc=Ugw5PIpH8ar8dFaky5d4AaABAg\n\n## Challenges I ran into\n\nI think the main challenge was creating a tutorial that is as clear and precise as possible for the audience that reads it. Another challenge was trying not to overlook any detail needed for the proper execution of all the steps. 
That meant revising everything from grammar and punctuation to the clarity of the sentences and the images that support them.\n\nOn the other hand, I tried to create videos that were as explanatory as possible and very easy to understand. Creating the videos in two different languages was another challenge to overcome.\n\n## Accomplishments that I'm proud of\n\nI am very happy that the information I have provided, based on my experience and knowledge, can help other people generate new knowledge and learn about the use of information technologies. I really enjoy using the Markdown language to create fascinating things.\n\nAnother thing I am proud of was collaborating with the open source community to demonstrate how useful these tools can be. Creating a repository available to any user, where they know they will find quality information, is really gratifying because you know you are helping people develop new knowledge.\n\nPublishing the videos on YouTube gives me great satisfaction because I know many users will be able to watch them and learn something new.\n\n## What I learned\n\nI learned that it is important to create a very well structured and elaborate tutorial for all types of audiences. My learning focused on making the tutorial easy and understandable for anyone; for that reason, I learned to take care of every detail in this tutorial.\n\nI also learned relevant aspects of the Markdown language and how you can keep information organized that is later rendered as HTML on the web. 
Another thing I learned is that, thanks to the Markdown language, any change we make to the source files is immediately displayed on the web, which pleasantly surprised me.\n\n## What's next for Documenting with Docusaurus Version 2 for beginners\n\nExtending this tutorial to the intermediate or advanced level, where users can implement source code to see changes directly in the templates; working with images, layout styles, and the incorporation of new elements in the template. I would also like to share the tutorial with developer communities and get feedback, in order to keep growing the tutorial and making new information available.", "content_format": "markdown" }, { "url": "https://devpost.com/software/shiritori-alexa", "domain": "devpost.com", "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## What it does\n\nYou can play the word game on Alexa to help train or develop vocabulary.\n\n## How I built it\n\nIt's built on an AWS Lambda function using the Alexa Skills Kit package.\n\n## Challenges I ran into\n\nIntent schemas didn't work as well as I planned, and responses were triggering the wrong intents, which caused slots to be wrong.\n\n## What's next for Shiritori Alexa\n\nI will add more classifications for words, as well as giving each word a dictionary definition by hooking it up to an English dictionary, implemented within the backend.\n\n## Built With\n\n* amazon-alexa\n* lambda\n* node.js", "content_format": "markdown" }, { "url": "https://devpost.com/software/checkmait-dmcpxz", "domain": "devpost.com", "file_source": "part-00608-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# checkmAIt\n\ncheckmAIt was made from a combined love of board games among the group members, who wanted to do an integrated hack. 
We pondered which games could best be improved with technology, and thought of implementing a chess game where you don't have to move the pieces and can play against an AI (or another person). The goal was to have the game integrated with Google speech-to-text, but because we overscoped, we were unable to get it working completely reliably, so we also allow users to input moves through a computer. The game accepts standard chess notation and algebraic chess notation.\n\n## Requirements\n\nPython2 and Python3\n\nPython3 modules: chess, chess.uci\n\nGoogle Voice Recognition", "content_format": "markdown" }, { "url": "https://devpost.com/software/relevant-tag-suggestions", "domain": "devpost.com", "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nWhile many believe technology is a limiting factor in social interaction, we wanted to use it in a way that enhances social interaction.\n\n## What it does\n\nGiphy Guesser is a simple phone game built on Giphy in which one player chooses a .gif relevant to a word or phrase of their choosing, and the other players guess that word or phrase after viewing the .gif.\n\n## How we built it\n\nWe used Expo.io to easily create an app which runs on both iOS and Android, calling the Giphy API to fetch relevant .gif images. The app is made with React Native, which works well with iOS and Android.\n\n## Challenges we ran into\n\nWe had a few issues deciding which frameworks to use, but Expo.io was very easy to understand and use. We also ran into issues using various other APIs and technologies, which limited us to the ones we were able to work with.\n\n## Accomplishments that we're proud of\n\nWe scrapped another idea halfway through the hackathon, so we didn't have as much time as we would have liked to build this one. 
However, we got a working proof of concept, which we are pretty proud of.\n\n## What we learned\n\nPreparing more ideas ahead of time would be super beneficial for situations where we decide an idea is too difficult or find it to be unviable.\n\n## What's next for Giphy Guesser\n\nBetter multiplayer features over Bluetooth, Wi-Fi, etc.; more game modes and variations; smarter and more relevant GIF suggestions.\n\nLink to app: https://expo.io/@christinasz/giphyguesser\n\n## Built With\n\n* expo.io\n* giphy-api\n* react-native", "content_format": "markdown" }, { "url": "https://devpost.com/chloe2407", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "Pétudier aims to aid students in becoming more productive by blocking distracting websites when it's time to study. It promotes self-improvement and effective time management.\n\nHive is a Flutter mobile application used to build awareness around mental health among students. 
Using sentiment analysis, Hive’s smart journal allows users to better understand their mood.\n\nThrough our website (a prototype), the user would input their health information and then receive the meal suggestions that are healthiest for them and the environment, taking the user's health into account.", "content_format": "markdown" }, { "url": "https://devpost.com/software/autonomous-security-guard", "domain": "devpost.com", "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nI was inspired by machine learning and AI and wanted to work with them to make effective models for social well-being.\n\n## What it does\n\nIt uses a face recognition deep learning model made in Python to recognize a person; if that person is authorized or known, the door of that place or office opens for them via a hardware system.\n\n## How I built it\n\nIt is built using a deep learning model made in Python, with a virtual environment connected to the model and an Arduino Uno.\n\n## Challenges I ran into\n\nThe challenge was making a physical model at home; due to the pandemic, I was unable to build a big model of the door and hardware system.\n\n## Accomplishments that I'm proud of\n\nEven with a small physical door model, the system runs very nicely: as it recognizes a person as authorized, the door opens for them.\n\n## What I learned\n\nI learned how to connect ML and DL models with other software and with hardware systems, and how to build solutions to problems.\n\n## What's next for Autonomous security guard\n\nIn the future I will build a more sophisticated hardware component and physical model.", "content_format": "markdown" }, { "url": "https://devpost.com/software/quick-meds", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nA group member experienced the 
difficulties of communicating with EMS.\n\nTwo other group members have training in emergency first aid and know how long the procedure of calling EMS can be.\n\n## What it does\n\nFor those with medical conditions (e.g. angina), if they were to have a reaction (such as a heart attack), calling for help would be only a simple touch away, rather than the long process of explaining the details in a call.\n\n## How we built it\n\nUsing Marvel, we developed a model of the UI for our project.\n\nWe used Pygame and Python to create the submission.\n\n## Challenges we ran into\n\nTime constraints: we were unable to implement all the features we wanted to include in the project (a database where we can store everyone who requested help and rank them in order of importance, similar to the current process). We were also going to add an SMS API for the purpose of the hackathon, but we ran out of time.\n\nWe were also unable to turn the Pygame prototype into an app, as we had trouble installing the necessary software.\n\n## Accomplishments that we're proud of\n\nPersistence in the face of dev's block: we went through many ideas and decided upon a reasonable creation we were satisfied with.\n\n## What we learned\n\nThe importance of having a good user interface.\n\n## What's next for Quick Meds\n\nPartnering with the Peel emergency services to connect to their database and send requests for help.", "content_format": "markdown" }, { "url": "https://devpost.com/software/facticity", "domain": "devpost.com", "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "## Inspiration\n\nWhenever I start reading an article on the web, I often struggle with the advanced vocabulary used. I used to rush to search for the word on the internet, which breaks the flow of reading, and the brain takes some time to regain focus on the article. 
Being a programmer, I thought of making a bookmarklet that gives you instant definitions with just one click!\n\n## What it does\n\nBy dragging the bookmarklet to the bookmarks tab, you are ready to use it. Open a webpage in your browser, click on the bookmark (facticity) icon to activate it, and then you can select the words you want the definition of and voila - it shows you the definition. There's also a \"Learn More\" link that will take you to a page with a lot more information about the selected word.\n\n## How we built it\n\nI used jQuery along with vanilla JavaScript to build the bookmarklet. For the website, I used HTML5, CSS3, and JavaScript for the frontend and Node.js and Express.js for the backend.\n\n## Challenges we ran into\n\nThere were a lot of problems I ran into, like fetching the data and passing it to the innerHTML of the tooltip, removing the tooltip once the user scrolls or touches the body at any point, and pushing my code from a local repository to a GitHub repository (this was my first time doing this).\n\n## Accomplishments that we're proud of\n\nI am proud of finally making a bookmarklet. This is my first time doing so. I learned a lot, and of course this was created in a span of only 1.5 days. Plus, now I won't need to rush to Google search to find the meanings of words.\n\n## What we learned\n\nI learned how to make bookmarklets and how to push local code to GitHub using git.\n\n## What's next for Facticity\n\nTurning it into a Chrome extension. Allowing users to save the words they select so that they can access them later.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/nextstop-a-distributed-real-time-scheduler", + "domain": "devpost.com", + "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe were motivated to solve the scheduling problems in real time based on one of the use cases provided by Optum.
We were inspired by the kind of problem this would solve and believe it has great potential for serving users at large scale in similar situations.\n\nThe potential: consider 100 nodes at a particular location/point that need to be assigned tasks on the go. We could address this with a straightforward scheduling algorithm, given the constraints. Now imagine a very high number of nodes at a single location/point, and a high number of such locations. This becomes a large-scale distributed task assignment problem, and we aim to tackle it with our system.\n\n## What it does\n\nA generic solution for real-time task assignment applications. Given a few constraints, the system can assign tasks on the go in a large distributed environment based on real-time events. The solution is capable of real-time analysis of the progress of each task and can handle reassignments as well, to the best fit of the user/organization. We present this system to seamlessly assign tasks to various nodes on a distributed network; it can be configured according to the needs of the user.\n\n## How we built it\n\nWe built this system to serve a large number of clients on a distributed network with real-time task assignment. The system is built on the distributed streaming platform Apache Kafka and is designed around the producer-consumer model, which allows us to receive real-time requests from various clients and serve them efficiently. We implemented the scheduling algorithm in Python so that it could be useful in most scenarios with similar goals. We use SQL databases for logging and storing assignments. We make use of Redis for efficient updates, fast caching, and temporary assignment of tasks. We were also able to efficiently handle reassignment of tasks on the go using Redis and an on-the-go scheduling system.
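The best-fit matching under the stated constraints (task priority, travel time, remaining working hours) can be sketched independently of Kafka and Redis. This is an illustrative toy, not the team's actual scheduler: the `Task` fields, the fixed driving speed, and the scoring weights are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    priority: int          # higher = more urgent (assumed convention)
    lat: float
    lon: float
    service_minutes: int   # time needed at the patient's location

def travel_minutes(lat1, lon1, lat2, lon2, speed_kmh=40.0):
    """Great-circle distance converted to a rough driving ETA in minutes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    km = 2 * 6371.0 * math.asin(math.sqrt(a))
    return km / speed_kmh * 60.0

def assign_best_task(nurse_lat, nurse_lon, tasks, minutes_left):
    """Pick the feasible task with the best priority-vs-travel-time score."""
    best, best_score = None, float("-inf")
    for task in tasks:
        eta = travel_minutes(nurse_lat, nurse_lon, task.lat, task.lon)
        # Skip tasks that cannot be finished within the nurse's remaining hours.
        if eta + task.service_minutes > minutes_left:
            continue
        score = task.priority * 100.0 - eta  # assumed weighting
        if score > best_score:
            best, best_score = task, score
    return best
```

In the full system a function like this would run inside the Kafka consumer, with the candidate tasks and tentative assignments held in Redis.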
The system has a robust architecture which is highly scalable and adaptable to many environments.\n\nUse Case - Optum:\n\nThe specific use case was from Optum: they needed a mechanism to handle task management for their nurses, who would usually drive out to serve patients. There are a number of constraints - a given due date for the task, the priority of each task based on the task type, and most importantly the time constraints, which are the time taken to provide the service and the time taken to reach the patient. They needed a system that would efficiently assign these patient-serving tasks on the go when their nurses opened an application; a nurse would drive to a patient’s place and remain there until the service was completed. The challenge was to assess the nurses’ situations and assign tasks on the go, keeping the constraints in mind.\n\nOn the user's end: turn on the application, see an optimal task assigned, and proceed with it. Repeat the process until working hours are met or there are no more tasks available.\n\nOur design: a producer-consumer system that handles the assignment of tasks; the producer grabs the location and passes it to the assigned consumer using Apache Kafka and Zookeeper. The consumer has a scheduling server that takes the location as input and matches it against the list of tasks, which are already optimized based on the various parameters and data provided to us by Optum. The server then assigns the best possible match in real time and saves all the assignments in a temporary fast cache using Redis. There is an associated logging service, which simply logs the nurse’s real-time location on the go for analysis. We used a geo-location service to get the location coordinates and a geo-mapping service to calculate the ETA.
We receive the location of the nurse at regular short intervals to assess the realistic feasibility of completing the task and efficiently re-assign the task if necessary.\n\n## Challenges we ran into\n\nDesigning the real-time server that responds to every user event and schedules tasks accordingly. We had to explore Kafka and Redis to facilitate processing real-time data and running the scheduler.\n\n## Accomplishments that we're proud of\n\nWe were able to devise a generic, scalable system which solves the real-time scheduling problem on a distributed platform.\n\n## What we learned\n\nContributing to open source has a charm of its own. We devised a generic distributed scheduler, and it was possible only by using other open-source libraries.\n\n## What's next for NextStop-A-Distributed-Real-Time-Scheduler\n\nThis system architecture is capable of handling a high number of nurse requests from various possible locations and has the potential to scale well in production. Running the whole system on Google Cloud Platform helped us utilize the platform's advantages and also made the system more available. We also plan to build an efficient user interface to make it more realistic, with some visualizations.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/tonifai", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThis project was Ananth's idea - the concept of not intersecting, but rather converting images and music was a fascinating one.\n\n## What it does\n\nVia the iOS or Android app, pick a contact who you think would enjoy hearing what song they remind you of. Their image gets uploaded to the Clarifai API, which returns tags for the user.
Finally, those tags are searched in a lyric searcher, which returns the top song the tags match.\n\n## How I built it\n\nThe Twilio integration and RESTful API were built by Ananth in Python/Flask. Varun worked on the Android app. Jay worked on the iOS app.\n\n## Challenges I ran into\n\nThe biggest challenge was pivoting at the last minute from another idea - we fought to get done and have a presentable MVP, and this was certainly our biggest challenge.\n\n## Accomplishments that I'm proud of\n\nPivoting super late and making it work!\n\n## What I learned\n\nHack, hack, hack!\n\n## What's next for Tonifai\n\nNot sure yet - we can't wait to find out ourselves!", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/apollo-gpt", + "domain": "devpost.com", + "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nApollo GPT began for us as a tool for interactive research and cybersecurity functionality for our virtual reality platforms.\n\nWhat Apollo GPT ended up becoming was a fully interactive AI model, far more than a simple research tool or a security model in our VR environment!\n\n## What it does\n\nApollo GPT is an AI-driven VR platform that creates personalized, immersive experiences by integrating the ChatGPT API, Azure TTS, and Unreal Engine. It adapts to your needs in real time, revolutionizing the way you learn, work, and socialize.
Key features include:\n\n* Immersive learning and training with real-time feedback, tailored to individual needs.\n* Seamless remote collaboration, fostering productivity and innovation across global teams.\n* Access to virtual healthcare information and directions to key services.\n* Personalized entertainment experiences, such as interactive storytelling and virtual events.\n* Enhanced socialization through realistic virtual environments, reducing isolation.\n* Innovative business solutions, including virtual showrooms and employee training.\n\n## How we built it:\n\n* Voice SDK by Meta: captures and processes the microphone data.\n* Azure Cognitive Services - STT: converts the captured speech to text.\n* OpenAI Chat GPT 3.5/4.0: processes the text input and generates a relevant response.\n* Azure Cognitive Services - TTS: converts the generated response text to speech.\n* MetaHuman SDK: generates procedural animations based on the generated speech.\n* MetaHuman: converts Face Mesh data into 3D NPC model files.\n* Quixel Bridge: handles rendering and integration of the 3D models in Unreal Engine 5.\n* UE5 (Unreal Engine 5): renders the scene with the 3D models and animations, displaying the final output to the user.\n\n## Challenges we ran into:\n\nWe lacked the processing power needed to integrate the functionality of our demos; we were unable to properly combine each tool to produce the results we wanted.\n\n## Accomplishments that we're proud of:\n\nWe have developed and gathered all of the tools necessary to make our project work.\n\n## What we learned:\n\nGet better equipment.\n\n## What's next for Apollo GPT\n\nThis project already includes XR compatibility; the next step will be the integration of AR products.\n\n## Built With\n\n* api\n* azure\n* c++\n* chatgpt\n* cloud-services\n* python\n* sdk\n* unreal-engine\n* vr\n* xr", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/devpost-project-modals", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n> \n\n#idea #prototype #wip\n\n## Inspiration\n\nI browse my feed to discover fun and interesting projects, and the hackers behind them. Having to leave the feed to learn more about a project is annoying, and opening 10 new tabs isn't the best option either. Chrome is eating enough of my RAM already, thank you.\n\nSolution? Modals! Like Facebook, Dribbble, Tumblr, and everybody else :)\n\nTake a look:\n\nDon't pay attention to the design just yet.\n\n## Challenges\n\nThe project data comes back from the server as JSON, and is rendered in a #marionette layout.\n\nMaking the follow buttons work was super easy, as they were built with this use case in mind. The like button? Not so much.
It's actually a fake button in the GIF above.\n\n## Built With\n\n* api\n* backbone.js\n* idea\n* marionette.js\n* prototype\n* wip", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/built-with/sinatra", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nPeople who create AI art want power tools (train models, generate inferences, etc). But GPUs are hard to come by and often hosts give limited UIs. What are they to do? That's where we come in.\n\nLevel up your PoGo game by quickly seeing which pokemon in PoGo are weather-boosted. Learn more about each one to create a more effective Raid Attack team.\n\nThis Sinatra app running at RMRK.art will take any RMRK 1.0 NFT art tag and render the image with 10px padding, centered, with \"object-fit: contain;\".
This makes it perfect for using in an iframe.\n\nOur widget automatically displays a business's water use data and target goal and compares them to the industry average, allowing customers to choose water-wise businesses to support.\n\nSee if your budget is in balance using the Balanced Money Formula\n\nipywidgets forms for Jupyter Notebook education\n\nA project built using Ruby and the Sinatra framework. It lets both a user and a doctor sign up. A user can make an appointment; a doctor can view all appointments. Updating an appointment is also available.\n\nWondering if a food item meets your vegan diet? Our website will take a UPC number or a description of the food and determine whether or not it is vegan!\n\nCustom Zendesk apps for your support agents in 5 minutes\n\nA web app to alert doctors of potential at-risk covid-19 patients in the community.\n\nAn academic exercise to see if a web app can be made without JS // A basic online food bill budgeting tool that you shouldn't use!\n\nCoVoid is a free smart watch application built to help the user easily develop healthy habits: frequently washing hands and avoiding touching the face.\n\nMagic Word is the ideal game to keep the mind in great shape. Have fun guessing the magic word that connects to all the other words and become the champion of the week!\n\nA conversational bot that delivers subjective news with attitude\n\nFind your boo, from the loving rescue dogs to your favorite neighborhood furry friends. Vote for the most athletic dog in the neighborhood, the cutest drama queen in town, and much more!\n\nRabbitMQ based lambda-like service creating new queues for any function & auto-scaling workers based on execution time.\n\nAnalyze music playlists to identify explicit content for use on the radio\n\nA text- and WhatsApp-deployed chatbot that enables female entrepreneurs and others to sell online.\n\nEverything you need to supercharge your commute!
Maps, directions, arrival times, and weather conditions on arrival!\n\nMobilizing Christians to actively pray for a city devastated by violence\n\nEver wanted to split up an online bill on the spot? Then this is the app for you!\n\nVitamin See is a web app that gives a voice back to recovering stroke survivors, providing them a way to communicate\n\nStreamlining the process for parents to find and enroll their children in the right SF public school.\n\nThe web is a giant graph database with its query language missing. We want to solve that problem.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/dhrumilp15", + "domain": "devpost.com", + "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nHey! I'm passionate about computer science, mathematics, neuroscience, and especially how my interests cross over.\n\nHaving trouble getting a job? JobHelperGPT can identify the skills you lack for the job you're interested in. It can also generate questions for practice interviews!\n\nHave you ever struggled (to du that dance)? Well, we at DU THAT DANCE can help you in your artistic aspirations with DU THAT DANCE.
No dance will be outside your ability!\n\nCORTX + Discord = Efficiently store and search files in Discord Servers!\n\nCORTX + Discord = Efficiently store and search files in Discord Servers?\n\nTake on me? Yes.\n\nGreen Machine (Learning) making computing clean!\n\nAn app and hardware solution that helps households and communities prepare and cope with severe drought or storm events.\n\nThink you'll win a quick round of a game you enjoy? Now you can make some Kin doing it! Our service integrates micro-transactions into your favourite games to up the ante on what each game means.\n\nHelping doctors treat patients, not illnesses\n\nHelping to safeguard your medical history using Keybase\n\nA novel system for more accurate two-handed gesture tracking\n\nA Dapp to own your own PokeAnimals\n\nA mobile chatbot that helps individuals at the risk of developing heart disease make better lifestyle choices\n\nFinBetter makes it easier to visualize your current financial situation\n\nFirst steps in making VR/AR technology more portable\n\nIntroducing BMOre. 
BMO Real Experience - a Twitter data-scraping web application to better understand clients.\n\nA virtual piano that is controlled and played through hand motions.\n\nArThrowFighter allows the elderly to have a fun and innovative way to regain joint mobility.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/maduakor-chinwendu/followers", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nA purpose-driven woman, solving problems with creativity.\n\nPablo Chibuisi Ogbonna is a Nigerian entrepreneur and expert in IT Security and Electrical Engineering.\n\nIT Techie for 32 years who loves teaching and making Kids Alexa Skills Preview and Review Videos with my ARMY granddaughters.\n\nPassionate about technology, humanity, economics and dancing\n\nI work at Universidade Federal Fluminense in Brazil, state of Rio de Janeiro.
We have developed projects to serve the poorest population.\n\nRealizing dreams with technology.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/hoo-dungeon-master", + "domain": "devpost.com", + "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nAs a 90's kid, Dungeons N Dragons board games and text adventures were a big part of my childhood. Taking inspiration from these pen-and-paper styled games, I built a DND adventure on top of a Google Assistant bot.\n\n## What it does\n\nIt's a dungeon crawler game with multiple paths and varied enemies. The Omnipresent Narrator (the Google Assistant voice bot) describes the game environment and variables at each stage, based on which the user can either choose from suggested options or go old-school and type/give voice instructions as raw text input. Based on these commands, the protagonist interacts with the environment and traverses the various game encounters.\n\nThe game includes colorful imagery and background music across the various encounters. And there are multiple paths/endings which the player can explore, like a real DND game. Combat is also an essential part of the text adventure, and is resolved using a 6-sided die just like the original.\n\n## How I built it\n\nThe game is built on top of a Google Assistant bot using Dialogflow. The underlying logic and game states are maintained by a remote Flask server. Google Assistant handles the voice-to-text (and vice versa) conversions.\n\n## Challenges I ran into\n\nConfiguring the bot and making sure the various components work as intended was the biggest challenge.
Apart from this being a one-man hack, I had a lot of things to design, develop and fix.\n\n## Accomplishments that I'm proud of\n\nThis is mostly a complete hack, and can be pushed to production quite easily.\n\n## What I learned\n\nPlaying around with chat bots and designing game encounters.\n\n## What's next for Hoo Dungeon Master\n\nMaybe co-op and multiplayer would be a cool extension of this dungeon crawler.\n\n## Built With\n\n* flask\n* google-assistant\n* google-cloud\n* google-storage\n* memcached\n* python", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/driver-s-ed-in-spanish", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThis year, Congress passed a bill that allows U.S. immigrants to apply for their Driver's License. Though this is a huge achievement, many of these immigrants – most of whom are from Latin American countries – lack the education and tools to pass their written exam. This app allows those who can only speak Spanish to study for their written exams and get their licenses: a liberty that should be equally accessible to all.\n\n## What it does\n\nThis app prepares users by quizzing them with questions from the official DMV Spanish practice tests.\n\n## How we built it\n\nWe built the app using Xcode, a popular IDE for building iOS applications.\n\n## Challenges we ran into\n\nBoth members of the team had no knowledge of Swift or Xcode prior to this hackathon. Because we had no prior experience, we relied on the mentors and the internet to learn the language. Also, there were complications with Xcode and our operating system, which cost us more than 2 hours of development time before we could even get started on our project.\n\n## Accomplishments that we're proud of\n\nLearning Swift from nothing was one of our biggest achievements.
Learning Swift turned out to be a lot more difficult than we expected, but even with this challenge, we were able to persevere and make a functional application that we are extremely proud of.\n\n## What we learned\n\nIn terms of computer science, we learned how to use Swift and Xcode, but more importantly, we learned what it is like to participate in a hackathon. This was our first hackathon, and it was here that we learned about time management for our application and collaboration with other hackers.\n\n## What's next for Driver's Ed in Spanish\n\nIn the future, we hope to improve the UI and make it more user-friendly. Similarly, we hope to incorporate images (e.g. traffic signals, signs and diagrams) into the app.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/adaptive-style-transfer-tf2", + "domain": "devpost.com", + "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nFor a long time, I wanted to implement my own style transfer. I also wanted to learn to use all the cool new stuff inside TensorFlow 2. Now both are done :)\n\n## What it does\n\nStyle transfer is a technique that takes an image as input and applies a style learned by analyzing works of art, for instance Picasso paintings. The content of the image stays the same but the style changes.\n\nMore technically, this is a generative adversarial network (GAN). A first neural network (the generator) takes the image and generates an image with the same content but the style of an artist, while a second neural network (the discriminator) tries to tell the difference between this image and a set of images from the artist. Both networks train in parallel, and at the end the generator transforms an image into another that looks like the images from the artist.\n\n## Accomplishments that I'm proud of\n\nCompiling TensorFlow 2 for TensorRT support.
It's not very hard, but I'm proud to have done it at least once.\n\n## What I learned\n\nA lot of things about TensorFlow 2.\n\n## What's next for Adaptive-style-transfer-tf2\n\nWith this project it is possible to train and export very small models in TFLite and TensorFlow format for embedded devices and Android phones. So the next step: build an Android app running these models!", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/snapcache", + "domain": "devpost.com", + "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWhen visiting historical locations, wouldn't it be amazing to see the pictures taken by those who've already visited?\n\n## What it does\n\nOur web app allows you to take photos and associate those photos with a location. Later, when someone else visits the location, they can view the photos \"dropped\" at the location along with a description which provides insight into maximizing the experience. You are able to like a photo, and photos are displayed on a most-liked basis. Similar to Pokemon Go's Pokestops, you are able to take a photo to share your moments.\n\nFor example, when Justin Trudeau visited Waterloo for Hack the North 2017, you would be able to take a picture and upload it to the app with a description to remind future visitors of the day the prime minister gave a speech at Hagey Hall.\n\n## How we built it\n\nFirebase was perfect for our project as it satisfied the three major requirements we had: a database, authentication and storage. Firebase made it incredibly easy to handle our backend and allowed us to store the uploaded photos and retrieve them for viewing very quickly. Our backend consisted solely of Firebase while our frontend was done with AngularJS. We decided to go with a web app instead of an Android/iOS app so we could demo the project without having to install anything.
The app is optimized for mobile devices.\n\n## Challenges I ran into\n\nNo one in our group excelled at creating web apps optimized for mobile devices, so we ran into some interesting issues regarding CSS and trying to deliver a full app experience solely through the web. Additionally, implementing gestures for the web app was also tricky.\n\n## Accomplishments that I'm proud of\n\nWe're really proud of overcoming our UI difficulties. We were able to implement the core functionality while designing a user-friendly UI.\n\n## What I learned\n\nFirebase is perfect for hackathons; without Firebase, we would have spent most of our time on the backend, including authentication and general requests. Thanks to Firebase, we were able to set the backend up in less than an hour, which allowed us to focus on creating a pleasant user experience through the frontend.\n\n## What's next for SnapCache\n\nSomething we're looking to implement is the ability to add comments to existing photos. Being able to chat on photos would be a cool feature and would allow for a more enjoyable, insightful user experience.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/simple-stream-uwp", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* It works on any device that runs Windows 10.\n\n* The app has a good design and dark & light theme support.\n\n* You can browse featured streams for a selected game.\n\n* Adaptive UI adapts the elements on the screen, which makes the app even more beautiful.\n\n* It has additional features like showing the spoken language of a stream, showing the stream delay if there is any, and opening in the browser.\n\n## My Simple Stream Story...\n\nSimple Stream was a Windows 8.1 application that I developed 2 years ago. It was the time that I was selected as a Microsoft Student Partner in Turkey.
They wanted us to develop something and I came up with this idea. I had no experience with WinRT development back then. Somehow, I pushed myself and learned some stuff about the platform and how HLS live streaming works.\n\nI coded the app in 3 weeks and published it to the Windows Store. I had great feedback about the app and published a few more updates for it; then I stopped supporting it because I no longer had time to take care of the app.\n\nI'm still a student but about to graduate in a few months. In these 2 years, I started to work as a software engineer and learned almost everything about the platform. Best practices, how-tos, tips & tricks... And as I was working I learned a lot of other things, such as how mobile applications work and what software development methodologies and design patterns are.\n\nI was in good shape, but mentally I was trying to find a side project to kill time and just refresh my brain. Creating a new project is always fun for me. Improving is the key. Every project I create from scratch increases my potential.\n\nA week ago, Joanna Dionis from DevPost e-mailed me. I was getting literally bored most of the day and looking for ideas. She said I should consider posting my Simple Stream app to this hackathon. That was the moment everything just clicked in my head, and I responded to her that I'd be joining the hackathon soon.\n\nI didn't have the sources of Simple Stream anymore, because I had no idea what source control was back then :) However, I knew that I had the potential to re-write the whole app from scratch, even with a better design and a faster way of working. As always, I created a new project, pushed it to my GitHub and started to develop there.\n\nIt was all about the challenge for me. Because this is what a hackathon means to me.
I actually like to attend hackathons locally in Turkey and push myself to the limits (2 days of no sleep and hard work; as a reward, we ended up as the runner-up team in the MasterCard Masters of Code Hackathon - Istanbul). I had a week left when I saw Joanna's e-mail, and re-writing in just a week the app that I had developed in 3 weeks 2 years ago was a good challenge for me.\n\nSo I developed the app, open sourced it and shared it with my friends who want to learn or just want to look at how data binding, behaviors and adaptive media streaming work in Universal Windows Platform applications. The app has been developed with the MVVM pattern using the Prism framework and is fully open-sourced under my GitHub profile. Anyone can access it and spread the code-love as I did :)\n\nThanks for reading the whole thing up to this line, and for such a great online hackathon. It was my pleasure to attend ^^", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/autogit", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Overview\n\nAutoGit was created by William, YaHeng, and Kirby. We’re from Canada and the United States. With organizations like the MLH Fellowship focused on contributing to large open-source projects, we thought it would be a good idea to have an app that keeps tabs on all the activity that’s happening within the fellowship.
The number of pull requests and code reviews might get out of hand as we contribute heavily over the next 12 weeks, so instead of digging through your GitHub notifications and emails, we make it simple to view everything all at once.\n\nTo put this in perspective, we created a lightweight application (which you can see here): [insert showcasing demo] Building this app came with several challenges, such as encountering a rate-limiting issue with GitHub. Our next steps would be to fully integrate the GitHub API into our platform so that we can scale this service and hopefully push it into production. Other features that we’ve been considering include deploying a Discord bot that’ll let you create issues and comment directly from Discord.\n\n## Technologies\n\nAutoGit was created using SheetJS, AWS Amplify, and Docsify. We used SheetJS to display the tables of info and Docsify to write up an onboarding guide for users. The service was hosted with AWS Amplify, which will provide the scalability we need if we grow in the future. All of this was hosted via our GitHub repo (link down here on the screen), where we used GitHub features like branches for individual features, issues to track what needs to get done, and pull requests when we need to merge features.\n\n## Roles\n\nWilliam was responsible for working with AWS Amplify and the GitHub API, which included tasks like making the API calls to fetch data from GitHub, as well as setting up Amplify to work with our backend. YaHeng created all the tables with SheetJS and worked on the UI layout, which included tasks like building the filtering functionality and export options. 
Kirby worked on the documentation and deployment with Docsify and Amplify, which included creating the user manual and serving our documentation through Amplify.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/storm-q8cmf4", + "domain": "devpost.com", + "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nCurrently, there is a 2 BILLION dollar industry for smart speakers, and this is predicted to grow to 40 BILLION by 2020. Additionally, the market for voice-to-image generation technology is virtually untouched. In this fashion, Storm was inspired by the dream of transforming your own stories into movies with almost no effort!\n\nA huge inspiration came from the VoiceFlow platform. This opened the doors for our team in terms of interacting with smart speakers, and we were able to easily utilize their features and integrate with our custom API.\n\n## What it does\n\nBy engaging Storm on your Google Home and logging onto our web app, you are free to tell any story, and Storm will listen and produce images on its UI based on the keywords detected. Storm is currently targeting children's stories, as there is a clear demand for new and innovative story ideas. By eliminating the barrier to entry for teachers to develop their own storybooks, we allow them to fully utilize their creativity to create new stories.\n\n## How we built it\n\nStorm uses complex Machine Learning pipelines, consisting of technologies from NLP to Image Recognition to statistical analyses, to train a dataset capable of producing a flow of images identified from the text detected using a Google Home. The interface between the Google Home and Storm is handled by VoiceFlow, and from there key phrases are passed to our data pipeline, where they are translated into images relevant to the story. Then, Storm displays the best-fit combination on its UI. 
However, at this point in development, every story must be inserted into our system a few days before use in order to allow the system to find the best-suited images.\n\n## Challenges we ran into\n\nAlmost half of our time was spent on brainstorming, therefore, development was rushed.\n\nWe came across a large delay in connecting our backend to the front (using WebSockets), consisting of errors with unsafe ports, CORS errors and many more. This involved translating the entire backend from Python to Node.\n\nSpending too much time on Machine Learning development when the more important focus should have been creating a clean demonstration.\n\n## Accomplishments that we're proud of\n\nOur UI was a top priority and we are very proud of its appealing, clean, functional design.\n\nWe are proud of our text detection hack built into VoiceFlow capable of identifying the data necessary for a unique demonstration.\n\n## What we learned\n\nAs first-time smart speaker application developers, everything was new territory.\n\n## What's next for Storm\n\nA fully-fledged Machine Learning pipeline that can accept a range of words and descriptions and create appropriate photographs.\n\nIncrease the speed of our data generation so, ideally, it would be completed in real-time so teachers don't need to enter it prior.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/corn20", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nStudent times are valuable. The Student Resource Center (SRC) at University of Nebraska - Lincoln (UNL) is often packed with students. Teaching assistants often have a hard time knowing who is first in line to help. Many existing systems do not fit the school system. 
The HeyMyTa app is a practical solution to many existing problems at the SRC at UNL.\n\n## What it does\n\nA queueing management system that allows TAs to keep track of students who are present during office hours. This feature alone can replace the existing practice of using a whiteboard as a queueing system. Our application revolves around these 5 deliverable features:\n\n* Queueing management system\n* Chats + announcements for office hours\n* Real-time updates\n* Integration with Canvas infrastructure\n* Scalability\n\nExisting queueing system at UNL's SRC\n\nEvaluation reports can be generated to help professors evaluate TAs.\n\n## How I built it\n\nWe used Vue.js + Bootstrap + the EJS engine to build the front end, and Node.js + Express + LevelDB (NoSQL) to build the back end.\n\n## Challenges I ran into\n\n* Lots of features were thrown away because of time limitations.\n\n## Accomplishments that I'm proud of\n\n* Each of us was able to take on parts that we were not good at. I had mainly worked on the backend previously. Our friend who is good at UI couldn't attend Corn Hack due to an injury. Still, we pushed through the roadblock and were able to build a deliverable front-end design.\n\n## What I learned\n\n* Lots of new CSS and Vue.js techniques. 
:))\n\n## What's next for corn20\n\n* Build a waiting-time estimation feature.\n* Build a phone app and make it more accessible.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/rushigandhi/likes", + "domain": "devpost.com", + "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nComputer Science at the University of Waterloo\n\nVersion Control in Augmented Reality for 3D CAD\n\nExplore the world through your friends in Augmented Reality and let your social be your guide\n\nRegistered Micro-financing for Low Income Populations\n\nFast & effective skin cancer diagnosis in the palm of your hand.\n\nConnecting Brampton's Businesses to Consumers via Mobile\n\nThis website is able to perform final mark calculations based on weightings specific to each high school course.\n\nUsing natural language processing and sentiment analysis, we help ANY job seeker fix their online presence\n\nA novel innovation to ameliorate emergency service response.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/smartoff", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nA microwave draws about 8 W of power even in standby mode. 
Considering there are about 126 million households in the US and at least half of them have microwaves, standby draw alone adds up to roughly 8 W × 63 million ≈ 500 MW of continuous waste, on the order of 4 TWh per year. If we could find a way to cut down this wastage, we could do a lot of good for the environment.\n\n## What it does\n\nThe project takes live usage data and determines whether the appliance is in use or in standby mode. Based on this information it predicts at what times the appliance is on and when it is not. Then it uses REST APIs, encrypted with an algorithm, to securely command the relay to turn the appliance on or off based on its predictions. As an incentive for the user to install this device, we plan to pay 1 cent for every minute's worth of electricity they save because of our project.\n\n## How we built it\n\nWe downloaded the appliance usage data from UK-DALE for 5 houses. We picked one house and determined the appliance usage times. To do this, we experimented with several models such as ARIMA and neural networks. In the end, we determined that an LSTM gives us the best results. Based on the usage data, the AI model sends commands to an ESP8266, which then controls the relay connected to the appliance. Based on the number of minutes that the appliance was turned off, the user gets paid accordingly using the Softheon payment API.\n\n## Challenges we ran into\n\n* We couldn’t apply our project to a real-time setup.\n* Due to shortage of time, we couldn’t evaluate many models in depth.\n\n## Accomplishments that we're proud of\n\n* Were able to determine the times at which the appliance is supposed to be switched on/off with reasonable accuracy\n* Encrypting the communication between the server and the ESP\n* Developing the Android app which could control the ESP\n* Successfully integrated Softheon's Payment API\n\n## What's next for SmartOff\n\nUse live data to train our models. The ESP8266s should be able to communicate with each other. 
Improve the packaging and the Android App", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/built-with/openweathermap?order_by=newest&page=1", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nSort by:\n\nAn intelligent voice assistant that helps you manage tasks, check the weather, tell jokes, and more, all through simple voice commands. Just speak to it and get things done!\n\nBreathe Easy, Live Healthy – Real-Time Air Quality at Your Fingertips!\n\nCultivating Smarter Farms for a Sustainable Future.\n\nMilitary operations face critical challenges in logistics optimization and threat assessment, leading to inefficient resource allocation and increased operational risks.\n\nImagine you use a plastic bottle for drinking after you're done, what do you do with it? Here we come in to decay plastics, offering an eco-friendly solution by using Fungi Farming .\n\nAn interactive weather dashboard with real-time data, Azure Maps integration, and an AI chatbot for personalized weather insights.\n\nAn app that allows users to calculate their carbon footprint based on factors such as transportation, energy usage at home, and other environmental factors.\n\nYour AI travel expert. 
Real-time info, personalized tips, language translation, and 24/7 assistance. Your perfect travel companion.\n\nIndia's agriculture faces over-irrigation due to heavy, unpredictable rainfall. SAI uses 8 soil moisture sensors, a Pi 5, and a weather API to predict rain and automate irrigation smartly.\n\nAutoEcoCharge is a device that uses piezoelectric panels to transform the kinetic energy from moving vehicles into electricity. This renewable energy powers streetlights and urban systems sustainably.\n\nCliWare introduces a competitive experience that raises awareness about environmental sustainability and promotes healthy competition among users to reduce their CO2 footprint.\n\nStudying made fun: Stay organized, motivated, and inspired—all in one place!\n\n"Automated Weather Video Generator: Transform data into dynamic video summaries with ease."\n\n"Optimizing traffic, one signal at a time." With TrafficQ, we're giving the city the green light to a smarter, healthier future\n\nApp for Real-Time Weather Insights and Dataset Evaluation: Visualizing Temperature, Humidity, Wind, and Sky Conditions.\n\nA Smart Hiking Stick that can help the visually impaired navigate and stay safe, and also provides tons of additional utility for any user.\n\nAi is a desktop AI-powered companion designed to engage, assist, and evolve with the user, offering personalized interactions and enhancing everyday life.\n\nDiscover a future where steampunk aesthetics meet air quality monitoring. 
Explore real-time and historical data to understand the evolution of our environment and breathe easier.\n\nEnviroTrack: Real-time environmental monitoring system that tracks weather, air quality, and geospatial data to help organizations make data-driven decisions for a cleaner, healthier environment\n\nAquamyan connects donors and recipients in Myanmar’s flood-affected areas with real-time alerts, resource allocation, and health updates, enabling efficient aid, mutual support.\n\nA React application that fetches and displays weather information for a specified city using the OpenWeatherMap API, with a loading animation during data retrieval.\n\n\"Eco-Pact: Uniting tech, communities, and partners to preserve biodiversity and combat climate change.\"\n\n\"Precision Farming, Powered by Space\"", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/tracov", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nEven though there are many sources of coronavirus data, it can be difficult for individuals to calculate and understand their risk level for contracting the coronavirus. We wanted to design a tool that uses reliable data to help individuals understand and visualize how at-risk they are.\n\n## What it does\n\nThe web tool takes an input of the user’s age, country, and gender, as well as any pre-existing chronic conditions. It uses data from Wolfram and the United Nations to calculate the proportion of new cases in the last five days to the country's population and the proportion of people of the user’s age and gender with the coronavirus to the global population in that demographic group. 
These proportions are added together to form an overall measurement of risk, and this measurement is categorized as “Low,” “Moderate,” “High,” or “Very High.” Users can also view a map of worldwide cases of their age and gender and a graph of confirmed cases in their country, as well as the death rate for cases of their age and gender and the number of cases from patients with their pre-existing conditions.\n\n## How we built it\n\nWe used the Wolfram coronavirus datasets to collect data on cases by country and on the characteristics of individual patients. We also used data from the United Nations to obtain world population counts by age and gender. We used Wolfram to code the tool to take our desired inputs, calculate the risk levels using our datasets, and output the appropriate numbers, rankings, and graphics.\n\n## Challenges we ran into\n\nThe biggest challenge with this project was deciding which data to show to users. We wanted to create an intuitive tool that outputs useful information. Ultimately, we decided that individualized risk levels were the best way to combine the available data and help users understand how the coronavirus could impact them.\n\n## Accomplishments that we're proud of\n\nWe are most proud of designing, coding, and presenting a brand-new concept in 24 hours.\n\n## What we learned\n\nThis project taught us how to think about data. 
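The risk measurement described under "What it does" is just the sum of two proportions mapped onto four categories. A minimal sketch follows; the threshold values here are illustrative assumptions, since the write-up does not give the actual cutoffs:

```python
def risk_level(new_cases_5d, country_pop, demo_cases, demo_pop_global):
    """Combine two proportions into a categorical risk level.

    new_cases_5d:    new cases in the user's country over the last 5 days
    country_pop:     population of the user's country
    demo_cases:      worldwide cases matching the user's age and gender
    demo_pop_global: worldwide population of that age/gender group
    """
    score = new_cases_5d / country_pop + demo_cases / demo_pop_global
    # Illustrative cutoffs -- the real thresholds are not stated in the write-up.
    if score < 0.001:
        return "Low"
    elif score < 0.005:
        return "Moderate"
    elif score < 0.02:
        return "High"
    return "Very High"
```

For example, 50,000 recent cases in a country of 330 million plus 200,000 demographic-matched cases out of 500 million yields a score of about 0.00055, which these illustrative cutoffs would label "Low".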
In order to connect people to enormous sets of data, we had to think carefully about how to design a tool that took user input and was able to output useful and understandable pieces of information for laypeople.\n\n## What's next for TraCOV\n\nIn the future, we hope to use a larger dataset, such as the Johns Hopkins coronavirus data, to make more precise and accurate risk level assessments.\n\n## Built With\n\n* domain.com\n* wix\n* wolfram-technologies", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/genopets-zs1la5", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThe idea for Genopets was born from our prior work in digital health exploring ways to use a health data marketplace to provide passive income to individuals as an incentive to stay active. While the infrastructure and industry for that concept was too early, we remained steadfast in the vision for passive income for health and pivoted to the Play-to-Earn gaming model earlier this year. We’ve been blessed with narrative tailwinds and growth in this industry and are excited to keep innovating and growing with the community. With Genopets, not only do we hope to financially incentivize players to stay fit, we can encourage them to do so and have fun while doing it.\n\n## What it does\n\nGenopets is a Free-to-Play, Move-to-Earn NFT game on Solana that makes it fun and profitable to live an active lifestyle. Genopets combines user's step data from their mobile device and wearables with blockchain Play-to-Earn economics so players can earn crypto for taking action in real life as they explore the Genoverse evolving and battling their Genopet.\n\n## How we built it\n\nWe built this on top of Metaplex’s codebase - specifically the Token Metadata Program allows us to create mutable NFTs and update them at minimal cost. 
In the future, we intend to partner with Stardust to make this process even more seamless. The front-end is built using Unity and will be deployed to mobile. On the backend we have a simple REST API that performs simple operations such as minting NFTs and mutating them. For the minting process we currently use React / Next.js on the web and THREE.js for the graphics.\n\n## Challenges we ran into\n\nThe main challenge we faced was getting the API of the metadata program correct when it was released before the Anchor framework and had no IDL to use. Then we realized that the metaplex repo had JS bindings for all the metaplex programs and that made things a lot easier. The second challenge was to get the augment body parts to be dynamic, which we ended up using a simple unity function to achieve for now but will move to an attachment system later.\n\n## Accomplishments that we're proud of\n\nOne thing we are most proud of is that over the course of this hackathon, our community has grown significantly. Today we have more than 55k members on our Discord server and 28k followers on Twitter. This has helped validate the direction we’re headed and motivated our team to keep pushing to build throughout the competition.\n\n## What we learned\n\nKeeping logic and state completely separate may be hard to reason about at first, but becomes super elegant as you become familiar with it. For one, we basically don’t need to touch rust as most of the logic we need to utilize are already deployed on-chain. This allows us to use Javascript bindings very heavily to achieve what we need to do.\n\nOn the game and economics design side, one of the main things we learned from our deep research into Play-to-Earn games is that the problem with the primary net inflow of cash into your economy coming from new players joining your game is that it too closely ties the stability of your economy to continued user growth. 
At the point where user growth stalls, the entire game economy could collapse. That is why we designed Genopets to be a crafting and evolution game where the net inflow of cash to the economy comes from players’ active use and enjoyment of the game.\n\n## What's next for Genopets\n\nFollowing the listing of Genopet Egg NFTs on FTX.us earlier this week, our Genesis Genopets NFT Mint drop is expected mid-November, followed by a release of a private beta of the full game by the end of the year. Follow us on Twitter and join our Discord community for the latest updates. Twitter: https://twitter.com/genopets Discord: https://discord.gg/genopets\n\n## Built With\n\n* react\n* solana\n* three.js\n* typescript\n* unity\n* web3\n* webgl", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/twiboost", + "domain": "devpost.com", + "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nTwitch is the world's leading live streaming platform for gamers and the things we love, a global community of millions who come together each day to create the future of live entertainment.\n\n## What it does\n\nThe TwiBoost Chrome extension acts as middleware, leveraging the Theta network to add decentralized peer-to-peer video delivery capabilities to the Twitch platform.\n\n## How we built it\n\nWe built it with the Theta P2P JavaScript SDK.\n\n## Source code\n\nThe frontend (Chrome extension) and backend source code repository links are available in the twiboost.pdf file uploaded as additional info for judges and organizers.\n\n## How to run\n\n* First run npm install in all subfolders where it's required, then run sh build-extension.sh in the root folder to generate the dist folder.\n* Enable developer mode in the Chrome extension manager, then click on load unpacked extension and choose the dist folder in the TwiBoost Chrome extension source code repository.\n* For the TwiBoost backend, install CockroachDB and edit 
.env with your DB parameters and Theta partner API key to suit your needs.\n* Go to twitch.tv and start sharing bandwidth with other peers.\n\n## What's next for TwiBoost\n\n* Firefox add-ons\n* Move to mainnet", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/news-report", + "domain": "devpost.com", + "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* \n\nProof of upload (and submission) to the App Store\n\n* \n\nOur AI algorithm reads about 15 articles on the topic and generates a summary that only includes the information that all sources reported.\n\n* \n\nAt the end of the short summary, the direct links to the articles used to create the summary are presented to the user in card format.\n\n* \n\nThis is one reference article that was collected by the spider and formatted in the app.\n\n* \n\nReferences in this specific article that a user can click on. Tags that are associated with the article are also included.\n\n## Inspiration\n\nWhen reading news articles, we're aware that the writer has biases that affect the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?"\n\n## What it does\n\nNews Report collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well!\n\n## How we built it\n\nFirst, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. 
Once we have our articles collected, our AI algorithms compare what is said in each article using KL-Sum, aggregating what is reported from all outlets to form a summary of these resources. The summary is about 5 sentences long—digested by the user with ease, with quick access to the resources that were used to create it!\n\n## Challenges we ran into\n\nWe were really nervous about taking on an NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies that we hadn't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application whose content varied so drastically in size and availability.\n\n## Accomplishments that we're proud of\n\nWe’re so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and an iOS mobile client. On top of that, we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately build a project with a large impact.\n\n## What we learned\n\nWe learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollViews work and interact with other UI components. We also got to work with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience!\n\n## What's next for News Report\n\nWe’d love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. 
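The KL-Sum step mentioned under "How we built it" greedily adds whichever sentence keeps the summary's word distribution closest, in KL divergence, to the distribution of the full article set. A simplified, self-contained sketch (the actual pipeline presumably relies on a library implementation and better tokenization):

```python
import math
import re
from collections import Counter

def _dist(words):
    # Normalized word-frequency distribution.
    counts = Counter(words)
    total = sum(counts.values())
    return {w: n / total for w, n in counts.items()}

def _kl(p, q, vocab, eps=1e-9):
    # Smoothed KL(P || Q); missing words in Q are heavily penalized.
    return sum(p.get(w, eps) * math.log(p.get(w, eps) / q.get(w, eps)) for w in vocab)

def kl_sum(sentences, max_sentences=5):
    """Greedily pick sentences whose combined word distribution
    stays closest to the distribution of the whole document set."""
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    doc_words = [w for toks in tokenized for w in toks]
    doc_dist = _dist(doc_words)
    vocab = set(doc_words)

    chosen, chosen_words = [], []
    while len(chosen) < min(max_sentences, len(sentences)):
        best_i, best_kl = None, None
        for i, toks in enumerate(tokenized):
            if i in chosen or not toks:
                continue
            kl = _kl(doc_dist, _dist(chosen_words + toks), vocab)
            if best_kl is None or kl < best_kl:
                best_i, best_kl = i, kl
        if best_i is None:
            break
        chosen.append(best_i)
        chosen_words += tokenized[best_i]
    return [sentences[i] for i in sorted(chosen)]
```

With a tiny corpus, the sentence covering the most document vocabulary is picked first, since uncovered words blow up the smoothed KL term.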
After that, we also want to be able to analyze and remove fake news sources from our spider's crawl.\n\n## Built With\n\n* backend-web\n* flask\n* google-cloud\n* heroku\n* ios\n* machine-learning\n* mobile\n* natural-language-processing\n* news-api\n* python\n* swift\n* web-app-2\n* web-spider", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/mytime-ksjuxm", + "domain": "devpost.com", + "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe all have the dream to be someone or achieve something. To this end, we earnestly plan how we spend our time and closely track our progress. However, it's very frustrating when we fail to hit our goal again and again. Because of problems at this implementation stage, we tend to believe we are doomed to be mediocre. However, with all the social media and possible distractions on the internet, it's really hard to stick to what you are supposed to finish. Understanding and predicting your own behavior patterns is the key to tackling many problems such as anxiety and time management, and to evaluating your productivity. We want to help students understand the triggers of these distractions and the patterns of their behavior by automatically tracking and analyzing their computer use.\n\n## What it does\n\nIn this hackathon, we only tried to make this tool record what you are looking at on your computer. From this, it takes all the text, images, and other data to see what you have been focusing on. From there, you can see where your time has really been spent.\n\n## How we built it\n\nIn our current implementation, we have mostly used Python. As for how it is implemented: we recorded our screen for x minutes. From this, we break the video down into images, where each image is a second in the recording. Then, we run these images through an OCR reader, which tries to convert all the text in the image into strings. 
Finally, we take the string data and put it into a matrix. This matrix shows which words are shown at which second, and how many times.\n\n## Challenges we ran into\n\n* OCR is not accurate: even after upscaling the images, many characters are still not recognized correctly.\n* We didn't get enough time to finish t-SNE to group words semantically, so the representation of the content is computationally expensive and not clean enough for users to visualize the results.\n* We didn't prepare any labeled data to train object detection and image captioning to extract information from images and videos.\n* Setting up the website to demonstrate the functionality took longer than we planned.\n* Security issues of this application haven't been considered so far.\n\n## Accomplishments that we're proud of\n\nWe believe this is a great idea even though it doesn't take very fancy techniques. First, we are the target users, and we understand in detail what we want from this application. Second, all the techniques it takes to solve these problems already exist; we just need more time to finish them.\n\n## What we learned\n\nDoing something not fancy but practical is also very fun.\n\n## What's next for MyTime\n\nBasically, solve the challenges we have met and try to infer more specific conclusions from the data we extracted. 
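The frame-to-matrix step described under "How we built it" (one OCR'd string per second, tallied into a word-by-second table) reduces to a small aggregation. A sketch, assuming the OCR output has already been collected into a list of per-second strings:

```python
from collections import Counter

def build_word_matrix(frame_texts):
    """Turn per-second OCR strings into {word: {second: count}}.

    frame_texts: list of OCR'd strings, one entry per second of recording.
    """
    matrix = {}
    for second, text in enumerate(frame_texts):
        # Tally each word seen in this second's frame.
        for word, count in Counter(text.lower().split()).items():
            matrix.setdefault(word, {})[second] = count
    return matrix

# Hypothetical two-second recording
m = build_word_matrix(["python docs python", "twitter feed"])
```

Looking up `m["python"]` then gives the seconds (and counts) at which that word was on screen, which is exactly the "which words at which second, and how many times" view described above.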
More specifically, we will:\n\n* increase words extraction accuracy\n* train CNN/NLP for image annotation and semantic analysis\n* combine the multi-modal data of words, image and caption to further improve the content extraction accuracy\n* design user interface to customize the content classification first\n\nThere are several potential directions:\n\n* time-management\n* mood detection and chat bot\n* personal assistant for data retrieving", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/pay-per-face-a-brand-new-advertisement-system", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nAs consumers embrace a proliferation of new digital channels, today’s brands have an increasingly hard time running effective marketing campaigns. To reach the right audiences at the right time wherever they are, they must rely on real-time rich data to deliver highly relevant, effective, and measurable ads. The age of smart advertising is here. Coupled with an increase in the amount of customer data available to marketers, we are seeing an astonishing rise in the application of machine learning across a number of industries. DeepLens represents a perfect opportunity for these two trends to overlap, allowing marketers to target adverts even more effectively based on real-time video feed data. The follow on effect of these targeted ads could provide increased benefits for brands across many parts of their value chain.\n\nTo leverage this opportunity we created DeepAds.\n\n## What it does\n\nDeepAds is an advertising platform allowing real-time targeting of consumers based on a set of distinct and learned characteristics. 
Depending on who is then in the DeepLens frame and which of these characteristics they display, DeepAds will serve the most relevant advert.\n\nOur current implementation of DeepAds distinguishes consumers based on their gender, so variations on product adverts are served up differently to females and males. You can read more in section 7 about how we would guard against gender stereotyping.\n\n## How we built it\n\nAfter receiving the DeepLens, we did a quick tech spike to understand the DeepLens’ basic capabilities. The DeepLens allows us to run deep learning models locally and understand what the camera sees, all with several simple steps. The team then brainstormed about what problems we could solve with the help of DeepLens’ deep learning capability. There are many possible applications, but we were most interested in the idea of using facial recognition technologies to drive better advertising engagement and consumer experiences, which we named DeepAds.\n\nBy putting the DeepLens in front of an advertising billboard or screen and running facial detection and custom deep learning models to extract facial features, DeepAds allows marketers to understand more about their audience and how their advertisements are performing with specific audience segments. There is a long list of features that could help identify consumers’ characteristics and could be fed into the deep learning model. To accomplish the project in time and to be able to demonstrate DeepAds’ purpose, gender was picked as the classification output.\n\nDeepAds contains the following major components:\n\n* Deployment Framework with Greengrass\n* Deep Learning Module with Amazon SageMaker\n* Ads Controller with AWS Lambda\n\nA detailed explanation of each component is given below.\n\n* \n\nDeployment Framework We followed the instructions from the AWS DeepLens documentation and used the AWS Greengrass service to deploy a Lambda function and model to the device. 
We then set up a local development environment for the Lambda function in order to speed up the development process.\n\n* Deep Learning Module: We first prepared the training data, 2000 photos labeled with gender, and uploaded them to AWS S3. After formatting them with MXNet RecordIO, we trained the gender classification model on SageMaker. We used a small dataset to shorten training time, which might lead to low accuracy; based on published studies, gender recognition accuracy should be able to reach 90%* with a bigger dataset and a better-tuned model. We used the Python module “awscam” to load the model onto the DeepLens. The latest version of the Intel Deep Learning Deployment Toolkit is installed on the DeepLens device and used to optimise the MXNet gender model.\n\n* Ads Controller: To prove the concept and demonstrate the capability of targeted advertising, we defined two types of project video output stream: the “Advertising screen” and the live-stream “Analysis screen”. The Analysis screen draws the detected information on top of the input video stream; currently it draws a face-detection bounding box, a count of people and a list of potential characteristics. In the future, it could generate a real-time analytics report. The Advertising screen shows targeted advertising based on the audience. We designed three different advertising images to target female, male and group audiences respectively. When the audience in front of the DeepLens changes, the advertising screen displays the relevant image. Currently it reuses the project video output stream, but it could be a different device or screen connected via AWS IoT.\n\n## Challenges we ran into\n\n* Figuring out how to do more complex tasks. Since DeepLens and SageMaker are both still in their nascent stages, one of the main challenges was the lack of documentation and tutorials. 
It was relatively simple to set up the device and use the default template models, but when we tried to use custom models and modify the video stream, we found it difficult to get detailed information about what to do.\n\n* Finding out how to fix an issue. The DeepLens forum is sometimes helpful, but there isn’t an established Q&A base and the forum community is very small, so we spent an unpredictably long time on troubleshooting during the project.\n\n* Building our own model with the Intel Deep Learning Deployment Toolkit. Model conversion of SageMaker-trained models using the toolkit repeatedly failed, making it difficult for us to test our own model.\n\n* Debugging and deploying our program. It’s difficult to debug a Lambda function because an important library called \"awscam\" is only available on the device. We needed to either set up a development environment locally on the device or wait for the lengthy deployment process to finish to be able to debug.\n\n* Dealing with the limited hardware capacity of the DeepLens device. The DeepLens struggles to handle inserting graphical overlays onto the project stream, which caused us to reconsider our original concept of how to display advertising images on top of the video stream.\n\n## Accomplishments that we're proud of\n\n* Developing a feasible, cool idea with real industry applications\n* Training a custom deep learning model on SageMaker\n* Using OpenCV to modify the project video output stream\n\n## What we learned\n\n* DeepLens’ capabilities\n* How to use SageMaker to train a custom model and use it on DeepLens\n* How to develop software on the DeepLens platform\n\n## What's next for DeepAds - A brand new advertisement system\n\nWe believe that machine learning should be a fundamental driver for future marketing strategies. 
For brands to remain relevant, they need to generate insights from an increasingly complex data set. There are two main areas of development for DeepAds beyond this point.\n\nClassifier Maturity: Our current implementation of DeepAds distinguishes consumers by gender; however, there are many ways we would look to classify consumers moving forward. As well as gender, we would build out our classifier to cover:\n\n* Location\n* Outfit\n* Facial features, e.g. beard, hairstyle, skin colour etc.\n* Mood\n* Height\n* Movement\n* Age\n* Activity\n\nStereotyping safeguards: A critical part of the next phase of development for DeepAds would be to implement safeguards against perpetuating stereotypes. This is fundamental to building a platform that provides value to consumers in a safe and appropriate manner.\n\nFor this we would look to implement a number of features:\n\n* Create more robust rules around defaulting to neutral adverts when the model hasn’t been able to classify with high accuracy\n* Implement mandatory A/B testing of adverts at scheduled points regardless of classification\n* Place an increased weighting on mood as a measurable characteristic for our model, evaluating how people are reacting to the advert being shown and using this to tailor future ads.\n\n## Built With\n\n* amazon-web-services\n* deeplens\n* lambda\n* machine-learning\n* mxnet\n* opencv\n* python\n* sagemaker",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/software/no-duckling-is-ugly",
    "domain": "devpost.com",
    "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "## Inspiration\n\nBullying is an issue prevalent worldwide - regardless of race, age or gender. Having seen it up close in our daily school lives, yet having done nothing about it, we decided to take a stand and try to tackle this issue using the skills at our disposal. 
We don't believe that bullies always deserve punishment - instead, we should reach out to them and help them overcome whatever reasons may be causing them to bully. Because of this, we decided to implement both a short-term as well as a long-term solution.\n\n## What it does\n\nNo Duckling Is Ugly is an IoT system that listens to conversations by students in classrooms, performs real-time sentiment analysis on their interactions and displays the most recent and relevant bullying events, identifying the students involved in the interaction. In the short run, teachers are able to see in real time when bullying occurs and intervene if necessary - in the long run, data on such events is collected and displayed in a user-friendly manner, to help teachers decide on how to guide their class down the healthiest and most peaceful path.\n\n## How we built it\n\nHardware: We used Qualcomm Dragonboard 410c boards to serve as the listening IoT device, and soldered analog microphones onto them (the boards did not come with built-in microphones).\n\nSoftware: We used PyAudio and webrtcvad to read a constant stream of audio and break it into chunks to process. We then used Google Speech Recognition to convert this speech to text, and performed sentiment analysis using the Perspective API to determine the toxicity of the statement. If a statement is toxic, we use Microsoft's Cognitive Services API to determine who said it, and use Express to create a REST API, which finally interfaces with the MongoDB Stitch service to store the relevant data.\n\n## Challenges we ran into\n\n* Audio encoding PCM - The speaker recognition service we use requires audio input in a specific PCM format, with a 16 kHz sampling rate and 16-bit encoding. 
Figuring out how to convert real-time audio to this format was a challenge.\n\n* No mic on Dragonboard - The boards we were provided with didn't come with onboard microphones, and the GPIO pins seemed to be nonfunctional, so we ended up soldering the mics directly onto the board after analyzing the chip architecture.\n\n* Integrating MongoDB Stitch with Python and Angular - MongoDB Stitch does not have an SDK for either Python or Angular, so we had to create a middleman service (using Express) based on Node.js to act as a REST API, handling requests from Python and interfacing with MongoDB.\n\n* Handling streaming audio - None of the services we used supported constantly streaming audio, so we had to determine exactly how to split the audio into chunks. We eventually used webrtcvad to detect whether voices were present in the frames being recorded, creating temporary WAV files with the necessary encodings to send to the relevant APIs.\n\n## Accomplishments that we're proud of\n\nBeing able to work together and distribute work effectively. This project had too many components to seem feasible in 36 hours, especially taking into account the obstacles we faced, yet we are proud that we managed to implement a working project in this time. Not only did we create something we were passionate about, we also managed to create something that will hopefully help people.\n\n## What we learned\n\nWe had never worked with any of the technologies used in this project before, except AngularJS and Python - we learned how to use a Dragonboard, how to set up a MongoDB Stitch service, as well as audio formatting and detection. Most of all, we learned how to work together well as a team.\n\n## What's next for No Duckling Is Ugly\n\nThe applications for this technology reach far beyond the classroom - in the future, this could even be applied to detecting crimes happening in real time, prompting faster police response times and potentially saving lives. 
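The voice-activity chunking described in the challenges above — keeping only frames that contain speech and flushing each run of voiced frames out as a 16 kHz / 16-bit WAV file — can be sketched roughly as follows. This is an illustrative sketch, not the project's actual code: the real pipeline uses `webrtcvad.Vad.is_speech()`, for which a simple energy threshold stands in here to keep the sketch dependency-free.

```python
import struct
import wave

SAMPLE_RATE = 16000          # 16 kHz, as required by the speaker-recognition API
FRAME_MS = 30                # webrtcvad operates on 10/20/30 ms frames
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 16-bit mono PCM

def is_voiced(frame, threshold=500):
    """Stand-in for webrtcvad's is_speech(): mean absolute amplitude check."""
    samples = struct.unpack("<%dh" % (len(frame) // 2), frame)
    return sum(abs(s) for s in samples) / len(samples) > threshold

def chunk_stream(frames):
    """Group consecutive voiced frames into utterances, dropping silence."""
    chunk, chunks = [], []
    for frame in frames:
        if is_voiced(frame):
            chunk.append(frame)
        elif chunk:
            chunks.append(b"".join(chunk))
            chunk = []
    if chunk:
        chunks.append(b"".join(chunk))
    return chunks

def write_wav(pcm, file):
    """Write one chunk as a 16 kHz / 16-bit mono WAV for the downstream APIs."""
    with wave.open(file, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm)
```

Each WAV produced this way would then be handed to the speech-to-text and speaker-recognition services.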
The possibilities are endless.",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/software/roam-e62ivw",
    "domain": "devpost.com",
    "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "# Roam\n\n## A Wall Worthy AR Map Powered by Clusterpoint\n\n*Disclaimer: Roam was built in a day*\n\nRoam is an iOS app I built for the Maps as Art Summer Jam Hackathon. It utilizes the Clusterpoint platform to provide its backend. Roam allows you to explore some of the world's coolest points of interest on various maps in augmented reality. Download the Wall Worthy Roam Poster (you can hang it up if you want!) and point Roam at it. Hit the Roam button and you're off around the world!\n\n### Roam's Wall Worthy Poster\n\nhttp://i.imgur.com/nSxKKnZ.jpg?1\n\nYou can print this poster off and hang it on the wall, or even just point Roam at the computer image and it'll do its thing!\n\n### Download Links\n\nDownload the .ipa here: https://rink.hockeyapp.net/api/2/apps/70eb0014e2cd2afbfb1c49d15e80f3d3/app_versions/1?format=ipa&pltoken=782a2e8d7e27f8413121d9a7f7def67a&avtoken=8c9b52117bf54c73cd22b0d10cbfbd9dc7fbe117\n\nHockeyApp Overview Page Here: https://rink.hockeyapp.net/apps/70eb0014e2cd2afbfb1c49d15e80f3d3\n\n### Challenges\n\nWorking in Augmented Reality was very challenging, but honestly, implementing Clusterpoint into my app was the hardest part - I had never worked with database coding before!\n\n### What's Next\n\nI'd love to be able to map entire cities in Roam, not just points of interest. 
Having fully scaled and augmented 3D cityscapes could facilitate city planning, architecture, and just help people get their bearings!",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/software/vision-6zau5m",
    "domain": "devpost.com",
    "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "## Inspiration\n\nThe challenge we were tasked with solving was to re-imagine a tool of today to break barriers within communities all around the world. While thinking about the barriers that some communities face, we realized that some communities are more likely to face difficulties than others. One of these communities is those who are visually impaired. The world is built for people who can see; hence, we must make an effort to accommodate those who can't.\n\n## What it does\n\nVision's hardware uses four ultrasonic sensors and an active buzzer attached to a pair of pants to detect how far objects are from the person wearing the pants.\n\n## How we built it\n\nIt is built using an Arduino. The components we used are four ultrasonic sensors and an active buzzer. The code for this project was written in C++.\n\n## Challenges we ran into\n\nDuring the construction of our project, we ran into various problems. One of these problems was that the buzzer we were using was producing inaccuracies. Another problem was that the data output from the ultrasonic sensors was oscillating between a very low value and the accurate value. Fortunately, through a little bit of troubleshooting we were able to solve both of these problems.\n\n## Accomplishments that we're proud of\n\nWithin the given time we were able to build a functioning prototype of a piece of hardware that helps the visually impaired.\n\n## What we learned\n\nWe learned many things about coding in C++ and working with Arduinos. 
One thing we learned a lot about was the function of ultrasonic sensors, as we used four of them in the construction of our project.\n\n## What's next for Vision\n\nIn the future we hope to make the prototype fully wireless and more compact. We also hope to improve the function of two of the four ultrasonic sensors, and to incorporate stair support.",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/muhammadzeeshan020",
    "domain": "devpost.com",
    "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "Engineer | Hacker | Scientist\n\n5G will require a dense network of antennas in urban areas. We save 2.3 bn EUR in investments by optimizing the number and positions of antennas using datasets such as LIDAR and attenuation models.\n\nSelf-service complete blood count test.\n\nPrevent tourist identity scams using self-sovereign identity and artificial intelligence.\n\nMake shopping easy using a chatbot assistant\n\nVoice and Messenger bot for early detection of COVID-19-related coughing from cough recordings, at zero cost and from the comfort of any place\n\nDeep learning to detect coughing related to COVID-19 from recordings\n\nCrack-it is reducing the time and cost between a site examination and a work order being placed. 
It is using AI.\n\nPredictive Maintenance At Scale", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/qrtailor-inago4", + "domain": "devpost.com", + "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\n“We see our customers as invited guests to a party, and we are the hosts. It’s our job every day to make every important aspect of the customer experience a little bit better.” - Jeff Bezos\n\nOur inspiration for this project was to do just that; help customers shop as effectively as possible as well as help employees retrieve information about products as quickly as possible.\n\n## What Does It Do Exactly?\n\nQRTailor is a B2B (Business to Business) company that sells customizable clothing tags with QR Codes that connect to a database with information about products. Companies who decide to use the QRTailor technology will allow their employees to have access to the app that can simply scan the QR Code on the label and retrieve information on the product, such as:\n\n* A Picture\n* Name & Description\n* Size\n* Colour & Available Colours\n* Quantity (Current Stock in the Particular Store)\n* Customer Review Ratings\n* Status (Availability Online or In-Store)\n\nEmployees can now have an in-depth perspective on all of the products, so when a customer asks a question such as, “Do you have this in red?” or “How many of these are left?”, all the employees need to do is scan the QR Code in order to properly respond to the customer’s concerns.\n\n## How we built it\n\nThe application uses OCR technologies provided by Google’s ML Kit to detect, read, and decode QR codes correctly. Furthermore, we used Android Studio to set up the application that the user will interact with. We also used Flask and Python, along with resource methods such as HTTP GET and POST to read from a minimal and decentralized database. 
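The Flask side described above — a small read/write resource backed by a minimal database — might look something like this sketch. The route, field names, and in-memory store are illustrative assumptions, not QRTailor's actual schema:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the product database; the real app would query
# an actual datastore. All field names here are hypothetical.
PRODUCTS = {
    "QR123": {"name": "Slim Fit Jeans", "size": "M", "colour": "Blue",
              "quantity": 14, "rating": 4.2, "status": "In-Store"},
}

@app.route("/product/<code>", methods=["GET"])
def get_product(code):
    """Return the product record for a scanned QR code."""
    item = PRODUCTS.get(code)
    if item is None:
        return jsonify({"error": "unknown code"}), 404
    return jsonify(item)

@app.route("/product/<code>", methods=["POST"])
def update_stock(code):
    """Update the stock count, e.g. after a sale or delivery."""
    item = PRODUCTS.get(code)
    if item is None:
        return jsonify({"error": "unknown code"}), 404
    item["quantity"] = request.get_json()["quantity"]
    return jsonify(item)
```

The Android app would issue the corresponding GET after decoding a QR code and render the returned JSON fields for the employee.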
We then set up the connection between the Android app and the database by manually parsing incoming JSON files containing appropriately nested objects and attributes.\n\n## Challenges We ran into\n\nOur lack of development skills was a big factor. Despite our back-end-heavy hack, our development team had nearly zero back-end experience. Thus, we had trouble learning and using database technologies such as Firebase and Flask in a short amount of time. Furthermore, we had to deal with Android Studio and its rapid updates: most of the code examples we found were outdated and thus unhelpful. We also had to change our target market halfway into the hack, as the app was originally meant for consumers to use until we realized it would make more sense for employees. We lost a bit of time due to this and missed out on some ideas that could have been implemented from the start (such as an employee login screen).\n\n## Accomplishments that We're proud of\n\nDespite the lack of developers on the team initially, through supporting each other and everyone learning very fast and effectively, we managed to implement an Android app using ML in time. We also added some bonus features to the app (for example, colour-coding the text) to improve the user experience. Lastly, we worked very collaboratively as a team and faced a lot of difficulties together, which really gave us a sense of achievement when we reached our goal. Overall, it was a very meaningful hackathon for us!\n\n## What we learned\n\nOn the technical side, we learned resource methods such as HTTP GET and POST. As we progressed through the hackathon, we also started to experiment with many new techniques and tools such as Firebase, ML Kit, and design tools such as Figma. Beyond that, through teamwork, we improved our collaboration and communication skills. 
Furthermore, we learned that we should share responsibility and support each other as a way to increase team performance.\n\n## What's next for QRTailor\n\n* A more user-friendly GUI (graphical user interface)\n* Integrated chat interface between employees to share more information about products\n* Implementing a built-in interface to enter item entries for ease of use\n* Employee login page for individual access to view and/or manipulate store data\n* We have also used Figma to design a possible future prototype with a different interface that showcases a more optimal way our product can be designed. Figma Prototype Link",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/software/scopenote",
    "domain": "devpost.com",
    "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "## Inspiration\n\nFriday, March 13th, 2020. This was the final day that many students in Ontario were in a classroom face to face with their teachers. At this point, we, along with the rest of the world, were forced to transition to remote learning environments in an effort to fight the Covid-19 pandemic. Nobody was prepared to react to this drastic change, and it was by far the students who suffered the most from it.\n\nWe quickly realized just how ineffective remote learning could be: with teachers too busy to handle individual queries, most of our education revolved around doing individual work in front of a screen, often reading or typing for hours for research, or watching countless videos on end. Most times, it was the most engaging experiences at school, where we were able to physically grapple with the information we were learning, that were taken away. The motivation was lacking too: with little peer interaction, we often struggled to keep working and meet deadlines. This was further compounded by the aforementioned monotony of our tasks, which made many of us fall asleep from boredom. 
Altogether, learning became extremely difficult.\n\nWith ScopeNote, we wanted to eliminate the sense of repetitive monotony that comes with remote learning in favour of a more engaging learning process. With teacher interaction minimized, the goal was to decrease the time students need to spend in front of a screen doing low-engagement, low-effort tasks, such as reading research papers, in favour of ones that actually benefit memory retention, such as annotating and using flashcards.\n\n## What it Does\n\nScopeNote is a Chrome extension that provides three main features that supplement a student’s learning. The first is a keyword breakdown of a given article in the form of a PDF file or a website. This component analyzes the text within an article and identifies around fifteen of the most prevalent, critical keywords in a piece. In practice, this facilitates a student’s understanding of a paper by reducing the need to constantly search up specific keywords in favour of having the important ones displayed on screen and only one click away.\n\nSecondly, ScopeNote pulls some of the most important sentences from the piece, which are meant to act as a summary for students to use both to ensure the content they are looking through is relevant and as a review section in their notes. While we noticed that this function was not entirely perfect, it did a fair job of capturing the tone and the content of the piece, which are also both relevant in determining how useful a piece may be for a research project or as a study resource. Again, this helps reduce the amount of time a student spends on low-effort tasks such as skimming through sources in favour of active studying or analysis. In both of the aforementioned features, students are also able to add their own commentary to supplement the software.\n\nFinally, ScopeNote takes the key words and summary that it pulls from a text and exports them such that they can be printed as a .pdf file. 
The format in which this is done is conducive to making flashcards, while also acting as a reference sheet on the topic that a student can always look back to. While the process of making flashcards is monotonous, actually using them can be extremely beneficial for memory retention. As a result, by automating the prior step, we hope to encourage more students to use flashcards and thus engage with information more actively.\n\n## How We Built It\n\nThe Chrome extension is coded in Python and React for the back-end and front-end, respectively. Specifically, we used the diffbot API to pull text from a website and the PyMuPDF library to read text from a .PDF file. Once the text was pulled out, we used the Azure Text Analytics API to pull key phrases that were most important to the piece. Following this, we ran a basic algorithm to determine which keywords or phrases were most prevalent in the text, and linked them to WordsAPI to provide students with a definition. Phrases with no definition were then appended to the default Wikipedia URL so that students would be forwarded directly to the Wikipedia page on the topic. To process text and identify keywords, we explored other possibilities such as the RAKE algorithm and TF-IDF. While neither was as good as Azure’s API, we realized that by modifying the RAKE algorithm to include longer phrases and by using a database of the most common English words as stopwords, we could generate decently appropriate summary sentences for an article. This insight was leveraged to implement the summary functionality. These components were then attached to the React front-end with axios, which performed HTTP requests between React and Flask. More specifically, the URL is sent from the front end to the back end, where the text is pulled and processed into JSON objects, which are then sent back to the front end. 
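The "basic algorithm" for prevalence ranking mentioned above could be sketched like this: count how often each API-returned key phrase occurs in the text and keep the most frequent. This is an illustrative sketch, not ScopeNote's actual code; the function name and cutoff are assumptions.

```python
def rank_key_phrases(text, key_phrases, top_n=15):
    """Count occurrences of each key phrase (case-insensitive) and return
    the top_n most frequent as (phrase, count) pairs. Ties keep the order
    of the input phrase list, since Python's sort is stable."""
    lowered = text.lower()
    counts = [(phrase, lowered.count(phrase.lower())) for phrase in key_phrases]
    counts = [(p, n) for p, n in counts if n > 0]   # drop phrases never seen
    counts.sort(key=lambda pn: -pn[1])              # most frequent first
    return counts[:top_n]
```

The resulting top phrases would then be looked up against the dictionary API, with undefined ones falling back to a Wikipedia URL.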
In React, an emphasis was made on state changes and mapping to update the information in the application, using React lifecycle functionalities to do so. The actual user interface and functionality of the Chrome extension was made in React, where CRUD actions were used so that users can edit and add notes, definitions, and so on to the automated ones.\n\n## Azure\n\nSpecifically in terms of the usage of Microsoft Azure, we leveraged Azure’s Text Analytics API to pull keywords from a piece of text. These keywords, which were ordered as they were presented in text, were then compiled into a list of tuples containing each key word/phrase and the quantity of appearances. This was compiled alongside a list of words specifically from key phrases and how many times they showed up. Our mentality was that the most important keywords for a student to know would be the ones that showed up most frequently; thus, we used these compiled lists to determine which key phrases were most important to define.\n\n## Challenges We Ran Into\n\nOur team had little experience transferring information between a React front-end and Python back-end in both directions, making the implementation of axios one of the biggest challenges we faced. Specifically, mapping and working in states was difficult in terms of syntax, and saving json values into the React component states was extremely challenging. However, we worked with mentors and worked through a lot of trial and error to solve this challenge.\n\nAnother issue that we confronted was the limitations of the Azure API. While the API itself worked well, there was a character limit of around 5000 on each time it was called, and a total of 5000 calls available. Unfortunately, we did not handle this well and tested on long pieces of text from websites. This meant that for a single website, a single run through the program resulted in around 25 calls of the Azure API. 
Unfortunately, we did not realize this until we hit the limit on our free trial. It was only after a second account was set up by a different team member that we recognized the importance of efficiently using the limited calls on APIs.\n\n## What We Learned\n\nOur team was composed of both inexperienced and more experienced hackers. For the beginners, both the provided workshops and the immediate hands-on applicability were helpful for learning technologies such as CSS and React. The novices were also pushed towards technologies they had never used before, such as Figma, and learned to leverage them by doing as opposed to by reading a textbook.\n\nThis does not mean that the more experienced coders were comfortable during the entire hackathon, though. We learned to work more comfortably with React and Flask, specifically with states in the former framework. We also explored more in terms of connecting the front and back-end of applications through axios, and learned to save information from .json files into the states of React components.\n\n## Accomplishments that We're Proud Of\n\nWe are all proud of the new things we learned as part of this hackathon, whether it was an introduction to React or Figma, or learning to save .json information in React states. Furthermore, we are proud of both the functionality and the design of the final product—together, we think we made a well-functioning Chrome extension that does not sacrifice anything visually.\n\n## What's Next for ScopeNote\n\nMoving forward, we hope ScopeNote can be a tool we use in a crunch if we need to synthesize a document or information from a website. As first year students, we’re venturing into a new level of challenge in university, and thus may leverage this program in a pinch. Aside from our personal use, there is a lot of potential for growth with ScopeNote. 
The most immediate steps that can be taken are to develop a platform for PDFs to be uploadable through the extension itself, as opposed to having them downloaded and opened in a browser. The development of a local database where students can store past data would also be helpful. In the future, there may also be more work done in terms of machine learning or text-processing algorithms, which in turn opens up a wide range of possibilities. For example, we could explore using ML to recognize graphical representations of data such as bar, pie, and line graphs, and convert them into meaningful, text-based data for students in their notes. Hopefully with improved algorithms, we can also improve upon the keyword and summary sentence selections to better represent the articles from which they’re pulled. With these additions, we believe ScopeNote could feasibly go from a proof-of-concept idea to genuinely applicable if we chose to continue development in the future.",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/software/recado-76mq0h",
    "domain": "devpost.com",
    "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "## Inspiration\n\nOur team realized that there had to be an efficient way to run errands despite the pandemic.\n\n## What it does\n\nThe app optimizes the route and store suggestions for the user, saving time, effort and money. It also shows how crowded each location is, which helps the user plan when to run the errands.\n\n## How I built it\n\nMy team and I built it using InvisionApp, since we do not know how to code.\n\n## Challenges I ran into\n\nWe had plenty of challenges along the way. One of the biggest challenges we faced was coming up with a viable idea, but as a team we were able to overcome that hurdle and all the other obstacles we faced. Some of us also have school and exams coming up, so having to balance both was a herculean task. 
Staying awake throughout the event to build the app was also a great challenge.\n\n## Accomplishments that I'm proud of\n\nOur team was able to learn a lot from the workshops and from the experience of making the app.\n\n## What I learned\n\nWe learned a lot about graphic design, and learned about cryptography from the cyber security workshop. We were also able to gain some knowledge of programming from the other workshops.\n\n## What's next for Recado\n\nTo make a fully functional app of our model",
    "content_format": "markdown"
  },
  {
    "url": "https://devpost.com/software/farmcare",
    "domain": "devpost.com",
    "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000",
    "content": "## Inspiration\n\nThere are many healthcare issues in the farm ecosystem, as people in rural areas are often not aware of their farms' health.\n\n## What it does\n\nIt helps monitor the health of the farm ecosystem.\n\n## How we built it\n\nWe built the backend on the Django framework, using REST APIs for the connection. For machine learning, we used deep learning and several classifier and regression models such as Gradient Boosting, Random Forest and Decision Trees. For masking the image we use the Otsu segmentation technique. We used advanced CSS to make the site fully responsive and compatible with most devices. We used NLP to build a real-time recommendation system.\n\n## Challenges we ran into\n\nGetting better accuracy from the machine learning models was a great challenge, but we succeeded in getting good accuracy through proper segmentation techniques.\n\n## Accomplishments that we're proud of\n\nWe have a fully working project with both the frontend and backend connected, along with the machine learning models. It is fully responsive, so it runs without trouble on different devices. 
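The Otsu segmentation step mentioned in the build section above picks the grayscale threshold that best separates foreground (e.g. a leaf) from background. In practice this is typically one call to OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag; the NumPy sketch below re-implements the idea for illustration only and is not the project's actual code:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximising between-class variance
    for an 8-bit grayscale image (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)   # sum of all pixel intensities
    best_t, best_var = 0, 0.0
    w_bg, sum_bg = 0.0, 0.0
    for t in range(256):
        w_bg += hist[t]                      # background = intensities <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def mask_image(gray):
    """Binary mask: pixels above the Otsu threshold count as foreground."""
    return gray > otsu_threshold(gray)
```

The resulting mask would then feed the downstream classifiers in place of the raw image.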
This website has multi-language support so that rural people of different regions and languages can use the application.\n\n## What we learned\n\nWe learned many new models and different ways to implement functionality and image segmentation. We also learned how to connect machine learning models to a web application, which is of great use.\n\n## What's next for FarmCare\n\nWe will connect IoT devices to get real-time data.\n\n## Built With\n\n* cnn\n* css\n* django\n* html5\n* javascript\n* keras\n* ml\n* natural-language-processing\n* python\n* tensorflow", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/medentifier", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nBoth members of the team have connections and experience in the health care field, and wanted to work with machine learning technology. Creating a tool to identify household medicine seemed like an achievable foundation for a project which could become much more.\n\n## What it does\n\nCompares photographed medicine with the medicines included in the model.\n\n## How I built it\n\nUsing Google Cloud's AutoML to generate the model, React Native was used to create an Android/iOS interface, which compares against the model using TensorFlow.js.\n\n## Challenges I ran into\n\nUnderstanding the data structures associated with machine learning proved challenging, as both team members were novices in this area of expertise. Likewise, we needed to retrain the algorithm after early testing, which took up an immense amount of time.\n\n## Accomplishments that I'm proud of\n\nThe labeled datasets were partially collected with a data scraper, for the initial training of the algorithm. Thousands of additional photos were later added, taken in various environments around one of the team member's houses.
This resulted in significantly higher accuracy for the medicines which were photographed (although, unfortunately, this only includes 5 of the 20 medicines in our data set).\n\n## What I learned\n\nEffective machine learning takes a lot of time; ideally this data should be collected as early as possible in a hackathon or time-limited event. The majority of the coding should be done while the model trains, rather than before.\n\n## What's next for Medentifier\n\nWe'd like to add user-photographed images to our ML database, as well as provide information to the user when it is not clear which medicine the user has photographed. In these edge cases, we'd like to prompt for additional information to aid in identifying the medicine (such as prompting the user to place a quarter for scale, or requesting the user enter the numbers on the pill).\n\n## Built With\n\n* automl\n* expo.io\n* react-native\n* tensorflow.js", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/peercode-ide-0cyk6x", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nI was irritated with sharing code on WhatsApp and other chat applications, because it is a very inefficient way of sharing code or discussing any topic or concept in coding. So I thought of building an efficient way to promote peer learning during this quarantine, which would not only help individuals but also let teachers connect live with any peer and ask them to come up on the app and code live.\n\n## What it does\n\nIt connects two people on a common WebRTC server so that they can communicate live with video calling, and the app has IDE support with a canvas at its side. So basically, if I write code in that IDE, the writing and editing of the code takes place live for the other connected user, who can see all the changes as they happen.
Also, if a user is unable to make the other user understand what they are saying, they can draw the logic on the canvas board to get the point across.\n\n## How we built it\n\nI used React.js for the frontend, Node.js (Express.js) for the backend, Firebase for real-time transfer of data and mouse coordinates, WebRTC for video calling support, the P5.js library for the canvas drawing feature, and TypeScript and JavaScript for ES6 coding in the frontend and backend.\n\n## Challenges we ran into\n\nThe web app was initially slow when I transferred data with socket.io, but I improved it by using the Firebase Realtime Database. Sending drawing points from one peer to another over a common WebRTC server was also a big deal initially, but pushing the limits helped me optimise it to be about 60% faster and more accurate than before.\n\n## Accomplishments that we're proud of\n\nGetting canvas drawing support working in the web app, in real time for the other user, was the highlight of the project.\n\n## What we learned\n\nI learned a few more concepts of JavaScript and got my hands dirty with TypeScript.\n\n## What's next for PeerCode IDE\n\nThere is future scope for using CRDTs (conflict-free replicated data types) and a group video chat feature.
Currently only two people can connect at a time.\n\n## Built With\n\n* canvas\n* es6\n* express.js\n* firebase\n* javascript\n* node.js\n* nosql\n* p5.js\n* react.js\n* typescript", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/Aaln/likes", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nEnsure your mortgage loan estimate is fair and you're not getting screwed.\n\nAnonymously hear the amazing stories of the people you pass by every day.
Your community just got more interesting.\n\nHalo is an emergency home management system\n\nBinary Menora Clock\n\nPong with the Myo\n\nMake Moves\n\nContexual Product Hunt Search\n\nGet notified when hackathon registrations become available.\n\nUI analytics for Android\n\nScraping for everyone!\n\nA marketplace for founders to pay hackers $1,000 for an mvp.\n\nStarhub is an automated collaborative repo starring app.\n\nWhit is an open source SMS service, which allows you to query CrunchBase, Wikipedia, and several other data APIs.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/posturizer", + "domain": "devpost.com", + "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWith the use of technology being integrated into our everyday lives, there is an increasing number of people suffering from bad sitting posture all around the world. This leads to back pain, neck injuries, and more negative side effects.\n\nOur team and probably the majority of hackers at this event face bad posture issues while sitting all day on our laptops, so we wanted to take a stand to fix this. We needed a solution that would be responsible with reminding us to fix our posture, that was convenient for us to use no matter where we were located, and was easily accessible to all users. Thus, we decided that creating a web application that used your laptop’s webcam would be the optimal solution - it is easy to take with you wherever you go and it is easily accessible for all users anywhere. This led to the creation of Posturizer.\n\n## What it does\n\nOur product begins by authenticating user accounts - you can simply sign in with your google account. We have programmed this web app to check every 10 seconds to see if users are sitting in a good posture or not, and if not, it will give tips on how they can improve their posture. 
It takes a photo, identifies one of the five classes we are using (slouching forward, leaning back, leaning to the right or left, and good posture), and then provides the appropriate advice *by speaking back to you*. The application also gives users metrics on their progress in improving their posture, such as how they’ve been doing for the past couple of months, weeks, and days, as well as their posture frequencies for the current day and yesterday.\n\n## How we built it\n\nFor the backend, we used Azure Custom Vision trained with a manually collected data set and labelled our primary data into five classes: front, back, right, left, good. These dictated the five posture positions we were using for our product. For the frontend, we used React.js and Next.js with the Firebase database to store previous user data/metrics in order to include their analytics and summaries on the dashboard and provide further insights. The frontend calls the API from Microsoft Azure AI to communicate with the backend, which then spits out the class ‘front’, ‘back’, ‘left’, ‘right’, or ‘good’ with the respective probability. When the user is exhibiting bad posture, we have recorded audio that will tell them to align themselves according to their current position. Lastly, on the user’s dashboard there are visualizations to show their metrics and progress over time, created with Ant Design.\n\n## Challenges we ran into\n\nFor simplicity, we began by using a single-class classification rather than a multi-class classification when identifying the posture; this was challenging as these classes are not entirely mutually exclusive. One of the main challenges was collecting a wide variety of high-quality and diverse training data - we needed different faces, backgrounds, clothing, and lighting, and collecting that dataset manually is a tedious process.
Once we had this data collected, the next challenge we faced was improving the overall accuracy of the model from 40% to 91% through fine-tuning the model, transforming and augmenting the database and digging through the azure custom vision documentation. With transferring this data onto the frontend side, we had to determine how to keep resolutions consistent, and keep it user friendly with all of the extra features (dashboards, analytics, etc).\n\n## Accomplishments that we're proud of\n\nWith this project, considering the time constraint and lack of data, we were extremely proud of reaching an accuracy of 91% with detecting the five sitting postures. With the features of our product, we are proud of being able to preserve privacy by blurring out the faces of the people in the training model and as well removing photos of the users after converting them to data for analytics. Lastly, using Microsoft Azure AI and Firebase to create a unique product that would benefit our team as well as other hackers was an achievement.\n\n## What we learned\n\nFrom this incredible experience building an impact product, we came away with many learning points. First and foremost, we learned of the power of great ideas. The potential impact and importance of an idea was key to rallying and staying motivated for the hack. Through being motivated by small accomplishments and ultimately by the excitement of reaching our end goal, we were able to work diligently and collaborate. With an idea we all resonated with and truly wanted to solve, we were all fueled with a common purpose. In addition, we realized the difficulties of gathering primary data to train an accurate model. It was our first time doing preliminary data collection and we had to go to great lengths to gather data that was diverse and unbiased to train an accurate model. 
Finally, we learned to create a user friendly experience for the user with a simple interface, rather than cluttering it up with many features that would be difficult to understand. All in all, this hackathon has been an amazing learning experience!\n\n## What's next for Posturizer\n\nIf we were to take our product to the next phase, we would begin with collecting more data of different people, different settings, pictures with more noise and preprocess the images so that we can focus on the postures in all settings and retrain our model. We could extend this to a multi class classification so it can detect other postures such as if you are leaning to the right but also forward; this could also increase our accuracy for postures that are bordering between two classes.\n\nOn the consumer end, we could add additional analysis features for users on the dashboard to track their progress over periods of time. The product could have better personalization as each person’s spinal structure is unique, and calibration to the individual user can help with personalized feedback and suggestions. The voice reminders could also be personalized with different accents, genders, and other preferences. This product could also be converted into a chrome extension or application rather than a web application for convenience.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/prithajnath-github-io", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nLately I was thinking about redoing my personal website. But personal websites these days look all the same. Same single page feel with flashy text and animations. I wanted something unique and original, so I went back to the basics. No unnecessary animations, no pictures whatsoever. Just plain and simple text in a command line. 
This resume is a homage to the old days when computers didn't have GUIs and everything was done through the terminal.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/gunshield-iuktyb", + "domain": "devpost.com", + "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThe original inspiration for our project was a sport called 'Crowd Ball', from the anime 'The Irregular at Magic High School'. Here is a video example. Once we decided on the game being singleplayer, additional inspiration was taken from the game 'Arkanoid'.\n\n## What it does\n\nLets the user control a shield that can deflect a ball towards targets. If necessary, users may also use magnetism to cheat.\n\n## How we built it\n\nWe built this using Unity3D, Visual Studio, and GitHub.\n\n## Challenges we ran into\n\nWe spent a significant amount of Saturday fine-tuning the physics for the demo, something that took a surprising amount of effort to get a half-decent feel. We also ran into problems with version control on GitHub, specifically in losing progress due to overwritten work. In order to make up for lost time, we had to redo everything but we are still proud of how it turned out. Many challenges also came through learning animations.\n\n## Accomplishments that we're proud of\n\nWe are proud of the smooth animations of the targets and environment as well as the gun controller effects. We like the intuitive control system involving a remote force controller. We are proud that it works.\n\n## What we learned\n\nOne of our team members, Math, had never used Unity before the event. During the course of the event, they taught themself the basics of the Unity animation timeline as well as the stage editor, as well as having written some of the Unity scripting used in our project. 
As a team we learned more about how Rigidbodies and forces work under the physics engine.\n\n## What's next for Gunshield\n\n* We want to make the ball physics even easier for the user so that rallies can be prolonged.\n* We want to be able to make this a multiplayer game\n* We want to improve on our level design.\n* We want to implement different paddles with different controls.\n* We want to implement a UI so that the user can set preferences such as difficulty.\n* We want to add more interactive elements such as portals or ramps.\n* We want to add more levels.\n* We want to(maybe) fix more bugs.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/snow-problem", + "domain": "devpost.com", + "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nSnowProblem is a conversation tool that will work in any of 60+ languages. Whether learning a language, traveling, or you just want a question answered in your native tongue, we have you covered. To make the journey smooth, the user is accompanied by two cute winter friends, Snowy the polar bear and Slushy the penguin.\n\nWhen studying a language, one of the most important parts of the process is having conversations with friends and classmates in the desired language. Unfortunately, most learners don't have access to these resources frequently, if ever. We want to make sure that this is no longer true, that it is Snow Problem to converse and really learn how to use your new language.\n\nOur stack runs using a pairing of numerous services elegantly strung together. To make it work, quite a few hacks were used. Firstly, the app is developed namely as a chrome extension. These don't have access to voice input of any kind, so js injection, object tracking, and a few clever tricks combine to form a 4 layer deep process to finally get the parsed response in any language. 
From there, it will determine if it must be translated to the base language of the underlying chat application, translate using Google Translate, collect an in-context response from the chatbot, and translate it back to the expected language. Finally, it uses TTS to produce the desired language verbally. After waiting for itself to complete speaking, it will once again await the user.\n\nOur second feature is for when you are dropped into a real environment without sufficient language skills. Users can select two languages to be used, speak into their device, and the other party will hear their native language. When they want to switch speakers, they simply click on the microphone nose of our adorable winter friends, and the other party will instantly be able to continue the conversation. This will eliminate many of the difficulties encountered while travelling and interacting with a wide variety of individuals.\n\nTo push further, we even made a generic web app that runs in any browser with supported components, including mobile phones. This extends the reach far beyond a desktop and makes the features users need available wherever they are.\n\n## Built With\n\n* azure\n* botlibre\n* chrome\n* chrome-tts\n* cleverbot\n* google-translate\n* google-web-speech-api\n* html5\n* html5-voice-recognition\n* javascript\n* node.js\n* python", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/cash-at-the-moment", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nAs two members of our team are Europeans, we struggle a lot to find a nearby ATM with a convenient exchange rate.\n\nSo we thought - why not use Google Maps, Twilio, and a simple Android app?
The user enters a location, we calculate the most convenient ATMs, and Twilio sends you an SMS with the locations.\n\nWe used exchange rates based on a website listing the most common UK banks and their exchange rates. With the Google Places API we search for ATMs.\n\nAndroid apps are very complex. We tried to keep it as simple as possible, since we only need an input box and an output tab. How to get data from the app to the Node.js program and back?!\n\nAndroid Studio and Node.js are painful.\n\nWe want to link all the components together and include more banks (right now 9).", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/zune-simulator", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nZune was always better than the iPod\n\n## What it does\n\nSimulates a Zune interface for your musical desires\n\n## How I built it\n\nUsed React to create a UI that mocked the Zune interface and device, and connected it to the Spotify API to log in and play music\n\n## Challenges I ran into\n\nEnabling the Spotify auth flow and interacting with the API was more complicated than expected\n\n## Accomplishments that I'm proud of\n\nPretty satisfied with the way this turned out. Fiddling around with the Spotify API was annoying but overall very satisfying once it worked.\n\n## What I learned\n\nRead the API and spec documentation VERY closely, and pay attention to EVERY bit of fine print.\n\n## What's next for Zune-Simulator\n\nEnable Albums, Artists, Genres, and all the other great features Zune has to offer", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/optimized-finance", + "domain": "devpost.com", + "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nOur program was first inspired by Fiserv's API demonstration.
We noticed that their data (especially within the JSON files) could be optimized a little more. For example, the value of the phone number was stored as an integer in one place and as a string later in the same document. Since we had a team member who was slightly familiar with optimization, we wanted to optimize Fiserv's data too, but the program could work for any mass data optimization.\n\n## What it does\n\nThe project optimizes data so that once it is written by the query writer, it will not have to go back through the query parser and analyzer the next time the user accesses it. In other words, it is similar to the concept of a cache, and our program returns data faster than simply fetching it through the API.\n\n## How I built it\n\nOur team built the program using Lucene, a text search engine library. It parses through the data in the JSON files, separates it into fields to optimize it, and then returns the wanted value.\n\n## Challenges I ran into\n\nCalling the Fiserv API through Java proved to be very challenging, partially because the API itself had some issues in documentation and implementation, and also because JAX-RS had several thousand other dependencies to contend with.\n\n## Accomplishments that I'm proud of\n\n## What I learned\n\nWe learned a lot more about Lucene, the text search engine library, as well as the backend of a search engine, i.e., how search engines work.
Because we learned why it works, we also learned how it works, so the methods behind the code weren't difficult to understand, although we did run into difficulties.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/youtube-title-generator", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe wanted to use Cohere AI, as our team had used GPT-3 previously. We thought that the Cohere content creation section was fascinating.\n\n## What it does\n\nThe user enters the tags that they are going to add to their YouTube video into the web form. The website then calls the Cohere API and generates a title based on tags and titles that have been web-scraped.\n\n## How we built it\n\nWe used Beautiful Soup and Selenium in Python to web-scrape the tags and titles from YouTube videos.
Then we used Flask to create our website, along with CSS for styling the webpages.\n\n## Challenges we ran into\n\nUnderstanding how to use the Cohere API, Google preventing web scraping, and the website front end\n\n## Accomplishments that we're proud of\n\nGetting around Google's web-scraping prevention and creating the backend of the website\n\n## What we learned\n\nWeb scraping, Flask, and CSS\n\n## What's next for Youtube Title Generator\n\nImproving the front-end design, training on more data, and improving website speed.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/social-symphony", + "domain": "devpost.com", + "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nThe Social Symphony Project is an experiment in determining how VR could be a useful medium for better understanding one's own social network, from its inception to the present.\n\nAs our social networks grow, they produce an unwieldy amount of information. The capacity to analyze and interpret this data is reliant on current interfaces and their various mediums. Social Symphony will allow anyone to experience their entire Facebook presence in an active connected world around them, with as little or as much information as they'd like.
A network is visualized around a spherical space with current activity feed events passing by them like the messages of the synaptic network of a brain.\n\n# Technical Milestones\n\n* Allow user to seamlessly browse in depth up and down\n* Choosing the right library/engine to visualize this experience\n* Implementing FB's login and structuring the proper query calls to grab the user's history or current activity\n* Storing and calling the data at a later time for a smooth experience\n* Grabbing live friend activity as it happens and displaying that information\n* Drawing the nodes and connections to display the user's network and activity\n* User interactions: Pausing time, expanding a photo/comment/post, changing verbosity\n* Getting scene to display facebook graph data properly without running server\n\n# Nice to Have Milestones\n\n* A chronological timeline of the user's profile from inception to now being built up around them\n* Ability to pause the timeline and explore or rearrange their environment\n* Add a reactive musical set to the experience\n\nSocial Symphony may blur the line between a conceptual interface for VR and an art piece that aims to present a new perspective through the externalization of a Facebook profile and how the information from a social network may be affecting us.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/umafia", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Mafia\n\nMafia is a party game for a group of friends to get together and yell and lie at each other. It is incredibly fun, and I play all of the time with my friends and family. During the game of mafia, there are townspeople and mafia members, and each group seeks to kill off the other. The mafia are the informed minority: there are few of them (generally two), and the townspeople are the majority. 
The mafia know who is in the mafia, but the townspeople do not, so the goal of the game is for the townspeople to discover who is in the mafia. Because the game is played this way, there must be one person acting as the narrator, who tells the townspeople what the mafia do at night. This player does not participate in the game, and generally, every player in the game makes it more fun. Also, the mafia members and the townspeople must be chosen randomly, so playing cards or some other randomizer is required for the game.\n\n## UMafia comes to the rescue.\n\nUMafia is a companion for the party game: it is meant to assist the game, rather than something like \"Town of Salem\", which is just a way to play the game online. UMafia is meant for people to sit around a living room or a campfire, and pull out their phones to vote and kill other players. UMafia takes the place of the narrator, meaning that an extra person can play, and it also randomizes the roles. It makes friendly mafia games much easier.\n\n## How I built it\n\nUMafia is a website built with html, css, javascript and bootstrap. It should be compatible with mobile devices. I modeled UMafia after spyfall,which is a different party game to be played in person. It is nearly 500 lines of code, made mostly from scratch, except for some bootstrap effects which I added at the last minute. The javascript creates an array pL (short for playerList), which holds playerdata, and role data. It randomizes roles, and keeps track of how many players remain. If the condition is met that all of the mafia players are dead, then the townspeople win, and if the condition is met that the amount of mafia players exceeds the amount of townspeople, then the mafia wins.\n\n## The Interface\n\nDuring the day cycle of the game, all of the players vote one player to be killed, and during the night cycle the players choose which other players to be the target of their abilities. 
Therefore, I chose the interface to be a list of buttons, with each player on one of the buttons. Every time a player makes a decision (i.e., when a player votes for another player, or when the mafia decides to kill a player), UMafia puts the decision in an array called \"decisions\" and checks whether there are enough decisions to proceed. During the daytime, as soon as a majority of the decisions are made for one player, that player is killed and the game shifts to night. During the nighttime, UMafia waits for every player to make a decision, even players who have no role. These players must select another player, even if it does not do anything; otherwise people would be able to deduce who is in the mafia based on who is on their phones. After every player makes a decision during the nighttime, the game kills whichever player was chosen by the Godfather, the leader of the mafia, and then transitions back to daytime. In this way, the game goes back and forth between day and night until a win condition is met.\n\n## Challenges\n\nSeveral issues came up during the development of this game. During production, everything broke multiple times, and most of the code was rewritten. In particular, using buttons in HTML was difficult, because I needed to create a function that created n buttons (where n is the number of players), and each button had to have an event listener for a different function. Initially this did not work; every event listener was the same. Google eventually led me to a solution in which I used the button object within the parameters of the event listener to convey information. Another issue was that of interaction between users. However, it would be hard to test if I constantly needed multiple users. Therefore, I made a temporary solution, a drop-down list of users, which would simulate different users.
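The day-phase majority check described above could be sketched roughly as follows. This is a hypothetical simplification, not the actual UMafia source: the `tallyDayVotes` name, the shape of the `pL` array, and the sample players are all assumptions.

```javascript
// Hypothetical sketch of UMafia's day-phase tally: a decision is a vote
// for a target player; as soon as one target holds a strict majority of
// the living players, that player is eliminated and the game shifts to night.
function tallyDayVotes(pL, decisions) {
  // Count votes for each target
  const counts = {};
  for (const target of decisions) {
    counts[target] = (counts[target] || 0) + 1;
  }
  const alive = pL.filter(p => p.alive).length;
  for (const [name, votes] of Object.entries(counts)) {
    if (votes > alive / 2) {
      return name; // eliminated player
    }
  }
  return null; // not enough decisions yet; keep waiting
}

const pL = [
  { name: "Ana", alive: true }, { name: "Ben", alive: true },
  { name: "Cal", alive: true }, { name: "Dee", alive: true },
  { name: "Eli", alive: true },
];
console.log(tallyDayVotes(pL, ["Ben", "Ben", "Ben"])); // "Ben" (3 of 5)
console.log(tallyDayVotes(pL, ["Ben", "Cal"]));        // null
```

The night phase would differ only in when it resolves: instead of acting on the first majority, it would wait until `decisions.length` equals the number of living players before applying the Godfather's choice.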
I have not yet created a way for users to interact, but it will be coming soon.\n\n## Takeaways from YHack\n\nAfter YHack, I feel optimistic about the future of UMafia. I have learned many things, and just as I have improved my code, my code has improved me. I stepped outside my comfort zone this time and searched through APIs for help in my code. Previously, I was writing all of my code from scratch, and challenging myself to use API code took a lot of time and energy. In the end, I did not use Google Cloud APIs as I had planned; instead, I tried Bootstrap, which was much easier to pick up compared to Google's APIs. Using it allowed my entire interface to evolve. YHack gave me a boost in knowledge which will propel this project into the future.\n\n## What's next?\n\nThe next few things for UMafia are the integration of new roles, user interaction, and scalability. Adding new roles like doctor, detective, and vigilante will improve the experience of the game. Adding user interaction is the final step before the application is functional. After that, UMafia should be expanded to include any number of players and many more new roles.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/finn-id-universal-sec", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nIntegrate new tech into the guard sphere to increase security.\n\n## What it does\n\nTracks subjects and separates them by access and privacy groups, and sends alarm notifications and tracking visualisation to the security control room.\n\n## How we built it\n\nWe use Python for data analytics on the server, and Python again for the security guard client; Python is great.\n\n## Challenges we ran into\n\nWe had some problems because of the specific demo hardware setup.
But it was not a real problem.\n\n## Accomplishments that we're proud of\n\nThe system can be easily integrated into any security complex; its flexible source lets you change groups, rights and areas.\n\n## What we learned\n\nWe gained a lot of good new knowledge about RFID and radiophysics in general.\n\n## What's next for Finn-id universal sec.\n\nIn the longer term, we want to integrate the possibility of automatic access control and smart alarms.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/hands-free-augmented-reality-support-for-medical-equipment-procedures", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nWellStar Health System and CN2 Technology are partnering to develop a hands-free augmented reality system to support healthcare equipment procedures. There are many different pieces of equipment used in emergency and triage environments that require healthcare professionals to follow strict procedures for setup and usage. The chest pump is one of many such systems, which include electrocardiogram monitors, ventilator machines and intravenous drug delivery. CN2 Technology proposes to develop an application for the Moverio BT-200 that augments and extends the existing paper-based setup and usage documentation. The application will use computer vision to track the Atrium chest pump and present stereoscopic labels, videos and animations on the actual chest pump. The proposed application functions include chest drain setup (connecting the patient tube, adjusting the pressure level, etc.), chest drain operation (suction bellows monitor, high negativity release valve) and drain features (vacuum indicator, collection chamber).\n\nWhen you launch the app, you can use the cursor to select either Instructions or Setup. Selecting \"Instructions\" will provide several slides outlining how to use the app. If you select \"Setup\", you will begin the AR experience. 
The app will guide you to look at the Atrium chest pump and step back 1 meter after you attain tracking. It will then ask whether the model is aligned. If it is not, you should run the calibration app. If it is aligned, you will then see overlaid AR instructions for setting up the chest pump. On step 5, if you step closer to the pump, you will see an x-ray view inside the pump's suction control regulator.\n\n## Built With\n\n* epson-moverio\n* unity\n* vuforia-tracking", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/once-the-iot-web-operating-system", + "domain": "devpost.com", + "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThe Web is mainly copy and paste. Reusing components with one click does not exist on the Web. It's time for a better Web ... a Web 4.1. Legends like Alan Kay and Anders Hejlsberg show us the way ...\n\n## What it does\n\nONCE creates Web Components by utilizing the long forgotten original definition of the word 'Component'. Today it is just a cloudy mix of characters starting with C..., but ONCE it had a clearly defined meaning (semantics) with undeniable qualities. The four qualities which distinguish real components from units are:\n\n* it's a black box (no need to look inside)\n* it's fully self-contained (no other 'stuff' required)\n* it has an interface (to communicate with the outside world)\n* it is self-describing (in a machine-readable format)\n\nUnits are everything else, even if only one of the qualities is missing or incomplete. The long forgotten standard model is named UCP: Units-Components-Packages.\n\nIn short, for experts: it's OSGi++ for native JavaScript.\n\n## How I built it\n\nFollowing Gall's Law, from Gall's book Systemantics: How Systems Really Work and How They Fail. It states:\n\nA complex system that works is invariably found to have evolved from a simple system that worked. 
A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. – John Gall (1975, p. 71)\n\n## Challenges I ran into\n\nMainly that today's developers do not even know what they do not know. E.g. that good system design has been around since the 1960s (Donald Knuth, Alan Kay, et al.) and was much more mature 40 years ago than anything you find in the Web space today.\n\n## Accomplishments that I'm proud of\n\nONCE is as object oriented as Alan Kay would love it, and Thinglish adds - together with ONCE - even more features:\n\n* global domain namespaces for any JavaScript\n* Interfaces,\n* Type safety even during runtime\n* and Reflection of every Thing.\n\nThings are UcpComponents in a global repository and can be reused with one click or drag & drop. And this is how Anders Hejlsberg made IT successful (in the form of Delphi and C#). The repository self-organizes into the 5 EAM Layers of TLA and maps a fully service-oriented IT world to the 5 rows of the business model canvas. In this manner, every IoT system can become a new business model right from the start.\n\n## What I learned\n\nThat the education system in IT completely failed and has to be reinvented quickly. We are already running out of good programmers and DevOps people, and our education system is not nearly adequate for the challenges ahead.\n\n## What's next for ONCE - the IoT Web Operating System\n\nStreamline it for better use with Node.js and start to develop ONCE in silicon.\n\n## Built With\n\n* bootstrap\n* eamd.ucp\n* es6\n* javascript\n* npm\n* once\n* thinglish", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/navassistai", + "domain": "devpost.com", + "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nOne day, we were perusing YouTube looking for an idea for our school's science fair. 
On that day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.\n\n## What it does\n\nIn essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street, and in which direction they should face when crossing.\n\n## How we built it\n\nWe started out by gathering our own dataset of 200+ images of crosswalk lights because there was no existing library of those images. We then ran through many iterations on many different models, training each model on this dataset. Through the different model architectures and iterations, we strove to find a balance between accuracy and speed. We eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This made our prototype a wearable device.\n\n## Challenges we ran into\n\nWhen we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying things out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. 
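The detection-to-haptics relay described in the build above can be sketched as pure mapping logic. Everything here is hypothetical (labels, thresholds, motor names), and the actual GPIO driving on the Pi is omitted; the sketch only shows how a detection's class and horizontal position could pick which headband motors to pulse:

```python
# Map a crosswalk-light detection to a haptic cue for the headband.
# A detection is (label, x_center), with x_center normalized to [0, 1]
# across the camera frame; motor names stand in for GPIO pins.

def haptic_cue(label, x_center):
    """Return which vibration motors to pulse for one detection."""
    if label == "stop_hand":
        return ["left", "right"]     # both motors: do not cross
    if label == "walk_person":
        if x_center < 0.4:           # light is off to the user's left
            return ["left"]
        if x_center > 0.6:           # light is off to the user's right
            return ["right"]
        return []                    # centered: facing the crossing, safe to go
    return ["left", "right"]         # unknown detection: err on the side of caution

print(haptic_cue("walk_person", 0.2))  # ['left']
print(haptic_cue("stop_hand", 0.5))    # ['left', 'right']
```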
However, our most important challenge was not knowing what it's like to be visually impaired. To overcome this, we had to go out to people in the blind community and talk to them so that we could properly understand the problem and create a good solution.\n\n## Accomplishments that we're proud of\n\n* Making our first working model that could tell the difference between stop and go\n* Getting the haptic feedback implementation to work with the Raspberry Pi\n* Testing the device for the first time and successfully crossing the street\n* Presenting our work at TensorFlow World 2019\n\nAll of these milestones made us very proud because we are progressing towards something that could really help people in the world.\n\n## What we learned\n\nThroughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, and computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.\n\n## What's next for NavAssistAI\n\nWe hope to expand its object-detection abilities. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. 
In the future, we hope to reach a point where it is marketable, and we can start helping people everywhere.\n\n## Built With\n\n* 3dprinting\n* colab\n* google-cloud\n* machine-learning\n* opencv\n* python\n* raspberry-pi\n* ssd\n* tensorflow\n* tinkercad", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/ibn", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nAn intelligent bot was created during a long summer school at the American University of Nigeria; this bot was to assist pen testers in creating a profile for networks by using common attacking techniques. It became a major success after discovering a vulnerability in the university's network so big that an attacker could alter grades, alter finances and even completely annihilate the whole network. Considering how successful this bot was, we decided to take the logic used in building it and apply it to something broader and bigger, like an enterprise.\n\n## What it does\n\nIBN takes requests in natural language, e.g. \"Create an invoice for Mr John with these products: 2 bags of rice, 3 cartons of Indomie and 1 pack of Coca-Cola.\", and converts them into a real invoice which can then be seen by Mr John on the same platform. These are the basics of what it does, but the possibilities are greater; this is an intro into a world of information and insight at the fingertips of a user. 
Information that would have cost time and resources is now available to the entrepreneur at his/her convenience.\n\n## How we built it\n\nAfter dismantling the pen-testing bot due to legal issues that arose from the vulnerabilities it discovered, we decided to take the research and rebuild it into IBN. Previously we had used a specially crafted language called Margee (made by us) to describe the models that the bot used to decipher and process intents, but due to the complexity of the domain we are currently facing, we decided to go for a more traditional and suitable language. We used NLTK and an in-house library for the natural language processing, and Django for the core backend. We also used NodeJS for parts of the NLP. We have also created a concept called NLOM (Natural Language Object Mapper), which tries to map objects and subjects to a language class for further analysis to derive an intent.\n\n## Challenges we ran into\n\nFacing legal action, we were forced to let go of the original source code and had to build this up again from scratch. Also, due to the complexity of the current domain being tackled, IBN at this time is unable to detect its features from the corpus and requires models built with specific features. We hope that further in the future we may be able to apply deeper concepts to enable feature detection.\n\n## Accomplishments that we're proud of\n\nWe are proud of the project in general and are very sure that this is the next phase in business insights and connectivity, from its earlier success in pen-testing to its current redeployment in the business domain.\n\n## What we learned\n\nWe have learnt the use and implementation of certain algorithms for NLP. 
We were able to learn so much that we built our own in-house library to run some parts, with a little assistance from the NLTK Python library.\n\n## What's next for IBN\n\nWell, we are looking for funding and investment, and also want to tell those afraid of the machine that the world will be better with machine intelligence.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/mypandapal", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nAs busy university students trying to balance the stress of attending back-to-back classes, keeping up with coursework, attending a lot of meetings, and juggling a whole lot of extracurriculars, it can be easy to forget about maintaining healthy habits. We often forget to properly sit down to eat a meal, drink water, exercise regularly, and sleep enough hours. We've all been there, with classes starting at 8 am and meetings ending at 9 pm, resulting in us completely losing track of healthy habits!\n\nWe built MyPandaPal to solve exactly this issue faced by burnt-out students! It's an interactive Chrome Extension that features real-time notifications alerting the user to take the appropriate action when it's time to eat/drink/sleep/exercise, to ultimately regulate healthy habits. Every user has their own Pet Panda to take care of, which represents themselves, and the Panda displays a different mood based on the personalized status of healthy habit tracking. Users can press the appropriate button for whichever action they just performed (e.g. eating or drinking) and this updates the mood of the Panda accordingly. Your goal is to keep your Panda happy by regularly tracking your healthy habits. Habit stats (hunger/thirst/energy/happiness) are displayed directly on the webpage (e.g. 
hunger level of 50%) so that every user can keep track of their own personalized progress. The Panda can display happiness, neutrality, sadness, and even death. If the Panda goes too long without one of the four healthy habits, the appropriate notification reminders pop up on the user's screen, and at the extreme end, the Panda dies—something we all do not want to see! Remember, your Pet Panda represents yourself: MyPandaPal is a gamified habit tracker/habit regulation tool that motivates users to take care of their pets and, in turn, take care of themselves to prevent burnout and practice a healthier lifestyle.\n\nThrough building our project, we’ve learned to be patient throughout the process and take breaks for ourselves when needed. We are proud to say we didn’t pull any all-nighters despite feeling quite stressed throughout the day!\n\nTo build it, we used HTML, CSS, JavaScript, and JSON, and implemented it as a Chrome Extension. We designed the interface to provide the most user-friendly and creative experience for the user!\n\nSome challenges we’ve faced include finding ways of tracking previous data, implementing it as a Chrome Extension, learning how to use keyframes in CSS, finding a proper way to alert users to drink water, and making everything user-friendly and visually appealing.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/digital-drug-adventure-companion", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThe workshop by Alannah Fricker really got us thinking about a way to make a safer environment for people doing recreational drugs. This app currently focuses only on opioid users but is highly scalable to cater to diverse users due to the technology stack chosen. The COVID-19 pandemic has resulted in social distancing. 
This prevents people from using while with someone they trust. The isolation has also led to people using more, which changes the body’s tolerance to various substances. What is needed is a way for people to ensure they’re looked after, even when they have to be alone.\n\n## What it does\n\nThe main purpose of this app is to make it safer for recreational drugs to be used alone. We wanted it to be like a friend at a party checking up on someone. This app does 3 main things. First, it presents a checklist of helpful things to do before starting your adventure. These include unlocking the door, getting out the Naloxone, etc. Second, when the \"adventure\" button is triggered, a virtual assistant voice bot checks on the user occasionally to make sure they are in stable condition. Third, it phones a friend on the emergency contact list to call for help if the person is in peril. This app has the potential to save lives by assisting in the prevention of and response to opioid overdoses.\n\n## How we built it\n\nThe main app was built using Android Studio. To get the tech demos working in such little time, we used the code examples. We used Figma to design the prototype and Flutter to get the frontend set up. We used Discord, Google Docs, and Google Slides to communicate and organize the project.\n\n## Challenges we ran into\n\nGetting the APIs to work took most of Saturday. It was our first time using Flutter, so there are many features that are not yet implemented. Getting an app to work in that short a time is hard.\n\n## Accomplishments that we're proud of\n\nThinking up a simple solution to a major problem. This app also has the ability to add more features to determine if the user is overdosing. Despite the challenges, we poured our energy into creating a front-end showcasing the main features and conveying our group's ideas.\n\n## What we learned\n\nHow to use Google Cloud APIs to synthesize speech and for natural language processing. How to make an app using Firebase. 
We learnt app development using Flutter.\n\n## What's next for digital drug adventure companion\n\nThe next steps for the app would be to scale it by adding additional features such as: monitoring heart rate and blood oxygen levels using a connection with smart watches; monitoring for skin turning blue from lack of oxygen using the camera on the phone; monitoring for labored breathing using the mic on the phone; having a list of safe injection sites and their operation hours; and adding a log of the user's dosages and frequency of use to track decreasing tolerances.\n\n## Built With\n\n* firebase\n* flutter\n* google-cloud\n* google-speech-to-text-api\n* google-text-to-speech-api\n* java\n* kotlin\n* natural-language-processing", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/covidcatch", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe wanted to make a fun game in the beginning, but then we thought about how we could make it useful considering the global COVID situation. We see so many videos on the internet of moms trying to make their children wash their hands by singing or other fun interactive ways, and we thought that, considering kids and young teenagers like playing mobile games, this is a catchy, useful game that can teach you good ways of protecting yourself from the virus.\n\n## What it does\n\nYour goal as the player is to achieve the highest score possible! You control a set of hands (or a masked face, depending on player selection) that moves horizontally across the screen, catching any falling objects that land in its grasp. Catch a mask, hand sanitizer, or pair of gloves and your score will increase by 1. However, catch a virus or a \"less than 2 meters\" sign and the two people on each side of the screen will move a step closer to each other! Let them get too close and it’ll be GAME OVER! 
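The catch rules just described amount to a small state update. Here is a minimal sketch (the names, and the starting gap of 2 steps, are hypothetical; the actual game is built with Pygame and tracks sprite collisions instead):

```python
# Minimal sketch of the COVIDcatch scoring rules: good catches raise the
# score, bad catches move the two bystanders one step closer; the game
# ends when they meet.
GOOD = {"mask", "sanitizer", "gloves"}
BAD = {"virus", "distance_sign"}

def catch(state, item):
    """Apply one caught item to the game state and return it."""
    if item in GOOD:
        state["score"] += 1
    elif item in BAD:
        state["gap"] -= 1          # the two people step closer together
        if state["gap"] <= 0:
            state["game_over"] = True
    return state

state = {"score": 0, "gap": 2, "game_over": False}
for item in ["mask", "virus", "gloves", "virus"]:
    catch(state, item)
print(state)  # {'score': 2, 'gap': 0, 'game_over': True}
```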
An integrated scoreboard on the main menu allows you to play against your friends, or even yourself, tracking the highest scores in any given session.\n\n## How we built it\n\nWe made a catching, isolation-inspired videogame using Python and its diverse Pygame libraries.\n\n## Challenges I ran into\n\nIt was our first time working with Pygame. Moreover, it was very hard collaborating online. We are three friends who come from different places in the world, so we just shared screens and split the work.\n\n## Accomplishments that I'm proud of\n\nWe are proud of making it look so good and finishing it in time. We are proud of making it such a catchy, interesting game that can be useful to many.\n\n## What I learned\n\nHow to use Python and Pygame, and what wonderful, fun games we can make.\n\n## What's next for COVIDcatch\n\nWe can publish it to the Play Store or App Store so that this catchy game can help young people quarantining. We plan on introducing leaderboards, so people more experienced in protecting themselves from the virus can be on top. Sound effects, better animation, a level system with speeds, and more virus threats falling. There are lots of things we can introduce to make it even more interesting and catchy. 
We can have a learning section, where young children can see how every object helps - a mask, gloves, sanitizer - and how not keeping distance and not avoiding crowded places can be dangerous.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/whatsfood", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nBeing a super diverse group of strangers, we had a great idea flow.\n\nWe went through ideas ranging from financial budgeting, reinforcement learning, and posture tracking to Q&A bots.\n\nSuddenly we started talking about food orders in the office and with co-workers.\n\nThere was a lot of energy around the topic, and we all chipped in with the following problems:\n\n* Choosing a place\n* Collecting everyone’s individual orders and changes they might make\n* Reminding latecomers\n* Placing the order manually\n* Recurring effort (having to do the same EVERY SINGLE DAY)\n\nAfter this it was settled: we started sketching a solution to automate the process, leveraging a chatbot powered by an NLP engine combined with conversational AI.\n\n## Challenges\n\nThe biggest challenge was building one of the first WhatsApp chatbots: WhatsApp had only recently launched its first API.\n\nAnother was being disciplined enough to cut out great features that we all want but that will have to come later because of time constraints.\n\n## Learned\n\nIt pays off to stay super focused on the MVP; don’t let the big features creep back in. 
Open and direct group communication makes it fun and rewarding. So does venturing off the beaten path and using brand-new APIs without tutorials.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/internalai", + "domain": "devpost.com", + "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nAs companies scale and employees come and go, important knowledge is spread across different sources (Email, Notion, Slack, Jira, etc.) and is difficult to find and synthesize. InternalAI seeks to solve this problem.\n\nChatGPT has proven to be a popular interface for interacting with large language models. However, while ChatGPT is great for general-purpose text generation via natural conversation, it is not great for interacting with specific or private data. Common techniques like passing data as context to the LLM break down as the specific data you want to ask about scales, since LLM \"memory\" is limited (around 4k-8k tokens).\n\nChatGPT on its own would therefore be useless for the actual business problem of internal knowledge fragmentation.\n\n## What it does\n\nInternalAI is an internal business intelligence tool that gathers information across all sources of knowledge at a company (Notion, Slack, etc.) and lets employees easily ask for answers. First, it syncs all your internal knowledge data via integrations. Then, through a chatbot interface, any employee can ask InternalAI questions about anything related to the company (company policies, business metrics, etc.). InternalAI will then synthesize an answer to the question even if the knowledge is spread across different documents.\n\n## How we built it\n\nData ingestion: We first retrieve all company documents via various APIs (e.g. the Notion API). Then, we create embeddings of these documents via the OpenAI Embedding API. Once we have the embeddings, we can build our vector database. 
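The retrieval core of this pipeline can be sketched with a brute-force cosine-similarity scan standing in for the vector index, using toy 3-dimensional vectors in place of real OpenAI embeddings (all document names here are hypothetical):

```python
import math

# Toy document "embeddings" standing in for OpenAI embedding vectors;
# a real vector index (e.g. FAISS) replaces the brute-force scan below.
DOCS = {
    "expense_policy": [0.9, 0.1, 0.0],
    "vacation_policy": [0.1, 0.9, 0.0],
    "q3_metrics": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, k=2):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedded near the expense-policy direction:
print(top_k([0.8, 0.2, 0.1], k=2))  # ['expense_policy', 'vacation_policy']
```

The documents returned by `top_k` would then be fed into the map-reduce-style prompting step described next.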
We use Facebook's FAISS library to cluster the embedding vectors together and create the vector database. We also separately make an in-memory database of documents (content and metadata).\n\nQuestion-answer step: When a user asks a question, we first make an embedding of the question via the OpenAI Embedding API. Then, we use an extremely fast similarity search on our vector database to give us the documents most relevant to answering the question. Once we have the most relevant documents, we use a prompting technique similar to map-reduce. For each relevant document, we ask GPT-3 to retain specific information that could be relevant to answering the original question. Then, we combine those intermediate answers and ask GPT-3 to synthesize a final answer (with sources).\n\n## Challenges we ran into\n\nA big challenge for us was helping GPT-3 understand context outside of the small chunks of relevant information it was pulling from each document. This led to situations where searching for \"How much am I allowed to expense for dinner in the office?\" pulled up \"Dinner - $100\" from an expense document on Notion, but missed the context right above it, which noted \"The following expense caps are for events\". By changing to our own content-combo prompting, which not only pulls relevant information but also summarizes the entire document, we were able to significantly increase accuracy on these edge cases. Therefore, in our final combo prompt, GPT-3 would have the context that a particular number referred to expensing for events, while a different document showed the total expenses allowed for dinners in the office.\n\n## Accomplishments that we're proud of\n\nOur team had minimal experience with GPT-3 (and LLMs in general) and was pretty frustrated at first when realizing that re-training on a large corpus of documents wasn't possible to build out our idea, and that the limited number of tokens in prompts limited our input sizing. 
In just two days, with close to zero prior knowledge, we were able to learn about a variety of interesting ways to leverage those documents, build out a working demo, and even improve upon some of the solutions we'd seen used with our content-combo prompting.\n\n## What we learned\n\nWe learned quite a bit about the cost/accuracy balance across many different LLMs. Since we actually prompt GPT-3 multiple times for the same question (first for context & summarization, and then for building the final answer), we spend quite a lot of tokens to answer one question. Embeddings are also expensive to create across a large number of documents. We experimented with multiple text and embedding models (Cohere, OpenAI ada/babbage, and more), but either performance decreased significantly or there were token limits. So we ended up using OpenAI embeddings and text-davinci-003, which are both quite expensive but produce results of significantly higher quality.\n\n## What's next for InternalAI ⏩\n\nWhile powerful as-is, there are a couple of considerations for us outside of just adding more simple integrations.\n\n* Employee permissions on documents (certain employees can access only certain information).\n* Trust-based document ranking (there's often conflicting information across knowledge sources; how do we rely on the best available data?).\n* Increased speed (a good start might be caching summaries of often-referenced documents).", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/fersot100", + "domain": "devpost.com", + "file_source": "part-00608-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nCode Enthusiast. Interested in futurist tech. Studying Neuroscience and Computer Science\n\nAn easier, more secure way to log into your devices... Using your face!\n\nMobile app for safe and effective activist organization\n\nA student demonstration organization and coordination yet\n\nA pressure-based VR experience where players race against time to fulfill tasks using motion controls\n\nA real-time, 2-player dueling game. Each player takes control of a colored Wizard and casts spells\n\nA fun swarm game where you take the role of a virus and infect healthy cells and multiply.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/paper-piano-8yzaw9", + "domain": "devpost.com", + "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe all grew up playing many different musical instruments. We thought this would be a cheap way for people who aren't as fortunate to learn instrumental skills such as the piano and drums.\n\n## What it does\n\nIt allows users to draw out their instrument board and connect their shapes to various sounds (piano sounds, drum sounds, etc.). The board is meant to be drawn on a piece of paper, so that all users can afford to learn and play this instrument.\n\n## How we built it\n\nWe built the front end with React.js; the backend uses OpenCV with Python, together with NumPy.\n\n## Challenges we ran into\n\nWe had some trouble building the website with React.js because it was our first time working with it. 
We also learned Figma, which we weren't used to, as well as other design techniques on our group member's iPad.\n\n## Accomplishments that we're proud of\n\nWe are very proud that our project is able to detect your fingers pressing the shapes drawn on the board, and that we can play different sounds from different shapes. Also, we learned to use Figma and were able to create some amazing designs for our website.\n\n## What we learned\n\nWe learned a lot of webdev skills using React, JavaScript, Figma, and OpenCV with Python.\n\n## What's next for Paper Piano\n\nConnecting the UI from the front end to the backend. We hope that many users who like learning instruments at a very low cost will use our project on a daily basis!", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/find-my-tech-stw3p1", + "domain": "devpost.com", + "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWhen pursuing our first idea on Saturday, we realized that we didn't have or know of the dependencies, packages or libraries that we would need to build our application. Furthermore, we knew that programming everything without suitable dependencies would be infeasible during a 24-hour period. Therefore, we decided to build a development tool that lets programmers begin building their applications immediately by telling them what dependencies they would need to install based on their idea.\n\n## What it does\n\nfindmy.tech takes in search queries from a webapp and uses them to search GitHub for projects similar or related to the submitted query, by taking key words from it. It then searches every relevant repository for the dependencies that power it, and analyses the frequency at which certain dependencies are present. 
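That frequency analysis boils down to counting dependency occurrences across the matched repositories. A minimal sketch with toy data (the real tool pulls dependency lists via the GitHub Search API; the package names here are just examples):

```python
from collections import Counter

# Toy dependency lists, standing in for manifests pulled from the
# repositories matched by a search query.
repo_deps = [
    ["express", "mongoose", "dotenv"],
    ["express", "socket.io"],
    ["express", "mongoose"],
]

# Count how often each dependency appears across all matched repos;
# the most common ones are what the webapp graphs for the developer.
freq = Counter(dep for deps in repo_deps for dep in deps)
print(freq.most_common(2))  # [('express', 3), ('mongoose', 2)]
```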
Finally, it displays the data in a beautiful graph on the webapp, telling the developer which dependencies they would need to begin installing to start working on their big idea.\n\n## How we built it\n\nWe built it using Node.js, HTML, CSS and JavaScript. We also used the GitHub Search API.\n\n## Challenges we ran into\n\n* At one point, searching took over a minute.\n* Starvation, drowsiness, sleep-deprivation and fear of drinking more Red Bull.\n* We're not good enough at HTML and CSS.\n* Problems with the GitHub Search API.\n* Accidentally using a jQuery library that was deprecated, and having to restart.\n\n## Accomplishments that we're proud of\n\n* Making a very pretty website.\n* Providing practical results.\n\n## What we learned\n\n* A whole lot about web design.\n* Node.js\n* The endless documentation of the GitHub Search API\n\n## What's next for findmy.tech\n\n* Switching from using the GitHub API to using the Google Cloud Big Data API so we can have a larger dataset to search through.\n* Incorporating more languages into the tool, instead of just Node.js.\n* Finding a better way of processing languages -- the current method is a little time-consuming and not very precise.\n\n## Table Number\n\nOur Table Number is 34\n\n## .Tech Domain Name\n\nOur .tech domain name is findmy.tech. We weren't able to get the domain name from .tech, because our request was never fulfilled.\n\n## Built With\n\n* css\n* github-api\n* html\n* javascript\n* node.js", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/crunchletter", + "domain": "devpost.com", + "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nNewsletters are a great way for us to stay up to date about the topics we are interested in; however, the availability of these newsletters is limited by human curation.
In order to truly scale the availability of newsletters for all the topics that interest us, there has to be a system which can take the current data and generate useful snippets of information. We are interested in startup and VC topics, and thus we created CrunchLetter.\n\n## About\n\nCrunchLetter is a deep-learning-powered newsletter generator that uses real-time funding and startup data from Crunchbase to create newsletters in real time.\n\n## How we built it\n\nWe used Recurrent Neural Networks to train the model to learn the word and topic representations for our topic. We utilized the TechCrunch/WordPress API to get over 15,000 TechCrunch articles as well as 3 other tech newsletters as our data source (over 600MB of text data). We built a Bi-directional LSTM network on Keras with a TensorFlow backend to train our model. We also split our datasets into multiple chunks and used model parallelism to train them synchronously and later combined the learned weights to use on our main model. We computed over 212,000,000 parameters in total.\n\n## Challenges\n\nThis is a very challenging natural language generation problem. There were multiple constraints in terms of compute power, finding relevant data for the model, transfer learning and debugging the program in real-time. We solved a lot of these problems by parallelizing the process on 4 GPUs.\n\n## Accomplishments\n\nWe are absolutely delighted about the way we handled the persistent out-of-memory constraints on the GPU by using some techniques we devised for the model. We also made the network more robust and efficient to help us train the model faster.
We are also proud of the front-end for the app and the real-time API we created for the model.\n\n## What we learned\n\nEfficient NLP techniques, training a very deep network on a large dataset, troubleshooting the network without losing the weights, and transfer learning.\n\n## What's next for CrunchLetter\n\nCrunchLetter is not just for tech newsletters; the algorithm can be used for generation in numerous other fields like marketing copy, app store descriptions, social stories, and others. We plan to build a robust API for our model and use it for other use cases.\n\n## Built With\n\n* crunchbase\n* flask\n* keras\n* node.js\n* python\n* react\n* rest\n* rnn\n* techcrunch\n* tensorflow", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/built-with/league-of-legends", + "domain": "devpost.com", + "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nSort by:\n\nDo you lose all your League Games!!????? Well, introducing League of Legends Assist!
Using the OMI Device you can ask it questions about the game!!!!\n\nAccessib.lol seeks to augment the League of Legends in-game experience to provide validation to members of the LGBTQ+ community and other minority groups.\n\nUsers will have the capability to dynamically display summoner names, champion select picks/bans, summoner spells, as well as live Twitter feeds about the match automatically on Stream Labs/OBS.\n\nTwitter Bot to track Trieuloo's League of Legends games\n\nA tool to help league of legends players stop being noobs\n\nCalculates Average Housing Sales Based on a slew of Metrics\n\nLeague is a videogame with over 140 unique characters. Our app uses player data to suggest new characters to try out.\n\nSaving the free web (period).\n\nA League of Legends application that assists players in choosing the best possible champion every game.\n\nLoL stat tracking and gameplay analysis\n\nEnjoy League of Legends without even touching your keyboard!", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/bmatch", + "domain": "devpost.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## What it does\n\nBmatch is a chatbot built on Telegram and Facebook messenger platform that matches voluntary non paid blood donors to recipients based on certain criteria such as blood group and location of the recipient. It also provides additional details such as distance and estimated time of arrival to the recipient's destination. A user can also search for the nearest blood drive and blood bank to their location. All this is done with the help of here routing, search, and data layers. 
Wit AI was also used to integrate NLP into the chatbot which enabled us to fetch structured data from users' chat.\n\nHow it works\n\n* Telegram: Visit t.me/bmatchbot\n\nFacebook Messenger: Visit m.me/bmatchbot (Note: The Facebook messenger bot is currently not live because individual verification has been put to a hold by facebook. Send an email to bmatchbot@gmail.com to be added as a tester to the bot.)* Tap /start button (Telegram) or tap the Get Started button (Facebook messenger).\n* Registration: fill in the necessary information required such as phone number, gender, blood group/type, age, country and your current location. (The entered current location is queried on here's search API and results are displayed. You'd have to select from the list of displayed addresses)\n* Help: Send \"help\" to display the bot's menu.\n* Register as donor: Send \"register as a donor\" or tap => Donor > Register in the menu. Tap \"yes\" to proceed donor registration after reading the details of what it means.\n* Request for a donor: Send \"request for a donor\" or tap => Donor > Request in the menu. Tap \"yes\" to proceed donor request after reading the details of what it means.\n* Search for blood bank: Send \"search for a blood bank\" or tap => Blood Bank > Search in the menu. Fill in the location to fetch the nearest blood bank.\n* Search for blood drive: Send \"search for a blood drive\" or tap => Blood Drive > Search in the menu. Fill in the location to fetch the nearest blood drive.\n\nCore features\n\n* Register as a donor: It makes it possible for willing blood donors to register through the chatbot. The user would have to tap or send \"yes\" to continue with donor registration or \"no\" to cancel registration as a donor. 
A user can also opt-out or unregister as a blood donor to stop receiving blood donor requests.\n\nTest Phrase:\n\nTo register, try sending: Register as a donor\n\nTo opt-out, try sending: Opt out from donor registration* Request for a donor: Bmatch makes it possible for recipients to request for a blood donor. The recipient is matched based on the location and blood group of the available donor. Personal details such as name, phone number, gender, blood group, and the current location(health center) of the recipient would be sent to the matching blood donor. The current location of the recipient is queried with the here search API. This makes it easy for us to restrict users to input known locations on the here location service. When a match is found, additional details such as the distance and estimated time of arrival to the recipient is sent to the blood donor. The additional details(distance and eta) are fetched using the here routing API.\n\nTest Phrase:\n\nTo request, try sending: Request for a donor* Search for blood bank: Users can search for the nearest blood bank close to a selected location. The entered location is queried on the here search API to restrict users to input known locations on the here location service. Blood banks are fetched from three sources:\n\n* Here places data layer: Blood banks from Johannesburg, South Africa are fetched and updated into our database weekly using the data from the places layer.\n* Bmatch Partner: Blood banks are manually stored in our database upon request by users to do so. These blood banks are validated by bmatch before it is stored.\n* Here Search API (Browse): Blood banks are also fetched from the search API and filtered by category.\n\nTest Phrase:\n\nTo search, try sending: Find the nearest blood bank close to me.* Search for blood drive: The blood drive is created and stored in the database upon request by users. Users can now search for the nearest blood drive close to a selected location. 
The entered location is queried on the here search API to restrict users to input known locations on the here location service.\n\nTest Phrase:\n\nTo search, try sending: Find the nearest blood drive close to me.\nOther features\n\n* Enlist Blood Bank: Blood banks can contact us to enlist their blood bank on our platform.\n\nTest Phrase:\n\nTo enlist, try sending: Enlist blood bank.* Organize Blood Drive: A user can organize a blood drive in a selected location. To moderate creation of blood drives, interested individuals would have to contact us to create one.\n\nTest Phrase:\n\nTo organize, try sending: Organize a blood drive\n\n## How we built it\n\nTools Used:\n\n* Chatbot Channel: Telegram, Facebook Messenger.\n* Location Service: Here data layer, search, and route service.\n* Backend: Node JS/Sails JS\n* Database: Redis, MongoDB\n* Natural Language Processing: Wit AI\n* External APIs: Telegram Bot API, Facebook Messenger API, Wit AI API.\n\n## Challenges we ran into\n\nWe initially built the chatbot on the Facebook messenger platform but when we were about to go live, we noticed we couldn't because we needed to do the individual verification for the Facebook app. Due to covid19, Individual verification for apps is currently paused on the Facebook developers portal so we could not go live with the Facebook messenger platform. 
We decided to also build the chatbot on the Telegram platform.\n\n## What's next for Bmatch\n\n* Complete integration with the Facebook Messenger platform.\n* Restrict requests for blood donors to come from only recipients in known health centers (clinics, hospitals).", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/will-o-the-wisp", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nMoving 400+ CryptoKitties and 200+ ENS names from one account to another can be a huge inconvenience. The original motivation for this project was to create an Asset Store contract, which would own the tokens (and such), so they could still be managed, but were bucketed together, allowing easy transfer of all items at once.\n\nThe CREATE2 opcode is a new and interesting feature of Ethereum that we've wanted to try out, to see if this idea was possible.\n\nThis also allows arbitrary code to be used with the assets, since often, as an asset's contract may be updated over time, new features are enabled or new contracts that can interact with them are released.\n\n## What it does\n\nThe CREATE2 call uses the byte code of a contract to determine its address, and we were curious if we could get around this restriction, which would allow either:\n\n* An address to be \"allocated\" in advance, without knowledge of the byte code that would ultimately be placed there\n* Re-creating multiple different contracts at the same address (after a self-destruct) with wildly different byte code\n\nWe use a custom bootstrap (see ./scripts/deploy-springboard for the assembly) as the initCode for the CREATE2 call, which in turn loads the bytecode from another source (the caller) and installs it at an address.\n\nThere are 2 modes it can work from:\n\n* Any EOA (Externally Owned Account), in which case the address has complete control over
the contents of the Wisp\n* Any ENS Name, in which case the resolved address of the ENS name has complete control over the Wisp, and by changing the \"addr\" associated with an ENS name, the controlling address of a Wisp can be modified. Basically, it uses ENS as permission control.\n\n## How we built it\n\nWe built it using ethers.js, and with a lot of experimentation with custom byte code, trial and error (more emphasis on the errors).\n\nThe UI is currently a Command-Line Interface, but the library and contract are generic, so it could easily be migrated to any other form.\n\n## Challenges we ran into\n\nChallenge #1: Just getting the STATICCALL and byte code to copy properly from the caller. It required a lot of very tiny little steps to figure out where things went wrong, since when they did, everything just failed. Hand-crafting EVM machine code is quite error-prone; when you can't use a compiler, you really realize how wonderful they are.\n\nChallenge #2: Sending ether to a contract that is about to self-destruct completely obliterates it, as it evaporates from existence. We did not plan on that. So we had to fall back onto storing ether in the EOA of the Wisp owner.\n\n## Accomplishments that we're proud of\n\nIt works!! We are able to create and re-create arbitrary code at the same address repeatedly, and control assets at that address, emit events, send and receive ether, and call external functions on any other contract as the Wisp.\n\n## What we learned\n\nA lot more about initCode and how the Solidity compiler prepares and writes out the EVM code for constructors.\n\n## What's next for Will-o-the-Wisp\n\nOptimizations; the existing bootstrap is very much a proof-of-concept.
There are certain errors it should attempt to detect (and throw), and with a little effort the small amount of boiler-plate code required in a Wisp can be entirely eliminated.\n\nWe also need to wrap it up in a nicer UI, and will likely include it as part of the Firefly Multi-Sig.\n\nThere are also a few other techniques we would like to experiment with for managing ether rather than moving it back to the controlling EOA; one idea is to \"leap-frog\" it or \"pay-it-forward\". The idea is that at the end of execution, all ether is moved back into the Springboard; however, at the beginning of each user interaction, that user pays the gas for the previous user, to move their ether back into their Wisp. The Wisp can safely hold ether, it just cannot receive it before its self-destruct.\n\n## Built With\n\n* blockchain\n* create2\n* ethereum\n* ethers\n* javascript\n* solidity", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/unsupervised-representation-learning-for-contextual-bandits", + "domain": "devpost.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nRecent advances in deep unsupervised learning allow for learning concise yet rich representations of images, audio, natural language, and more. Integrating these representations into sequential decision-making paradigms such as reinforcement learning is an essential step to creating general-purpose agents that can robustly incorporate diverse unstructured sources of data. We consider the contextual bandit setting as a tractable and real-world applicable version of reinforcement learning.\n\n## What it does\n\nWe base our work on recent work from Google Brain: Deep Bayesian Bandits Showdown.
This paper (and accompanying TensorFlow code) implements a simple MLP-based method for learning contexts from hand-crafted features via contextual bandit feedback.\n\nOur contribution: we extended this work to include a novel unsupervised representation learning step. Specifically, we pre-train an unsupervised model, and use the learned embedding as an input to the context encoding MLP. We re-implemented contextual bandit algorithms with deep Thompson sampling in PyTorch, and test our algorithm on several tasks, including the Mushroom dataset, MNIST, and polarized Yelp reviews.\n\nWe confirmed that our code works properly by testing on the Edible Mushroom dataset. For the image MNIST dataset and text Yelp dataset, we trained a variational autoencoder and obtained averaged BERT embeddings respectively. We load and process our data using the `PyTorch Hub` and `Torchvision Datasets` APIs.\n\n## How we built it\n\nWe ported a contextual bandit model from Tensorflow to Pytorch. Then, we took various sources of real data (MNIST, Yelp 1-2 reviews) and built unsupervised models to contextualize such data into a low-dimension context vector. Finally, we trained the contextual bandit model on the generated contexts and compared regret to its baselines.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/patientport", + "domain": "devpost.com", + "file_source": "part-00275-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDiscord:\n\n* spicedGG#1470\n* jeremy#1472\n\n## 💡 Inspiration\n\nAs healthcare is continuing to be more interconnected and advanced, patients and healthcare resources will always have to worry about data breaches and the misuses of private information. 
While healthcare facilities move their databases to third-party providers (Amazon, Google, Microsoft), patients become further distanced from accessing their own medical record history, and the complete infrastructure of healthcare networks is significantly at risk and threatened by malicious actors. Even a single damaging attack on a centralized storage solution can end up exposing a great deal of sensitive data.\n\nTo combat this risk, we created Patientport as a decentralized and secure solution for patients to easily view the requests for their medical records and take action on them.\n\n## 💻 What it does\n\nPatientport is a decentralized, secure, and open medical record solution. It is built on the Ethereum blockchain and securely stores all of your medical record requests, responses, and exchanges through smart contracts. Your medical data is encrypted and stored on the blockchain.\n\nBy accessing the powerful web application online through patientport.tech, the patient can gain access to all these features.\n\nFirst, on the website, the patient authenticates to the blockchain via MetaMask, and provides the contract address that was provided to them by their primary care provider.\n\nOnce they complete these two steps, a user has the ability to view all requests made about their medical record by viewing their “patientport” smart contract that is stored on the blockchain.\n\nFor demo purposes, the instance of the Ethereum blockchain that the application connects to is hosted locally.\n\nHowever, anyone can compile and deploy the smart contracts on the Ethereum mainnet and connect to our web app.\n\n## ⚙️ How we built it\n\n| Application | Purpose |\n| --- | --- |\n| React, React Router, Chakra UI | Front-end web application |\n| Ethers, Solidity, MetaMask | Blockchain, Smart contracts |\n| Netlify | Hosting |\n| Figma, undraw.co | Design |\n\n## 🧠 Challenges we ran into\n\n* Implementation of blockchain and smart contracts was very difficult,
especially since the web3.js API was incompatible with the latest version of React, so we had to switch to a new, unfamiliar library, ethers.\n* We ran into many bugs and unfamiliar behavior when coding the smart contracts with Solidity due to our lack of experience with it.\n* Despite our goals and aspirations for the project, we had to settle for building a viable product quickly within the timeframe.\n\n## 🏅 Accomplishments that we're proud of\n\n* Implementing a working and functioning prototype of our idea\n* Designing and developing a minimalist and clean user interface through a new UI library and reusable components with an integrated design\n* Working closely with Solidity and MetaMask to make an application that interfaces directly with the Ethereum blockchain\n* Creating and deploying smart contracts that communicate with each other and store patient data securely\n\n## 📖 What we learned\n\n* How to work with the blockchain and smart contracts to make decentralized transactions that can accurately record and encrypt/decrypt transactions\n* How to work together and collaborate with developers in a remote environment via GitHub\n* How to use React to develop a fully-featured web application that users can access and interact with\n\n## 🚀 What's next for patientport\n\n* Implementing more features, data, and information into patientport via a more robust smart contract and blockchain connections\n* Developing a solution for medical professionals to handle their patients’ data with patientport through a simplified interface of the blockchain wallet\n\n## Built With\n\n* blockchain\n* chakra-ui\n* ethereum\n* ethers.js\n* figma\n* git\n* github\n* javascript\n* metamask\n* netlify\n* react\n* react-router\n* smart-contracts\n* solidity\n* web3", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/cipherpad", + "domain": "devpost.com", + "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate:
\nCategories: \nTags: \n\n## Inspiration\n\nI've used Wolfram Alpha in the past, and the technology was fascinating to me. The computational engine was extremely powerful and its database provided me with many answers and aided me in my academic pursuits. I wanted to build something that utilized this technology and impacted the student community by helping them acquire more knowledge quicker and collaborate with this tool.\n\n## What it does\n\nCipherPad is a computational knowledge engine similar to Wolfram Alpha, but with handwriting-recognition and speech-recognition technology. CipherPad will interpret anything you write down directly on the tablet or phone or anything you say out loud, and it will instantly give you results and information on the subject. It's very powerful in terms of accepting various types of input. Additionally, students can share these questions, output, and answer to their friends through a Questions Feed (built in the app), and message or email any of their contacts.\n\n## How I built it\n\nI built the computational engine by incorporating Wolfram Alpha's database, speech-recognition technology with SpeechKit, and the handwriting-recognition technology with MyScript API. I also built the app backend through REST API calls and Parse for the social collaboration and sharing.\n\n## Challenges I ran into\n\nI had many challenges with interpreting the XML results I received from the REST API calls and displaying them properly and neatly on the UI. I had to encrypt, encode, convert, and interpret a huge variety of information in order to display everything on the UI. The backend, overall, was fairly tough to build.\n\n## Accomplishments that I'm proud of\n\nI'm proud that I was able to build an app that incorporates so many innovative and unique technologies, especially the computation-knowledge engine and handwriting-recognition technology. 
I was also able to integrate this technology into a useful product that students will find extremely useful in their everyday lives. Won 2nd place at MakeHacks 2015\n\n## What I learned\n\nI learned a lot about how databases work, using many APIs from speech-recognition to handwriting-recognition technology, navigation among different scenes, app design and flow, and integration of all these different technologies.\n\n## What's next for CipherPad\n\nI can continue to improve the handwriting-recognition technology in CipherPad and extend the computational knowledge engine's capabilities.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/audiodigits", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nAny company can leverage data science and machine learning to add value but few succeed. Perhaps this is because decision makers have a misconception that fancy algorithms are self-contained solutions; in goes some data and out comes an answer. It shouldn't be like that. Data science and machine learning techniques are tools that must be understood to be used effectively. If companies better understand these tools then they can make decisions that leverage the value of data science and machine learning.\n\n## What it does\n\nAudio Digits is an interactive speech recognition web app that teaches you to train a machine learning model and recognize spoken digits.\n\n## How we built it\n\nAudioDigits was built in 1 week by 2 University of California San Diego students who wanted their previous machine learning python research to be more accessible.\n\n## Challenges we ran into\n\nThere have been many implementation challenges up to this point including a lack of knowledge of javascript, html, and css. 
There was also a point where we followed a tutorial that included Docker, and learning those basics was more than we had asked for. A few design challenges persist. For the moment, an inconsistent audio sample rate, audio recording length, and a small training set for the machine learning model prevent the first iteration of this product from predicting well. Moreover, the model’s prediction is sensitive to audio sample rate and the choice of software used for recording: Audacity vs. recorder.js\n\n## Accomplishments that we're proud of\n\nAlec and Mingcan are proud of completing their goal and delivering an immersive web app that demonstrates speech recognition machine learning. They are also proud to have learned about audio signal processing techniques for speech recognition.\n\n## What we learned\n\nAlec and Mingcan learned that web development is challenging because a developer must optimize multiple pipelines while also improving user experience, and sometimes these two are directly opposed.\n\n## What's next for AudioDigits\n\nFuture feature improvements include: clipping the audio files only where there is sound (right now sound capture isn’t immediate), allowing user-recorded data to be used as training points, and using librosa to compute the MFCCs.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/mo-ai", + "domain": "devpost.com", + "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nHomepage https://convexai.vercel.app\n\nDeploy convex with ❤️ TiDB & Vercel\n\n### Real Demo Code\n\n# KEY FEATURE !!!\n\n* GENERATE Productivity App, not beautiful junk\n* One-click Deployment (based on Vercel & TiDB Cloud).\n* AI Inspiration, Refining Ideas.\n* Just input your idea to Convex\n* One-click AI-generated Full Project.\n* Auto-publish changes to TiDB Cloud / Vercel\n* One click to Generate Doc / NextJS API & Web / TiDB Data Service / Chatgpt Plugin ....\n\n## 
Inspiration\n\nHere's our answer:\n\nPeople don't need to master programming; they need to focus on creative thinking.\n\nDuring this Hackathon, we validated two pioneering features of this concept.\n\n* Engineering\n* Multimodality\n\nTo make sure AI creates a Productivity App, NOT beautiful junk\n\n## What it does\n\nOne-click deployment of our seed app.\n\nInput your idea.\n\nWait for AI to generate everything.\n\n* doc\n* test\n* api\n* page\n* chatgpt plugin\n* tidb data service\n* ...\n\nTiDB Cloud and Vercel will handle all devops work!\n\n## How we built it\n\n* NextJS front end / API\n* Prisma ORM\n* TiDB Serverless Database\n* TiDB Data Service API\n* OpenAI GPT-4, Vercel AI SDK\n* Vercel deploy\n\n## Challenges we ran into\n\n> \n\nSee our TiDB + Vercel Deploy button? It's magic. Guess how we found and built it?\n\nTo generate controllable and expected projects without falling into the illusion of producing mere toys, we have implemented several approaches:\n\nLeveraging cutting-edge research: We incorporate insights from the latest academic papers, including techniques like \"Chain-of-thought\" and \"ReACT,\" to ensure our AI remains grounded in real-world applications.\n\nAI Agent Pair Programming: We have introduced a novel method where two AI agents engage in a dialogue, providing different perspectives on the task. This iterative process helps generate stable and reliable code.\n\nE2E Testing as a Reward: During training, we use End-to-End (E2E) testing as a rewarding mechanism. Multiple iterations are performed to generate projects that pass the user's project proposal-based tests.
This significantly enhances the reliability of the resulting projects.\n\nBy combining these methods, we aim to produce AI-generated projects that are controllable, dependable, and align with user expectations, going beyond the limitations of mere toys.\n\n### We have also optimized the TiDB Data Service\n\nBy creating our custom OpenAPI to Data Service Config tool, we have successfully enabled AI to learn how to write TiDB Data Service configurations.\n\n### We also found 10+ bugs in the TiDB Vercel integration and TiDB Data Service\n\n### Like our 2022 TiDB Hackathon project mirror*, we are preparing a paper, hoping some conf can accept us again :)\n\n> \n\nTwo weekends in a row of almost all-nighters, so tired; please automatically ignore the grammar errors and typos\n\n## Built With\n\n* nextjs\n* tidb\n* tidbdataservice\n* vercel\n* vercelai", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/chroneus", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe wanted to make a smarter calendar app that's tailored towards our needs.\n\n## What it does\n\nChroneus is a smart calendar app that organizes your day and informs you about upcoming events. It allows you to discover nearby events and events that your friends are attending, integrating them together into one complete calendar.\n\n## How we built it\n\nWe used the Ionic Framework to build a mobile app that is optimized for both iOS and Android devices. We also built a backend REST API for our app to communicate with.\n\n## Challenges we ran into\n\nIt was our first time trying to build a mobile app, so we spent a majority of the time debugging and figuring out how to make stuff work with a phone.
Also, the fact that we built our own API for our app to communicate with was a challenge in itself.\n\n## Accomplishments that we're proud of\n\nWe're proud of making a mobile app that people can use in their everyday lives. This was our first time making a mobile hack and an API to go with it, so we're definitely proud of that too.\n\n## What we learned\n\nWe learned how to integrate our own API with the app that we built.\n\n## What's next for Chroneus\n\nDefinitely working on adding additional features and integrating more services for people to interact with.\n\n## Built With\n\n* angular.js\n* flask\n* ionic-framework\n* mysql\n* python", "content_format": "markdown" }, { "url": "https://devpost.com/software/pmababag", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nWe wanted to design a reliable method for predicting which stocks will be successful and to help investors make more educated decisions when buying/selling stocks.\n\n## What it does\n\nThis web application prompts the user to enter the ticker for a stock they would like to know more about. The website displays sentiment information about the stock from Twitter posts, along with a graph showing the distribution of such sentiment information. There is also a recommendation on whether to buy the stock based on the given sentiment.\n\n## How we built it\n\nWe used HTML, CSS, and D3.js in the frontend to give the user a clean experience. We used Flask in the backend to provide the data.\n\n## Challenges we ran into\n\nWe had no previous experience in networking, so it was difficult to set up the Flask server and send the predictions from the machine learning model back to the browser without running into API restrictions. 
This led to us being unable to use a neural network, as we had previously planned, to predict whether to buy, sell, or hold the stock.\n\n## Accomplishments that we're proud of\n\nSuccessfully connecting the backend to the frontend.\n\n## What we learned\n\nWe learned how to use Flask, Keras, JSON, and D3.js, and how to connect these elements together to make a working application.\n\n## What's next for StockEye\n\nWe plan to analyze more sources for sentiment data, such as social media platforms. We also plan to incorporate a neural network to give users better stock predictions, along with a wider set of training data, something we significantly lacked before.\n\n## Built With\n\n* javascript\n* python\n* web-application", "content_format": "markdown" }, { "url": "https://devpost.com/mufeez-amjad/followers", "domain": "devpost.com", "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nDeveloper and Designer\n\nHello, I'm Michael. I like to paint and listen to music in my spare time. By profession, I'm a financial consultant.\n\nHello, I'm Kenny from the USA. I love doing creative things. I'm very passionate about my work. I like reading books.\n\nDeveloper and a lot more :)\n\nI work at the Fit My Money Company. 
Our website was created by professionals and can positively affect your financial decisions.\n\nI’m a self-taught full-stack software developer with the skills to deliver effective solutions that meet users' requirements.\n\nSWE @ Warp | Hack the North '19-'22 Backend Lead/Infra | StarterHacks '19 Organizer | JAMHacks '17 '18 Co-founder | Waterloo BCS | GTech MCS\n\nComputer Science at the University of Waterloo", "content_format": "markdown" }, { "url": "https://devpost.com/software/studyfire", "domain": "devpost.com", "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThe inspiration for this project was to utilize the Johns Hopkins APIs to create a course list. Students can then list the courses they are signed up for and contact other students in the same course to form study groups.\n\n## What it does\n\nHelps students organize study groups and collaborate.\n\n## How I built it\n\nBuilt using Node.js, Google Firebase, and the Hopkins Hub APIs\n\n## Challenges I ran into\n\nIssues regarding parsing of data and data types from Firestore.\n\n## Accomplishments that I'm proud of\n\nBeautiful UI, integration with the JHU Hub, course searching, and use of a database.\n\n## What I learned\n\nLearned about databases, fetching of data, parsing of data, and integration into a single webapp.\n\n## What's next for StudyFire\n\nFurther work to enable the full feature set for optimal simplicity and functionality", "content_format": "markdown" }, { "url": "https://devpost.com/software/weblock-kwh1bi", "domain": "devpost.com", "file_source": "part-00608-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nParents often leave their phones in the hands of their children. 
Life can be hectic and thus it is not always possible to supervise what they are up to.\n\n## What it does\n\nTherefore we decided to build an app that makes use of the Microsoft Cognitive Services Face API in order to determine the age of the user. Consequently we can moderate the content that the user is able to access in the app browser. If they are under 18 then certain websites such as those affiliated with gambling (BetFred, Ladbrokes) will be blocked.\n\n## How we built it\n\nBy using the Microsoft Cognitive Services Face API provided in conjunction with Android Studio.\n\n## Challenges I ran into\n\nTrying to use the camera alongside the Face API. Additionally merging all the files was a bit awkward and required a workaround in Android Studio.\n\n## Accomplishments that I'm proud of\n\nCompleting the project on time and accomplishing what we set out to do.\n\n## What we learned\n\nDeveloping software in a team. Implementing APIs.\n\n## What's next for WebLock\n\nIdeally we would like it to be integrated into facial recognition authentication software built into the OS. 
Subsequently, content across the entire system would be moderated, such as hiding particular apps and blocking certain domains from other browsers rather than just the in-app browser.\n\n## Built With\n\n* java\n* microsoft-cognitive-services-api", "content_format": "markdown" }, { "url": "https://devpost.com/software/doublekick", "domain": "devpost.com", "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n# DoubleKick\n\nQITCOM Appathon Hack\n\n==========\n\nThe whole application is developed using Django.\n\nFor now, you only need a working installation of Python to run the app.\n\n### How to run the app\n\nExecute the command below inside the doublekick folder\n\n `python manage.py runserver` \nand the app will run, out of the box.\n\n==========\n\nMore info to come!", "content_format": "markdown" }, { "url": "https://devpost.com/software/math-genie", "domain": "devpost.com", "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nThe inspiration for this skill was my 10-year-old daughter. We brainstormed how Alexa could help her and she came up with the idea to create a skill to quiz her on her multiplication facts. We decided to name the skill \"Math Genie\" because it is a fun and engaging name for young kids.\n\n## What it does\n\nKids need a lot of practice when learning their multiplication facts. Flashcards are SO outdated! Math Genie grants kids' wishes by helping them learn their multiplication facts. Kids have fun by asking the genie to tell them a random multiplication fact or by asking for the answer to a specific multiplication problem! The genie will even help kids study by quizzing them on certain multiplication facts. During the quiz, after guessing incorrectly three times, the genie reveals the answer. 
Kids will have fun studying!\n\n## How I built it\n\nI built this skill using a Lambda written in Python.\n\n## Challenges I ran into\n\nI am new to Python so the syntax of the language was a bit challenging. I also had never used a skill that saved session variables between invocations so figuring this out was challenging (and fun).\n\n## Accomplishments that I'm proud of\n\nI am proud that my daughter and I partnered together to create a skill. She came up with the idea and then I implemented it. She helped a lot with testing and identifying bugs.\n\n## What I learned\n\nI learned a lot about session management and storing values between calls. I never knew this was possible, and I am glad that I learned more about it. All future skills I develop can now make use of this nice feature.\n\n## What's next for Math Genie\n\nMy daughter will present Math Genie at her school with the intent that kids will use it to help them learn.\n\n## Built With\n\n* lambda\n* python", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/christinakayastha/likes", + "domain": "devpost.com", + "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nPower up your most important meetings with Super Scrum! 
This is a platform of Jira add-ons to facilitate and streamline Agile team ceremonies\n\nEnergy Bee helps you set goals to reduce energy usage and cost.\n\nEVA uses natural language processing and machine learning to automate on-the-go clinical workflow with athenahealth.", "content_format": "markdown" }, { "url": "https://devpost.com/software/monsoonview", "domain": "devpost.com", "file_source": "part-00796-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n `Problem Statement` \nOne of the major problems in the city of Mumbai during the monsoon is the heavy pedestrian, vehicular and public transport traffic. These traffic conditions are generally worse when the amount of rainfall received is very high. A common occurrence is that citizens generally like to take their cars back home no matter the road and water conditions, without worrying about the condition of the car or, more importantly, their safety. One of the prime reasons is the lack of real-time traffic information and advance information on road conditions, traffic diversions, etc.\n\nThus, to overcome this, I propose a solution that crowdsources information from vehicles and non-driving citizenry in the city.\n\n `Target User` \nThe main users of the application are the general public and the civic bodies in the city. The data generated from vehicles and crowdsourcing, combined with data from the various sensors at the disposal of the civic authorities, enables them to get a micro-level view of the effect of the monsoon on the movement of citizens on the streets of Mumbai.\n\nThe general public has access to crowdsourced information about water levels, safe parking spots, and car breakdowns along their intended route of travel. 
In the current situation, the application cannot be used while driving, as the safe driving mode has not been tested thoroughly.\n\n `Data Generated` \nThe application not only consumes public data but also makes crowdsourced data from vehicles and the general public accessible to the public and civic authorities.\n\nData From Vehicle\n\nThis data is read from the vehicle interfaces and uploaded to the backend to give real-time updates on the conditions of driving through the rain.\n\n* Water Level (simulated, but demoed through use of a Bosch BMP180 sensor with an Arduino; in a car setup, attach it to the base of the car at the front)\n* Vehicle Speed (OpenXC)\n* Wiper Speed (simulated, assuming a smart wiper is connected)\n* Parked Cars (OpenXC)\n* Car Breakdowns (interpreted through OpenXC data)\n* Latitude and Longitude (OpenXC data)\n* Water Level (simulated, by connecting a barometric pressure sensor to the base of the car)\n\nCrowdsourced Data From Pedestrians\n\nPedestrians can use the Android app to answer these questions:\n\n* Approximate water level\n* Any car breakdowns nearby\n* Safe parking spots nearby\n\nIn a live system, it is possible to monitor (not implemented):\n\n* intake_air_temperature\n* intake_manifold_pressure\n* mass_airflow\n\nThese can then be used in conjunction with the water level values to deduce the chances of the car taking in water and possibly stalling.\n\n `Application Developed` \nAndroid App\n\nThere are 3 main UIs in the Android app:\n\n* City View - This is a map interface that displays all the generated data to make it easy to read and understand the situation across the city. It displays information related to parking, water level, car breakdowns, vehicle speed, etc. It gives drivers, pedestrians and city officials an on-the-ground view of the effect of the monsoon on traffic and general safety. 
This information is overlayed with information from Mumbai such as chances of flooding in different areas, traffic diversions etc.\n\n---Vehicle Interface to Android - This UI displays the information gathered from the Vehicle. This part has been tested using OpenXC-Enabler application from the open-xc library on github. The data is conveyed to the Backend every 2 minutes. All the connection between the interface and the android device is assumed to have taken place when the vehicle is stationary. There is no functionality to use when the vehicle is in motion.\n\n--- Crowdsourcing - This UI asks users, not driving to provide information in their neighbourhood, such as safe parking, car breakdowns, water levels. This information is relayed to the backend, to improve the quality of the real-time data on the Monsoon.\n\nApp Engine Backend(Google App Engine)\n\nThe backend provides interfaces to write data from vehicles and users. This data can be then queried to find relevant information by area or to get general information from cars or crowdsourced information. Car owner can access their own vehicle data as well. It stores information in index, that can be queried for filtering information by location, date etc.", + "content_format": "markdown" + }, + { + "url": "https://devpost.com/software/supremeuoft", + "domain": "devpost.com", + "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nCreated by: Willa Kong, Stephen Lin, Mark Branton, Azin Taqizadeh, with Team ID 0B9B6\n\n## SupremeUofT - Spellerly\n\nAutocorrect for handwriting on a tablet\n\nThese days, many students have been converting from traditional pen and paper writing to tablet writing. However, in this digital age, the benefits of autocorrect have not yet caught up to this innovative note-taking. 
So, using the power of OCR and NLP, we have created Spellerly, the tool to autocorrect handwriting.\n\nWe learned about machine learning as we had a chance to work with NLP and train models for it. In addition, we were able to learn more about messaging services similar to well-known services such as Apache Kafka and IBM MQ. We also worked with hardware and were able to push the bounds of the capabilities of the Raspberry Pi using our current knowledge.\n\nThis was built using a Raspberry Pi and a display to simulate a tablet a student may use. Then, using the message broker Solace, the Raspberry Pi would send the note that was processed with OCR to our Python service, which would then take the given text and determine if there are any spelling mistakes that need to be corrected. In particular, it provides a suggested replacement using a custom-trained TextBlob model that would find the closest words matching the intended meaning.\n\nThe challenges we faced included the viability of our original ideas, hardware issues, as well as connection issues with the message broker. In the beginning, we had a much larger scope for our original idea, but as time progressed, we realized we had to narrow the scope as it took too long to debug some of our problems. In terms of the hardware, we had a tough time connecting to our LCD screen, but in the end, we borrowed another group's screen that we knew for sure worked. Additionally, the eduroam network was not connecting to our Raspberry Pi and, as a result, we had to go and purchase an ethernet cable and use a hotspot for its WiFi. 
Finally, as Solace is a budding message broker, not many of its problems have been explored, and thus the issues we faced took some time to identify and solve.\n\n## Built With\n\n* python\n* raspberry-pi\n* solace\n* textblob", "content_format": "markdown" }, { "url": "https://devpost.com/Sweeneytr", "domain": "devpost.com", "file_source": "part-00070-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nAn aspiring full-stack developer, currently an undergraduate at Northeastern University. Looking to improve in web & mobile development.\n\nView Boston in a whole new way with the power of open source data\n\nHelps you to get things done...with a little bit of motivation.\n\n3D printing recycling center\n\nFind the lowest priced gas station for you with GetMeGas, a sleek non-invasive app running over Google Maps.", "content_format": "markdown" }, { "url": "https://devpost.com/software/pinbike-bike-pooling", "domain": "devpost.com", "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nSoutheast Asian cities are always packed with motorbikes and scooters. Thousands of people commute to CBDs every day, just to find themselves stranded in traffic jams. 
Yet, most of them ride by themselves, not carrying any passenger. That is a waste, in terms of room for transportation, fuel cost, and pollution. If many of them switched to bike pooling, either being a passenger or taking one, the number of bikes on the roads would be significantly reduced, which would ease traffic flow, save money, and help the environment.\n\n## What it does\n\nOur app allows bike riders to share their rides, and passengers to book them.\n\n## How we built it\n\nAt the core of the app is the trip-matching algorithm. When someone posts a trip (either as a rider offering a ride, or a passenger requesting one), we save it to our database. Then, when someone else posts another trip that can be matched with the already-saved trip, the compatible trips are shown. An easier case is that users just go to the app, view the already-posted trips, and send direct requests to the trip creators.\n\nAt the back-end, we have a server that records all trips to do the matching. Another important component (and relevant to this competition) is the chat history between two users (more on that later).\n\nAt the front-end, we have already built an Android app which allows users to post their trips and book other people's trips. The iOS app is being built, and will be released in a month's time.\n\n## Challenges we ran into\n\nThe first challenge is the trip-matching algorithm. We have to consider multiple factors, such as:\n\n* Waypoints of the trips, especially if a trip is just part of another one\n* Time: a rider may have departed quite a while ago but can still pick up a passenger en route, because his trip is long enough that he hasn't reached the passenger's departure point yet. In that case, we have to come up with a reasonable travel time estimation algorithm.\n\nThe second challenge is the communication problem. It is NOT a technical problem, but rather a User Experience (UX) problem. 
From our field tests with early users, they DO NOT want to disclose their phone numbers to the public, even to potential fellow riders, before they make their trips. So we decided to show phone numbers only after the two sides agree to pair with each other. But then, how do they communicate before they agree to pair? We need to allow them to chat; that's the simplest but most effective solution. When we tested the chat function of our app, users were so delighted and felt more eager to use the app.\n\nThen we faced the third challenge: since MMX doesn't have chat history, delivered messages are not retained, so when users come back to the conversation, they can't see the history. We had to build that function ourselves, by saving all the messages that belong to chat participants and retrieving them when needed.\n\n## Accomplishments that we are proud of\n\n* Built a well-functioning app in a short time (3 months).\n* Integrated the Magnet chat function into our app, and built the message history function. All in less than 3 weeks.\n\n## What we learned\n\nWe shouldn't build everything from scratch; that takes a hell of a lot of time. Thanks to the MMX server and SDK, our job was much easier. At the end of the day, bringing a seamless experience to our users is more important than experimenting with all the stuff we are not strong at.\n\n## What's next for PinBike, bike-pooling:\n\n* The iOS version is coming out in late November\n* Find more users on both sides of the market\n* Expand geographically; currently we are building solutions for Vietnamese cities (Ho Chi Minh, Hanoi, Danang), so the list of popular locations is restricted to those cities. 
That said, the functionality should work everywhere now.", "content_format": "markdown" }, { "url": "https://devpost.com/sirhype/likes", "domain": "devpost.com", "file_source": "part-00847-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nThis portfolio is private.", "content_format": "markdown" }, { "url": "https://devpost.com/software/competition-assessment-and-market-analysis", "domain": "devpost.com", "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nCompetition Assessment - Market Sentiment\n\nWe may consider our app as an extension to the existing Yelp/Zomato applications. While Yelp/Zomato is quite useful for people who want to choose a restaurant for a meal, our app will be useful for restaurant owners who would like to improve their businesses. Our app uses Zomato APIs to fetch real-time data from their servers and analyze it against our restaurant’s data in order to generate a comprehensive market report. With this, we can see how many restaurants there are in the neighborhood that serve the same cuisine, what they charge, and what the traffic is like; on top of all this data, we can even perform analysis comparing our own data. 
We are taking it even further by pulling out the population stats and the spread of various races in the neighborhood. We will go as deep as ZIP codes, counties, and tracts to make the population stats as precise as possible. We will also build a favorability matrix between population and cuisines (e.g., Asians prefer Chinese and Indian, whereas Hispanics prefer Mexican, Spanish, and Italian, etc.). Moreover, we will also pull out the reviews for each competing restaurant and put them through the TextRazor and Alchemy APIs to understand the emotions and sentiments of a particular review. We will also identify the good and bad points of a restaurant based on this (e.g., Golden Dragon - orange chicken is heavenly, lamb is good, desserts are bad, etc.)\n\nIdea Abstract by Manideep Bollu - 4/15/2016", "content_format": "markdown" }, { "url": "https://devpost.com/software/board-to-chatbot", "domain": "devpost.com", "file_source": "part-00145-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\nI noticed Monday is launching a marketplace, and I saw in Formito a great opportunity to create a marketplace app that helps Monday users create a different interface, other than traditional forms, for collecting data for their board. I believe this could greatly help marketers or businesses who have connections with customers.\n\n## What it does\n\nThis app lets you create a chatbot from your board, share it with your audiences, and see responses come in as tasks on your board.\n\n## How I built it\n\nI started by reading your documentation and playing with your GraphQL endpoint. Then I read your getting started guide for creating an app and started coding! The integration in Formito for appending a task to a board wasn’t that hard, and I built it in a day. But the marketplace app and conversion tool took me about two weeks, because Formito hadn’t had this feature before. 
So, I had to build React components, create an API endpoint, create a conversion toolkit, etc. But it was definitely worth it.\n\n## What I learned\n\nThere are always learning points when you work with a new tool or API. Monday inspired me to extend Formito and make connections with other tools to eventually build a better experience for users through a one-click converter. Also, by studying your column types, I figured out that I need to extend Formito to support more complex data types, which I have already started.\n\n## What's next for Formito\n\nI would like to extend the editor in the app so users can have more customization options before converting the board. Also, if I see people are interested in this app, I will add support for more column types.", "content_format": "markdown" }, { "url": "https://devpost.com/software/leonard-bones-mccoy", "domain": "devpost.com", "file_source": "part-00649-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Inspiration\n\n* few doctors\n* many people visit doctors for very minor issues\n* both patients and doctors spend a lot of time on issues that could've been treated at home\n\n=> provide basic health services using a Google Home voice assistant, by giving proper tips and treatments for issues\n\n## What it does\n\nIt analyses the user's minor health issues and provides high-quality tips to help cure these issues.\n\n## How we built it\n\nWith Python, Google Cloud Platform and love.\n\n## Challenges we ran into\n\nGetting the bot running on the Google Home and continuously deploying working versions.\n\n## Accomplishments that I'm proud of\n\nParsing the NHS page with a custom parsing approach: structuring by tags.\n\n## What I learned\n\nDialogflow has a V2 API, apparently.\n\n## What's next for Leonard \"Bones\" McCoy\n\nProviding a basic health assistant on other surfaces, e.g. 
the Google Assistant app.", "content_format": "markdown" }, { "url": "https://devpost.com/karthikvaddi", "domain": "devpost.com", "file_source": "part-00035-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nDevpost\n\nParticipate in our public hackathons\n\nDevpost for Teams\n\nAccess your company's private hackathons\n\nGrow your developer ecosystem and promote your platform\n\nDrive innovation, collaboration, and retention within your organization\n\nBy use case\n\nBlog\n\nInsights into hackathon planning and participation\n\nCustomer stories\n\nInspiration from peers and other industry leaders\n\nPlanning guides\n\nBest practices for planning online and in-person hackathons\n\nWebinars & events\n\nUpcoming events and on-demand recordings\n\nHelp desk\n\nCommon questions and support documentation\n\nI am very passionate about Web Development, and strive to better myself as a developer.\n\nCryptowatch is a crypto price tracking application which will send SMS and news to users through WhatsApp and SMS", "content_format": "markdown" } ]