| anchor (string, 159-16.8k chars) | positive (string, 184-16.2k chars) | negative (string, 167-16.2k chars) | anchor_status (3 classes) |
|---|---|---|---|
## Inspiration
As women ourselves, we have always been aware that there are unfortunately additional measures we have to take in order to stay safe in public. Recently, we have seen videos emerge online that individuals can play in these situations, prompting them to engage in conversation with a “friend” on the other side. We saw that the idea was extremely helpful to so many people around the world, and wanted to use the features of voice assistants to add more convenience and versatility to the concept.
## What it does
Safety Buddy is an Alexa Skill that simulates a conversation with the user, creating the illusion that there is somebody on the other end of the line who is aware of the user’s situation. It deliberately states that the user’s location is being shared and continues to converse with the user until they are in a safe location and can stop the skill.
## How I built it
We built Safety Buddy on the Alexa Developer Console, hosted the audio files on AWS S3, and used the Twilio messaging API to send a text message to the user. On the front end, we created intents to capture what the user said and connected those to the back end, where we used JavaScript to handle each intent.
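Below is a minimal sketch of the text-message step using Twilio's Python helper library. Our actual handler is JavaScript, so this is only an illustrative analogue; the credentials and phone numbers are placeholders.

```python
# Illustrative Python analogue of the Twilio call (our real handler is JavaScript).
# Credentials and phone numbers are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder Twilio account SID
AUTH_TOKEN = "your_auth_token"                       # placeholder auth token

def send_safety_text(user_number: str, twilio_number: str) -> str:
    """Send the reassurance text and return the Twilio message SID."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        to=user_number,
        from_=twilio_number,
        body="Safety Buddy here - your location is being shared with me. Stay on the line.",
    )
    return message.sid

# Example: send_safety_text("+15551234567", "+15557654321")
```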
## Challenges I ran into
While trying to add additional features to the skill, we had Alexa send a text message to the user, which then interrupted the audio that was playing. With the help of a mentor, we were able to handle the asynchronous events.
## Accomplishments that I'm proud of
We are proud of building an application that can help prevent dangerous situations. Our Alexa skill will keep people out of uncomfortable situations when they are alone and cannot contact anyone on their phone. We hope to see our creation being used for the greater good!
## What I learned
We were exploring different ways we could improve our skill in the future, and learned about the differences between deploying on AWS Lambda versus Microsoft Azure Functions. We used AWS Lambda for our development, but tested out Azure Functions briefly. In the future, we would further consider which platform to continue with.
## What's next for Safety Buddy
We wish to expand the skill by developing more intents that allow the user to engage in various conversation flows. We can monetize these additional conversation options through in-skill purchases in order to continue improving Safety Buddy and bring awareness to more individuals. Additionally, we can adapt the skill to support the various languages users speak. | ## Inspiration
During this lockdown, everyone is pretty much staying home and unable to interact with others, so we want to connect like-minded people using our platform.
## What it does
You can register on our portal and then look for events (e.g. sports, hiking, etc.) happening around you and join the host. The best thing about our platform is that once you register, you can use the voice assistant to search for events, request to join from the host, and publish events. Everything is hands-free. It is really easy to use.
## How we built it
We built the front end using ReactJS, and for the voice assistant we used Alexa. We built a back end that is connected to both the front end and Alexa. Whenever a user requests an event or wants to publish one, the request goes to our server hosted on an AWS instance. It is hosted live even now, so anyone who wants to try it can use it. We are also using MongoDB to store the currently active events, user details, etc. Once a user requests something, we scan the database based on the user's location and deliver events happening near them. We created several REST APIs on the server that serve these requests.
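A minimal sketch of the location-based lookup, assuming events store a GeoJSON `location` field with a 2dsphere index (the collection and field names here are illustrative assumptions, not our exact schema):

```python
# Minimal sketch of the "events near me" query with pymongo. Collection and field
# names are illustrative assumptions, not the project's exact schema.
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
events = client["get_together"]["events"]
events.create_index([("location", GEOSPHERE)])      # required for $near queries

def events_near(lng: float, lat: float, max_km: float = 10):
    """Return active events within max_km of the user's coordinates."""
    return list(events.find({
        "active": True,
        "location": {
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": max_km * 1000,      # metres
            }
        },
    }))
```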
## Challenges we ran into
We faced a lot of technical challenges: setting up the server, and building an Alexa voice assistant that can serve the user easily without asking too many questions. We also treated safety and privacy as our top priority.
## Accomplishments that we're proud of
An easy-to-use assistant and web portal to connect people.
## What we learned
How to use an Alexa assistant for a custom real-life use case. How to deploy to production on AWS instances. Configuring the server to
## What's next for Get Together
Adding more privacy for the user who posts events, having official accounts for better credibility, and a rating mechanism for better match-making. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
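A sketch of what the order endpoint's shape looks like on the Flask side. The `transcribe_audio` and `extract_order_items` helpers are hypothetical stand-ins for the AI calls described above, not our exact implementation.

```python
# Sketch of the order-taking endpoint's shape. `transcribe_audio` and
# `extract_order_items` are hypothetical stand-ins for the AI calls described
# above, not the team's actual implementation.
from flask import Flask, jsonify, request

app = Flask(__name__)

def transcribe_audio(audio_bytes: bytes) -> str:
    raise NotImplementedError("call your speech-to-text model here")

def extract_order_items(transcript: str) -> list[dict]:
    raise NotImplementedError("call your LLM / parser here")

@app.post("/api/order")
def take_order():
    # The frontend uploads the recorded utterance once silence is detected.
    audio = request.files["audio"].read()
    transcript = transcribe_audio(audio)
    items = extract_order_items(transcript)  # e.g. [{"item": "burger", "size": "large"}]
    return jsonify({"transcript": transcript, "items": items})

if __name__ == "__main__":
    app.run(debug=True)
```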
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | losing |
## Inspiration
There is a need for an electronic health record (EHR) system that is secure, accessible, and user-friendly. Currently, hundreds of EHRs exist, and different clinical practices may use different systems. If a patient requires an emergency visit to a certain physician, the physician may be unable to access important records and patient information efficiently, requiring extra time and resources that strain the healthcare system. This is especially true for patients traveling abroad, where doctors in one country may be unable to access a centralized healthcare database in another.
In addition, there is strong potential to utilize the available data for improved analytics. In a clinical consultation, a patient's description of symptoms may be ambiguous, and doctors often want to monitor the patient's symptoms for an extended period. With limited resources, this is impossible outside of an acute care unit in a hospital. As access to the internet becomes increasingly widespread, patients could self-report certain symptoms through a web portal if such an EHR existed. With a large amount of patient data, artificial intelligence techniques can be used to analyze the similarity of patients and predict certain outcomes before adverse events happen, so that intervention can occur in a timely manner.
## What it does
myHealthTech is a blockchain EHR system with a user-friendly interface for patients and healthcare providers to record patient information such as clinical visitation history, lab test results, and self-reporting records from the patient. The system is a web application that is accessible to any end user approved by the patient, so doctors in different clinics can access essential information efficiently. With the blockchain architecture, compared to traditional databases, patient data is stored securely and anonymously in a decentralized manner such that third parties cannot access the encrypted information.
Artificial intelligence methods are used to analyze patient data for prognostication of adverse events. For instance, a patient's reported mood scores are compared to a database of similar patients whose cases resulted in self-harm, and myHealthTech will compute a probability that the patient is trending towards a self-harm event. This allows healthcare providers to monitor and intervene if an adverse event is predicted.
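A minimal sketch of the similarity idea using k-nearest neighbours over mood-score histories; the data below is synthetic, and the real engine's features and model are more involved.

```python
# Minimal sketch of the patient-similarity idea: estimate the probability of an
# adverse event from the outcomes of the k most similar patients. Data below is
# synthetic; the real engine and its features are more involved.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: last four self-reported mood scores (1-10); label 1 = self-harm event.
X = np.array([[2, 3, 2, 1], [8, 7, 8, 9], [4, 3, 2, 2], [7, 8, 7, 7], [3, 2, 1, 1]])
y = np.array([1, 0, 1, 0, 1])

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

new_patient = np.array([[5, 4, 3, 2]])          # recent downward mood trend
risk = model.predict_proba(new_patient)[0, 1]    # P(adverse event) from neighbours
print(f"Estimated self-harm risk: {risk:.2f}")   # flag for clinician review if high
```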
## How we built it
The blockchain EHR architecture was written in Solidity using Truffle, TestRPC, and Remix. The web interface was written in HTML5, CSS3, and JavaScript. The artificial intelligence predictive behavior engine was written in Python.
## Challenges we ran into
The greatest challenge was integrating the back-end and front-end components. We had challenges linking smart contracts to the web UI and executing the artificial intelligence engine from a web interface. Several of these challenges required compatibility troubleshooting and running a centralized Python server, which will be implemented in a consistent environment when this project is developed further.
## Accomplishments that we're proud of
We are proud of working with novel architecture and technology, providing a solution to solve common EHR problems in design, functionality, and implementation of data.
## What we learned
We learned the value of leveraging the strengths of different team members from design to programming and math in order to advance the technology of EHRs.
## What's next for myHealthTech?
Next is the addition of more self-reporting fields to increase the robustness of the artificial intelligence engine. In the case of depression, there are clinical standards from the Diagnostic and Statistical Manual that identify markers of depression such as mood level, confidence, energy, and feelings of guilt. By monitoring these values for individuals who have recovered, are depressed, or inflict self-harm, the AI engine can predict the behavior of new individuals much more accurately by applying logistic regression to the data and using a deep learning approach.
There is an issue with the inconvenience of reporting symptoms. Hence, a logical next step would be to implement smart home technology, such as an Amazon Echo, for the patient to interact with for self-reporting. For instance, when the patient is at home, the Amazon Echo will prompt the patient and ask "What would you rate your mood today? What would you rate your energy today?" and record the data in the patient's self-reporting records on myHealthTech.
These improvements would further myHealthTech's capability to be a highly dynamic EHR with strong analytical capabilities to understand and predict patient outcomes and improve treatment options. | ## Inspiration
As a patient in the United States, you do not know what costs you are facing when you receive treatment at a hospital, or whether your insurance plan covers the expenses. Patients are faced with unexpected bills and left with expensive copayments. In some instances, patients would pay less if they covered the expenses out of pocket instead of using their insurance plan.
## What it does
Healthiator provides patients with a comprehensive overview of the medical procedures they will need to undergo for their health condition and sums up the total cost of that treatment depending on which hospital they go to and whether they pay for the treatment out of pocket or through their insurance.
This allows patients to choose the most cost-effective treatment and understand the medical expenses they are facing. A second feature Healthiator provides is that once patients receive their actual hospital bill, they can contest inaccuracies. Healthiator helps patients with billing disputes by leveraging AI to handle the process of negotiating fair pricing.
## How we built it
We used a combination of Together.ai and Fetch.ai. We have several smart agents running in Fetch.ai, each responsible for one of the features. For instance, one agent fetches live pricing data from hospitals (publicly available under good-faith pricing laws), including prices and cash discounts, and then we use Together.ai's API to integrate that information into the negotiation step.
## Ethics
Although our end purpose is to help people get medical treatment by reducing the fear of surprise bills and actually making healthcare more affordable, we are aware that any wrong suggestion or violation of the user's privacy has significant consequences. Giving the user as much information as possible while keeping away from making clinical suggestions and false/hallucinated information was the most challenging part of our work.
## Challenges we ran into
Finding actionable data from the hospitals was one of the most challenging parts, as each hospital has its own format and assumptions, and it was not straightforward at all to integrate them into a single database. Another challenge was making the various APIs and third parties work together in time.
## Accomplishments that we're proud of
Solving a relevant social issue. Everyone we talked to has experienced the problem of not knowing the costs they're facing for different procedures at hospitals and whether their insurance covers them. While it is an anxiety-inducing process for everyone, this uncertainty might prevent or delay a number of people from going to hospitals and getting the care that they urgently need. This can result in health conditions that could have had a better outcome if treated earlier.
## What we learned
How to work with Convex, the Fetch.ai API, and the Together.ai API.
## What's next for Healthiator
As a next step, we want to set up a database and take the medical costs directly from the files published by hospitals. | ## Inspiration
We hate making resumes and customizing them for each employer, so we created a tool to speed that up.
## What it does
A user creates "blocks" which are saved. Then they can pick and choose which ones they want to use.
## How we built it
[Node.js](https://nodejs.org/en/)
[Express](https://expressjs.com/)
[Nuxt.js](https://nuxtjs.org/)
[Editor.js](https://editorjs.io/)
[html2pdf.js](https://ekoopmans.github.io/html2pdf.js/)
[mongoose](https://mongoosejs.com/docs/)
[MongoDB](https://www.mongodb.com/) | partial |
## Inspiration
As cybersecurity enthusiasts, we are taking one for the team by breaking the curse of CLIs. `Appealing UI for tools like nmap` + `Implementation of Metasploitable scripts` = `happy hacker`
## What it does
nmap is a cybersecurity tool that scans the ports of an IP address on a network and retrieves the service running on each of them, as well as the version. Metasploitable is another tool that is able to run attacks on a specified IP address and ports to gain access to a machine.
Our app creates a graphical user interface for the use of both tools: it first scans an IP address with nmap, and then retrieves the attack script from Metasploitable that matches the version of the service so it can be used.
In one glance, see which ports of an IP address are open and whether they are vulnerable. If they are, then click on the `🕹️` button to run the attack.
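A sketch of the scan step using the `python-nmap` wrapper; the exploit lookup is reduced to a hypothetical helper and the details differ from our production code.

```python
# Sketch of the scan step with the python-nmap wrapper. The exploit lookup is a
# hypothetical helper; real behaviour depends on the attack database used.
import nmap

def scan_host(ip: str) -> list[dict]:
    """Return open ports with service name and version for one host."""
    scanner = nmap.PortScanner()
    scanner.scan(ip, arguments="-sV")            # -sV = service/version detection
    results = []
    for port, info in scanner[ip].get("tcp", {}).items():
        results.append({
            "port": port,
            "service": info.get("name", ""),
            "version": f'{info.get("product", "")} {info.get("version", "")}'.strip(),
        })
    return results

def find_exploit(service: str, version: str):
    """Hypothetical lookup of a matching attack script for this service version."""
    raise NotImplementedError

# Example: for r in scan_host("192.168.56.101"): print(r)
```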
## How we built it
* ⚛️ React for the front-end
* 🐍 Python with fastapi for the backend
* 🌐 nmap and 🪳 Metasploitable
* 📚 SQLi for the database
## Challenges we ran into
Understanding that terminal sessions running under python take time to complete 💀
## Accomplishments that we're proud of
We are proud of the project in general. As cybersecurity peeps, we're making one small step for humans but a giant leap for hackers.
## What we learned
How Metasploitable actually works lol.
No, for real, discovering new libraries is always one main takeaway during hackathons, and McHacks delivered on that one.
## What's next for Phoenix
Have a fuller database, and possibly a way to update it redundantly and less manually. Then it's just a matter of showing it to the world. | ## Inspiration
We wanted to be able to connect with mentors. There are very few opportunities to do that outside of LinkedIn, where many of the mentors are in fields foreign to our interests.
## What it does
A networking website that connects mentors with mentees. It uses a weighted matching algorithm based on mentors' specializations and mentees' interests to prioritize matches.
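A minimal sketch of one way to do interest-weighted matching (score every mentor/mentee pair by weighted interest overlap, then assign greedily); our production algorithm differs in its details.

```python
# Minimal sketch of interest-weighted matching: score every mentor/mentee pair by
# weighted interest overlap, then greedily assign the highest-scoring pairs.
# This is one reasonable approach, not necessarily Pyre's exact algorithm.
def match(mentors: dict[str, dict[str, float]], mentees: dict[str, dict[str, float]]):
    """mentors/mentees map a name to {interest: weight}. Returns mentee -> mentor."""
    scores = []
    for mentee, wants in mentees.items():
        for mentor, offers in mentors.items():
            shared = set(wants) & set(offers)
            score = sum(wants[i] * offers[i] for i in shared)   # weighted overlap
            scores.append((score, mentee, mentor))

    matches, used_mentors, used_mentees = {}, set(), set()
    for score, mentee, mentor in sorted(scores, reverse=True):  # best pairs first
        if mentee not in used_mentees and mentor not in used_mentors and score > 0:
            matches[mentee] = mentor
            used_mentees.add(mentee)
            used_mentors.add(mentor)
    return matches

mentors = {"Ada": {"ml": 0.9, "web": 0.4}, "Linus": {"systems": 1.0, "web": 0.7}}
mentees = {"Sam": {"ml": 1.0}, "Kai": {"web": 0.8, "systems": 0.5}}
print(match(mentors, mentees))  # {'Kai': 'Linus', 'Sam': 'Ada'}
```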
## How we built it
Google Firebase is used for our NoSQL database which holds all user data. The other website elements were programmed using JavaScript and HTML.
## Challenges we ran into
There was no suitable matching algorithm module on Node.js that did not have version mismatches so we abandoned Node.js and programmed our own weighted matching algorithm. Also, our functions did not work since our code completed execution before Google Firebase returned the data from its API call, so we had to make all of our functions asynchronous.
## Accomplishments that we're proud of
We programmed our own weighted matching algorithm based on interest and specialization. Also, we refactored our entire code to make it suitable for asynchronous execution.
## What we learned
We learned how to use Google Firebase, Node.js and JavaScript from scratch. Additionally, we learned advanced programming concepts such as asynchronous programming.
## What's next for Pyre
We would like to add interactive elements such as integrated text chat between matched members. Additionally, we would like to incorporate distance between mentor and mentee into our matching algorithm. | As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool which provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible and anyone can learn information about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction.
We deployed our web app using Microsoft Azure, and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system, where we send users a text whenever the air quality warrants action such as staying indoors or wearing a P2 mask.
We have taken many approaches to improving our platform’s scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batch. We used Azure Functions to achieve this.
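A minimal sketch of that pattern: one shared HTTP session for connection reuse, with API calls gathered in batches (the endpoint here is a placeholder, not one of our actual data sources).

```python
# Minimal sketch of the scalability pattern: one shared HTTP session (connection
# reuse) and batched, concurrent API calls with asyncio. The endpoint URL is a
# placeholder, not one of the project's actual data sources.
import asyncio
import aiohttp

FIRE_API = "https://example.com/api/fires"   # placeholder data source

async def fetch_region(session: aiohttp.ClientSession, region: str) -> dict:
    async with session.get(FIRE_API, params={"region": region}) as resp:
        return {"region": region, "data": await resp.json()}

async def fetch_all(regions: list[str], batch_size: int = 10) -> list[dict]:
    results = []
    async with aiohttp.ClientSession() as session:       # reused for every request
        for i in range(0, len(regions), batch_size):      # process calls in batches
            batch = regions[i:i + batch_size]
            results += await asyncio.gather(*(fetch_region(session, r) for r in batch))
    return results

# Example: asyncio.run(fetch_all(["NSW", "Victoria", "Queensland"]))
```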
Azure Notebook and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia.
Our web-app was designed using React, Python, and d3js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible, sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a color-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include this feature, however, we would hope to implement this in further stages of Phoenix. | partial |
## Inspiration
We've all been in the situation where we've run back and forth in the store, looking for a single small thing on our grocery list. We've all been on a time crunch and have found ourselves running back and forth from dairy to snacks to veggies, frustrated that we can't find what we need in an efficient way. This isn't a problem that just affects us as college students; it is also a problem that people of all ages face, including parents and elderly grandparents, which can make the shopping experience very unpleasant. InstaShop is a platform that solves this problem once and for all.
## What it does
Input any grocery list with a series of items to search for at the Target retail store in Boston. If an item is available, our application will search the Target store to see where it is located and mark the item's location in the store. You can add as many items as you wish. Then, based on the store map of that Target, we provide the exact route you should take from the entrance to the exit to retrieve all of the items.
## How we built it
Based on the grocery list, we call the Target retail developer API to search for a given item and retrieve the aisle number of its location within the given store. Alongside this, we wrote classes and functions to create a graph whose nodes mock the exact layout of the store. Then we plot the exact location of the given item on the map. Once the user is done inputting all of the items, we run our custom dynamic programming algorithm, which we developed as a variant of the Traveling Salesman Problem combined with breadth-first search. This algorithm returns the shortest path from the entrance, through all of your items, to the exit. We display the shortest path on the frontend.
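A compact sketch of the approach: BFS gives the pairwise walking distances between the entrance, the items, and the exit on a grid version of the store, and a Held-Karp-style dynamic program then orders the pickups (the grid and item positions below are toy data, not the real Target layout).

```python
# Compact sketch of the route algorithm: BFS for pairwise grid distances between
# points of interest, then a Held-Karp-style DP to order the item pickups.
# The grid and item positions are toy data, not the real Target layout.
from collections import deque
from itertools import combinations

def bfs_dist(grid, start):
    """Shortest step counts from `start` to every walkable cell (0 = walkable)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def shortest_route_length(grid, entrance, exit_, items):
    """Minimum walk entrance -> all items (any order) -> exit."""
    points = [entrance] + items + [exit_]            # index 0 = entrance, last = exit
    n = len(items)
    d = [bfs_dist(grid, p) for p in points]          # pairwise distances via BFS

    # dp[(mask, i)] = best cost from the entrance visiting item set `mask`,
    # currently standing at item i (1-indexed into `points`).
    dp = {(1 << i, i + 1): d[0][points[i + 1]] for i in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            mask = sum(1 << i for i in subset)
            for i in subset:
                prev_mask = mask ^ (1 << i)
                dp[(mask, i + 1)] = min(
                    dp[(prev_mask, j + 1)] + d[j + 1][points[i + 1]]
                    for j in subset if j != i
                )
    full = (1 << n) - 1
    return min(dp[(full, i + 1)] + d[i + 1][points[-1]] for i in range(n))

store = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]                               # 1 = shelf, 0 = aisle
print(shortest_route_length(store, (0, 0), (2, 3), [(0, 3), (2, 0)]))  # prints 9
```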
## Challenges we ran into
One of the major problems we ran into was developing the intricacies of the algorithm, which is quite involved (as mentioned above). Additionally, setting up the data structures with the nodes and edges, and creating the graph as a combination of the nodes and edges, required a lot of thinking. We made sure to think through our data structures carefully and ensure that we were approaching the problem correctly.
## Accomplishments that we're proud of
According to our approximations for acquiring all of the items within the retail store, we are extremely proud that we brought our runtime down from 1932! \* 7 / 100! minutes to a few seconds. Initially, we were performing a recursive depth-first search on each of the nodes to calculate the shortest path taken. At first it worked flawlessly on a smaller scale, but when we started to process results on a larger scale (a 10\*10 grid), it took around 7 minutes to find the path for just one operation. Assuming that we scale this to the size of the store, one operation would take 7 divided by 100! minutes and the entire store would take 1932! \* 7 / 100! minutes. In order to improve this, we run a breadth-first search combined with an application of the Traveling Salesman Problem, developed into our custom dynamic-programming-based algorithm. We were able to bring it down to just a few seconds. Yay!
## What we learned
We learned about optimizing algorithms, overall graph usage, and building an application from the ground up with attention to the structure of the data.
## What's next for InstaShop
Our next step is to go to Target and pitch our idea. We would like to establish partnerships with many Target stores and build a profitable business model that we can incorporate with Target. We strongly believe that this will be a huge help for the public. | ## Inspiration
Shopping can be a very frustrating experience at times. Nowadays, almost everything is digitally connected, yet some stores fall behind when it comes to their shopping experience. We've unfortunately encountered scenarios where we weren't able to find products stocked at our local grocery store, and there have been times where we had no idea how much stock was left or whether we needed to hurry! Our app solves this issue by displaying various data relating to each ingredient to the user.
## What it does
Our application aims to guide users to the nearest store that stocks the ingredient they're looking for. This is done on the maps section of the app, and the user can redirect to other stores in the area as well to find the most suitable option. Displaying the price also enables the user to find the most suitable product for them if there are alternatives, ultimately leading to a much smoother shopping experience.
## How we built it
The application was built using React Native and MongoDB. While there were some hurdles to overcome, we were finally able to get a functional application that we could view and interact with using Expo.
## Challenges we ran into
Despite our best efforts, we weren't able to fit the integration of the database within the allocated timeframe. Given that using MongoDB was a fairly new experience for us, we struggled to correctly implement it within our React Native code, which resulted in having to rely on hard-coded ingredients.
## Accomplishments that we're proud of
We're very proud of the progress we managed to get on our mobile app. Both of us have little experience ever making such a program, so we're very happy we have a fully functioning app in so little time.
Although we weren't able to get the database loaded into the search functionality, we're still quite proud of the fact that we were able to create the database and connect all users on the team to it, upload documents to it correctly, and even get the database contents printing through our code. Just being able to connect to the database and correctly output it, as well as being able to implement query functionality, was quite a positive experience since this was unfamiliar territory for us.
## What we learned
We learnt how to create and use databases with MongoDB and were able to enhance our React Native skills through importing Google Cloud APIs and being able to work with them (particularly through react-native-maps).
## What's next for IngredFind
In the future, we would hope to improve the front and back end of our application. Aside from visual tweaks, enhancing our features, and fixing any bugs that may occur, we would also hope to get the database fully functional and perhaps create a companion application that enables the grocery store to add and alter products on their end. | **Come check out our fun Demo near the Google Cloud Booth in the West Atrium!! Could you use a physiotherapy exercise?**
## The problem
A specific problem in physiotherapy is that joint movement may become limited through muscle atrophy, surgery, accident, stroke, or other causes. Reportedly, up to 70% of patients give up physiotherapy too early — often because they cannot see the progress. Automated tracking of range of motion (ROM) via a mobile app could help patients reach their physiotherapy goals.
Insurance studies showed that 70% of people quit physiotherapy sessions when the pain disappears and they regain their mobility. The reasons are multiple, to mention a few: the cost of treatment, the feeling that they have recovered, no more time to dedicate to recovery, and a loss of motivation. The worst part is that half of them see the injury reappear within 2-3 years.
Current pose tracking technology is NOT realtime and automatic, requiring physiotherapists on hand and **expensive** tracking devices. Although these work well, there is HUGE room for improvement in developing a cheap and scalable solution.
Additionally, many seniors are unable to comprehend current solutions and are unable to adapt to current in-home technology, let alone the kinds of tech that require hours of professional setup and guidance, as well as expensive equipment.
[![Demo video](https://res.cloudinary.com/devpost/image/fetch/s--GBtdEkw5--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://img.youtube.com/vi/PrbmBMehYx0/0.jpg)](http://www.youtube.com/watch?feature=player_embedded&v=PrbmBMehYx0)
## Our Solution!
* Our solution **only requires a device with a working internet connection!!** We aim to revolutionize the physiotherapy industry by allowing for extensive scaling and efficiency of physiotherapy clinics and businesses. We understand that in many areas, the therapist to patient ratio may be too high to be profitable, reducing quality and range of service for everyone, so an app to do this remotely is revolutionary.
We collect real-time 3D position data of the patient's body while they do exercises, using a machine learning model implemented directly in the browser. The data is first analyzed within the app and then provided to a physiotherapist, who can further analyze it and adjust the exercises. It also asks the patient for subjective feedback on a pain scale.
This makes physiotherapy exercise feedback from their therapist more accessible to remote individuals **WORLDWIDE**.
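A minimal sketch of one of the pose-analysis steps: computing the angle at a joint (e.g. knee flexion for ROM) from three 3D keypoints returned by the pose model; the coordinates are made up.

```python
# Minimal sketch of one pose-analysis step: the angle at a joint (e.g. the knee)
# from three 3D keypoints returned by the pose model. Coordinates are made up.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle ABC in degrees, where b is the joint (e.g. hip-knee-ankle)."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

hip, knee, ankle = (0.0, 1.0, 0.1), (0.0, 0.5, 0.1), (0.0, 0.05, 0.35)
print(f"Knee angle: {joint_angle(hip, knee, ankle):.1f} deg")  # range-of-motion reading
```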
## Inspiration
* The growing need for accessible physiotherapy among seniors, stroke patients, and individuals in third-world countries without access to therapists but with a stable internet connection
* The room for AI and ML innovation within the physiotherapy market for scaling and growth
## How I built it
* Firebase hosting
* Google cloud services
* React front-end
* Tensorflow PoseNet ML model for computer vision
* Several algorithms to analyze 3d pose data.
## Challenges I ran into
* Testing in React Native
* Getting accurate angle data
* Setting up an accurate timer
* Setting up the ML model to work with the camera using React
## Accomplishments that I'm proud of
* Getting real-time 3D position data
* Supporting multiple exercises
* Collection of objective quantitative as well as qualitative subjective data from the patient for the therapist
* Increasing the usability for senior patients by moving data analysis onto the therapist's side
* **finishing this within 48 hours!!!!** We did NOT think we could do it, but we came up with a working MVP!!!
## What I learned
* How to implement Tensorflow models in React
* Creating reusable components and styling in React
* Creating algorithms to analyze 3D space
## What's next for Physio-Space
* Implementing the sharing of the collected 3D position data with the therapist
* Adding a dashboard onto the therapist's side | losing |
## Inspiration
To introduce a more impartial and assured form of vote submission in response to controversial democratic electoral polling following the 2018 US midterm elections. That event was surrounded by doubt, with citizen voters questioning the authenticity of the results. This propelled the idea of bringing enforced and much-needed decentralized security to the polling process.
## What it does
Allows voters to vote through a web portal backed by a blockchain. The web portal is written in HTML and JavaScript using the Bootstrap UI framework, with jQuery sending Ajax HTTP requests to a Flask server written in Python that communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses said passphrase to cast their ballot anonymously and securely. Following this, their vote, alongside the passphrase, goes to the Flask web server, where it is properly parsed and sent to the ARK blockchain, recording it as a transaction. Each transaction is backed by one ARK coin, representing a single count. Finally, a paper trail is generated following the submission of the vote on the web portal for public verification.
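A sketch of the Flask layer's shape; the ARK submission is reduced to a hypothetical `submit_to_ark` helper, since the actual passphrase signing and broadcast depend on the ARK client used.

```python
# Sketch of the Flask layer's shape. `submit_to_ark` is a hypothetical helper for
# the passphrase-signed ARK transaction; the real signing/broadcast depends on the
# ARK client used and is not reproduced here.
from flask import Flask, jsonify, request

app = Flask(__name__)

def submit_to_ark(passphrase: str, candidate: str) -> str:
    """Sign and broadcast a one-ARK transaction recording the vote; return its id."""
    raise NotImplementedError

@app.post("/vote")
def cast_vote():
    payload = request.get_json()
    passphrase = payload.get("passphrase", "").strip()
    candidate = payload.get("candidate", "").strip()
    if not passphrase or not candidate:
        return jsonify({"error": "missing passphrase or candidate"}), 400
    tx_id = submit_to_ark(passphrase, candidate)
    # The transaction id doubles as the voter's paper-trail reference.
    return jsonify({"receipt": tx_id})
```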
## How we built it
The initial approach was to use Node.js; however, we opted for Python with Flask as it proved easier to implement. Visual Studio Code was used to build the HTML and CSS front end for the visual voting interface, while the ARK blockchain was constructed in a Docker container. These were used together to deliver the web-based application.
## Challenges I ran into
* Integrating the front end and back end of the app seamlessly
* Using Flask as an intermediary between the web portal and the blockchain back end
* Understanding how to incorporate and use blockchain, and its capability to provide security for the purpose it was applied to
## Accomplishments that I'm proud of
* Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern
## What I learned
* Application of ARK.io blockchain and security protocols
* The multiple stages of encryption involved in converting passphrases into private and public keys
* Utilizing jQuery to compile a comprehensive program
## What's next for Block Vote
Expand Block Vote’s applicability in other areas requiring decentralized and trusted security, hence, introducing a universal initiative. | ## Inspiration
After observing the news about the use of police force for so long, we asked ourselves how to solve it. We realized that in some ways the problem was made worse by a lack of trust in law enforcement. We then realized that we could use blockchain to create a better system of accountability for the use of force. We believe that it can help people trust law enforcement officers more and diminish the use of force when possible, saving lives.
## What it does
Chain Gun is a modification for a gun (a Nerf gun for the purposes of the hackathon) that sits behind the trigger mechanism. When the gun is fired, the GPS location and ID of the gun are put onto the Ethereum blockchain.
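An illustrative sketch in web3.py of recording a fire event as a signed raw transaction carrying the gun ID and GPS fix in the data field (we used web3.js 1.0.0 on the backend, so this is only an analogue; node URL, keys, addresses, and chain ID below are placeholders, and attribute names assume web3.py v6).

```python
# Illustrative web3.py analogue (the project used web3.js 1.0.0): record a fire
# event as a signed raw transaction whose data field carries the gun ID and GPS
# fix. Node URL, keys, addresses and chain id are placeholders; attribute names
# assume web3.py v6.
import json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))   # placeholder RPC node
PRIVATE_KEY = "0x" + "11" * 32                                  # placeholder key
ACCOUNT = w3.eth.account.from_key(PRIVATE_KEY).address
REGISTRY = "0x0000000000000000000000000000000000000000"         # placeholder recipient

def record_fire(gun_id: str, lat: float, lon: float) -> str:
    payload = json.dumps({"gun": gun_id, "lat": lat, "lon": lon}).encode()
    tx = {
        "to": REGISTRY,
        "value": 0,
        "data": payload,                                        # event encoded in calldata
        "nonce": w3.eth.get_transaction_count(ACCOUNT),
        "gas": 100_000,
        "gasPrice": w3.to_wei(2, "gwei"),
        "chainId": 11155111,                                    # e.g. a testnet chain id
    }
    signed = w3.eth.account.sign_transaction(tx, private_key=PRIVATE_KEY)
    return w3.eth.send_raw_transaction(signed.rawTransaction).hex()

# Example: record_fire("GUN-0042", 43.6532, -79.3832)
```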
## Challenges we ran into
Some things did not work well with the new updates to Web3 causing a continuous stream of bugs. To add to this, the major updates broke most old code samples. Android lacks a good implementation of any Ethereum client making it a poor platform for connecting the gun to the blockchain. Sending raw transactions is not very well documented, especially when signing the transactions manually with a public/private keypair.
## Accomplishments that we're proud of
* Combining many parts to form a solution including an Android app, a smart contract, two different back ends, and a front end
* Working together to create something we believe has the ability to change the world for the better.
## What we learned
* Hardware prototyping
* Integrating a bunch of different platforms into one system (Arduino, Android, Ethereum Blockchain, Node.JS API, React.JS frontend)
* Web3 1.0.0
## What's next for Chain Gun
* Refine the prototype | ## Inspiration
While working on a political campaign this summer, we noticed a lack of distributed systems that allowed for civic engagement. The campaigns we saw had large numbers of volunteers willing to do cold calling to help their favorite candidate win the election, but who lacked the infrastructure to do so. Even the few who did manage to do so are utilized ineffectively. Since they have no communication with the campaign, they often end up wasting many calls on people who would vote for their candidate anyway, or in districts where their candidate has an overwhelming majority.
## What it does
Our app allows political campaigns and volunteers to strategize and work together to get their candidate elected. On the logistical end, campaign managers can use our web dashboard to target those most open to their candidate and see what people are saying. They can also input the numbers of people in districts that are most vital to the campaign and have their volunteers target those people.
## How we built it
We took a two prong approach to building our applications since they would have to serve two different people. Our web app is more analytically focused and closely resembles an enterprise app in it's sophistication and functionality. It allows campaign staff to clearly see how their volunteers are being utilized and who their calling, and perform advanced analytical operations to enhance volunteer effectiveness.
This is very different from the approach we took with the consumer app, which we wanted to make as easy to use and intuitive as possible. Our consumer-facing app allows users to quickly log in with their Google accounts and, with the touch of a button, start calling voters who are carefully curated by the campaign staff on their dashboard. We also added a gamification element by adding a leaderboard and offering the user simple analytics on their performance.
## Challenges we ran into
One challenge we ran into was getting statistically relevant data into our platform. At first we struggled with creating an easy-to-use interface for users to convey information about the people they called back to the campaign staff without making the process tedious. We solved this problem by spending a lot of time refining our app's user interface to be as simple as possible.
## Accomplishments that we're proud of
We're very proud of the fact that we were able to build what is essentially two closely integrated platforms in one hackathon. Our iOS app is built natively in Swift while our website is built in PHP, so very little of the code, besides the API, was reusable, despite the fact that the two apps were constantly interfacing with each other.
## What we learned
That creating effective actionable data is hard, and that it's not being done enough. We also learned through the process of brainstorming the concept for the app that for civic movements to be effective in the future, they have to be more strategic with who they target, and how they utilize their volunteers.
## What's next for PolitiCall
Analytics are at the core of any modern political campaign, and we believe volunteers calling thousands of people are one of the best ways to gather analytics. We plan to combine user gathered analytics with proprietary campaign information to offer campaign managers the best possible picture of their campaign, and what they need to focus on. | winning |
# BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
+ Real time question generation based on (real-time) user journaling (and the rest of their writing)!
+ Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
+ Question-corpus matching - we search for good questions about the user's current topics (see the sketch after this list)
* NLP on previous journal entries for sentiment analysis
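A sketch of the question-corpus matching step: embed the user's current journal text and a bank of candidate questions with TF-IDF and surface the closest question (the question bank here is a toy example).

```python
# Sketch of question-corpus matching: embed the journal text and a question bank
# with TF-IDF, then return the most similar question. Toy question bank.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

QUESTIONS = [
    "What made this meal memorable for you?",
    "How did spending time outdoors change your mood?",
    "Who were you with, and what did you talk about?",
    "What would you do differently next time you travel?",
]

def best_question(journal_text: str) -> str:
    n = len(QUESTIONS)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(QUESTIONS + [journal_text])
    sims = cosine_similarity(matrix[n], matrix[:n])[0]   # journal entry vs. each question
    return QUESTIONS[int(sims.argmax())]

print(best_question("Went hiking by the lake today and felt calmer outdoors."))
```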
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan | ## Inspiration
Have you ever had to wait in long lines just to buy a few items from a store? Not wanted to interact with employees to get what you want? Now you can buy items quickly and hassle free through your phone, without interacting with any people whatsoever.
## What it does
CheckMeOut is an iOS application that allows users to buy an item that has been 'locked' in a store, for example clothing with security sensors attached or items physically locked behind glass. Users can scan a QR code or use Apple Pay to quickly access information about an item (price, description, etc.) and 'unlock' the item by paying for it. The user will not have to interact with any store clerks or wait in line to buy the item.
## How we built it
We used Xcode to build the iOS application and MS Azure to host our backend. We used an Intel Edison board to help simulate the 'locking' of an item.
## Challenges I ran into
We're using many technologies that our team is unfamiliar with, namely Swift and Azure.
## What I learned
I've learned not underestimate things you don't know, to ask for help when you need it, and to just have a good time.
## What's next for CheckMeOut
Hope to see it more polished in the future. | ## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front end, we used React, which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle communication between the resource-intensive back-end tasks, we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E 2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication, which allows us to keep the HTTP connection open; once the work queue is done processing data, it sends a notification to the React client.
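A sketch of the Python work-queue piece: a Celery app backed by RabbitMQ with the heavy generation call behind a retrying task. `generate_story_page` is a hypothetical stand-in for the GPT-3/DALL-E calls, and the broker URL is a placeholder.

```python
# Sketch of the work-queue piece: a Celery app backed by RabbitMQ, with the heavy
# generation call behind a task. `generate_story_page` is a hypothetical stand-in
# for the GPT-3/DALL-E calls; the broker URL is a placeholder.
from celery import Celery

app = Celery("dream", broker="amqp://guest:guest@localhost:5672//")

def generate_story_page(prompt: str) -> dict:
    """Hypothetical call to the text/image models; returns text + image URL."""
    raise NotImplementedError

@app.task(bind=True, max_retries=3)
def build_page(self, prompt: str) -> dict:
    try:
        return generate_story_page(prompt)
    except Exception as exc:
        # Back off and retry instead of dropping the user's request.
        raise self.retry(exc=exc, countdown=5)

# Enqueued from the API layer, e.g.:
#   build_page.delay("A fox who learns to paint the northern lights")
# A worker consumes it:  celery -A tasks worker --loglevel=info
```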
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users. | winning |
# 🪼 **SeaScript** 🪸
## Inspiration
Learning MATLAB can be as appealing as a jellyfish sting. Traditional resources often leave students lost at sea, making the process more exhausting than a shark's endless swim. SeaScript transforms this challenge into an underwater adventure, turning the tedious journey of mastering MATLAB into an exciting expedition.
## What it does
SeaScript plunges you into an oceanic MATLAB adventure with three distinct zones:
1. 🪼 Jellyfish Junction: Help bioluminescent jellies navigate nighttime waters.
2. 🦈 Shark Bay: Count endangered sharks to aid conservation efforts.
3. 🪸 Coral Code Reef: Assist Nemo in finding the tallest coral home.
Solve MATLAB challenges in each zone to collect puzzle pieces, unlocking a final mystery message. It's not just coding – it's saving the ocean, one function at a time!
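A sketch of how a player's MATLAB answer can be checked from the Python game loop via the MATLAB Engine API for Python (assuming the engine is installed; the challenge shown is illustrative, not one of the game's real puzzles).

```python
# Sketch of checking a player's MATLAB answer from the Python game loop via the
# MATLAB Engine API for Python (assumes the engine is installed). The challenge
# itself is illustrative, not one of the game's real puzzles.
import matlab.engine

def check_submission(player_code: str, expected: float, tol: float = 1e-6) -> bool:
    """Run the player's snippet, which must leave its result in variable `ans_out`."""
    eng = matlab.engine.start_matlab()
    try:
        eng.eval(player_code, nargout=0)          # execute the submission
        result = float(eng.workspace["ans_out"])  # pull the answer back into Python
        return abs(result - expected) < tol
    except matlab.engine.MatlabExecutionError:
        return False                              # syntax/runtime error -> try again
    finally:
        eng.quit()

# Jellyfish Junction warm-up: mean glow intensity of the jellies
print(check_submission("ans_out = mean([3 5 7 9]);", expected=6.0))
```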
## How we built it
* Python game engine for our underwater world
* MATLAB integration for real-time, LeetCode-style challenges
* MongoDB for data storage (player progress, challenges, marine trivia)
## Challenges we ran into
* Seamlessly integrating MATLAB with our Python game engine
* Crafting progressively difficult challenges without overwhelming players
* Balancing education and entertainment (fish puns have limits!)
## Accomplishments that we're proud of
* Created a unique three-part underwater journey for MATLAB learning
* Successfully merged MATLAB, Python, and MongoDB into a cohesive game
* Developed a rewarding puzzle system that tracks player progress
## What we learned
* MATLAB's capabilities are as vast as the ocean
* Gamification can transform challenging subjects into adventures
* The power of combining coding, marine biology, and puzzle-solving in education
## What's next for SeaScript
* Expand with more advanced MATLAB concepts
* Implement multiplayer modes for collaborative problem-solving
* Develop mobile and VR versions for on-the-go and immersive learning
Ready to dive in? Don't let MATLAB be the one that got away – catch the wave with SeaScript and code like a fish boss! 🐠👑 | ## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
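A minimal sketch of one way the further-reading step could work: pull the most frequent non-stopword terms from the transcript and turn them into search links (the stopword list and link format are illustrative choices, not our exact pipeline).

```python
# Minimal sketch of the further-reading step: most frequent non-stopword terms
# from the transcript become search links. Stopword list and link format are
# illustrative choices only.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
             "that", "this", "we", "it", "for", "on", "with", "as", "be"}

def reading_links(transcript: str, top_n: int = 5) -> list[str]:
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [f"https://scholar.google.com/scholar?q={term}"
            for term, _ in counts.most_common(top_n)]

lecture = ("Today we cover eigenvalues and eigenvectors, and how eigenvalues "
           "describe the stretching of a linear transformation.")
print(reading_links(lecture))
```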
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was capturing and processing live audio and providing a real-time transcription of it to all students enrolled in the class. We were able to solve this issue with a Python script that bridges the gap between opening an audio stream and doing operations on it, while still serving the student a live version of the rest of the site.
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone will be on the same page about what is going on and everything that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and we were able to work around the constraints of the APIs to create a working product. We also learned more about other technologies that we used, such as Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens. | ## Inspiration
As college students, we can all relate to having a teacher who was not engaging enough during lectures or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor’s lesson audio in order to differentiate between the various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor’s body language throughout the lesson using motion detection and analysis software. We then store everything in a database and show the data on a dashboard which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them to serve our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single-lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to visualize it seamlessly using a pie chart, providing an easy integration with the database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher's movements and hear the volume while not disturbing the flow of class. Currently, the main sensors utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow it to change from a stationary position to a mobile one. It is able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we kept the interface simple so that all the professor needs to do is scan in using their school ID, then either check their lecture data or start a lecture. Overall, the professor is able to see whether the device is tracking their movements and volume throughout the lecture and view the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in Solidworks in a way that would feel simple for the user to understand, and using Figma for the first time. I had to do a lot of research, watching Amazon's videos to see how they created their Amazon Echo model and looking back at my UI/UX notes from the Google Coursera certification course that I'm taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. I was often confused about how to implement a certain feature I wanted to add, and I overcame this by researching existing documentation on errors and utilizing existing libraries. Some problems couldn't be solved with this method because the logic was specific to our software; fortunately, those just needed time and a lot of debugging, with some help from peers and existing resources. And since React is JavaScript-based, I was able to draw on past experience with JS and Django despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run smoothly and interact in the correct manner. I often ended up in dependency hell and had to rethink the architecture of the whole project to avoid over-engineering it, without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that correspond to different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how to utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but his strengths in that area let him create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman at his first hackathon who was able to build a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just with the coding but with ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience on the website by adding more features and better style designs for the professor to interact with. In addition, we would like to add motion-tracking feedback so the professor gets a general idea of how they should be changing their gestures.
We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand where the students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms. | partial |
## What it does
ColoVR is a virtual experience allowing users to modify their environment with colors and models. It's a relaxing, low-poly experience that is the first of its kind to bring accessible 3D creation to the Google Cardboard platform. It's a great way to get creative in 3D without dropping a ton of money on VR hardware.
## How we built it
We started from the Google Daydream demo scene and extended it to our liking, using Unity and C# to build the scene and handle all the user interactions. Painting and dragging objects around the scene is done with raycasting from the controller's cursor, and we color the scene by changing vertex colors. We also created a palette that lets you change tools and colors and insert meshes, which we hand-modeled as low-poly assets in Maya.
## Challenges we ran into
The main challenge was finding a proper mechanism for coloring the geometry in the scene. We first tried to paint the mesh face by face, but the way the Unity mesh was set up gave us no way of accessing an entire face. We then looked into an asset that paints directly onto the texture, but updating textures at runtime proved too slow for real-time painting. We ended up writing a vertex shader that interpolates between vertex colors, which was the fastest way we could render real-time painting: we change all the vertices belonging to the face under the cursor, which emulates painting a triangulated face. Our second biggest challenge was using raycasting to find the object we were interacting with; we had to navigate the Unity API and get acquainted with its physics raycaster and mesh colliders to properly implement all interaction with the cursor. Our third challenge was making use of the controller that Google Daydream supplied us with - it was great but very sandboxed, and with only two available buttons we had to find a way to change all colors, insert different meshes, and interact with the objects.
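The actual implementation is C# in Unity, but the underlying data manipulation is simple enough to sketch abstractly. Here it is in Python with NumPy and made-up mesh data (the real version would set per-vertex colors on the Unity mesh instead):

```
import numpy as np

# Minimal sketch: a mesh as vertex positions, per-vertex colors, and triangles
# (each triangle is three indices into the vertex array).
vertices = np.random.rand(100, 3).astype(np.float32)
colors = np.ones((100, 4), dtype=np.float32)          # start all-white RGBA
triangles = np.random.randint(0, 100, size=(50, 3))   # 50 made-up faces

def paint_face(face_index, rgba):
    # Color one face by recoloring all three of its vertices; the vertex
    # shader then interpolates these colors across the triangle, which is
    # what makes the face appear painted in real time.
    for vertex_index in triangles[face_index]:
        colors[vertex_index] = rgba

# e.g. the raycast hit face 12 -> paint it red
paint_face(12, (1.0, 0.0, 0.0, 1.0))
```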
## Accomplishments that we're proud of
We're proud of the fact that we were able to get a clean, working project that was exactly what we pictured. People seemed to really enjoy the concept and interaction.
## What we learned
How to optimize for a certain platform - in terms of UI, geometry, textures and interaction.
## What's next for ColoVR
Hats, interactive collaborative space for multiple users. We want to be able to host the scene and make it accessible to multiple users at once, and store the state of the scene (maybe even turn it into an infinite world) where users can explore what past users have done and see the changes other users make in real time. | ## Background
Collaboration is the heart of humanity. From contributing to the rises and falls of great civilizations to helping five sleep deprived hackers communicate over 36 hours, it has become a required dependency in `git@ssh.github.com:hacker/life`.
Plowing through the weekend, we found ourselves shortchanged by the currently available tools for collaborating within a cluster of devices. Every service requires:
1. *Authentication*: Users must sign up and register to use a service.
2. *Contact*: People trying to share information must establish a prior point of contact (share e-mails, phone numbers, etc.).
3. *Unlinked Output*: Shared content is not deep-linked with mobile or web applications.
This is where Grasshopper jumps in. Built as a **streamlined cross-platform contextual collaboration service**, it uses our on-prem installation of Magnet on Microsoft Azure and Google Cloud Messaging to integrate deep-linked commands and execute real-time actions across mobile and web platforms clustered near each other. It completely gets rid of the overhead of authenticating users and sharing contacts - all sharing is done locally through GPS-enabled devices.
## Use Cases
Grasshopper lets you collaborate locally between friends/colleagues. We account for sharing application information at the deepest contextual level, launching instances with accurately prepopulated information where necessary. As this data is compatible with all third-party applications, the use cases can shoot through the sky. Here are some applications that we accounted for to demonstrate the power of our platform:
1. Share a video within a team through their mobile's native YouTube app **in seconds**.
2. Instantly play said video on a bigger screen by hopping the video over to Chrome on your computer.
3. Share locations on Google maps between nearby mobile devices and computers with a single swipe.
4. Remotely hop important links while surfing your smartphone over to your computer's web browser.
5. Rick Roll your team.
## What's Next?
Sleep. | ## Inspiration
Metaverse, VR, and games today lack true immersion. Even in the Metaverse, you exist as a phantom from the waist down. The movement of your elbows is predicted by an algorithm and can look unstable and jittery. Worst of all, you have to use joycons to do something like waterbend or spawn a fireball in your open palm.
## What it does
We built an iPhone powered full-body 3D tracking system that captures every aspect of the way that you move, and it costs practically nothing. By leveraging MediaPipePose's precise body part tracking and Unity's dynamic digital environment, it allows users to embody a virtual avatar that mirrors their real-life movements with precision. The use of Python-based sockets facilitates real-time communication, ensuring seamless and immediate translation of physical actions into the virtual world, elevating immersion for users engaging in virtual experiences.
## How we built it
To create our real-life full-body tracking avatar, we integrated MediaPipePose with Unity and utilized Python-based sockets. Initially, we employed MediaPipePose's computer vision capabilities to capture precise body part coordinates, forming the avatar's basis. Simultaneously, we built a dynamic digital environment within Unity to house the avatar. The critical link connecting these technologies was established through Python-based sockets, enabling real-time communication. This integration seamlessly translated users' physical movements into their virtual avatars, enhancing immersion in virtual spaces.
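As a minimal sketch of this capture-and-stream loop (Python; the UDP transport, port, and message format here are assumptions for illustration, not necessarily what our build uses):

```
import json
import socket

import cv2
import mediapipe as mp

# Stream 3D pose landmarks from the camera to a local Unity listener over UDP.
UNITY_ADDR = ("127.0.0.1", 5052)  # placeholder port the Unity side would listen on
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

pose = mp.solutions.pose.Pose(model_complexity=1)
capture = cv2.VideoCapture(0)

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_world_landmarks:
        # Flatten the 33 landmarks into [x, y, z, ...] so Unity can parse them cheaply.
        coords = []
        for lm in results.pose_world_landmarks.landmark:
            coords.extend([lm.x, lm.y, lm.z])
        sock.sendto(json.dumps(coords).encode(), UNITY_ADDR)
```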
## Challenges we ran into
There were a number of issues. We first used MediaPipe Holistic, but then realized it was a legacy system and we couldn't get 3D coordinates for the hands. We then transitioned to using MediaPipe Pose for the person's body, cutting out small sections of the image where we detected hands and running MediaPipe Hands on those sub-images, to capture both the position of the body and the position of the hands. The math required to map the local coordinate system of the hand tracking into the global coordinate system of the full-body pose was difficult. There were latency issues from Python to Unity that had to be resolved by decreasing the number of data points, and we also had to use techniques like an exponential moving average to make the movements smoother. And naturally, there were hundreds of bugs to resolve in parsing, moving, storing, and working with the data from these deep learning CV models.
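For reference, the exponential moving average mentioned above amounts to a one-liner; this Python sketch (with a made-up noisy landmark stream and an illustrative smoothing factor) shows the idea:

```
import numpy as np

def smooth(prev, new, alpha=0.3):
    # Exponential moving average: alpha weights the newest frame,
    # (1 - alpha) keeps the history, which damps the jitter.
    return alpha * new + (1.0 - alpha) * prev

# Demo on a fake stream of noisy 33 x 3 landmark frames.
rng = np.random.default_rng(0)
frames = [rng.normal(size=(33, 3)) for _ in range(10)]
state = frames[0]
for frame in frames[1:]:
    state = smooth(state, frame)
print(state.shape)
```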
## Accomplishments that we're proud of
We're proud of achieving full-body tracking, which enhances user immersion. Equally satisfying is how smoothly the body moves with little latency, ensuring a fluid user experience.
## What we learned
For one, we learned how to integrate MediaPipe Pose with Unity, all the while learning socket programming for data transfer via servers in Python. We learned how to integrate C# with Python, since Unity only works with C# scripts and the MediaPipe Pose solution we used runs in Python. We also got to know OpenCV and computer vision pretty intimately, since we had to work around a number of limitations in the libraries and old legacy code lurking around Google. There was also an element of asynchronous code handling via queues. Cool to see the data structure in action!
## What's next for Full Body Fusion!
Optimizing the hand mappings, implementing gesture recognition, and adding a real avatar in Unity instead of white dots. | partial |
# Inspiration and Product
There's a certain feeling we all have when we're lost. It's a combination of apprehension and curiosity – and it usually drives us to explore and learn more about what we see. It happens to be the case that there's a [huge disconnect](http://www.purdue.edu/discoverypark/vaccine/assets/pdfs/publications/pdf/Storylines%20-%20Visual%20Exploration%20and.pdf) between that which we see around us and that which we know: the building in front of us might look like an historic and famous structure, but we might not be able to understand its significance until we read about it in a book, at which time we lose the ability to visually experience that which we're in front of.
Insight gives you actionable information about your surroundings in a visual format that lets you stay immersed in them: whether that's exploring them, or finding your way through them. The app puts the true directions of the places around you where you can see them, and shows you descriptions of them as you turn your phone around. Need directions to one of them? Get them without leaving the app. Insight also supports deeper exploration of what's around you: everything from restaurant ratings to the history of the buildings you're near.
## Features
* View places around you heads-up on your phone - as you rotate, your field of vision changes in real time.
* Facebook Integration: trying to find a meeting or party? Call your Facebook events into Insight to get your bearings.
* Directions, wherever, whenever: surveying the area and found where you want to be? Touch and get directions instantly.
* Filter events based on your location. Want a tour of Yale? Touch to filter only Yale buildings, and learn about the history and culture. Want to get a bite to eat? Change to a restaurants view. Want both? You get the idea.
* Slow day? Change your radius to a short distance to filter out locations. Feeling adventurous? Change your field of vision the other way.
* Want to get the word out on where you are? Automatically check in with Facebook at any of the locations you see around you, without leaving the app.
# Engineering
## High-Level Tech Stack
* NodeJS powers a RESTful API hosted on Microsoft Azure.
* The API server takes advantage of a wealth of Azure's computational resources:
+ A Windows Server 2012 R2 Instance, and an Ubuntu 14.04 Trusty instance, each of which handle different batches of geospatial calculations
+ Azure internal load balancers
+ Azure CDN for asset pipelining
+ Azure automation accounts for version control
* The Bing Maps API suite, which offers powerful geospatial analysis tools:
+ RESTful services such as the Bing Spatial Data Service
+ Bing Maps' Spatial Query API
+ Bing Maps' AJAX control, externally through direction and waypoint services
* iOS Objective-C clients interact with the server RESTfully and display results as they are parsed
## Application Flow
iOS handles the entirety of the user interaction layer and authentication layer for user input. Users open the app, and, if logging in with Facebook or Office 365, proceed through the standard OAuth flow, all on-phone. Users can also opt to skip the authentication process with either provider (in which case they forfeit the option to integrate Facebook events or Office365 calendar events into their views).
After sign-in (assuming the user grants permission for use of these resources), and upon startup of the camera, requests are sent with the user's current location to a central server on an Ubuntu box on Azure. The server parses that location data and initiates multithreaded Node processes on the Windows 2012 R2 instances. These processes do the following, and more:
* Geospatial radial search schemes with data from Bing
* Location detail API calls from Bing Spatial Query APIs
* Review data about relevant places from a slew of APIs
After the data is all present on the server, it's combined and analyzed, also on R2 instances, via the following:
* Haversine calculations for distance measurements, in accordance with radial searches
* Heading data (to make client side parsing feasible)
* Condensation and dynamic merging - asynchronously cross-checking the collected data to determine which events are closest
Ubuntu brokers and manages the data, sends it back to the client, and prepares for and handles future requests.
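The production code for these steps is Node on Azure; purely as an illustration of the distance-and-heading computation described above, a Python sketch with simplified math and made-up coordinates looks roughly like this:

```
import math

R = 6371000  # Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in meters.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return R * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def bearing_deg(lat1, lon1, lat2, lon2):
    # Compass heading from the user toward a place, 0-360 degrees.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Keep only places within the user's chosen radius, tagged with their heading.
user = (41.3083, -72.9279)  # example coordinates
places = [("Bass Library", 41.3110, -72.9273), ("Harkness Tower", 41.3084, -72.9305)]
radius_m = 500
visible = [
    (name, haversine_m(*user, lat, lon), bearing_deg(*user, lat, lon))
    for name, lat, lon in places
    if haversine_m(*user, lat, lon) <= radius_m
]
print(visible)
```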
## Other Notes
* The most intense calculations involved the application of the [Haversine formulae](https://en.wikipedia.org/wiki/Haversine_formula), i.e. for two points on a sphere, the central angle between them can be described as:
![Haversine 1](https://upload.wikimedia.org/math/1/5/a/15ab0df72b9175347e2d1efb6d1053e8.png)
and the distance as:
![Haversine 2](https://upload.wikimedia.org/math/0/5/5/055b634f6fe6c8d370c9fa48613dd7f9.png)
(the result of which is non-standard/non-Euclidean due to the Earth's curvature). The results of these formulae translate into the placement of locations on the viewing device.
These calculations are handled by the Windows R2 instance, essentially running as a computation engine. All communications are RESTful between all internal server instances.
## Challenges We Ran Into
* *iOS and rotation*: there are a number of limitations in iOS that prevent interaction with the camera in landscape mode, which was a problem given the need for users to see a wide field of view. For one thing, the requisite data registers aren't even accessible via daemons when the phone is in landscape mode. This was the root of the vast majority of our problems in iOS, since we were unable to use any inherited or pre-made views (we couldn't rotate them) - we had to build all of our views from scratch.
* *Azure deployment specifics with Windows R2*: running a pure calculation engine (written primarily in C# with ASP.NET network interfacing components) was tricky at times to set up and get logging data for.
* *Simultaneous and asynchronous analysis*: Simultaneously parsing asynchronously-arriving data with uniform Node threads presented challenges. Our solution was ultimately a recursive one that involved checking the status of other resources upon reaching the base case, then using that knowledge to better sort data as the bottoming-out step bubbled up.
* *Deprecations in Facebook's Graph APIs*: we needed to use the Facebook Graph APIs to query specific Facebook events for their locations: a feature only available in a slightly older version of the API. We thus had to use that version, concurrently with the newer version (which also had unique location-related features we relied on), creating some degree of confusion and required care.
## A few of Our Favorite Code Snippets
A few gems from our codebase:
```
var deprecatedFQLQuery = '...
```
*The story*: in order to extract geolocation data from events vis-a-vis the Facebook Graph API, we were forced to use a deprecated API version for that specific query, which proved challenging in how we versioned our interactions with the Facebook API.
```
addYaleBuildings(placeDetails, function(bulldogArray) {
addGoogleRadarSearch(bulldogArray, function(luxEtVeritas) {
...
```
*The story*: dealing with quite a lot of Yale API data meant we needed to be creative with our naming...
```
// R is the earth's radius in meters
var a = R * 2 * Math.atan2(Math.sqrt(Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) + Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2)), Math.sqrt(1 - (Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) + Math.cos((Math.PI / 180) * latitude1) * Math.cos((Math.PI / 180) * latitude2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2))));
```
*The story*: our implementation of the Haversine formula became cumbersome very quickly, though it was changed and condensed shortly after we noticed its proliferation. Degree/radian mismatches between APIs didn't make things any easier. | ## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur. | ## Where we got the spark?
**No one is born without talents**.
Many of us faced this situation in our childhood: no one gets a chance to reveal their skills or receives guidance for their ideas. Some talents are buried without proper mentorship, and we often don't even have peers to talk to about them and develop our skills in the respective field. Even in college, beginners have trouble with implementation. So we started working on a solution to help others who find themselves in this same crisis.
## How it works?
**Connect with neuron of your same kind**
Based on the problem we faced, we are bridging bloomers in each field to experts and to people in the same field who need a teammate or a friend to develop an idea with. Through the guidance of experts and experienced professors, they can learn about the resources needed to grow in that field.
Users can also connect with people all over the globe using a built-in language translator, which makes everyone feel at home.
## How we built it
**1. Problem analysis:**
We looked at education-related problems around the globe, came across several of them, and chose one whose solution would address several problems at once.
**2. Idea development:**
We examined the problem we chose, the features and solutions that were missing for it, resolved as many open questions as possible, and developed the idea as far as we could.
**3. Prototype development:**
We developed a working prototype and gained good experience building it.
## Challenges we ran into
Our plan is to get our application to every bloomer and expert, but what will make them join our community? It will be hard to convince them that our application will help them learn new things.
## Accomplishments that we're proud of
The jobs that are popular today may not be popular in 10 years. The world always looks for a better version of its current self. We are satisfied that our idea can help hundreds of children like us who don't yet know about the new things in today's world. Our application may help them learn about these things earlier than usual, which may help them follow a path they are interested in. We are proud to be part of their development.
## What we learned
We learnt that many people suffer from a lack of help with their ideas and projects, and we felt helpless when we realized this. So we planned to build a web application that connects them with experts and with peers of their own kind. In short: **guidance is important. No one is born a pro.**
We learnt how to help people understand new things based on their interests by guiding them along the path to their dream.
## What's next for EXPERTISE WITH
We're planning to advertise about our web application through all social medias and help all the people who are not able to get help for development their idea/project and implement from all over the world. to the world. | winning |
## Inspiration
My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor that didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals.
## What it does
Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about their clients and financial assets as well as past interactions to create a deep and objective measure of interaction quality and maximize it through optimal matches.
## How we built it
The whole program is built in Python, using several libraries for gathering financial data, processing it, and building scalable models on AWS. The main differentiator of our model is its full utilization of past data during training to make analyses more holistic and accurate. Instead of going with a single classification solution or neural network, we combine several models: we analyze specific user features and classify broad categories before the main model, where we build a regression model for each category.
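As a rough sketch of that two-stage idea (Python with scikit-learn; the feature count, categories, and random data are illustrative placeholders, not our production schema):

```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))             # client/advisor features (risk tolerance, age, ...)
category = rng.integers(0, 3, size=500)   # broad client category labels
quality = rng.normal(size=500)            # measured interaction quality (regression target)

# Stage 1: classify the broad client profile.
classifier = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, category)

# Stage 2: one regression model per category, predicting interaction quality.
regressors = {}
for c in np.unique(category):
    mask = category == c
    regressors[c] = Ridge(alpha=1.0).fit(X[mask], quality[mask])

def predict_quality(features):
    # Score a prospective client-advisor pairing.
    c = classifier.predict(features.reshape(1, -1))[0]
    return regressors[c].predict(features.reshape(1, -1))[0]

print(predict_quality(rng.normal(size=6)))
```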
## Challenges we ran into
The group member crucial to building a front end could not make it, so our designs are not fully interactive. We also had a lot to code but not enough time to debug, which leaves the software unable to fully work yet. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as rating clients' investment habits on a numerical scale, and we assigned a numerical bonus to clients who consistently invest at a certain rate. The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea.
## Accomplishments that we're proud of
Learning a whole new machine learning framework (SageMaker), crafting custom, objective algorithms for measuring interaction quality, and fully utilizing past interaction data during training through an innovative approach to category-based model building.
## What we learned
Coding might not take that long, but making it fully work takes just as much time.
## What's next for Resonance
Finish building the model and possibly trying to incubate it. | ![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png)
# What is gitStarted?
GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules.
## Installation
To install:
```
npm install
```
## Usage
To run:
```
gulp
```
## Credits
Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)!
Made with :heart: in Kingston, Ontario for QHacks 2016 | ## Inspiration
The inspiration for the project was to design a model that could detect fake loan entries hidden amongst a set of real loan entries. Our group was also eager to design a dashboard to help visualize these statistics - many similar services are good at identifying outliers in data but are unfriendly to the user. We wanted businesses to be able to look at and understand fake data immediately, because it's important to recognize it quickly.
## What it does
Our project handles back-end and front-end tasks. Specifically, on the back-end, the project uses libraries like Pandas in Python to parse input data from CSV files. Then, after creating histograms and linear regression models that detect outliers in the given input, the data is passed to the front-end, which displays the histogram and presents the outliers to the user for an easy experience.
## How we built it
We built this application using Python on the back-end. We utilized Pandas for efficiently storing data in DataFrames, and NumPy and scikit-learn for statistical analysis. On the server side, we built the website in HTML/CSS and used Flask and Django to handle events on the website and interaction with other parts of the code. This involved taking a CSV file from the user, parsing it into a string, running our back-end model, and displaying the results to the user.
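A minimal sketch of the outlier-flagging step (Pandas and scikit-learn; the column names, random data, and threshold are placeholders rather than the actual loan schema):

```
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Placeholder loan data; the real app parses these columns from the uploaded CSV.
df = pd.DataFrame({
    "income": np.random.default_rng(0).normal(60_000, 15_000, 300),
    "loan_amount": np.random.default_rng(1).normal(20_000, 5_000, 300),
})

# Fit a simple regression and flag rows whose residual is far from the trend.
model = LinearRegression().fit(df[["income"]], df["loan_amount"])
residuals = df["loan_amount"] - model.predict(df[["income"]])
threshold = 3 * residuals.std()
df["suspected_fake"] = residuals.abs() > threshold

print(df["suspected_fake"].sum(), "entries flagged for review")
```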
## Challenges we ran into
There were many front-end and back-end issues, but they ultimately helped us learn. On the front-end, the biggest problem was using Django with the browser to bring this experience to the user. Also, on the back-end, we found using Keras to be an issue during the start of the process, so we had to switch our frameworks mid-way.
## Accomplishments that we're proud of
An accomplishment was being able to bring both sides of the development process together. Specifically, creating a UI with a back-end was a painful but rewarding experience. Also, implementing cool machine learning models that could actually find fake data was really exciting.
## What we learned
One of our biggest lessons was to use libraries more effectively to tackle the problem at hand. We started creating a machine learning model by using Keras in Python, which turned out to be ineffective to implement what we needed. After much help from the mentors, we played with other libraries that made it easier to implement linear regression, for example.
## What's next for Financial Outlier Detection System (FODS)
Eventually, we aim to use a sophisticated statistical tools to analyze the data. For example, a Random Forrest Tree could have been used to identify key characteristics of data, helping us decide our linear regression models before building them. Also, one cool idea is to search for linearly dependent columns in data. They would help find outliers and eliminate trivial or useless variables in new data quickly. | winning |
## Inspiration
The inspiration for merchflow was the Google form that PennApps sent out regarding shipping swag. We found the question regarding distribution on campuses particularly odd, but it made perfect sense after giving it a bit more thought. After all, shipping a few large packages is cheaper than many small shipments. But then we started considering the logistics of such an arrangement, particularly how the event organizers would have to manually figure out these shipments. Thus the concept of merchflow was born.
## What it does
Merchflow is a web app that allows event organizers (like for a hackathon) to easily determine the optimal shipping arrangement for swag (or, more generically, for any package) to event participants. Below is our design for merchflow.
First, the event organizer provides merchflow with the contact info (email) of the event participants. Merchflow will then send out emails on behalf of the organizer with a link to a form and an event-specific code.
The form will ask for information such as shipping address, as well as whether they would be willing to distribute swag to other participants nearby. This information is sent back to merchflow's underlying Firestore database and updates the organizer's dashboard in real time.
Once the organizer is ready to ship, merchflow will compute the best shipping arrangement based on the participants' locations and willingness to distribute. This is done according to a shipping algorithm that we define to minimize the number of individual shipments required (which in turn lowers the overall shipping costs for the organizer).
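The exact algorithm is still being defined; one simple greedy version we have in mind looks like this Python sketch (the coordinates and the straight-line `distance_km` helper are illustrative stand-ins, not final design decisions):

```
import math

def distance_km(a, b):
    # Rough straight-line distance between two (lat, lon) points, in km.
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    return 6371 * math.hypot(dlat, dlon)

# (name, (lat, lon), willing_to_distribute)
participants = [
    ("Ana", (39.95, -75.19), True),
    ("Ben", (39.96, -75.20), False),
    ("Cam", (40.44, -79.99), True),
    ("Dee", (40.45, -80.00), False),
]

RADIUS_KM = 15
unassigned = {name for name, _, _ in participants}
shipments = []  # one bulk shipment per chosen distributor

# Greedily pick willing distributors and attach every unassigned neighbor in range.
for name, loc, willing in participants:
    if not willing or name not in unassigned:
        continue
    group = [n for n, l, _ in participants if n in unassigned and distance_km(loc, l) <= RADIUS_KM]
    for n in group:
        unassigned.discard(n)
    shipments.append((name, group))

# Anyone left over gets an individual shipment.
shipments.extend((n, [n]) for n in unassigned)
print(shipments)
```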
## How we built it
Given the scope of PennApps and the limited time we had, we decided to focus on designing the concept of Merchflow and building out its front end experience. While there is much work to be done in the backend, we believe what we have so far provides a good visualization of its potential.
Merchflow is built using react.js and firebase (and related services such as Firestore and Cloud Functions). We ran into many issues with Firebase and ultimately were not able to fully utilize it; however, we were able to successfully deploy the web app to the provided host.
With React, we used Bootstrap and started off with the Airframe React template, building our own dashboard, tabs, forms, tables, etc. custom to our design and expectations for merchflow. The dashboard and tabs are designed and built with responsiveness in mind, as well as an intention to pursue a minimalistic, clean style. For functionality where our backend isn't operational yet, we used faker.js to populate the UI with data to simulate the real experience an event planner would have.
## Challenges I ran into
During the development of merchflow, we ran into many issues. The main one was that we were unable to get Firebase authentication working with our React app. We tried following several tutorials and documentation pages; however, it was just something that we were unable to resolve in the time span of PennApps. Therefore, we focused our energy on polishing up the front end and the design of the project so that we can relay our project concept well even without the backend being fully operational.
Another issue that we encountered was regarding Firebase deployment (while we weren't able to connect to any Firebase SDKs, we were still able to register the web app as a Firebase app and deploy to the provided hosted site). During deployment, we noticed that the color theme was not displaying properly compared to what we had locally. Since we specify the colors in node\_modules (a folder that we do not commit to Git), we thought that by moving the specific color-variable .scss file out of node\_modules and changing the import paths, we would be able to fix it. And it did work, but it took quite some time to confirm this because the browser had cached the site prior to the change, so it didn't propagate immediately.
## Accomplishments that I'm proud of
We are very proud of the level of polish in our design and React front end. As a concept, we fleshed out merchflow quite extensively and considered many different aspects and features that would be required of a real service that event organizers would actually use. This includes dealing with authentication, data storage, and data security. Our diagram describes the infrastructure of merchflow quite well and clearly lays out the work ahead of us.
Likewise, we spent hours reading through how the airframe template was built in the first place before being able to customize and add on top of it, and in the process gained a lot of insight into how React projects should be structured and how each file and component connects with each other. Ultimately, we were able to turn what we dreamed of in our designs into reality that we can present to someone else.
## What I learned
As a team, we learned a lot about web development (which neither of us is particularly strong in), specifically regarding React and Firebase. For React, we didn't know the full extent of what modularizing components could bring in terms of scale and clarity. We also learned the workings of SCSS and JavaScript, including the faker.js package, on the fly as we tried to build out merchflow's front end.
## What's next for merchflow
While we are super excited about our front end, unfortunately, there are still a few more gaps to turn merchflow into an operational tool for event organizers to utilize, primarily dealing with the backend and Firebase. We need to resolve the Firebase connection issues that we were experiencing so we can actually get a backend working for merchflow.
After we are able to integrate Firebase into the React app, we can start connecting the fields and participant list to Firestore, which will maintain these documents keyed by the event organizer's user ID (preventing unauthorized access and modification).
Once that is complete, we can focus on the two main features of merchflow: sending out emails and calculating the best shipping arrangement. Both of these features would be implemented via a Cloud Function and would work with the underlying data stored in Firestore. Sending out emails could be achieved using a library such as Twilio SendGrid using the emails the organizer has provided. Computing the best arrangement would require a bit more work to figure out an algorithm to work with. Regardless of algorithm, it will likely utilize Google Maps API (or some other map API) in order to calculate the distance between addresses (and thus determine viability for proxy distribution). We would also need to utilize some service to programmatically generate (and pay for) shipping labels. | ## Inspiration
iPonzi started off as a joke between us, but we decided that PennApps was the perfect place to make our dream a reality.
## What it does
The app requires the user to sign up using an email and/or social logins. After purchasing the application and creating an account, you can refer your friends to the app. For every person you refer, you are given $3, and the app costs $5. All proceeds will go to Doctors' Without Borders. A leader board of the most successful recruiters and the total amount of money donated will be updated.
## How I built it
Google Polymer, service workers, javascript, shadow-dom
## Challenges I ran into
* Learning a new framework
* Live deployment to firebase hosting
## Accomplishments that I'm proud of
* Mobile like experience offline
* App shell architecture and subsequent load times.
* Contributing to pushing the boundaries of web
## What I learned
* Don't put payment API's into production in 2 days.
* DOM module containment
## What's next for iPonzi
* Our first donation
* Expanding the number of causes we support by giving the user a choice of where their money goes.
* Adding additional features to the app
* Production | ## Why We Created **Here**
As college students, one question that we catch ourselves asking over and over again is – “Where are you studying today?” One of the most popular ways for students to coordinate is through texting. But messaging people individually can be time consuming and awkward for both the inviter and the invitee—reaching out can be scary, but turning down an invitation can be simply impolite. Similarly, group chats are designed to be a channel of communication, and as a result, a message about studying at a cafe two hours from now could easily be drowned out by other discussions or met with an awkward silence.
Just as Instagram simplified casual photo sharing from tedious group-chatting through stories, we aim to simplify casual event coordination. Imagine being able to efficiently notify anyone from your closest friends to lecture buddies about what you’re doing—on your own schedule.
Fundamentally, **Here** is an app that enables you to quickly notify either custom groups or general lists of friends of where you will be, what you will be doing, and how long you will be there for. These events can be anything from an open-invite work session at Bass Library to a casual dining hall lunch with your philosophy professor. It’s the perfect dynamic social calendar to fit your lifestyle.
Groups are customizable, allowing you to organize your many distinct social groups. These may be your housemates, Friday board-game night group, fellow computer science majors, or even a mixture of them all. Rather than having exclusive group chat plans, **Here** allows for more flexibility to combine your various social spheres, casually and conveniently forming and strengthening connections.
## What it does
**Here** facilitates low-stakes event invites between users who can send their location to specific groups of friends or a general list of everyone they know. Similar to how Instagram lowered the pressure involved in photo sharing, **Here** makes location and event sharing casual and convenient.
## How we built it
UI/UX Design: Developed high fidelity mockups on Figma to follow a minimal and efficient design system. Thought through user flows and spoke with other students to better understand needed functionality.
Frontend: Our app is built on React Native and Expo.
Backend: We created a database schema and set up in Google Firebase. Our backend is built on Express.js.
All team members contributed code!
## Challenges
Our team consists of half first years and half sophomores. Additionally, the majority of us have never developed a mobile app or used these frameworks. As a result, the learning curve was steep, but eventually everyone became comfortable with their specialties and contributed significant work that led to the development of a functional app from scratch.
Our idea also addresses a simple problem which can conversely be one of the most difficult to solve. We needed to spend a significant amount of time understanding why this problem has not been fully addressed with our current technology and how to uniquely position **Here** to have real change.
## Accomplishments that we're proud of
We are extremely proud of how developed our app is currently, with a fully working database and custom frontend that we saw transformed from just Figma mockups to an interactive app. It was also eye opening to be able to speak with other students about our app and understand what direction this app can go into.
## What we learned
Creating a mobile app from scratch—from designing it to getting it pitch ready in 36 hours—forced all of us to accelerate our coding skills and learn to coordinate together on different parts of the app (whether that is dealing with merge conflicts or creating a system to most efficiently use each other’s strengths).
## What's next for **Here**
One of **Here’s** greatest strengths is the universality of its usage. After helping connect students with students, **Here** can then be turned towards universities to form a direct channel with their students. **Here** can provide educational institutions with the tools to foster intimate relations that spring from small, casual events. In a poll of more than sixty university students across the country, most students rarely checked their campus events pages, instead planning their calendars in accordance with what their friends are up to. With **Here**, universities will be able to more directly plug into those smaller social calendars to generate greater visibility over their own events and curate notifications more effectively for the students they want to target. Looking at the wider timeline, **Here** is perfectly placed at the revival of small-scale interactions after two years of meticulously planned agendas, allowing friends who have not seen each other in a while to casually and conveniently reconnect.
The whole team plans to continue to build and develop this app. We have become dedicated to the idea over these last 36 hours and are determined to see just how far we can take **Here**! | partial |
## Inspiration
We wanted to make financial literacy part of everyday life while also bringing in futuristic technology such as augmented reality to motivate people to learn about finance and business every day. We were looking for a fintech solution that doesn't make financial information accessible only to bankers and the investment community, but also to the young and curious, who can learn in an interesting way from the products they use every day.
## What it does
Our mobile app looks at a company logo, identifies the company, grabs its financial information, recent news, and financial statements, and displays the data in an augmented reality dashboard. Furthermore, we include speech recognition to help those unfamiliar with financial jargon better save and invest.
## How we built it
Built using the Wikitude SDK, which handles augmented reality for mobile applications, combined with a mix of financial data APIs, Highcharts, and other charting/data-visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something that none of us had prior experience with, which made it harder.
## Accomplishments that we're proud of
Overcoming challenges as a professional team and maintaining a strong team atmosphere no matter what, in order to build something that we believe is truly cool and fun to use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build in more charts, financials, and better speech/chatbot abilities into our application. There is also a direction to be more interactive, using hands to play around with our dashboard once we figure that part out. | ## Inspiration
Today Instagram has become a huge platform for social activism and encouraging people to contribute to different causes. I've donated to several of these causes but have always been met with a clunky UI that takes several minutes to fully fill out. With the donation inertia already so high, it makes sense to simplify the process and that's exactly what Activst does.
## What it does
It lets social media users create a personalized profile of the causes they support and set different donation goals. Each cause has a description of the rationale behind the movement and details of where the donation money will be spent. Then the user can specify how much they want to donate and finish the process in one click.
## How we built it
ReactJS, Firebase Hosting, Google Pay, Checkbook API, Google Cloud Functions (Python)
## Challenges we ran into
It's very difficult to facilitate payments directly to donation providers and create a one-click process to do so, as many of the donation providers require specific information from the donor. Using Checkbook's API simplified this process, as we could simply send a check to the organization's email. CORS was also a pain.
## What's next for Activst
Add in full payment integration and find a better way to complete the donation process without needing any user engagement. Launch, beta test, iterate, repeat. The goal is to have instagram users have an activst url in their instagram bio. | # turnip - food was made for sharing.
## Inspiration
After reading about the possible projects, we decided to work with Velo by Wix on a food tech project. What are two things that we students never get tired of? Food and social media! We took some inspiration from Radish and GoodReads to throw together a platform for hungry students.
Have you ever wanted takeout but not been sure what you're in the mood for? turnip is here for you!
## What it does
turnip is a website that connects local friends with their favourite food takeout spots. You can leave reviews and share pictures, as well as post asking around for food recommendations. turnip also keeps track of your restaurant wishlist and past orders, so you never forget to check out that place your friend keeps telling you about. With integrated access to partnered restaurants, turnip would allow members to order right on the site seamlessly and get food delivered for cheap. Since the whole design is built around sharing (sharing thoughts, sharing secrets, sharing food), turnip would also allow users to place orders together, splitting the cost right at payment to avoid having to bring out the calculator and figure out who owes who what.
## How we built it
We used Velo by Wix for the entire project, with Carol leading the design of the website while Amir and Tudor worked on the functionality. We also used Wix's integrated "members" area and forum add-ons to implement the "feed".
## Challenges we ran into
One of the bigger challenges we had to face was that none of us had any experience developing full-stack, so we had to learn on the spot how to write a back-end and try to implement it into our website. It was honestly a lot of fun trying to "speedrun" learning the ins and outs of Javascript. Unfortunately, Wix made the project even more difficult to work on as it doesn't natively support multiple people working on it at the same time. As such, our plan to work concurrently fell through and we had to "pass the baton" when it came to working on the website and keep ourselves busy the rest of the time. Lastly, since we relied on Wix add-ons we were heavily limited in the functionality we could implement with Velo. We still created a few functions; however, much of it was already covered by the add-ons and what wasn't was made very difficult to access without rewriting the functionality of the modules from scratch. Given the time crunch, we made do with what we had and had to restrict the scope for McHacks.
## Accomplishments that we're proud of
We're super proud of how the design of the site came together, and all the art Carol drew really flowed great with the look we were aiming for. We're also very proud of what we managed to get together despite all the challenges we faced, and the back-end functionality we implemented.
## What we learned
Our team really learned about the importance of scope, as well as about the importance of really planning out the project before diving right in. Had we done some research to really familiarize ourselves with Wix and Velo we might have reconsidered the functionalities we would need to implement (and/or implemented them ourselves, which in hindsight would have been better), or chosen to tackle this project in a different way altogether!
## What's next for Turnip
We have a lot of features that we really wanted to implement but didn't quite have the time to.
A simple private messaging feature would have been great, as well as fully implementing the block feature (sometimes we don't get along with people, and that's okay!).
We love the idea that a food delivery service like Radish could implement some of our ideas, like the social media/recommendations/friends feature aspect of our project, and would love to help them do it.
Overall, we're extremely proud of the ideas we have come up with and what we have managed to implement, especially the fact that we kept in mind the environmental impact of meal deliveries with the order sharing. | winning |
## Inspiration
Lots of applications require you to visit their website or app for initial tasks such as signing up on a waitlist to be seen. What if these initial tasks could be performed at the convenience of the user, on whatever platform they want to use (text, Slack, Facebook Messenger, Twitter, web app)?
## What it does
In a medical setting, it allows patients to sign up through platforms such as SMS or Slack to be enrolled on the waitlist. The medical advisor can go through this list one by one and have a video conference with each patient. When the medical advisor is ready to chat, a notification is sent out on the platform the patient signed up with.
## How I built it
I set up this whole app by running microservices on StdLib. There are multiple microservices responsible for different activities such as SMS interaction, database interaction, and Slack interaction. The two frontend Vue websites also run as microservices on StdLib. The endpoints on the database microservice connect to a MongoDB instance running on mLab, and the endpoints on the SMS microservice connect to the MessageBird microservice. The video chat was implemented using TokBox. Each microservice was developed one by one and then connected one by one, like building blocks.
## Challenges I ran into
Initially, getting the microservices to connect to each other, and then debugging microservices remotely.
## Accomplishments that I'm proud of
Connecting multiple endpoints to create a complex system more easily using microservice architecture.
## What's next for Please Health Me
Developing more features such as position in the queue and integrating with more communication channels such as Facebook Messenger. This idea can also be expanded into different scenarios, such as business partners signing up for advice from a busy advisor, or fans being able to sign up and be able to connect with a social media influencer based on their message. | This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing Docker successfully, struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which exposes the REST API we call for each user interaction (see the sketch after this list)
* We wrote our own Botfront database during the last day and night
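For illustration, this is roughly what one of those per-message calls looks like from a client's perspective, assuming the standard Rasa REST channel is enabled (the host, port, and sender ID here are placeholders, not our deployed configuration):

```
import requests

# Each user message is forwarded to the Rasa/Botfront REST webhook,
# which returns the bot's reply messages as a JSON list.
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"  # placeholder host/port

def ask_bot(sender_id, text):
    response = requests.post(RASA_URL, json={"sender": sender_id, "message": text})
    response.raise_for_status()
    return [m.get("text", "") for m in response.json()]

print(ask_bot("demo-user", "How do I reset my password?"))
```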
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some painful pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | ## Inspiration
In this era, with medicines readily available for consumption, people take pills without even consulting a specialist to find out what diagnosis they have. We created this project to identify the specific illnesses a person may have, so that they can seek out the correct treatment instead of self-treating with pills that might harm them in the long run.
## What it does
This is your personal medical assistant bot which takes in a set of symptoms you are experiencing and returns some illnesses that are most closely matched with that set of symptoms. It is powered by Machine learning which enables it to return more accurate data (tested and verified!) as to what issue the person might have.
## How we built it
We used React for building the front-end. We used Python and its vast array of libraries to design the ML model. For building the model, we used scikit-learn. We used pandas for the data processing. To connect the front end with the model, we used FastAPI. We used a Random Forest multi-label classification model to give the diagnosis. Since the model takes in a string, we used scikit-learn's bag-of-words features to convert it to numerical values.
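To illustrate the approach, here is a minimal sketch of that pipeline; the tiny symptom/diagnosis examples and label names are invented for the example and are not our real training data:

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy data standing in for the real symptom/diagnosis dataset.
symptoms = [
    "fever cough sore throat",
    "itchy eyes runny nose sneezing",
    "fever cough shortness of breath",
]
diagnoses = [["flu"], ["allergies"], ["flu", "pneumonia"]]

# Encode the label lists as a binary indicator matrix for multi-label learning.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(diagnoses)

# Bag-of-words features feeding a Random Forest (forests support multi-label targets).
model = Pipeline([
    ("bow", CountVectorizer()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(symptoms, y)

pred = model.predict(["cough and fever with sore throat"])
print(mlb.inverse_transform(pred))  # e.g. [('flu',)]
```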
## Challenges we ran into
Since none of us had significant ML experience, we had to learn how to create an ML model (specifically a multi-label classification model), train it, and get it deployed on time. Furthermore, we found FastAPI's documentation lacking, and we ran into numerous errors while configuring it and interfacing between our front end and back end.
## Accomplishments that we're proud of
Creating a full-stack application that helps the public find a quick diagnosis for the symptoms they experience. Working on the project as a team and brainstorming ideas for the proof of concept and how to get our app working.
We trained the model with our use cases, and it evaluated to 97% accuracy.
## What we learned
Working with Machine Learning and creating a full-stack App. We also learned how to coordinate with the team to work effectively. Reading documentation and tutorials to get an understanding of how the technologies we used work.
## What's next for Medical Chatbot
The first stage for the Medical Chatbot would be to run tests and validate that it works using different datasets. We also plan to add more features in the front end, such as authentication, so that different users can register before using the feature. We can get input from healthcare professionals to increase coverage and add more questions to give the correct prediction. | winning |
## Inspiration
I was walking down the streets of Toronto and noticed how there always seemed to be cigarette butts outside of any building. It felt disgusting to me, especially since they polluted the city so much. After reading a few papers on studies revolving around cigarette butt litter, I noticed that cigarette litter is actually the #1 most littered object in the world and is toxic waste. Here are some quick facts:
* About **4.5 trillion** cigarette butts are littered on the ground each year
* 850,500 tons of cigarette butt litter is produced each year. By weight, this is about **six and a half CN Towers** worth of litter, which is huge!
* In the city of Hamilton, cigarette butt litter can make up to **50%** of all the litter in some years.
* The city of San Francisco spends up to $6 million per year on cleaning up cigarette butt litter
Thus our team decided that we should develop a cost-effective robot to rid the streets of cigarette butt litter.
## What it does
Our robot is a modern-day Wall-E. The main objectives of the robot are to:
1. Safely drive around the sidewalks in the city
2. Detect and locate cigarette butts on the ground
3. Collect and dispose of the cigarette butts
## How we built it
Our basic idea was to build a robot with a camera that could find cigarette butts on the ground and collect those cigarette butts with a roller-mechanism. Below are more in-depth explanations of each part of our robot.
### Software
We needed a method to be able to easily detect cigarette butts on the ground, thus we used computer vision. We made use of this open-source project: [Mask R-CNN for Object Detection and Segmentation](https://github.com/matterport/Mask_RCNN) and [pre-trained weights](https://www.immersivelimit.com/datasets/cigarette-butts). We used a Raspberry Pi and a Pi Camera to take pictures of cigarettes, process the image with TensorFlow, and then output the coordinates of the cigarette's location for the robot. The Raspberry Pi would then send these coordinates to an Arduino over UART.
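To illustrate the hand-off, here is a rough sketch of the Pi-side step: reduce a detection's bounding box to a target point and write it over the serial link. The serial device, baud rate, and message format are assumptions for illustration, and `detect_cigarette()` is a stand-in for the Mask R-CNN inference call:

```
import serial  # pyserial

# Assumed UART device and baud rate; adjust to the actual wiring.
arduino = serial.Serial("/dev/serial0", 9600, timeout=1)

def detect_cigarette():
    """Placeholder for the Mask R-CNN inference step.

    The real version runs the camera frame through the model and returns the
    first detection's bounding box as (y1, x1, y2, x2), or None.
    """
    return (120, 300, 160, 340)  # hard-coded example box

def send_target(box):
    """Reduce a bounding box to its centre point and send it over UART."""
    y1, x1, y2, x2 = box
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    # Simple "x,y\n" message the Arduino can parse with Serial.parseInt().
    arduino.write(f"{cx},{cy}\n".encode())

box = detect_cigarette()
if box is not None:
    send_target(box)
```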
### Hardware
The Arduino controls all the hardware on the robot, including the motors and roller-mechanism. The basic idea of the Arduino code is:
1. Drive a pre-determined path on the sidewalk
2. Wait for the Pi Camera to detect a cigarette
3. Stop the robot and wait for a set of coordinates from the Raspberry Pi to be delivered with UART
4. Travel to the coordinates and retrieve the cigarette butt
5. Repeat
We use sensors such as a gyro and accelerometer to detect the speed and orientation of our robot to know exactly where to travel. The robot uses an ultrasonic sensor to avoid obstacles and make sure that it does not bump into humans or walls.
### Mechanical
We used SolidWorks to design the chassis, the roller/sweeper mechanism, and the camera mounts. The robot itself was assembled from VEX parts, and the mount was 3D printed based on the SolidWorks model.
## Challenges we ran into
1. Distance: Working remotely made designing, working together, and transporting supplies challenging. Each group member worked on independent sections and drop-offs were made.
2. Design Decisions: We constantly had to find the most realistic solution based on our budget and the time we had. This meant that we couldn't cover a lot of edge cases, e.g. what happens if the robot gets stolen, what happens if the robot is knocked over ...
3. Shipping Complications: Some desired parts would not have shipped until after the hackathon. Alternative choices were made and we worked around shipping dates
## Accomplishments that we're proud of
We are proud of being able to efficiently organize ourselves and create this robot even though we worked remotely. We are also proud of being able to create something that contributes to our environment and helps keep our Earth clean.
## What we learned
We learned about machine learning and Mask R-CNN. We had never dabbled with machine learning much before, so it was awesome being able to play with computer vision and detect cigarette butts. We also learned a lot about Arduino and path planning to get the robot to where we need it to go. On the mechanical side, we learned about different intake systems and 3D modeling.
## What's next for Cigbot
There is still a lot to do for Cigbot. Below are some following examples of parts that could be added:
* Detecting different types of trash: It would be nice to be able to gather not just cigarette butts, but any type of trash such as candy wrappers or plastic bottles, and to also sort them accordingly.
* Various Terrains: Though Cigbot is made for the sidewalk, it may encounter rough terrain, especially in Canada, so we figured it would be good to add some self-stabilizing mechanism at some point
* Anti-theft: Cigbot is currently small and can easily be picked up by anyone. This would be dangerous if we left the robot in the streets since it would easily be damaged or stolen (eg someone could easily rip off and steal our Raspberry Pi). We need to make it larger and more robust.
* Environmental Conditions: Currently, Cigbot is not robust enough to handle more extreme weather conditions such as heavy rain or cold. We need a better encasing to ensure Cigbot can withstand extreme weather.
## Sources
* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397372/>
* <https://www.cbc.ca/news/canada/hamilton/cigarette-butts-1.5098782>
* [https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:~:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years](https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:%7E:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years). | ## Inspiration
The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw away recyclable items in the trash. Additionally, the sheer amount of restrictions related to what items can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recycling objects with a machine learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability, offering an additional source of motivation to recycle.
## What it does
RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid in the direction of one compartment or another depending on whether the AI model determines the object is recyclable or not. Once the object slides into the compartment, the lid will re-align itself and prepare for the next item. Ultimately, RecyclAIble autonomously helps people recycle as much as they can and waste less without them doing anything different.
## How we built it
The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a Servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64 and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and inputs it into a TensorFlow convolutional neural network to identify whether the object seen is recyclable or not. This data is then stored in an SQLite database and returned back to the hardware. Based on the AI model's analysis, the Servo motor connected to the Raspberry Pi flips the lid one way or the other, allowing the waste item to slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind.css, and React. This interface provides the user with insight into their current recycling statistics and how they compare to the nationwide averages of recycling.
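For a rough sense of the server side, here is a minimal sketch of the endpoint described above; the route name, table schema, and `classify()` helper are illustrative stand-ins rather than our exact code:

```
import base64, io, sqlite3
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

def classify(image):
    """Placeholder for the TensorFlow CNN; returns True if the item is recyclable."""
    return True  # the real version preprocesses the image and calls model.predict

@app.route("/classify", methods=["POST"])
def classify_item():
    # The Pi sends {"image": "<base64-encoded JPEG>"}.
    img_bytes = base64.b64decode(request.json["image"])
    image = Image.open(io.BytesIO(img_bytes))

    recyclable = classify(image)

    # Log every decision so the web dashboard can show running statistics.
    with sqlite3.connect("recyclaible.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS items (recyclable INTEGER)")
        db.execute("INSERT INTO items VALUES (?)", (int(recyclable),))

    return jsonify({"recyclable": recyclable})

if __name__ == "__main__":
    app.run(port=5000)
```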
## Challenges we ran into
The prototype model had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid to be spun by a single Servo motor and getting the Logitech camera to be propped up to get a top view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources and research. Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble.
## Accomplishments that we're proud of
We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help keep our environment clean.
## What we learned
First and foremost, we learned just how big of a problem under-recycling is in America and throughout the world, and how important recycling is to sustainability. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to recycling. The hackathon motivated us to learn a lot more about our respective technologies - whether it was new errors or desired functions, new concepts and ideas had to be introduced to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals.
## What's next for RecyclAIble
RecyclAIble has a lot of potential as far as development goes. Its AI can be improved with more varied training images of trash items, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality, like dates, tracking features, trends, and the weights of trash, that expand on the existing information and capabilities offered. And we're already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware/sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come. | ## Inspiration
How many times have you been walking around the city and seen trash on the ground, sometimes just centimetres away from a trash can? It can be very frustrating to see people who either have no regard for littering, or just have horrible aim. This is what inspired us to create TrashTalk: trash talk for your trash shots.
## What it does
When a piece of garbage is dropped on the ground within the camera’s field of vision, a speaker loudly hurls insults until the object is picked up. Because what could motivate people to pick up after themselves more than public shaming? Perhaps the promise of a compliment: once the litter is picked up, the trash can will deliver praise, designed to lift the pedestrian’s heart.
The ultrasonic sensor attached to the rim of the can sends a ping to the server when the trash can becomes full, which reduces litter by preventing overfilling. Studies have also shown that programmed encouragement, as opposed to regular maintenance alone, can reduce littering by as much as 25%. On the website, one can view the current 'full' status of the trash can, a bar graph of how much trash is currently inside and outside the can, and how many pieces of trash have been scanned in total. This quantifies TrashTalk's design to drastically reduce littering in public areas, with some nice risk and reward involved for the participant.
## How we built it
We built this project using Next.js, Python, MongoDB, and the Express library, integrated together using HTTP requests to send data to and from the Arduino, computer, and end user.
Our initial idea was made quite early on, but as we ran into challenges, the details of the project changed over time in order to reflect what we could realistically accomplish in one hackathon.
We split up our work so we could cover more ground: Abeer would cover trash detection using AI models that could be run on a Raspberry Pi, Kersh would handle the MongoDB interaction, Vansh would help create the Arduino logic, and Matias would tie the project together.
## Challenges we ran into
We ran into *quite* a few challenges making TrashTalk, and a lot of them had to do with the APIs that we were using for OpenCV. The first major issue was that we were not able to get the Raspberry Pi running, so we migrated all the code onto one of our laptops.
Furthermore, none of the pretrained computer vision models we tried to use to recognize trash would work. We realized with the help of one of the mentors that we could simply use an object detection algorithm, and it was smooth sailing from there.
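For anyone curious what that looks like, here is a minimal OpenCV sketch of the approach: background subtraction flags any sizeable new object that appears in frame, which is enough to notice dropped litter without recognizing what it is (the thresholds and camera index are illustrative, not our exact values):

```
import cv2

cap = cv2.VideoCapture(0)  # laptop webcam standing in for the Pi camera
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=40)

MIN_AREA = 500  # ignore tiny blobs / noise (tune for the real scene)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Clean the mask a little, then look for sizeable new objects.
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    litter = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
    if litter:
        print("Object detected near the can -- cue the trash talk")
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
```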
## Accomplishments that we're proud of
* Getting a final working product together
* Being able to demo to people at the hackathon
* Having an interactive project
## What we learned
We learned so many things during this hackathon due to the varying experience levels in our team. Some members learned how to integrate GitHub with VSCode, while others learned how to use Next.js (SHOUTOUT TO FREDERIC) and motion detection with OpenCV.
## What's next for TrashTalk
The next steps for TrashTalk would be to have more advanced analytics being run on each trash can. If we aim to reduce litter through the smart placement of trashcans along with auditory reminders, having a more accurate kit of sensors, such as GPS, weight sensor, etc. would allow us to have a much more accurate picture of the trash can's usage. The notification of a trash can being full could also be used to alert city workers to optimize their route and empty more popular trash cans first, increasing efficiency. | winning |
OUR VIDEO IS IN THE COMMENTS!! THANKS FOR UNDERSTANDING (WIFI ISSUES)
## Inspiration
As a group of four students who had just completed 4 months of online school, going into our second internship and our first fully remote internship, we were all nervous about how our internships would transition to remote work. When reminiscing about pain points that we faced in the transition to an online work term this past March, the one pain point that we all agreed on was a lack of connectivity and loneliness. Trying to work alone in one's bedroom after experiencing life in the office, where colleagues were a shoulder's tap away for questions about work and there was the noise of keyboards clacking and people zoned into their work, is extremely challenging and demotivating, which decreases happiness and energy, and thus productivity (which decreases energy, and so on...). Having a mentor and steady communication with our teams is something that we all valued immensely during our first co-ops. In addition, some of our workplaces had designated exercise times, or even pre-planned one-on-one activities, such as manager-co-op lunches or walk breaks with company walking groups. These activities and rituals bring structure into a sometimes mundane day, which allows the brain to recharge and return to work fresh and motivated. Upon the transition to working from home, we all found that some days we'd work through lunch without even realizing it, and some days we would be endlessly scrolling through Reddit, as there would be no one there to check in on us and make sure that we were not blocked. Our once much-too-familiar workday structure seemed to completely disintegrate when there was no one there to introduce structure, hold us accountable, and gently enforce proper, suggested breaks. We took these gestures for granted in person, but now they seemed like a luxury, almost impossible to attain.
After doing research, we noticed that we were not alone:
A 2019 Buffer survey asked users to rank their biggest struggles working remotely. Unplugging after work and loneliness were the most common (22% and 19% respectively)
<https://buffer.com/state-of-remote-work-2019>
We set out to create an application that would allow us to facilitate that same type of connection between colleagues and make remote work a little less lonely and socially isolated. We were also inspired by our own online term recently, finding that we had been inspired and motivated when we were made accountable by our friends through usage of tools like shared Google Calendars and Notion workspaces.
As one of the challenges we'd like to enter for the hackathon, the 'RBC: Most Innovative Solution' challenge, which asks teams to address a pain point associated with working remotely in an innovative way, truly captured the issue we were trying to solve.
Therefore, we decided to develop aibo, a centralized application which helps those working remotely stay connected and accountable and maintain relationships with their co-workers, all of which improves a worker's mental health (which in turn has a direct positive effect on their productivity).
## What it does
Aibo, meaning "buddy" in Japanese, is a suite of features focused on increasing the productivity and mental wellness of employees. We focused on features that allow genuine connections in the workplace and help to motivate employees. First and foremost, Aibo uses a matching algorithm to match compatible employees together based on career goals, interests, roles, and time spent at the company, following the completion of a quick survey. These matchings occur multiple times over a customized timeframe selected by the company's host (likely the People Operations team) to ensure that employees receive a wide range of experiences in this process. Once you have been matched with a partner, you are assigned weekly meet-ups with your partner to build that connection. Using Aibo, you can video call your partner and create a shared to-do list; by developing this list together, you can bond over common tasks despite potentially having seemingly very different roles. Partners would have 2 meetings a day: once in the morning, where they would go over to-do lists and goals for the day, and once in the evening, in order to track progress over the course of that day and note tasks that need to be transferred over to the following day.
## How We built it
This application was built with React, JavaScript and HTML/CSS on the front end, along with Node.js and Express on the back end. We used the Twilio chat room API along with Autocode to store our server endpoints and enable a Slack bot notification that POSTs a message in your specific buddy Slack channel when your buddy joins the video calling room.
In total, we used **4 APIs/ tools** for our project.
* Twilio chat room API
* Autocode API
* Slack API for the Slack bots
* Microsoft Azure to work on the machine learning algorithm
When we were creating our buddy app, we wanted to find an effective way to match partners together. After looking over a variety of algorithms, we decided on the K-means clustering algorithm. This algorithm is simple in its ability to group similar data points together and discover underlying patterns; K-means looks for a set number of clusters within the dataset. This was my first time working with machine learning, but luckily, through Microsoft Azure, I was able to create a working training and inference pipeline. The dataset captured each user's role and preferences, and we created n/2 clusters, where n is the number of people searching for a match. This API was then deployed and tested on a web server. Although we weren't able to actively test this API on incoming data from the back end, this is something that we are looking forward to implementing in the future. Working with ML was mainly trial and error, as we had to experiment with a variety of algorithms to find the optimal one for our purposes.
Upon working with Azure for a couple of hours, we decided to pivot towards leveraging another clustering algorithm in order to group employees together based on their answers to the form they fill out when they first sign up on the aibo website. We looked into PuLP, a Python LP modeler, and then looked into hierarchical clustering. This seemed similar to our initial approach with Azure, and after looking into the advantages of this algorithm over others for our purpose, we decided to choose this one for clustering the form responders. Some pros of hierarchical clustering include:
1. We do not need to specify the number of clusters required for the algorithm: the algorithm determines this for us, which is useful as it automates sorting through the data to find similarities in the answers.
2. Hierarchical clustering was quite easy to implement in a Spyder notebook.
3. The dendrogram produced was very intuitive and helped me understand the data in a holistic way.
The type of hierarchical clustering used was agglomerative clustering, or AGNES. It's known as a bottom-up algorithm: it starts with each object in its own singleton cluster, and pairs of clusters are successively merged until all objects have been merged into one big cluster. To decide which clusters should be combined, we need a method for measuring the similarity between objects; I used Euclidean distance to calculate this (dis)similarity information.
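As a small illustration of this pivot, the sketch below groups toy survey answers with scikit-learn's agglomerative clustering (Euclidean distance by default); the survey columns and answers are invented for the example:

```
import pandas as pd
from sklearn.cluster import AgglomerativeClustering

# Toy survey responses standing in for the real sign-up form.
responses = pd.DataFrame({
    "name": ["Mathurah", "Ayla", "Sam", "Priya"],
    "role": ["developer", "developer", "designer", "designer"],
    "interest": ["hiking", "hiking", "baking", "gaming"],
})

# One-hot encode the categorical answers so Euclidean distance is meaningful.
features = pd.get_dummies(responses[["role", "interest"]])

# Bottom-up (agglomerative) clustering; n/2 clusters pairs people into buddy groups.
n_clusters = len(responses) // 2
labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)

responses["buddy_group"] = labels
print(responses[["name", "buddy_group"]])
```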
This project was designed solely in Figma, with both the illustrations and the product itself created there. These designs required hours of deliberation and research to determine the customer requirements and engineering specifications, and to develop a product that is accessible and could be used by people in all industries. In terms of determining which features we wanted to include in the web application, we carefully read through the requirements for each of the challenges we wanted to compete within and decided to create an application that satisfied all of these requirements.
After presenting our original idea to a mentor at RBC, we learned more about remote work at RBC and, having not yet completed an online internship ourselves, about the pain points and problems being faced by online workers, such as:
1. Isolation
2. Lack of feedback
From there, we were able to select the features to integrate including: Task Tracker, Video Chat, Dashboard, and Matching Algorithm which will be explained in further detail later in this post.
Technical implementation for AutoCode:
Using Autocode, we were able to easily and successfully link popular APIs like Slack and Twilio to ensure the productivity and functionality of our app. The Autocode source code is linked below:
Autocode source code here: <https://autocode.com/src/mathurahravigulan/remotework/>
**Creating the slackbot**
```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request event
* @returns {object} result Your return value
*/
module.exports = async (context) => {
console.log(context.params)
if (context.params.StatusCallbackEvent === 'room-created') {
await lib.slack.channels['@0.7.2'].messages.create({
channel: `#buddychannel`,
text: `Hey! Your buddy started a meeting! Hop on in: https://aibo.netlify.app/ and enter the room code MathurahxAyla`
});
  }
  // Default response returned to the webhook caller (left over from the Autocode template)
  let result = {};
result.message = `Welcome to Autocode! 😊`;
return result;
};
```
**Connecting Twilio to autocode**
```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const twilio = require('twilio');
const AccessToken = twilio.jwt.AccessToken;
const { VideoGrant } = AccessToken;
// Create a base access token using the Twilio account credentials
const generateToken = () => {
  return new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY,
    process.env.TWILIO_API_SECRET
  );
};
// Grant video access (scoped to a room, if one is given) and tag the token with the user's identity
const videoToken = (identity, room) => {
  let videoGrant;
  if (typeof room !== 'undefined') {
    videoGrant = new VideoGrant({ room });
  } else {
    videoGrant = new VideoGrant();
  }
  const token = generateToken();
  token.addGrant(videoGrant);
  token.identity = identity;
  return token;
};
/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request event
* @returns {object} result Your return value
*/
module.exports = async (context) => {
console.log(context.params)
const identity = context.params.identity;
const room = context.params.room;
const token = videoToken(identity, room);
return {
token: token.toJwt()
}
};
```
From the product design perspective, it is possible to explain certain design choices:
<https://www.figma.com/file/aycIKXUfI0CvJAwQY2akLC/Hack-the-6ix-Project?node-id=42%3A1>
1. As shown in the prototype, the user has full independence to move through the designs as one would in a typical website and this supports the non sequential flow of the upper navigation bar as each feature does not need to be viewed in a specific order.
2. As Slack is a common productivity tool in remote work and we're participating in the Autocode challenge, we chose to use Slack as the alerting channel. Sending text messages to a phone could be expensive and could distract the user and break their workflow, which is why Slack has been integrated throughout the site.
3. The to-do list that is shared between the pairing has been designed in a simple and dynamic way that allows both users to work together (building a relationship) to create a list of common tasks, and duplicate this same list to their individual workspace to add tasks that could not be shared with the other (such as confidential information within the company)
In terms of the overall design decisions, I made an effort to create each illustration from hand simply using Figma and the trackpad on my laptop! Potentially a non-optimal way of doing so, but this allowed us to be very creative in our designs and bring that individuality and innovation to the designs.
The website itself relies on consistency in terms of colours, layouts, buttons, and more - and by developing these components to be used throughout the site, we've developed a modern and coherent website.
## Challenges We ran into
Some challenges that we ran into were:
* Using data science and machine learning for the very first time ever! We were definitely overwhelmed by the different types of algorithms out there but we were able to persevere with it and create something amazing.
* React was difficult for most of us to use at the beginning as only one of our team members had experience with it. But by the end of this, we all felt like we were a little more confident with this tech stack and front-end development.
* Lack of time - there were a ton of features that we were interested in (like user authentication and a Google Calendar integration), but for the sake of time we had to abandon those and focus on the more pressing ones that were integral to our vision for this hack. These, however, are features I hope that we can complete in the future. We learned how to successfully scope a project and deliver upon the technical implementation.
## Accomplishments that We're proud of
* Created a fully functional end-to-end full stack application incorporating both the front-end and back-end to enable to do lists and the interactive video chat that can happen between the two participants. I'm glad I discovered Autocode which made this process simpler (shoutout to Jacob Lee - mentor from Autocode for the guidance)
* Solving an important problem that affects an extremely large number of individuals. According to investmentexecutive.com:
StatsCan reported that five million workers shifted to home working arrangements in late March. Alongside the 1.8-million employees who already work from home, the combined home-bound employee population represents 39.1% of workers.
<https://www.investmentexecutive.com/news/research-and-markets/statscan-reports-numbers-on-working-from-home/>
* From doing user research we learned that people can feel isolated when working from home and miss the social interaction and accountability of a desk buddy. We're solving two problems in one: tackling social isolation and improving worker mental health, while also increasing productivity, as their buddy will keep them accountable!
* Creating a working matching algorithm for the first time in a time crunch and learning more about Microsoft Azure's capabilities in Machine Learning
* Creating all of our icons/illustrations from scratch using Figma!
## What We learned
* How to create and trigger Slack bots from React
* How to have a live video chat on a web application using Twilio and React hooks
* How to use a hierarchical clustering algorithm (agglomerative clustering algorithms) to create matches based on inputted criteria
* How to work remotely in a virtual hackathon, and what tools would help us work remotely!
## What's next for aibo
* We're looking to improve on our pairing algorithm. I learned that 36 hours is not enough time to create a new Tinder algorithm, and that with more time these pairings can be improved and perfected.
* We're looking to code more screens and add user authentication to the mix, and integrate more test cases in the designs rather than using Figma prototyping to prompt the user.
* It is important to consider the security of the data as well, and that not all teams can discuss tasks at length due to specificity. That is why we encourage users to create a simple to do list with their partner during their meeting, and use their best judgement to make it vague. In the future, we hope to incorporate machine learning algorithms to take in the data from the user knowing whether their project is NDA or not, and if so, as the user types it can provide warnings for sensitive information.
* Add a dashboard! As can be seen in the designs, we'd like to integrate a dashboard per user that pulls data from different components of the website such as your match information and progress on your task tracker/to-do lists. This feature could be highly effective to optimize productivity as the user simply has to click on one page and they'll be provided a high level explanation of these two details.
* Create our own Slackbot to deliver individualized Kudos to a co-worker, and pull this data onto a Kudos board on the website so all employees can see how their coworkers are being recognized for their hard work which can act as a motivator to all employees. | ## Inspiration
Social interaction with peers is harder than ever in our world today where everything is online. We wanted to create a setting that will mimic organic encounters the same way as if they would occur in real life -- in the very same places that you’re familiar with.
## What it does
Traverse a map of your familiar environment with an avatar, and experience random encounters like you would in real life! A Zoom call will initiate when two people bump into each other.
## Use Cases
Many students entering their first year at university have noted the difficulty of finding new friends because few people stick around after Zoom classes, and with cameras off, it's hard to even put a name to the face. And it's not just first years - everybody is feeling the [impact](https://www.mcgill.ca/newsroom/channels/news/social-isolation-causing-psychological-distress-among-university-students-324910).
Our solution helps students meet potential new friends and reunite with old ones in a one-on-one setting in an environment reminiscent of the actual school campus.
Another place where organic communication is vital is in the workplace. [Studies](https://pyrus.com/en/blog/how-spontaneity-can-boost-productivity) have shown that random spontaneous meetings between co-workers can help to inspire new ideas and facilitate connections. With indefinite work from home, this simply doesn't happen anymore. Again, Bump fills this gap of organic conversation between co-workers by creating random happenstances for interaction - you can find out which of your co-workers also likes to hang out in the (virtual) coffee room!
## How we built it
Web app built with Vue.js for the main structure, with a Firebase backend
Video conferencing integrated with Zoom Web SDK. Original artwork was created with Illustrator and Procreate.
## Major Challenges
Major challenges included implementing the character-map interaction and implementing the queueing process for meetups based on which area of the map each person’s character was in across all instances of the Bump client. In the prototype, queueing is achieved by writing the user id of the waiting client in documents located at area-specific paths in the database and continuously polling for a partner, and dequeuing once that partner is found. This will be replaced with a more elegant implementation down the line.
## What's next for bump
* Auto-map generation: give our app the functionality to create a map with zones just by uploading a map or floor plan (using OCR and image recognition technologies)
* Porting it over to mobile: change arrow key input to touch for apps
* Schedule mode: automatically move your avatar around on the map, following your course schedule. This makes it more likely to bump into classmates in the gap between classes.
## Notes
This demo is a sample of BUMP for a single community - UBC. In the future, we plan on adding the ability for users to be part of multiple communities. Since our login authentication uses email addresses, these communities can be kept secure by only allowing @ubc.ca emails into the UBC community, for example. This ensures that you aren’t just meeting random strangers on the Internet - rather, you’re meeting the same people you would have met in person if COVID wasn’t around. | ## **CoLab** makes exercise fun.
In August 2020, **53%** of US adults reported that their mental health has been negatively impacted due to worry and stress over coronavirus. This is **significantly higher** than the 32% reported in March 2020.
That being said, there is no doubt that Coronavirus has heavily impacted our everyday lives. Quarantine has us stuck inside, unable to workout at our gyms, practice with our teams, and socialize in classes.
Doctors have suggested we exercise throughout lockdown, to maintain our health and for the release of endorphins.
But it can be **hard to stay motivated**, especially when we’re stuck inside and don’t know the next time we can see our friends.
Our inspiration comes from this, and we plan to solve these problems with **CoLab.**
## What it does
CoLab enables you to workout with others, following a synced YouTube video or creating a custom workout plan that can be fully dynamic and customizable.
## How we built it
Our technologies include: Twilio Programmable Video API, Node.js and React.
## Challenges we ran into
At first, we found it difficult to resize the Video References for local and remote participants. Luckily, we were able to resize and set the correct ratios using Flexbox and Bootstrap's grid system.
We also needed to find a way to mute audio and disable video, as these are core functionalities in any video-sharing application. We were lucky enough to find that someone else had the same issue on [stack overflow](https://stackoverflow.com/questions/41128817/twilio-video-mute-participant), which we were able to use to help build our solution.
## Accomplishments that we're proud of
When the hackathon began, our team started brainstorming a ton of goals like real-time video, customizable workouts, etc. It was really inspiring and motivating to see us tackle these problems and accomplish most of our planned goals one by one.
## What we learned
This sounds cliché but we learned how important it was to have a strong chemistry within our team. One of the many reasons why I believe our team was able to complete most of our goals was because we were all very communicative, helpful and efficient. We knew that we joined together to have a good time but we also joined because we wanted to develop our skills as developers. It helped us grow as individuals and we are now more competent in using new technologies like Twilios Programmable API!
## What's next for CoLab
Our team will continue developing the CoLab platform and polishing it until we deem it acceptable for publishing. We really believe in the idea of CoLab and want to pursue the idea further. We hope you share that vision and our team would like to thank you for reading this verbose project story! | partial |
## Inspiration
A team member's father works in the medical field, and he presented the problem to us. We wanted to try to create a tool that he could actually use in the workplace.
## What it does
Allows users to create requests for air ambulances (medically equipped helicopters) and automatically prioritizes and dispatches the helicopters. Displays where the helicopters will be flying and how long it will take.
## How we built it
Java, Firebase Realtime Database, Android Studio, and the Google Maps API for locations
## What we learned
It was our first time integrating Google Maps into an Android app, which was interesting. Firebase has some strange asynchronous issues that took a lot of time to fix. Android is great for building a quick and dirty UI.
Redbull + a mentor = bug fixes | ## Inspiration
Originally, we wanted to think of various ways drone delivery could be used to solve problems, and decided to create an app for emergency medicine delivery as there are certainly situations in which someone might not be able to make it to the hospital or pharmacy. Drones could deliver life-saving products like insulin and inhalers.
Building upon that, I don't think many people enjoy having to drive to CVS or Walgreens to pick up a prescription medicine, so the drones could be used for all sorts of pharmaceutical deliveries as well.
## What it does
This is a user-facing web app that allows the user to pick a delivery location and follow the delivery through real-time tracking. The app also provides an ETA.
## How we built it
The app is built on CherryPy and the styling is done with HTML/CSS/SCSS/JS (jQuery). In terms of APIs, we used Mapbox to set and view location for tracking and ordering. This app was built off of DroneKit's API and drone delivery example, and we used DroneKit's drone simulator to test it.
## Challenges we ran into
We really wanted to add SMS notifications that would alert the user when the package had arrived but ran into issues implementing the Twilio API without a server using only jQuery, as most solutions utilized PHP. It was also our first time working with CherryPy, so that was a challenging learning experience in terms of picking up a new framework.
## Accomplishments that we're proud of
I'm proud of figuring out how to calculate ETA given coordinates, learning a lot more Python than I'd previously ever known, and integrating nice styling with the bare bones website. I'm also proud of the photoshopped Pusheen background.
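For reference, the ETA math is just great-circle distance divided by cruise speed; here is a sketch of that calculation, with the drone speed as an assumed constant:

```
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371
DRONE_SPEED_KMH = 40  # assumed cruise speed for the estimate

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def eta_minutes(origin, destination):
    dist = haversine_km(*origin, *destination)
    return 60 * dist / DRONE_SPEED_KMH

# Example: pharmacy to a delivery point chosen on the Mapbox map.
print(round(eta_minutes((37.4275, -122.1697), (37.4419, -122.1430)), 1), "min")
```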
## What we learned
I learned how to work with new APIs, since I hadn't had much prior experience using them. I also learned more Python in the context of web development and jQuery.
## What's next for Flight Aid
I really want to figure out how to add notifications so I can flesh out more features of the user-facing app. I would in the future want to build a supplier-facing app that would give the supplier analytics and alarms based on the drone's sensor data. | # BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
+ Real time question generation based on (real-time) user journaling (and the rest of their writing)!
+ Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
+ Question-corpus matching - we search for good questions about the user's current topics
* NLP on previous journal entries for sentiment analysis
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan | partial |
This project was developed with the RBC challenge in mind: building the help desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing Docker successfully, struggling with Kubernetes.
## How we built it
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | ## Problem
In these times of isolation, many of us developers are stuck inside which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm.
## About
Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface.
We made it one page so that all the tools are accessible on one screen and transitions between them are easier.
We identify this page as a study room that users can collaborate in and join with a simple URL.
Everything is synced between users in real time.
## Features
Our platform allows multiple users to enter one room and access tools to watch YouTube tutorials, brainstorm on a drawable whiteboard, and code in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under time constraints. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
We want to add more relevant tools and widgets, and expand to other fields of work to increase our user demographic.
We also plan to include interface customization options so users can personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out! | ## Inspiration
All of us have gone through the painstaking and difficult process of onboarding as interns and making sense of huge repositories with many layers of folders and files. We hoped to shorten this or remove it completely through the use of Code Flow.
## What it does
Code Flow exists to speed up onboarding and make code easy to understand for non-technical people such as Project Managers and Business Analysts. Once the user has uploaded the repo, it has 2 main features. First, it can visualize the entire repo by showing how different folders and files are connected and providing a brief summary of each file and folder. It can also visualize a file by showing how the different functions are connected for a more technical user. The second feature is a specialized chatbot that allows you to ask questions about the entire project as a whole or even specific files. For example, "Which file do I need to change to implement this new feature?"
## How we built it
We used React to build the front end. Any folders uploaded by the user through the UI are stored using MongoDB. The backend is built using Python-Flask. If the user chooses a visualization, we first summarize what every file and folder does and display that in a graph data structure using the library pyvis. We analyze whether files are connected in the graph based on an algorithm that checks features such as the functions imported, etc. For the file-level visualization, we analyze the file's code using an AST and figure out which functions are interacting with each other. Finally for the chatbot, when the user asks a question we first use Cohere's embeddings to check the similarity of the question with the description we generated for the files. After narrowing down the correct file, we use its code to answer the question using Cohere generate.
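To give a concrete sense of the file-matching step, here is a stripped-down sketch: embed the generated file descriptions and the question, then rank by cosine similarity. It assumes the Cohere Python SDK's `embed` endpoint, and the file descriptions shown are toy examples:

```
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Summaries previously generated for each file in the repo (toy examples).
file_descriptions = {
    "auth.py": "Handles user login, signup and session tokens.",
    "graph.py": "Builds the folder/file dependency graph with pyvis.",
}

def best_file(question):
    texts = [question] + list(file_descriptions.values())
    vectors = np.array(co.embed(texts=texts).embeddings)
    q, docs = vectors[0], vectors[1:]
    # Cosine similarity between the question and every file description.
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
    return list(file_descriptions)[int(np.argmax(sims))]

print(best_file("Which file do I change to add OAuth login?"))  # likely auth.py
```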
## Challenges we ran into
We struggled a lot with narrowing down which file to use to answer the user's questions. We initially thought to simply use Cohere generate to reply with the correct file but knew that it isn't specialized for that purpose. We decided to use embeddings and then had to figure out how to use those numbers to actually get a valid result. We also struggled with getting all of our tech stacks to work as we used React, MongoDB and Flask. Making the API calls seamless proved to be very difficult.
## Accomplishments that we're proud of
This was our first time using Cohere's embeddings feature and accurately analyzing the result to match the best file. We are also proud of being able to combine various different stacks and have a working application.
## What we learned
We learned a lot about NLP, how embeddings work, and what they can be used for. In addition, we learned how to problem solve and step out of our comfort zones to test new technologies.
## What's next for Code Flow
We plan on adding annotations for key sections of the code, possibly using a new UI so that the user can quickly understand important parts without wasting time. | winning |
## Inspiration
We love cooking and watching food videos. From the Great British Baking Show to Instagram reels, we are foodies in every way. However, with the 119 billion pounds of food that is wasted annually in the United States, we wanted to create a simple way to reduce waste and try out new recipes.
## What it does
lettuce enables users to create a food inventory using a mobile receipt scanner. It then alerts users when a product approaches its expiration date and prompts them to notify their network if they possess excess food they won't consume in time. In such cases, lettuce notifies fellow users in their network that they can collect the surplus item. Moreover, lettuce offers recipe exploration and automatically checks your pantry and any other food shared by your network's users before suggesting new purchases.
## How we built it
lettuce uses React and Bootstrap for its frontend and Firebase for the database, which stores information on all the different foods users have stored in their pantry. We use a pre-trained image-to-text neural network that enables users to inventory their food by scanning their grocery receipts, and we developed an algorithm to parse the receipt text and extract just the food items from the receipt.
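To illustrate the parsing step, here is a small sketch that strips prices and maps receipt abbreviations to food names; the abbreviations shown are examples rather than the full mapping:

```
import re

# Tiny sample of receipt abbreviations -> canonical food names.
FOOD_ABBREVIATIONS = {
    "ORG BNNA": "banana",
    "WHL MILK": "whole milk",
    "CHKN BRST": "chicken breast",
}

def parse_receipt(lines):
    """Return the food items found on scanned receipt lines."""
    items = []
    for line in lines:
        # Strip trailing prices like "2.99" or "$12.49" and extra whitespace.
        name = re.sub(r"\$?\d+[.,]\d{2}\s*$", "", line).strip().upper()
        if name in FOOD_ABBREVIATIONS:
            items.append(FOOD_ABBREVIATIONS[name])
    return items

scanned = ["ORG BNNA 0.79", "WHL MILK 3.49", "SUBTOTAL 4.28"]
print(parse_receipt(scanned))  # ['banana', 'whole milk']
```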
## Challenges we ran into
One big challenge was finding a way to map the receipt food text to the actual food item. Receipts often use annoying abbreviations for food, and we had to find databases that allow us to map the receipt item to the food item.
## Accomplishments that we're proud of
lettuce has a lot of work ahead of it, but we are proud of our idea and teamwork to create an initial prototype of an app that may contribute to something meaningful to us and the world at large.
## What we learned
We learned that there are many things to account for when it comes to sustainability, as we must balance accessibility and convenience with efficiency and efficacy. Not having food waste would be great, but it's not easy to finish everything in your pantry, and we hope that our app can help find a balance between the two.
## What's next for lettuce
We hope to improve our recipe suggestion algorithm as well as the estimates for when food expires. For example, a green banana will have a different expiration date compared to a ripe banana, and our scanner has a universal deadline for all bananas. | ## Inspiration
Food waste is a huge issue globally. Overall, we throw out about 1/3 of all of the food we produce ([FAO](https://www.fao.org/3/mb060e/mb060e00.pdf)), and that number is even higher at up to 40% in the U.S. ([Gunders](https://www.nrdc.org/sites/default/files/wasted-food-IP.pdf)). Young adults throw away an even higher proportion of their food than other age groups ([University of Illinois](https://www.sciencedaily.com/releases/2018/08/180822122832.htm)).
All of us have on the team have had problems with buying food and then forgetting about it. It's been especially bad in the last couple of years because the pandemic has pushed us to buy more food less often. The potatoes will be hiding behind some other things and by the time we remember them, they're almost potato plants.
## What it does
foodPad is an application to help users track what food they have at home and when it needs to be used by. Users simply add their groceries and select how they're planning to store each item (fridge, pantry, freezer), and the app suggests an expiry date. The app even suggests the best storage method for the type of grocery. The items are sorted so that the soonest expiry date is at the top. As the items are used, the user removes them from the list. At any time, the user can access recipes for the ingredients.
## How we built it
We prototyped the application in Figma and built a proof-of-concept version with React. We use API calls to the open-source TheMealDB, which has recipes for given ingredients.
## Challenges we ran into
Only one of us had ever used JavaScript before, so it was tough to figure out how to use that, especially to get it to look nice. None of us had ever used Figma either, and it was tricky at first, but it's a really lovely tool and we'll definitely use it again in the future!
## Accomplishments that we're proud of
* We think it's a really cool idea that would be helpful in our own lives and would also be useful for other people.
* We're all more hardware/backend coders, so we're really proud of the design work that went into this and just pushing ourselves outside of our comfort zones.
## What we learned
* how to prioritize tasks in a project over a very short timeframe for an MVP
* how to code in JS and use React
* how to design an application to look nice
* how to use Figma
## What's next for foodPad
* release it!
* make the application's UI match the design more closely
* expanding the available food options
* giving users the option of multiple recipes for an ingredient
* selecting recipes that use many of the ingredients on the food list
* send push notifications to the user if the product is going to expire in the next day
* if a certain food keeps spoiling, suggest to the user that they should buy less of an item | ## Inspiration:
Every year, the world wastes about 2.5 billion tons of food, with the United States alone discarding nearly 60 million tons. This staggering waste inspired us to create **eco**, an app focused on food sustainability and community generosity.
## What it does:
**eco** leverages advanced Computer Vision technology, powered by YOLOv8 and OpenCV, to detect fruits, vegetables, and groceries while accurately predicting their expiry dates. The app includes a Discord bot that notifies users of impending expirations and alerts them about unused groceries. Users can easily generate delicious recipes using OpenAI's API, utilizing ingredients from their fridge. Additionally, **eco** features a Shameboard to track and highlight instances of food waste, encouraging community members to take responsibility for their consumption habits.
## How we built it:
For the frontend, we chose React, Typescript, and TailwindCSS to create a sleek and responsive interface. The database is powered by Supabase Serverless, ensuring reliable and scalable data management. The heart of **eco** is its advanced Computer Vision model, developed with Python, OpenCV, and YOLOv8, allowing us to accurately predict expiry dates for fruits, vegetables, and groceries. We leveraged OpenAI's API to generate recipes based on expiring foods, providing users with practical and creative meal ideas. Additionally, we integrated a Discord bot using JavaScript for seamless communication and alerts within our Discord server.
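A hedged sketch of the recipe-generation step with the official OpenAI Node SDK — the prompt wording and model name are placeholders, not eco's actual configuration:

```typescript
// Generate a recipe suggestion from soon-to-expire ingredients with the
// official OpenAI Node SDK. Model name and prompt are placeholders.
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function suggestRecipe(expiringItems: string[]): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      {
        role: "user",
        content: `Suggest one simple recipe that uses: ${expiringItems.join(", ")}.`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "No recipe generated.";
}

suggestRecipe(["spinach", "eggs", "feta"]).then(console.log);
```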
## Challenges we ran into:
During development, we encountered significant challenges with WebSockets and training the Computer Vision model. These hurdles ignited our passion for problem-solving, driving us to think creatively and push the boundaries of innovation. Through perseverance and ingenuity, we not only overcame these obstacles but also emerged stronger, armed with newfound skills and a deepened resolve to tackle future challenges head-on.
## Accomplishments that we're proud of:
We take pride in our adaptive approach, tackling challenges head-on to deliver a fully functional app. Our successful integration of Computer Vision, Discord Bot functionality, and recipe generation showcases our dedication and skill in developing **eco**.
## What we learned:
Building **eco** was a transformative journey that taught us invaluable lessons in teamwork, problem-solving, and the seamless integration of technology. We immersed ourselves in the intricacies of Computer Vision, Discord bot development, and frontend/backend development, elevating our skills to new heights. These experiences have not only enriched our project but have also empowered us with a passion for innovation and a drive to excel in future endeavors.
**Eco** is not just an app; it's a movement towards a more sustainable and generous community. Join us in reducing food waste and fostering a sense of responsibility towards our environment with eco. | winning |
## Inspiration
1. Affordable pet doors with simple "flap" mechanisms are not secure
2. Potty trained pets requires the door to be manually opened (e.g. ring a bell, scratch the door)
## What it does
The puppy *(or cat, we don't discriminate)* can exit without approval as soon as the sensor detects an object within the threshold distance. When entering back in, the ultrasonic sensor will trigger a signal that something is at the door and the camera will take a picture and send to the owner's phone through a web app. The owner may approve or deny the request depending on the photo. If the owner approves the request, the door will open automatically.
## How we built it
Ultrasonic sensors relay the distance from the sensor to an object to the Arduino, which sends this signal to the Raspberry Pi. The Raspberry Pi program handles the stepper motor movement (rotating ~90 degrees CW and CCW) to open and close the door and relays information to the Flask server to take a picture using the Kinect camera. This photo is displayed on the web application, where approving the request will open the door.
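For illustration only, the owner-facing approval flow might look something like the sketch below; the server address and the `/pending` and `/approve` routes are invented names, not the actual Flask endpoints used in PetAlert:

```typescript
// Hypothetical owner-facing approval flow. The server address and the
// /pending and /approve routes are invented names for illustration.
const SERVER = "http://raspberrypi.local:5000"; // hypothetical Flask server

interface DoorRequest {
  id: string;
  photoUrl: string; // Kinect snapshot served by the Flask server
}

async function fetchPendingRequest(): Promise<DoorRequest | null> {
  const res = await fetch(`${SERVER}/pending`);
  return res.status === 204 ? null : res.json();
}

async function approve(requestId: string): Promise<void> {
  // The server relays approval to the Raspberry Pi, which rotates the
  // stepper motor ~90 degrees to open the door.
  await fetch(`${SERVER}/approve`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: requestId, approved: true }),
  });
}
```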
## Challenges we ran into
1. Connecting everything together (Arduino, Raspberry Pi, frontend, backend, Kinect camera) despite each component working well individually
2. Building cardboard prototype with limited resources = lots of tape & poor wire management
3. Using multiple different streams of I/O and interfacing with each concurrently
## Accomplishments that we're proud of
This was super rewarding as it was our first hardware hack! The majority of our challenges lay in the camera component, as we were unfamiliar with the Kinect, but we came up with a hack-y solution and nothing had to be hardcoded.
## What we learned
Hardware projects require a lot of troubleshooting because the sensors will sometimes interfere with each other, or the signals are not processed properly when there is too much noise. Additionally, with multiple different pieces of hardware, we learned how to connect all the subsystems together and interact with the software components.
## What's next for PetAlert
1. Better & more consistent photo quality
2. Improve frontend notification system (consider push notifications)
3. Customize 3D prints to secure components
4. Use thermal instead of ultrasound
5. Add sound detection | ## Inspiration
The failure of a certain project using Leap Motion API motivated us to learn it and use it successfully this time.
## What it does
Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks.
## How we built it
We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors.
## Challenges we ran into
Learning the Leap Motion API and debugging was the toughest challenge to our group. Hot glue dangers and complications also impeded our progress.
## Accomplishments that we're proud of
All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it.
## What we learned
The Leap Motion API is more difficult than expected, and communication between Python programs and Arduino programs is simpler than expected.
## What's next for Toaster Secure
- Wireless Connections
- Sturdier Building Materials
- User-friendly Interface
## What it does
## It searches for a water Bottle!
## How we built it
## We built it using a roomba, raspberrypi with a picamera, python, and Microsoft's Custom Vision
## Challenges we ran into
## Attaining wireless communications between the pi and all of our PCs was tricky. Implementing Microsoft's API was troublesome as well. A few hours before judging we exceeded the call volume for the API and were forced to create a new account to use the service. This meant re-training the AI and altering code, and loosing access to our higher accuracy recognition iterations during final tests.
## Accomplishments that we're proud of
## Actually getting the wireless networking to consistently. Surpassing challenges like framework indecision between IBM and Microsoft. We are proud of making things that lacked proper documentation, like the pi camera, work.
## What we learned
## How to use Github, train AI object recognition system using Microsoft APIs, write clean drivers
## What's next for Cueball's New Pet
## Learn to recognize other objects. | winning |
## Inspiration
We aim to bridge the communication gap between hearing-impaired individuals and those who don't understand sign language.
## What it does
This web app utilizes the webcam to capture hand gestures, recognizes the corresponding sign language symbols using machine learning models, and displays the result on the screen.
### Features
* Real-time hand gesture recognition
* Supports standard ASL
* Intuitive user interface
* Cross-platform compatibility (iOS and Android via web browsers)
## How we built it
We use the [Hand Pose Detection Model](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection) and [Fingerpose](https://github.com/andypotato/fingerpose) to detect the hand and its corresponding gesture from the webcam. For the frontend, we use ReactJS with Vite as the build tool and serve our website on Netlify. There is no backend since we embed the models on the client side.
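A condensed sketch of what the detection loop can look like with these two libraries (the TFJS runtime, score threshold, and stand-in gesture set below are assumptions; Signado defines its own ASL gestures with Fingerpose's `GestureDescription` API):

```typescript
// Detection loop: estimate hand landmarks with hand-pose-detection, then
// classify the pose with Fingerpose. Built-in gestures stand in for the
// project's own ASL gesture descriptions.
import * as handPoseDetection from "@tensorflow-models/hand-pose-detection";
import * as fp from "fingerpose";

async function runRecognition(video: HTMLVideoElement) {
  const detector = await handPoseDetection.createDetector(
    handPoseDetection.SupportedModels.MediaPipeHands,
    { runtime: "tfjs", maxHands: 1 },
  );
  const estimator = new fp.GestureEstimator([
    fp.Gestures.ThumbsUpGesture,
    fp.Gestures.VictoryGesture,
  ]);

  async function tick() {
    const hands = await detector.estimateHands(video);
    if (hands.length > 0 && hands[0].keypoints3D) {
      // Fingerpose expects [x, y, z] triples for the 21 hand landmarks.
      const landmarks = hands[0].keypoints3D.map((k) => [k.x, k.y, k.z ?? 0]);
      const { gestures } = estimator.estimate(landmarks, 8.5) as {
        gestures: { name: string; score?: number; confidence?: number }[];
      };
      // The confidence field is "score" or "confidence" depending on the
      // Fingerpose version, so check both.
      const best = [...gestures].sort(
        (a, b) => (b.score ?? b.confidence ?? 0) - (a.score ?? a.confidence ?? 0),
      )[0];
      if (best) console.log(`Detected gesture: ${best.name}`);
    }
    requestAnimationFrame(tick);
  }
  tick();
}
```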
## Challenges we ran into
### Model Choice
We originally used [Google's Teachable Machine](https://teachablemachine.withgoogle.com/train/image) and trained it on images that contained a lot of noise, such as the person and the background. This meant the predictions were heavily biased toward objects that are not hands. Next, we thought about labeling the images to emphasize the hand, but that led to another bias: some hand gestures were weighted more, so the model tended to predict certain gestures even when we did not pose that way. Later, we discovered we could use hand landmark detection to recognize only the hand and joint poses to recognize the gesture, which gives much better predictions.
### Streaming
In order to make the webcam work on all devices (mobile and desktop) with different specs and screen ratios, we struggled to find a way to enable full screen on all of them. Our first solution was to hard-code the width and height for one device, but that was hard to adjust and limiting. During trial and error another issue surfaced: the width and height are applied horizontally on mobile devices, so to work on both mobile and desktop we dynamically check the user's screen ratio. To solve the full-screen issue, we used a progressive web app to capture the device window.
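A minimal sketch of the dynamic-ratio idea, assuming the stream is requested with `getUserMedia` and the ideal dimensions are swapped when the device is in portrait (the exact constraint values are illustrative):

```typescript
// Request the camera stream with dimensions matched to the device's
// orientation so the feed can fill the screen on mobile and desktop.
async function startCamera(video: HTMLVideoElement): Promise<void> {
  const portrait = window.innerHeight > window.innerWidth;
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: false,
    video: {
      facingMode: "user",
      // Swap the ideal dimensions when the device is held in portrait.
      width: { ideal: portrait ? 720 : 1280 },
      height: { ideal: portrait ? 1280 : 720 },
    },
  });
  video.srcObject = stream;
  await video.play();
}
```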
## Accomplishments that we're proud of
* Active and accurate hand tracking from webcam streaming
* Finding ways to translate different gestures from ASL to English
* Being able to use across mobile and desktop
* Intuitive yet functional design
## What we learned
* Learned about American Sign Language
* Deploy progressive web app
* How a machine learning model takes inputs and makes predictions
* How to stream from the webcam into inputs for our model
* Variations of machine learning models and how to fine-tune them
## What's next for Signado
Next step, we plan to add two hand and motion based gestures support since many words in sign language require the use of these two properties. Also to improve on model accuracy, we can use the latest [handpose]{<https://blog.tensorflow.org/2021/11/3D-handpose.html>} model that transform the hand into a 3D mesh. This will provide more possibility and variation to the gesture that can be performed. | ## **1st Place!**
## Inspiration
Sign language is a universal language which allows many individuals to exercise their intellect through common communication. Many people around the world live with hearing loss or mutism and need sign language to communicate. Even those who do not experience these conditions may still require the use of sign language in certain circumstances. We plan to expand our company to be known worldwide, filling the lack of a virtual sign language learning tool that is accessible to everyone, everywhere, for free.
## What it does
Here at SignSpeak, we create an encouraging learning environment that provides computer vision sign language tests to track progression and to perfect sign language skills. The UI is built around simplicity and usability. Our teaching system works by engaging the user in a lesson and then a progression test. Each lesson includes the material that will be tested during the lesson quiz. Once the user has completed the lesson, they are redirected to the quiz, which can result in either failure or success. Successfully completing the quiz congratulates the user and directs them to the next lesson, while failure means the user must retake the lesson. The user retakes the lesson until they pass the quiz and can proceed to the following lesson.
## How we built it
We built SignSpeak on React with Next.js. For our sign recognition, we used TensorFlow with a MediaPipe model to detect points on the hand, which were then compared with preassigned gestures.
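One plausible way such a keypoint comparison could work is to normalize the 21 hand landmarks against the wrist and hand size before measuring distance to a stored template; the threshold below is an assumption, not SignSpeak's tuned value:

```typescript
// Compare detected hand keypoints against a stored gesture template by
// normalizing landmarks relative to the wrist and hand size, then taking
// the mean point-to-point distance.
type Point = { x: number; y: number };

function normalize(landmarks: Point[]): Point[] {
  const wrist = landmarks[0]; // MediaPipe landmark 0 is the wrist
  const scale = Math.max(
    ...landmarks.map((p) => Math.hypot(p.x - wrist.x, p.y - wrist.y)),
  );
  return landmarks.map((p) => ({
    x: (p.x - wrist.x) / scale,
    y: (p.y - wrist.y) / scale,
  }));
}

function matchesTemplate(
  detected: Point[],
  template: Point[],
  threshold = 0.15, // assumed tolerance, not a tuned value
): boolean {
  const a = normalize(detected);
  const b = normalize(template);
  const meanDist =
    a.reduce((sum, p, i) => sum + Math.hypot(p.x - b[i].x, p.y - b[i].y), 0) /
    a.length;
  return meanDist < threshold;
}
```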
## Challenges we ran into
We ran into multiple roadblocks mainly regarding our misunderstandings of Next.js.
## Accomplishments that we're proud of
We are proud that we managed to come up with so many ideas in such little time.
## What we learned
Throughout the event, we participated in many workshops and made many connections. We engaged in conversations about the bugs and issues that others were having and learned from their experience using JavaScript and React. Additionally, throughout the workshops, we learned how blockchain and entrepreneurship connect to coding, for the overall benefit of the hackathon.
## What's next for SignSpeak
SignSpeak is seeking to continue providing services that teach people sign language. In the future, we plan to implement a suggestion box for our users to communicate with us about problems with our program so that we can work quickly to fix them. Additionally, we will collaborate with and help improve companies that cater to audible phone navigation for blind people.
What our project does is introduce ASL letters, words, and numbers to you in a flashcard manner.
## How we built it
We built our project with React, Vite, and TensorFlowJS.
## Challenges we ran into
Some challenges we ran into included issues with git commits and merging. Over the course of our project, we made mistakes while resolving merge conflicts, which resulted in a large part of our project almost being discarded. Luckily, we were able to git revert back to the correct version, but time was lost regardless. With our TensorFlow model, we had trouble reading the input/output and getting the webcam working.
## Accomplishments that we're proud of
We are proud of the work we got done in the time frame of this hackathon with our skill level. We learned a lot from the workshops we attended and can't wait to apply those lessons in future projects!
## What we learned
Over the course of this hackathon we learned that it is important to clearly define our project scope ahead of time. We spent a lot of our time on day 1 thinking about what we could do with the sponsor technologies and should have looked into them more in depth before the hackathon.
## What's next for Vision Talks
We would like to train our own ASL image detection model so that people can practice at home in real time. Additionally we would like to transcribe their signs into plaintext and voice so that they can confirm what they are signing. Expanding our project scope beyond ASL to other languages is also something we wish to do. | winning |
## Inspiration
As computer science students, our writing skills are not as strong as our technical abilities, so we want a motivating platform to improve them.
## What it does
Fix My Mistake prompts users with a grammatically incorrect sentence that they must fix. Depending on the accuracy of their attempt, experience points are awarded to them, which are used to level up.
## How we built it
Fix My Mistake was built by a team of four using Google Firebase, HTML, CSS, JavaScript, React, React Router DOM, Material UI, a JSON API, and the Advice Slip API.
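As a small sketch of how the Advice Slip API could supply base sentences (its documented `/advice` endpoint returns a `slip` object; how the sentence is then deliberately broken for the exercise is app logic not shown here):

```typescript
// Fetch a random sentence from the Advice Slip JSON API to use as the
// base text for a grammar exercise.
interface AdviceSlip {
  slip: { id: number; advice: string };
}

async function randomSentence(): Promise<string> {
  // "no-store" sidesteps the API's short-lived response caching.
  const res = await fetch("https://api.adviceslip.com/advice", { cache: "no-store" });
  const data: AdviceSlip = await res.json();
  return data.slip.advice;
}

randomSentence().then((s) => console.log("Base sentence:", s));
```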
## Challenges we ran into
It was difficult to find an efficient open-source API that could randomly generate sentences for the user to read.
## Accomplishments that we're proud of
Our team is proud that we were able to complete a useful and functional product within the deadline to share with others.
## What we learned
The team behind Fix My Mistake got to experience the fundamentals of web development and web scraping, in addition to sharpening our skills in communication and collaboration.
## What's next for Fix My Mistake
The next level for Fix My Mistake includes better grammar detection and text generation systems, multiple gamemodes, and an improved user-interface. | ## Inspiration
Patients often have trouble following physiotherapy routines correctly, which can worsen the effects of existing issues.
## What it does
Our project gives patients audio tips on where they are going wrong in real time and responds to voice commands to control the flow of their workout. Data from these workouts is then uploaded to the cloud for physiotherapists to track the progress of their patients!
## How we built it
Using the Kinect C# SDK, we extracted a human wireframe from the sensors and performed calculations on the different limbs to detect improper form. We also used the .NET speech libraries to create an interactive "trainer" experience.
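The project itself is written in C# against the Kinect SDK; the TypeScript sketch below only illustrates the kind of joint-angle rule involved, e.g. checking squat depth from the hip–knee–ankle angle (the 100° threshold is illustrative):

```typescript
// Angle at a joint (in degrees) between two neighbouring joints, e.g. the
// knee angle between the hip and the ankle taken from the Kinect skeleton.
type Joint = { x: number; y: number; z: number };

function angleAt(vertex: Joint, a: Joint, b: Joint): number {
  const v1 = { x: a.x - vertex.x, y: a.y - vertex.y, z: a.z - vertex.z };
  const v2 = { x: b.x - vertex.x, y: b.y - vertex.y, z: b.z - vertex.z };
  const dot = v1.x * v2.x + v1.y * v2.y + v1.z * v2.z;
  const mag = Math.hypot(v1.x, v1.y, v1.z) * Math.hypot(v2.x, v2.y, v2.z);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

// A squat rep counts as deep enough once the knee angle drops below ~100°.
function squatDepthOk(hip: Joint, knee: Joint, ankle: Joint): boolean {
  return angleAt(knee, hip, ankle) < 100; // illustrative threshold
}
```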
## Challenges we ran into
Analyzing movements over time is pretty hard due to temporal stretching and temporal misalignment. We solved this by opting to code particular rules for different exercises, which were far more robust. This also allowed us to be more inclusive of all body sizes and shapes (everyone has different limb sizes relative to their entire body).
## Accomplishments that we're proud of
Creating a truly helpful trainer experience. Believe it or not we have actually gotten better at squatting over the course of this hackathon 😂😂
## What we learned
Xbox kinect is SUPER COOL and there is so much more that can be done with it.
## What's next for Can we change this later?
Many improvements to the patient management platform. We figured this wasn't super important since fullstack user management applications have been created millions of times over.
## Note
We also have prototypes for what the front-end would look like from the patient and physiotherapist's perspective, to be presented during judging! | ## Inspiration
On social media, most of the things that come up are success stories. We've seen a lot of our friends complain that there are platforms where people keep bragging about what they've been achieving in life, but not a single one showing their failures.
We realized that there's a need for a platform where people can share their failure episodes for open and free discussion. So we have now decided to take matters into our own hands and are creating Failed-In to break the taboo around failures! On Failed-In, you realize - "You're NOT alone!"
## What it does
* It is a no-judgment platform to learn to celebrate failure tales.
* Enabled users to add failure episodes (anonymously/non-anonymously), allowing others to react and comment.
* Each episode on the platform has #tags associated with it, which makes it easy to filter episodes. A user's recommendations are based on the #tags they usually interact with
* Implemented sentiment analysis to predict the sentiment score of a user from the episodes and comments posted.
* We have a motivational bot to lighten the user's mood.
* Allowed the users to report the episodes and comments for
+ NSFW images (integrated ML check to detect nudity)
+ Abusive language (integrated ML check to classify texts)
+ Spam (Checking the previous activity and finding similarities)
+ Flaunting success (Manual checks)
## How we built it
* We used Node for building the REST API and MongoDB as the database.
* For the client side, we used Flutter.
* We also used the TensorFlow.js library and its built-in models for the NSFW, abusive-text, and sentiment checks (see the sketch below).
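A sketch of how the abusive-language check could work with the pre-trained TensorFlow.js toxicity model (the 0.9 threshold is illustrative; the NSFW image check follows the same load-then-classify pattern with a different model):

```typescript
// Flag abusive text with the pre-trained TensorFlow.js toxicity model.
import "@tensorflow/tfjs"; // registers the backend the model runs on
import * as toxicity from "@tensorflow-models/toxicity";

async function isAbusive(text: string): Promise<boolean> {
  // An empty label list means all seven toxicity labels are checked.
  const model = await toxicity.load(0.9, []);
  const predictions = await model.classify([text]);
  // "match" is true when the model is confident the label applies.
  return predictions.some((p) => p.results[0].match === true);
}

isAbusive("you are an idiot").then((flagged) => console.log({ flagged }));
```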
## Challenges we ran into
* While brainstorming on this particular idea, we weren't sure how to present it so that it wouldn't be misunderstood. Mental health issues stemming from failure are serious, and with Failed-In we wanted to break the taboo around discussing failures.
* It was the first time we tried using Flutter (beta) instead of React with MongoDB and Node. It took a little longer than usual to integrate the server side with the client side.
* Finding the versions of TensorFlow and other libraries that could integrate with the rest of the code.
## Accomplishments that we're proud of
* Within the 36-hour time frame, we were able to ideate and build a prototype.
* From fixing bugs to resolving merge conflicts, the whole experience is worth remembering.
## What we learned
* Team collaboration
* how to remain calm and patient during the 36 hours
* Remain up on caffeine.
## What's next for Failed-In
* Improve the sentiment analysis model to get more accurate results so we can understand users better and recommend famous failure-to-success stories to them using web scraping.
* Create separate discussion rooms for each #tag, facilitating users to communicate and discuss their failures.
* Also provide the option to follow/unfollow a user. | losing |