| anchor | positive | negative | anchor_status |
|---|---|---|---|
| stringlengths 159 to 16.8k | stringlengths 184 to 16.2k | stringlengths 167 to 16.2k | stringclasses 3 values |
## Inspiration
As women ourselves, we have always been aware that there are unfortunately additional measures we have to take in order to stay safe in public. Recently, we have seen videos emerge online for individuals to play in these situations, prompting users to engage in conversation with a “friend” on the other side. We saw that the idea was extremely helpful to so many people around the world, and wanted to use the features of voice assistants to add more convenience and versatility to the concept.
## What it does
Safety Buddy is an Alexa Skill that simulates a conversation with the user, creating the illusion that there is somebody on the other line aware of the user’s situation. It intentionally states that the user has their location shared and continues to converse with the user until they are in a safe location and can stop the skill.
## How I built it
We built Safety Buddy on the Alexa Developer Console, hosted the audio files on AWS S3, and used the Twilio messaging API to send a text message to the user. On the front-end, we created intents to capture what the user said and connected those to the backend, where we used JavaScript to handle each intent.
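To give a rough idea of the text-message step, here is a minimal Twilio sketch in Python (illustrative only: Safety Buddy's real handlers are written in JavaScript, and the credentials and phone numbers below are placeholders):

```python
# Minimal sketch of the Twilio "send a text to the user" step (placeholder
# credentials and numbers; the actual skill backend does this in JavaScript).
from twilio.rest import Client

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

def send_safety_text(to_number: str) -> str:
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body="Safety Buddy is with you. Your location is being shared.",
        from_="+15005550006",  # Twilio test number, used here as a placeholder
        to=to_number,
    )
    return message.sid
```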
## Challenges I ran into
While trying to add additional features to the skill, we had Alexa send a text message to the user, which then interrupted the audio that was playing. With the help of a mentor, we were able to handle the asynchronous events.
## Accomplishments that I'm proud of
We are proud of building an application that can help prevent dangerous situations. Our Alexa skill will keep people out of uncomfortable situations when they are alone and cannot contact anyone on their phone. We hope to see our creation being used for the greater good!
## What I learned
We were exploring different ways we could improve our skill in the future, and learned about the differences between deploying on AWS Lambda versus Microsoft Azure Functions. We used AWS Lambda for our development, but tested out Azure Functions briefly. In the future, we would further consider which platform to continue with.
## What's next for Safety Buddy
We wish to expand the skill by developing more intents to allow the user to engage in various conversation flows. We can monetize these additional conversation options through in-skill purchases in order to continue improving Safety Buddy and bring awareness to more individuals. Additionally, we can adapt the skill to be used for various languages users speak. | ## Inspiration
During this lockdown, everyone is pretty much staying home and not able to interact with others. So we want to connect like-minded people using our platform.
## What it does
You can register on our portal and then look for events (e.g. sports, hiking, etc.) happening around you and join the host. The best thing about our platform is that once you register, you can use the voice assistant to search for events, request to join from the host, and publish events. Everything is hands-free. It is really easy to use.
## How we built it
We built the front end using ReactJS, and for the voice assistant we used Alexa. We built a back-end that is connected to both the front end and Alexa. Whenever a user requests an event or wants to publish one, the request goes to our server hosted on an AWS instance. Even now it is hosted live, so anyone who wants to try it can use it. We are also using MongoDB to store the currently active events, user details, etc. Once a user requests something, we scan the database based on the user's location and deliver events happening near them. We created several REST APIs on the server that serve these requests.
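As a rough sketch of the kind of location-based lookup one of those REST APIs performs, here is an illustrative Flask + PyMongo example (the collection name, field names, and search radius are assumptions, not our actual schema):

```python
# Illustrative location-based event lookup (assumed schema: each event document
# has a GeoJSON "location" field).
from flask import Flask, jsonify, request
from pymongo import MongoClient, GEOSPHERE

app = Flask(__name__)
events = MongoClient("mongodb://localhost:27017")["gettogether"]["events"]
events.create_index([("location", GEOSPHERE)])  # needed for $near queries

@app.route("/events/nearby")
def nearby_events():
    lng, lat = float(request.args["lng"]), float(request.args["lat"])
    cursor = events.find(
        {"location": {"$near": {
            "$geometry": {"type": "Point", "coordinates": [lng, lat]},
            "$maxDistance": 10_000,  # metres
        }}},
        {"_id": 0},
    ).limit(20)
    return jsonify(list(cursor))
```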
## Challenges we ran into
We faced a lot of technical challenges: setting up the server, and building an Alexa voice assistant that can serve the user easily without asking too many questions. We also treated safety and privacy as our top priority.
## Accomplishments that we're proud of
An easy to use assistant and web portal to connect people.
## What we learned
How to use the Alexa assistant for a custom real-life use case, how to deploy to production on AWS instances, and how to configure the server.
## What's next for Get Together
Adding more privacy for users who post events, official accounts for better credibility, and a rating mechanism for better match-making. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
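As a rough sketch of the shape of the Flask endpoint that ties these pieces together, here is an illustrative example (transcribe_audio() and extract_order_items() are hypothetical placeholders for the AI transcription and order-parsing steps, not our actual code):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def transcribe_audio(audio_bytes: bytes) -> str:
    raise NotImplementedError("speech-to-text call goes here")

def extract_order_items(transcript: str) -> list:
    raise NotImplementedError("AI order extraction goes here")

@app.route("/order", methods=["POST"])
def order():
    audio = request.files["audio"].read()    # voice clip from the React client
    transcript = transcribe_audio(audio)     # e.g. "a large burger, no onions"
    items = extract_order_items(transcript)  # e.g. [{"item": "burger", "size": "large"}]
    return jsonify({"transcript": transcript, "items": items})
```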
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | losing |
## Inspiration
There is a need for an electronic health record (EHR) system that is secure, accessible, and user-friendly. Currently, hundreds of EHRs exist, and different clinical practices may use different systems. If a patient requires an emergency visit to a certain physician, the physician may be unable to access important records and patient information efficiently, requiring extra time and resources that strain the healthcare system. This is especially true for patients traveling abroad, where doctors in one country may be unable to access a centralized healthcare database in another.
In addition, there is strong potential to utilize the available data for improved analytics. In a clinical consultation, a patient's description of symptoms may be ambiguous, and doctors often want to monitor the patient's symptoms for an extended period. With limited resources, this is impossible outside of an acute care unit in a hospital. As access to the internet becomes increasingly widespread, patients may be able to self-report certain symptoms through a web portal if such an EHR exists. With a large amount of patient data, artificial intelligence techniques can be used to analyze the similarity of patients and predict certain outcomes before adverse events happen, so that intervention can occur in a timely manner.
## What it does
myHealthTech is a blockchain EHR system with a user-friendly interface for patients and health care providers to record patient information such as clinical visitation history, lab test results, and self-reporting records from the patient. The system is a web application that is accessible to any end user approved by the patient. Thus, doctors in different clinics can access essential information in an efficient manner. Compared to traditional databases, the blockchain architecture stores patient data securely and anonymously in a decentralized manner so that third parties cannot access the encrypted information.
Artificial intelligence methods are used to analyze patient data for prognostication of adverse events. For instance, a patient's reported mood scores are compared to a database of similar patients that have resulted in self-harm, and myHealthTech will compute a probability that the patient will trend towards a self-harm event. This allows healthcare providers to monitor and intervene if an adverse event is predicted.
## How we built it
The blockchain EHR architecture was written in Solidity and built with Truffle, TestRPC, and Remix. The web interface was written in HTML5, CSS3, and JavaScript. The artificial intelligence predictive behavior engine was written in Python.
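To illustrate the prognostication idea (self-reported scores in, probability of an adverse event out), here is a minimal scikit-learn sketch; the features and data are synthetic placeholders, not our actual model or patient data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [mood, energy, confidence] on a 1-10 self-reporting scale (synthetic)
X_train = np.array([[8, 7, 8], [7, 8, 6], [3, 2, 2], [2, 3, 1], [6, 5, 6], [1, 2, 2]])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 1 = adverse event occurred

model = LogisticRegression().fit(X_train, y_train)

new_patient = np.array([[2, 4, 3]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted probability of an adverse event: {risk:.2f}")
```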
## Challenges we ran into
The greatest challenge was integrating the back-end and front-end components. We had challenges linking smart contracts to the web UI and executing the artificial intelligence engine from a web interface. Several of these challenges required compatibility troubleshooting and running a centralized Python server, which will be implemented in a consistent environment when this project is developed further.
## Accomplishments that we're proud of
We are proud of working with novel architecture and technology, providing a solution to solve common EHR problems in design, functionality, and implementation of data.
## What we learned
We learned the value of leveraging the strengths of different team members from design to programming and math in order to advance the technology of EHRs.
## What's next for myHealthTech?
Next is the addition of more self-reporting fields to increase the robustness of the artificial intelligence engine. In the case of depression, there are clinical standards from the Diagnostic and Statistical Manual that identify markers of depression such as mood level, confidence, energy, and feelings of guilt. By monitoring these values for individuals who have recovered, are depressed, or inflict self-harm, the AI engine can predict the behavior of new individuals much more accurately by applying logistic regression to the data and using a deep learning approach.
There is an issue with the inconvenience of reporting symptoms. Hence, a logical next step would be to implement smart home technology, such as an Amazon Echo, for the patient to interact with for self reporting. For instance, when the patient is at home, the Amazon Echo will prompt the patient and ask "What would you rate your mood today? What would you rate your energy today?" and record the data in the patient's self reporting records on myHealthTech.
These improvements would further the capability of myHealthTech as a highly dynamic EHR with strong analytical capabilities to understand and predict patient outcomes and improve treatment options. | ## Inspiration
As a patient in the United States, you do not know what costs you are facing when you receive treatment at a hospital, or whether your insurance plan covers the expenses. Patients are faced with unexpected bills and left with expensive copayments. In some instances, patients would pay less if they covered the expenses out of pocket instead of using their insurance plan.
## What it does
Healthiator provides patients with a comprehensive overview of the medical procedures they will need to undergo for their health condition and sums up the total cost of that treatment depending on which hospital they go to and whether they pay for the treatment out of pocket or through their insurance.
This allows patients to choose the most cost-effective treatment and understand the medical expenses they are facing. A second feature Healthiator provides is that once patients receive their actual hospital bill, they can claim inaccuracies. Healthiator helps patients with billing disputes by leveraging AI to handle the process of negotiating fair pricing.
## How we built it
We used a combination of Together.AI and Fetch.AI. We have several smart agents running in Fetch.AI, each responsible for one of the features. For instance, one agent retrieves live data from the hospitals (publicly available under the Good Faith act) about prices and cash discounts, and we then use Together.AI's API to integrate that information into the negotiation part.
## Ethics
Although our end purpose is to help people get medical treatment by reducing the fear of surprise bills and actually making healthcare more affordable, we are aware that any wrong suggestion or violation of the user's privacy has significant consequences. Giving the user as much information as possible while keeping away from making clinical suggestions and false or hallucinated information was the most challenging part of our work.
## Challenges we ran into
Finding actionable data from the hospitals was one of the most challenging parts, as each hospital has its own format and assumptions, and it was not straightforward at all to integrate them into a single database. Another challenge was making the various APIs and third parties work together in time.
## Accomplishments that we're proud of
Solving a relevant social issue. Everyone we talked to has experienced the problem of not knowing the costs they're facing for different procedures at hospitals and whether their insurance covers them. While it is an anxiety-inducing process for everyone, this uncertainty might prevent or delay a number of people from going to hospitals and getting the care that they urgently need. This might result in health conditions that could have had a better outcome if treated earlier.
## What we learned
How to work with Convex, Fetch.AI, and Together.AI.
## What's next for Healthiator
As a next step, we want to set up a database and take the medical costs directly from the files published by hospitals. | ## Inspiration
We hate making resumes and customizing them for each employer, so we created a tool to speed that up.
## What it does
A user creates "blocks" which are saved. Then they can pick and choose which ones they want to use.
## How we built it
[Node.js](https://nodejs.org/en/)
[Express](https://expressjs.com/)
[Nuxt.js](https://nuxtjs.org/)
[Editor.js](https://editorjs.io/)
[html2pdf.js](https://ekoopmans.github.io/html2pdf.js/)
[mongoose](https://mongoosejs.com/docs/)
[MongoDB](https://www.mongodb.com/) | partial |
## Inspiration
As cybersecurity enthusiasts, we are taking one for the team by breaking the curse of CLIs. `Appealing UI for tools like nmap` + `Implementation of Metasploitable scripts` = `happy hacker`
## What it does
nmap is a cybersecurity tool that scans the ports of an IP on a network and retrieves the service that is running on each of them, as well as its version. Metasploitable is another tool that is able to run attacks on a specified IP and ports to gain access to a machine.
Our app creates a graphical user interface for the use of both tools: it first scans an IP address with nmap, and then retrieves the attack script from Metasploitable that matches the version of the service so it can be used.
In one glance, see what ports of an IP address are open, and if they are vulnerable or not. If they are, then click on the `🕹️` button to run the attack.
## How we built it
* ⚛️ React for the front-end
* 🐍 Python with fastapi for the backend
* 🌐 nmap and 🪳 Metasploitable
* 📚 SQLite for the database
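As a rough illustration of the nmap service/version scan the Python backend wraps, here is a minimal python-nmap sketch (the FastAPI routes and the database lookup of matching attack scripts are not shown, and the exact integration may differ from ours):

```python
import nmap  # python-nmap bindings; requires nmap to be installed

def scan_host(ip: str, ports: str = "1-1024") -> list:
    scanner = nmap.PortScanner()
    scanner.scan(ip, ports, arguments="-sV")  # -sV = service/version detection
    results = []
    for port, info in scanner[ip].get("tcp", {}).items():
        results.append({
            "port": port,
            "state": info["state"],
            "service": info["name"],
            "product": info.get("product", ""),
            "version": info.get("version", ""),
        })
    return results

# print(scan_host("127.0.0.1"))
```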
## Challenges we ran into
Understanding that terminal sessions running under python take time to complete 💀
## Accomplishments that we're proud of
We are proud of the project in general. As cybersecurity peeps, we're making one small step for humans but a giant leap for hackers.
## What we learned
How Metasploitable actually works lol.
No for real just discovering new libraries is always one main takeaway during hackathons, and McHacks delivered for that one.
## What's next for Phoenix
Have a fuller database, and possibly a way to update it redundantly and less manually. Then, it's just a matter of showing it to the world. | ## Inspiration
We wanted to be able to connect with mentors. There are very few opportunities to do that outside of LinkedIn, where many of the mentors work in fields foreign to our interests.
## What it does
A networking website that connects mentors with mentees. It uses a weighted matching algorithm based on mentors' specializations and mentees' interests to prioritize matches.
## How we built it
Google Firebase is used for our NoSQL database which holds all user data. The other website elements were programmed using JavaScript and HTML.
## Challenges we ran into
There was no suitable matching algorithm module on Node.js that did not have version mismatches so we abandoned Node.js and programmed our own weighted matching algorithm. Also, our functions did not work since our code completed execution before Google Firebase returned the data from its API call, so we had to make all of our functions asynchronous.
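A minimal Python sketch of one way such a weighted interest/specialization score could be computed (illustrative only: our actual algorithm is written in JavaScript and its weights and data model may differ):

```python
def match_score(mentee_interests: dict, mentor_specializations: set) -> float:
    """Sum the mentee's weights for every interest the mentor specializes in."""
    return sum(w for topic, w in mentee_interests.items() if topic in mentor_specializations)

def rank_mentors(mentee_interests: dict, mentors: dict) -> list:
    scores = [(name, match_score(mentee_interests, specs)) for name, specs in mentors.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Hypothetical data
mentee = {"machine learning": 3.0, "startups": 1.0, "web dev": 2.0}
mentors = {
    "Alice": {"machine learning", "web dev"},
    "Bob": {"startups"},
    "Carol": {"web dev", "design"},
}
print(rank_mentors(mentee, mentors))  # Alice (5.0) ranks first
```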
## Accomplishments that we're proud of
We programmed our own weighted matching algorithm based on interest and specialization. Also, we refactored our entire code to make it suitable for asynchronous execution.
## What we learned
We learned how to use Google Firebase, Node.js and JavaScript from scratch. Additionally, we learned advanced programming concepts such as asynchronous programming.
## What's next for Pyre
We would like to add interactive elements such as integrated text chat between matched members. Additionally, we would like to incorporate distance between mentor and mentee into our matching algorithm. | As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool which provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible and anyone can learn information about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction.
We deployed our web app using Microsoft Azure, and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system, where we send users a text whenever the air quality warrants action such as staying indoors or wearing a P2 mask.
We have taken many approaches to improving our platform's scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batch. We used Azure Functions in order to achieve this.
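A minimal sketch of that async, batched API-call pattern is shown below (illustrative only: the endpoint URL and batch size are placeholders, and the production version runs inside Azure Functions):

```python
import asyncio
import aiohttp

WILDFIRE_URL = "https://example.com/api/wildfires"  # placeholder endpoint

async def fetch_region(session: aiohttp.ClientSession, region: str) -> dict:
    # Reuse one ClientSession (one connection pool) across all requests.
    async with session.get(WILDFIRE_URL, params={"region": region}) as resp:
        return await resp.json()

async def fetch_all(regions: list, batch_size: int = 10) -> list:
    results = []
    async with aiohttp.ClientSession() as session:
        for i in range(0, len(regions), batch_size):
            batch = regions[i:i + batch_size]
            results += await asyncio.gather(*(fetch_region(session, r) for r in batch))
    return results

# asyncio.run(fetch_all(["Australia", "Siberia", "California"]))
```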
Azure Notebook and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia.
Our web-app was designed using React, Python, and d3js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible, sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a color-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include this feature, however, we would hope to implement this in further stages of Phoenix. | partial |
## Inspiration
We've all been in the situation where we've run back and forth in the store, looking for a single small thing on our grocery list. We've all been on a time crunch and have found ourselves running back and forth from dairy to snacks to veggies, frustrated that we can't find what we need in an efficient way. This isn't a problem that just affects us as college students, but is also a problem which people of all ages face, including parents and elderly grandparents, which can make the shopping experience very unpleasant. InstaShop is a platform that solves this problem once and for all.
## What it does
Input any grocery list with a series of items to search for at the Target retail store in Boston. If an item is available, our application will search the Target store to see where it is located. It will add a point marking the item's location on the store map. You can add as many items as you wish. Then, based on the store map of the Target, we will provide the exact route you should take from the entrance to the exit to retrieve all of the items.
## How we built it
Based on the grocery list, we call the Target retail developer API to search for a given item and retrieve the aisle number of its location within the store. Alongside this, we wrote classes and functions to build a graph with different nodes to mock the exact layout of the store. Then, we plot the exact location of the given item on the map. Once the user is done inputting all of the items, we use our custom dynamic programming algorithm, which we developed as a variant of the Traveling Salesman Problem along with a breadth-first search. This algorithm returns the shortest path from the entrance, through all of your items, to the exit. We display the shortest path on the frontend.
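A minimal sketch of that route-planning idea is shown below: BFS gives the walking distance between every pair of points of interest on a grid map of the store, and a Held-Karp-style dynamic program over item subsets finds the shortest entrance-to-exit walk. The grid and positions are placeholders, not Target's real layout:

```python
from collections import deque

def bfs_dist(grid, start):
    """Shortest walking distance from `start` to every open cell (0 = aisle, 1 = shelf)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def shortest_route_length(grid, entrance, exit_, items):
    points = [entrance] + items + [exit_]          # 0 = entrance, 1..n = items, n+1 = exit
    n = len(items)
    dists = [bfs_dist(grid, p) for p in points]
    pair = [[dists[i][points[j]] for j in range(len(points))] for i in range(len(points))]

    INF = float("inf")
    # dp[mask][i] = best cost of starting at the entrance, picking up the items
    # in `mask`, and currently standing at item i.
    dp = [[INF] * (n + 1) for _ in range(1 << n)]
    for i in range(1, n + 1):
        dp[1 << (i - 1)][i] = pair[0][i]
    for mask in range(1 << n):
        for i in range(1, n + 1):
            if dp[mask][i] == INF or not mask & (1 << (i - 1)):
                continue
            for j in range(1, n + 1):
                if mask & (1 << (j - 1)):
                    continue
                nxt = mask | (1 << (j - 1))
                dp[nxt][j] = min(dp[nxt][j], dp[mask][i] + pair[i][j])
    full = (1 << n) - 1
    return min(dp[full][i] + pair[i][n + 1] for i in range(1, n + 1))

# 0 = walkable aisle, 1 = shelf
grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(shortest_route_length(grid, entrance=(0, 0), exit_=(0, 3), items=[(2, 1), (2, 3)]))  # 7
```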
## Challenges we ran into
One of the major problems we ran into was developing the intricacies of the algorithm. This is a rather involved algorithm (as mentioned above). Additionally, setting up the data structures with the nodes and edges, and creating the graph as a combination of the two, required a lot of thinking. We made sure to think through our data structures carefully and ensure that we were approaching the problem correctly.
## Accomplishments that we're proud of
According to our approximations for acquiring all of the items within the retail store, we are extremely proud that we brought our runtime down from 1932! \* 7 / 100! minutes to a few seconds. Initially, we were performing a recursive depth-first search on each of the nodes to calculate the shortest path taken. At first, it was working flawlessly on a smaller scale, but when we started to process the results on a larger scale (10\*10 grid), it took around 7 minutes to find the path for just one operation. Assuming that we scale this to the size of the store, one operation would take 7 divided by 100! minutes and the entire store would take 1932! \* 7 / 100! minutes. In order to improve this, we ran a breadth-first search combined with an application of the Traveling Salesman Problem in our custom dynamic-programming-based algorithm. We were able to bring it down to just a few seconds. Yay!
## What we learned
We learned about optimizing algorithms, working with graphs, and building an application from the ground up with careful attention to the structure of the data.
## What's next for InstaShop
Our next step is to go to Target and pitch our idea. We would like to establish partnerships with many Target stores and build a profitable business model that we can incorporate with Target. We strongly believe that this will be a huge help for the public. | ## Inspiration
Shopping can be a very frustrating experience at times. Nowadays, almost everything is digitally connected yet some stores fall behind when it comes to their shopping experience. We've unfortunately encountered scenarios where we weren't able to find products stocked at our local grocery store, and there have been times where we had no idea how much stock was left or if we need to hurry! Our app solves this issue, by displaying various data relating to each ingredient to the user.
## What it does
Our application aims to guide users to the nearest store that stocks the ingredient they're looking for. This is done on the maps section of the app, and the user can redirect to other stores in the area as well to find the most suitable option. Displaying the price also enables the user to find the most suitable product for them if there are alternatives, ultimately leading to a much smoother shopping experience.
## How we built it
The application was built using React Native and MongoDB. While there were some hurdles to overcome, we were finally able to get a functional application that we could view and interact with using Expo.
## Challenges we ran into
Despite our best efforts, we weren't able to fit the integration of the database within the allocated timeframe. Given that using MongoDB was a fairly new experience for us, we struggled to implement it correctly within our React Native code, which resulted in having to rely on hard-coded ingredients.
## Accomplishments that we're proud of
We're very proud of the progress we managed to get on our mobile app. Both of us have little experience ever making such a program, so we're very happy we have a fully functioning app in so little time.
Although we weren't able to get the database loaded into the search functionality, we're still quite proud of the fact that we were able to create and connect all users on the team to the database, as well as correctly upload documents to it and we were even able to get the database printing through our code. Just being able to connect to the database and correctly output it, as well as being able to implement a query functionality, was quite a positive experience since this was unfamiliar territory to us.
## What we learned
We learnt how to create and use databases with MongoDB and were able to enhance our React Native skills through importing Google Cloud APIs and being able to work with them (particularly through react-native-maps).
## What's next for IngredFind
In the future, we would hope to improve the front and back end of our application. Aside from visual tweaks and enhancing our features, as well as fixing any bugs that may occur, we would also hope to get the database fully functional and working and perhaps create the application that enables the grocery store to add and alter products on their end. | **Come check out our fun Demo near the Google Cloud Booth in the West Atrium!! Could you use a physiotherapy exercise?**
## The problem
A specific problem physiotherapy addresses is that joint movement may become limited through muscle atrophy, surgery, accidents, strokes, or other causes. Reportedly, up to 70% of patients give up physiotherapy too early — often because they cannot see the progress. Automated tracking of range of motion (ROM) via a mobile app could help patients reach their physiotherapy goals.
Insurance studies showed that 70% of people quit physiotherapy sessions once the pain disappears and they regain their mobility. The reasons are multiple, and we can mention a few of them: the cost of treatment, the feeling that they have recovered, no more time to dedicate to recovery, and the loss of motivation. The worst part is that half of them see the injury reappear within 2-3 years.
Current pose tracking technology is NOT realtime and automatic, requiring physiotherapists on hand and **expensive** tracking devices. Although these work well, there is HUGE room for improvement in developing a cheap and scalable solution.
Additionally, many seniors are unable to comprehend current solutions and are unable to adapt to current in-home technology, let alone the kinds of tech that require hours of professional setup and guidance, as well as expensive equipment.
[![IMAGE ALT TEXT HERE](https://res.cloudinary.com/devpost/image/fetch/s--GBtdEkw5--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://img.youtube.com/vi/PrbmBMehYx0/0.jpg)](http://www.youtube.com/watch?feature=player_embedded&v=PrbmBMehYx0)
## Our Solution!
* Our solution **only requires a device with a working internet connection!!** We aim to revolutionize the physiotherapy industry by allowing for extensive scaling and efficiency of physiotherapy clinics and businesses. We understand that in many areas, the therapist to patient ratio may be too high to be profitable, reducing quality and range of service for everyone, so an app to do this remotely is revolutionary.
We collect real-time 3D position data of the patient's body while they do their exercises, using a machine learning model implemented directly in the browser. The data is first analyzed within the app and then provided to a physiotherapist, who can further analyze it and adjust the exercises. The app also asks the patient for subjective feedback on a pain scale.
This makes physiotherapy exercise feedback from a therapist more accessible to remote individuals **WORLDWIDE**.
## Inspiration
* The growing need for accessible physiotherapy among seniors, stroke patients, and individuals in third-world countries without access to therapists but with a stable internet connection
* The room for AI and ML innovation within the physiotherapy market for scaling and growth
## How I built it
* Firebase hosting
* Google cloud services
* React front-end
* Tensorflow PoseNet ML model for computer vision
* Several algorithms to analyze 3D pose data (a rough sketch of one such joint-angle computation is shown below).
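For example, one basic building block of that analysis is computing the angle at a joint from three 3D keypoints; here is an illustrative Python/NumPy sketch (the app itself does this in the browser on PoseNet output):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3D keypoints a-b-c,
    e.g. shoulder-elbow-wrist for an elbow flexion measurement."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    u, v = a - b, c - b
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# hypothetical keypoints (x, y, z)
shoulder, elbow, wrist = (0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.4, 0.5, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # ~90 degrees
```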
## Challenges I ran into
* Testing in React Native
* Getting accurate angle data
* Setting up an accurate timer
* Setting up the ML model to work with the camera using React
## Accomplishments that I'm proud of
* Getting real-time 3D position data
* Supporting multiple exercises
* Collection of objective quantitative as well as qualitative subjective data from the patient for the therapist
* Increasing the usability for senior patients by moving data analysis onto the therapist's side
* **finishing this within 48 hours!!!!** We did NOT think we could do it, but we came up with a working MVP!!!
## What I learned
* How to implement Tensorflow models in React
* Creating reusable components and styling in React
* Creating algorithms to analyze 3D space
## What's next for Physio-Space
* Implementing the sharing of the collected 3D position data with the therapist
* Adding a dashboard onto the therapist's side | losing |
## Inspiration
To introduce a more impartial and trustworthy form of vote submission in response to the controversy surrounding democratic electoral polling following the 2018 US midterm elections. That event was clouded by doubt, with citizen voters questioning the authenticity of the results. This propelled the idea of bringing much-needed decentralized security to the polling process.
## What it does
Allows voters to vote through a web portal on a blockchain. The web portal is written in HTML and JavaScript using the Bootstrap UI framework, and uses jQuery to send Ajax HTTP requests to a Flask server written in Python that communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses said passphrase to cast their ballot anonymously and securely. Following this, their vote and passphrase go to the Flask web server, where they are properly parsed and sent to the ARK blockchain as a transaction. Each transaction is delegated one ARK coin, which represents the count. Finally, a paper trail is generated following the submission of the vote on the web portal, in the event public verification is needed.
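As a rough sketch of that Flask intermediary, here is an illustrative endpoint (broadcast_ark_vote() is a hypothetical placeholder for the code that builds and submits the one-ARK transaction; it is not our actual implementation):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def broadcast_ark_vote(passphrase: str, candidate: str) -> str:
    """Placeholder for signing and broadcasting the ARK transaction."""
    raise NotImplementedError

@app.route("/vote", methods=["POST"])
def vote():
    data = request.get_json()
    passphrase = data["passphrase"]  # one-time passphrase issued at the polling station
    candidate = data["candidate"]
    tx_id = broadcast_ark_vote(passphrase, candidate)
    # The paper-trail entry keyed by the transaction id would be generated here.
    return jsonify({"transaction_id": tx_id})
```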
## How we built it
The initial approach was to use Node.js; however, we opted for Python with Flask, as it proved to be an easier solution to implement. Visual Studio Code was used to build the HTML and CSS front end for the visual voting interface, while the ARK blockchain was run in a Docker container. These were used together to deliver the web-based application.
## Challenges I ran into
* Integrating the front end and back end into a seamless app
* Using Flask as an intermediary between the front end and the blockchain back end
* Understanding how to incorporate blockchain, how to use it, and what security it can provide for this purpose
## Accomplishments that I'm proud of
* Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern
## What I learned
* Application of ARK.io blockchain and security protocols
* The multiple stages of encryption involved in converting passphrases to private and public keys
* Utilizing JQuery to compile a comprehensive program
## What's next for Block Vote
Expand Block Vote’s applicability in other areas requiring decentralized and trusted security, hence, introducing a universal initiative. | ## Inspiration
After observing the news about the use of police force for so long, we asked ourselves how to help solve it. We realized that in some ways, the problem was made worse by a lack of trust in law enforcement. We then realized that we could use blockchain to create a better system for accountability in the use of force. We believe that it can help people trust law enforcement officers more and diminish the use of force when possible, saving lives.
## What it does
Chain Gun is a modification for a gun (a Nerf gun for the purposes of the hackathon) that sits behind the trigger mechanism. When the gun is fired, the GPS location and ID of the gun are put onto the Ethereum blockchain.
## Challenges we ran into
Some things did not work well with the new updates to Web3 causing a continuous stream of bugs. To add to this, the major updates broke most old code samples. Android lacks a good implementation of any Ethereum client making it a poor platform for connecting the gun to the blockchain. Sending raw transactions is not very well documented, especially when signing the transactions manually with a public/private keypair.
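For reference, here is what that raw-transaction flow looks like in a minimal web3.py (v6) sketch; the project itself uses Web3.js 1.0.0, attribute names differ across web3.py versions, and the RPC URL, key, and address below are placeholders:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder node
PRIVATE_KEY = "0x" + "11" * 32                          # placeholder key
account = w3.eth.account.from_key(PRIVATE_KEY)

tx = {
    "to": "0x0000000000000000000000000000000000000000",  # placeholder address
    "value": 0,
    "gas": 100_000,
    "gasPrice": w3.to_wei(1, "gwei"),
    "nonce": w3.eth.get_transaction_count(account.address),
    "chainId": 1337,
    # The gun ID and GPS coordinates would be encoded into the data field here.
}

signed = w3.eth.account.sign_transaction(tx, private_key=PRIVATE_KEY)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)  # raw_transaction in web3.py v7
print(tx_hash.hex())
```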
## Accomplishments that we're proud of
* Combining many parts to form a solution including an Android app, a smart contract, two different back ends, and a front end
* Working together to create something we believe has the ability to change the world for the better.
## What we learned
* Hardware prototyping
* Integrating a bunch of different platforms into one system (Arduino, Android, Ethereum Blockchain, Node.JS API, React.JS frontend)
* Web3 1.0.0
## What's next for Chain Gun
* Refine the prototype | ## Inspiration
While working on a political campaign this summer, we noticed a lack of distributed systems that allowed for civic engagement. The campaigns we saw had large numbers of volunteers willing to do cold calling to help their favorite candidate win the election, but who lacked the infrastructure to do so. Even the few who did manage to do so successfully are utilized ineffectively. Since they have no communication with the campaign, they often end up wasting many calls on people who would vote for their candidate anyway, or in districts where their candidate has an overwhelming majority.
## What it does
Our app allows political campaigns and volunteers to strategize and work together to get their candidate elected. On the logistical end, in our web dashboard campaign managers can work to target those most open to their candidate and see what people are saying. They can also input the phone numbers of people in districts which are most vital to the campaign, and have their volunteers target those people.
## How we built it
We took a two-pronged approach to building our applications, since they would have to serve two different audiences. Our web app is more analytically focused and closely resembles an enterprise app in its sophistication and functionality. It allows campaign staff to clearly see how their volunteers are being utilized and who they're calling, and to perform advanced analytical operations to enhance volunteer effectiveness.
This is very different from the approach we took with the consumer app, which we wanted to make as easy to use and intuitive as possible. Our consumer-facing app allows users to quickly log in with their Google accounts and, with the touch of a button, start calling voters who are carefully curated by the campaign staff on their dashboard. We also added a gamification element by adding a leaderboard and offering the user simple analytics on their performance.
## Challenges we ran into
One challenge we ran into was getting statistically relevant data into our platform. At first we struggled with creating an easy to use interface for users to convey information about people they called back to the campaign staff without making the process tedious. We solved this problem by spending a lot of time refining our app's user interface to be as simple as possible.
## Accomplishments that we're proud of
We're very proud of the fact that we were able to build what is essentially two closely integrated platforms in one hackathon. Our iOS app is built natively in Swift while our website is built in PHP, so very little of the code, besides the API, was reusable despite the fact that the two apps were constantly interfacing with each other.
## What we learned
That creating effective actionable data is hard, and that it's not being done enough. We also learned through the process of brainstorming the concept for the app that for civic movements to be effective in the future, they have to be more strategic with who they target, and how they utilize their volunteers.
## What's next for PolitiCall
Analytics are at the core of any modern political campaign, and we believe volunteers calling thousands of people are one of the best ways to gather analytics. We plan to combine user gathered analytics with proprietary campaign information to offer campaign managers the best possible picture of their campaign, and what they need to focus on. | winning |
# BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
+ Real time question generation based on (real-time) user journaling (and the rest of their writing)!
+ Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
+ Question-corpus matching - we search for good questions about the user's current topics
* NLP on previous journal entries for sentiment analysis
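A rough sketch of the kind of CNN + LSTM architecture this implies (a CNN photo encoder conditioning an LSTM question decoder) is shown below; the layer sizes and vocabulary are illustrative assumptions, not our actual model:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 10_000  # hypothetical vocabulary size

# CNN photo encoder
image_in = layers.Input(shape=(224, 224, 3), name="photo")
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
image_feat = layers.Dense(256, activation="relu")(x)

# LSTM decoder that generates the question tokens, conditioned on the photo
token_in = layers.Input(shape=(None,), dtype="int32", name="question_so_far")
emb = layers.Embedding(VOCAB_SIZE, 256)(token_in)
lstm_out = layers.LSTM(256, return_sequences=True)(emb, initial_state=[image_feat, image_feat])
logits = layers.Dense(VOCAB_SIZE)(lstm_out)

model = Model([image_in, token_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```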
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan | ## Inspiration
Have you ever had to wait in long lines just to buy a few items from a store? Not wanted to interact with employees to get what you want? Now you can buy items quickly and hassle free through your phone, without interacting with any people whatsoever.
## What it does
CheckMeOut is an iOS application that allows users to buy an item that has been 'locked' in a store, for example clothing that has security sensors attached or items that are physically locked behind glass. Users can scan a QR code or use Apple Pay to quickly access information about an item (price, description, etc.) and 'unlock' the item by paying for it. The user will not have to interact with any store clerks or wait in line to buy the item.
## How we built it
We used Xcode to build the iOS application and MS Azure to host our backend. We used an Intel Edison board to help simulate the 'locking' of an item.
## Challenges I ran into
We're using many technologies that our team is unfamiliar with, namely Swift and Azure.
## What I learned
I've learned not to underestimate things you don't know, to ask for help when you need it, and to just have a good time.
## What's next for CheckMeOut
Hope to see it more polished in the future. | ## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front-end, we used React which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle the communication between the resource intensive back-end tasks we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E 2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication, which allows us to keep the HTTP connection open; once the work queue is done processing data, it sends a notification to the React client.
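A minimal sketch of the kind of Celery worker that sits behind the RabbitMQ queue is shown below (illustrative only: the task name, broker URL, and story logic are assumptions, not our actual code):

```python
from celery import Celery

app = Celery("dream_tasks", broker="amqp://guest:guest@localhost:5672//")

@app.task(name="stories.generate_page")
def generate_page(prompt: str, characters: list, setting: str) -> dict:
    """Heavy generation work runs here, off the web request path."""
    # In the real pipeline this is where the GPT-3 / DALL-E 2 calls happen,
    # the result is cached in MongoDB, and the client is notified via socket.io.
    story_text = f"A story about {', '.join(characters)} in {setting}: {prompt}"
    return {"text": story_text, "image_url": None}

# Enqueued from the API layer with, for example:
#   generate_page.delay("a lost star", ["Luna"], "an enchanted forest")
```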
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users. | winning |
# 🪼 **SeaScript** 🪸
## Inspiration
Learning MATLAB can be as appealing as a jellyfish sting. Traditional resources often leave students lost at sea, making the process more exhausting than a shark's endless swim. SeaScript transforms this challenge into an underwater adventure, turning the tedious journey of mastering MATLAB into an exciting expedition.
## What it does
SeaScript plunges you into an oceanic MATLAB adventure with three distinct zones:
1. 🪼 Jellyfish Junction: Help bioluminescent jellies navigate nighttime waters.
2. 🦈 Shark Bay: Count endangered sharks to aid conservation efforts.
3. 🪸 Coral Code Reef: Assist Nemo in finding the tallest coral home.
Solve MATLAB challenges in each zone to collect puzzle pieces, unlocking a final mystery message. It's not just coding – it's saving the ocean, one function at a time!
## How we built it
* Python game engine for our underwater world
* MATLAB integration for real-time, LeetCode-style challenges (see the sketch after this list)
* MongoDB for data storage (player progress, challenges, marine trivia)
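A minimal sketch of how a player's MATLAB solution could be checked from Python via the official MATLAB Engine API is shown below (assumes MATLAB and the engine package are installed; the variable-name convention and the challenge are illustrative, not our exact harness):

```python
import matlab.engine

def check_challenge(player_code: str, expected: float) -> bool:
    eng = matlab.engine.start_matlab()
    try:
        eng.eval(player_code, nargout=0)   # run the player's MATLAB code
        answer = eng.workspace["answer"]   # challenge must define a variable `answer`
        return abs(float(answer) - expected) < 1e-6
    finally:
        eng.quit()

# hypothetical Jellyfish Junction check
print(check_challenge("answer = mean([0.2 0.4 0.9]);", 0.5))
```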
## Challenges we ran into
* Seamlessly integrating MATLAB with our Python game engine
* Crafting progressively difficult challenges without overwhelming players
* Balancing education and entertainment (fish puns have limits!)
## Accomplishments that we're proud of
* Created a unique three-part underwater journey for MATLAB learning
* Successfully merged MATLAB, Python, and MongoDB into a cohesive game
* Developed a rewarding puzzle system that tracks player progress
## What we learned
* MATLAB's capabilities are as vast as the ocean
* Gamification can transform challenging subjects into adventures
* The power of combining coding, marine biology, and puzzle-solving in education
## What's next for SeaScript
* Expand with more advanced MATLAB concepts
* Implement multiplayer modes for collaborative problem-solving
* Develop mobile and VR versions for on-the-go and immersive learning
Ready to dive in? Don't let MATLAB be the one that got away – catch the wave with SeaScript and code like a fish boss! 🐠👑 | ## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front-end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction to ensure a smooth experience for both the instructor and the student.
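As a rough illustration of the speech-to-text step behind the live transcripts, here is a minimal Google Cloud Speech-to-Text sketch (the real pipeline streams audio chunks; this shows the simpler synchronous call on a recorded clip, with placeholder audio settings):

```python
from google.cloud import speech  # requires Google Cloud credentials

def transcribe(wav_bytes: bytes) -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

# with open("lecture_chunk.wav", "rb") as f:
#     print(transcribe(f.read()))
```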
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a python script that would help bridge the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site.
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone is on the same page about what is going on and everything that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API. We were able to work around the constraints of the APIs to create a working product. We also learned more about other technologies that we used, such as Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens. | ## Inspiration
All of us college students can relate to having a teacher who was not engaging enough during lectures, or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor's lesson audio in order to differentiate between the various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection and analysis software. We then store everything in a database and show the data on a dashboard, which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
I used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them to work for our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new original functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to seamlessly visualize it using a pie chart, providing easy and seamless integration with the database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher's movements and hear the volume while not disturbing the flow of class. Currently, the main sensors being utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow itself to change from a stationary position to a mobile position. It's able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made the interface simple for the professor, so that all they would need to do is scan in using their school ID, then either check their lecture data or start the lecture. Overall, the professor is able to see whether the device is tracking their movements and volume throughout the lecture, and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through SolidWorks, and using Figma for the first time. I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and look back at my UI/UX notes from the Google Coursera certification course that I’m taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method, as they involved logic specific to our software. Fortunately, these problems just needed time and a lot of debugging, with some help from peers and existing resources; and since React is JavaScript-based, I was able to use past experience with JS and Django to help despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run smoothly and interact in the correct manner. Often I ended up in dependency hell, and had to rethink the architecture of the whole project to avoid over-engineering it, without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how to utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to leverage his strengths to create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman at his first hackathon and was able to use the experience to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just in the coding aspect but with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience on the website by adding more features and better style designs for the professor to interact with. In addition, we would add motion-tracking data feedback so the professor can get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand what the students need the most help with.
From a business standpoint, we would like to see if we could team up with our university, Illinois Institute of Technology, and test the product in actual classrooms. | partial |
## What it does
ColoVR is a virtual experience allowing users to modify their environment with colors and models. It's a relaxing, low poly experience that is the first of its kind to bring an accessible 3D experience to the Google Cardboard platform. It's a great way to become a creative in 3D without dropping a ton of money on VR hardware.
## How we built it
We used Google Daydream and extended the demo scene to our liking. We used Unity and C# to create the scene and handle all the user interactions. We implemented painting and dragging objects around the scene using raycasting, a common graphics technique. We colored the scene by changing the vertex colors. We also created a palette that allowed you to change tools and colors, and to insert meshes. We hand-modeled the low-poly meshes in Maya.
## Challenges we ran into
The main challenge was finding a proper mechanism for coloring the geometry in the scene. We first tried to paint the mesh face by face, but we had no way of accessing an entire face given the way the Unity mesh was set up. We then looked into an asset that would help us paint directly on top of the texture, but it ended up being too slow. Textures tend to render too slowly, so we ended up doing it with a vertex shader that would interpolate between vertex colors, allowing real-time painting of meshes. We implemented it so that we changed all the vertices that belong to a fragment of a face. The vertex shader was the fastest way that we could render real-time painting and emulate the painting of a triangulated face. Our second biggest challenge was using raycasting to find the object we were interacting with. We had to navigate the Unity API and get acquainted with its physics raycaster and mesh colliders to properly implement all interaction with the cursor. Our third challenge was making use of the controller that Google Daydream supplied us with - it was great but very sandboxed, so we were somewhat limited in terms of functionality. We had to find a way to change all the colors, insert different meshes, and interact with the objects with only two available buttons.
## Accomplishments that we're proud of
We're proud of the fact that we were able to get a clean, working project that was exactly what we pictured. People seemed to really enjoy the concept and interaction.
## What we learned
How to optimize for a certain platform - in terms of UI, geometry, textures and interaction.
## What's next for ColoVR
Hats, and an interactive, collaborative space for multiple users. We want to be able to host the scene and make it accessible to multiple users at once, and store the state of the scene (maybe even turn it into an infinite world) where users can explore what past users have done and see the changes other users make in real time.
Collaboration is the heart of humanity. From contributing to the rises and falls of great civilizations to helping five sleep-deprived hackers communicate over 36 hours, it has become a required dependency in git@ssh.github.com:**hacker/life**.
Plowing through the weekend, we found ourselves shortchanged by the currently available tools for collaborating within a cluster of devices. Every service requires:
1. *Authentication*: Users must sign up and register to use a service.
2. *Contact*: People trying to share information must establish a prior point of contact (share e-mails, phone numbers, etc.).
3. *Unlinked Output*: Shared content is not deep-linked with mobile or web applications.
This is where Grasshopper jumps in. Built as a **streamlined cross-platform contextual collaboration service**, it uses our on-prem installation of Magnet on Microsoft Azure and Google Cloud Messaging to integrate deep-linked commands and execute real-time applications between all mobile and web platforms located in a cluster near each other. It completely gets rid of the overhead of authenticating users and sharing contacts - all sharing is done locally through gps-enabled devices.
## Use Cases
Grasshopper lets you collaborate locally between friends/colleagues. We account for sharing application information at the deepest contextual level, launching instances with accurately prepopulated information where necessary. As this data is compatible with all third-party applications, the use cases can shoot through the sky. Here are some applications that we accounted for to demonstrate the power of our platform:
1. Share a video within a team through their mobile's native YouTube app **in seconds**.
2. Instantly play said video on a bigger screen by hopping the video over to Chrome on your computer.
3. Share locations on Google Maps between nearby mobile devices and computers with a single swipe.
4. Remotely hop important links from your smartphone over to your computer's web browser while you browse.
5. Rick Roll your team.
## What's Next?
Sleep. | ## Inspiration
The Metaverse, VR, and games today lack true immersion. Even in the Metaverse, as a person, you exist as a phantom from the waist down. The movement of your elbows is predicted by an algorithm and can look unstable and jittery. Worst of all, you have to use joycons to do something like waterbend or spawn a fireball in your open palm.
## What it does
We built an iPhone powered full-body 3D tracking system that captures every aspect of the way that you move, and it costs practically nothing. By leveraging MediaPipePose's precise body part tracking and Unity's dynamic digital environment, it allows users to embody a virtual avatar that mirrors their real-life movements with precision. The use of Python-based sockets facilitates real-time communication, ensuring seamless and immediate translation of physical actions into the virtual world, elevating immersion for users engaging in virtual experiences.
## How we built it
To create our real-life full-body tracking avatar, we integrated MediaPipePose with Unity and utilized Python-based sockets. Initially, we employed MediaPipePose's computer vision capabilities to capture precise body part coordinates, forming the avatar's basis. Simultaneously, we built a dynamic digital environment within Unity to house the avatar. The critical link connecting these technologies was established through Python-based sockets, enabling real-time communication. This integration seamlessly translated users' physical movements into their virtual avatars, enhancing immersion in virtual spaces.
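A minimal sketch of the Python side of that pipeline is below: read webcam frames, run MediaPipe Pose, and stream the landmark coordinates to Unity over a TCP socket as newline-delimited JSON. The port number and message framing are assumptions; a C# script on the Unity side would read and parse the same stream.

```
import cv2
import json
import socket
import mediapipe as mp

HOST, PORT = "127.0.0.1", 5056  # assumed; must match the Unity listener

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))

pose = mp.solutions.pose.Pose(model_complexity=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR frames.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 landmarks, each with normalized x/y and a relative z.
        points = [
            {"x": lm.x, "y": lm.y, "z": lm.z}
            for lm in results.pose_landmarks.landmark
        ]
        sock.sendall((json.dumps(points) + "\n").encode())

cap.release()
sock.close()
```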
## Challenges we ran into
There were a number of issues. We first used MediaPipe Holistic but then realized it was a legacy system and we couldn't get 3D coordinates for the hands. Then, we transitioned to using MediaPipe Pose for the person's body, cutting out small sections of the image where we detected hands, and running MediaPipe Hands on those sub-images to capture both the position of the body and the position of the hands. The math required to map the local coordinate system of the hand tracking to the global coordinate system of the full-body pose was difficult. There were latency issues between Python and Unity that had to be resolved by decreasing the number of data points. We also had to use techniques like an exponential moving average to make the movements smoother. And naturally, there were hundreds of bugs that had to be resolved in parsing, moving, storing, and working with the data from these deep learning CV models.
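The exponential moving average mentioned above is simple to express; a sketch follows. The smoothing factor of 0.3 is an assumption: lower values are smoother but add perceived lag.

```
class LandmarkSmoother:
    """Exponentially smooths a stream of (x, y, z) landmark tuples."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, landmarks):
        if self.state is None:
            self.state = landmarks
        else:
            self.state = [
                tuple(self.alpha * new + (1 - self.alpha) * old
                      for new, old in zip(pt, prev))
                for pt, prev in zip(landmarks, self.state)
            ]
        return self.state
```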
## Accomplishments that we're proud of
We're proud of achieving full-body tracking, which enhances user immersion. Equally satisfying is our seamless transition of the body with little latency, ensuring a fluid user experience.
## What we learned
For one, we learned how to integrate MediaPipe Pose with Unity, all the while learning socket programming for data transfer via servers in Python. We learned how to integrate C# with Python, since Unity only works with C# scripts and MediaPipe Pose only works with Python scripts. We also learned how to use OpenCV and computer vision pretty intimately, since we had to work around a number of limitations in the libraries and old legacy code lurking around Google. There was also an element of asynchronous code handling via queues. Cool to see the data structure in action!
## What's next for Full Body Fusion!
Optimizing the hand mappings, implementing gesture recognition, and adding a real avatar in Unity instead of white dots. | partial |
# Inspiration and Product
There's a certain feeling we all have when we're lost. It's a combination of apprehension and curiosity – and it usually drives us to explore and learn more about what we see. It happens to be the case that there's a [huge disconnect](http://www.purdue.edu/discoverypark/vaccine/assets/pdfs/publications/pdf/Storylines%20-%20Visual%20Exploration%20and.pdf) between that which we see around us and that which we know: the building in front of us might look like an historic and famous structure, but we might not be able to understand its significance until we read about it in a book, at which time we lose the ability to visually experience that which we're in front of.
Insight gives you actionable information about your surroundings in a visual format that allows you to immerse yourself in your surroundings: whether that's exploring them, or finding your way through them. The app puts the true directions of obstacles around you where you can see them, and shows you descriptions of them as you turn your phone around. Need directions to one of them? Get them without leaving the app. Insight also supports deeper exploration of what's around you: everything from restaurant ratings to the history of the buildings you're near.
## Features
* View places around you heads-up on your phone - as you rotate, your field of vision changes in real time.
* Facebook Integration: trying to find a meeting or party? Call your Facebook events into Insight to get your bearings.
* Directions, wherever, whenever: surveying the area and found where you want to be? Touch and get directions instantly.
* Filter events based on your location. Want a tour of Yale? Touch to filter only Yale buildings, and learn about their history and culture. Want to get a bite to eat? Change to a restaurant view. Want both? You get the idea.
* Slow day? Change your radius to a short distance to filter out locations. Feeling adventurous? Change your field of vision the other way.
* Want get the word out on where you are? Automatically check-in with Facebook at any of the locations you see around you, without leaving the app.
# Engineering
## High-Level Tech Stack
* NodeJS powers a RESTful API hosted on Microsoft Azure.
* The API server takes advantage of a wealth of Azure's computational resources:
+ A Windows Server 2012 R2 Instance, and an Ubuntu 14.04 Trusty instance, each of which handle different batches of geospatial calculations
+ Azure internal load balancers
+ Azure CDN for asset pipelining
+ Azure automation accounts for version control
* The Bing Maps API suite, which offers powerful geospatial analysis tools:
+ RESTful services such as the Bing Spatial Data Service
+ Bing Maps' Spatial Query API
+ Bing Maps' AJAX control, externally through direction and waypoint services
* iOS Objective-C clients interact with the server RESTfully and display results as they are parsed
## Application Flow
iOS handles the entirety of the user interaction layer and authentication layer for user input. Users open the app, and, if logging in with Facebook or Office 365, proceed through the standard OAuth flow, all on-phone. Users can also opt to skip the authentication process with either provider (in which case they forfeit the option to integrate Facebook events or Office365 calendar events into their views).
After sign in (assuming the user grants permission for use of these resources), and upon startup of the camera, requests are sent with the user's current location to a central server on an Ubuntu box on Azure. The server parses that location data, and initiates a multithread Node process via Windows 2012 R2 instances. These processes do the following, and more:
* Geospatial radial search schemes with data from Bing
* Location detail API calls from Bing Spatial Query APIs
* Review data about relevant places from a slew of APIs
After the data is all present on the server, it's combined and analyzed, also on R2 instances, via the following:
* Haversine calculations for distance measurements, in accordance with radial searches
* Heading data (to make client side parsing feasible)
* Condensation and dynamic merging - asynchronous cross-checking of the collected data to determine which events are closest
Ubuntu brokers and manages the data, sends it back to the client, and prepares for and handles future requests.
## Other Notes
* The most intense calculations involved the application of the [Haversine formulae](https://en.wikipedia.org/wiki/Haversine_formula), i.e. for two points on a sphere, the central angle between them can be described as:
![Haversine 1](https://upload.wikimedia.org/math/1/5/a/15ab0df72b9175347e2d1efb6d1053e8.png)
and the distance as:
![Haversine 2](https://upload.wikimedia.org/math/0/5/5/055b634f6fe6c8d370c9fa48613dd7f9.png)
(the result of which is non-standard/non-Euclidean due to the Earth's curvature).
These calculations are handled by the Windows R2 instance, essentially running as a computation engine. All communications are RESTful between all internal server instances.
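For reference, the same great-circle distance, together with the initial bearing (heading) used to place a point in the user's field of view, can be written compactly. The sketch below is a clean-room Python illustration rather than the project's server code.

```
from math import asin, atan2, cos, degrees, radians, sin, sqrt

EARTH_RADIUS_M = 6371000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(h))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Compass heading (0-360 degrees) from point 1 toward point 2."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dlmb = radians(lon2 - lon1)
    x = sin(dlmb) * cos(phi2)
    y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlmb)
    return (degrees(atan2(x, y)) + 360) % 360
```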
## Challenges We Ran Into
* *iOS and rotation*: there are a number of limitations in iOS that prevent interaction with the camera in landscape mode (a problem, given the need for users to see a wide field of view). For one thing, the requisite data registers aren't even accessible via daemons when the phone is in landscape mode. This was the root of the vast majority of our problems on iOS, since we were unable to use any inherited or pre-made views (we couldn't rotate them) - we had to build all of our views from scratch.
* *Azure deployment specifics with Windows R2*: running a pure calculation engine (written primarily in C# with ASP.NET network interfacing components) was tricky at times to set up and get logging data for.
* *Simultaneous and asynchronous analysis*: Simultaneously parsing asynchronously-arriving data with uniform Node threads presented challenges. Our solution was ultimately a recursive one that involved checking the status of other resources upon reaching the base case, then using that knowledge to better sort data as the bottoming-out step bubbled up.
* *Deprecations in Facebook's Graph APIs*: we needed to use the Facebook Graph APIs to query specific Facebook events for their locations: a feature only available in a slightly older version of the API. We thus had to use that version, concurrently with the newer version (which also had unique location-related features we relied on), which created some degree of confusion and required care.
## A few of Our Favorite Code Snippets
A few gems from our codebase:
```
var deprecatedFQLQuery = '...
```
*The story*: in order to extract geolocation data from events vis-a-vis the Facebook Graph API, we were forced to use a deprecated API version for that specific query, which proved challenging in how we versioned our interactions with the Facebook API.
```
addYaleBuildings(placeDetails, function(bulldogArray) {
addGoogleRadarSearch(bulldogArray, function(luxEtVeritas) {
...
```
*The story*: dealing with quite a lot of Yale API data meant we needed to be creative with our naming...
```
// R is the earth's radius in meters
var a = R * 2 * Math.atan2(Math.sqrt((Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) + Math.cos((Math.PI / 180) * (latitude1)) * Math.cos((Math.PI / 180) * (latitude2)) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2))), Math.sqrt(1 - (Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) * Math.sin((Math.PI / 180) * (latitude2 - latitude1) / 2) + Math.cos((Math.PI / 180) * (latitude1)) * Math.cos((Math.PI / 180) * (latitude2)) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2) * Math.sin((Math.PI / 180) * (longitude2 - longitude1) / 2))));
```
*The story*: while it was changed and condensed shortly after we noticed its proliferation, our implementation of the Haversine formula became cumbersome very quickly. Degree/radian mismatches between APIs didn't make things any easier.
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur. | ## Where we got the spark?
**No one is born without talents**.
Many of us have been in this situation in childhood: no one gets a chance to reveal their skills or be guided on their ideas. Some skills stay buried without proper guidance, and we often don't even have peers to talk to and develop our skills with in the respective field. Even in college, if we are starters, we have trouble with implementation. So we started working on a solution to help others who find themselves in this same crisis.
## How it works?
**Connect with neuron of your same kind**
Based on the problem we faced, we are bridging bloomers in their respective fields to experts, and to people in the same field who need a teammate or a friend to develop the idea with. Through the guidance of experts and experienced professors, they can also become aware of the resources needed to develop themselves in that field.
We can also connect people from all over the globe using a language translator, which makes everyone feel native.
## How we built it
**1.Problem analysis:**
We looked at problems all over the globe in the field of education, came across several, and chose one whose solution addresses several problems at once.
**2.Idea Development:**
We examined the problem we chose, the missing features, and possible solutions, resolved as many open questions as we could, and developed the idea as far as possible.
**3.Prototype development:**
We developed a working prototype and gained good experience building it.
## Challenges we ran into
Our plan is to get our application to every bloomer and expert, but what will make them join our community? It will be hard to convince them that our application will help them learn new things.
## Accomplishments that we're proud of
The jobs which are currently popular may or may not be popular after 10 years. Our world will always look for a better version of its current self. We are satisfied that our idea will help hundreds of children like us who don't even know about the new things in today's world. Our application may help them learn about these things earlier than usual, which may help them follow a path that matches their interests. We are proud that we are part of their development.
## What we learned
We learned that many people suffer from a lack of help for their ideas and projects, and we felt helpless when we learned this. So we planned to build a web application to help them with their projects and ideas, together with experts and people of their own kind. So, **guidance is important. No one is born a pro.**
We also learned how to help people understand new things based on their interests of study by guiding them along the path of their dreams.
## What's next for EXPERTISE WITH
We're planning to advertise our web application across all social media and to help people from all over the world who are unable to get help developing and implementing their ideas and projects. | winning |
## Inspiration
My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor who didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals.
## What it does
Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about their clients and financial assets as well as past interactions to create a deep and objective measure of interaction quality and maximize it through optimal matches.
## How we built it
The whole program is built in Python using several libraries for gathering financial data, processing it, and building scalable models using AWS. The main differentiator of our model is its full utilization of past data during training to make analyses more holistic and accurate. Instead of going with a classification solution or a neural network, we combine several models to analyze specific user features and classify broad features before the main model, where we build a regression model for each category.
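A stripped-down sketch of that structure is below: a classifier first assigns each record a broad category, and a separate regression model per category then predicts the quality of a client-advisor pairing. The column names and data layout are hypothetical, not the project's actual schema.

```
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier

def train_matcher(df):
    """df holds client/advisor features, a 'category' label, and an
    'interaction_quality' score derived from past interactions."""
    feature_cols = [c for c in df.columns
                    if c not in ("category", "interaction_quality")]

    # Stage 1: classifier that assigns each record a broad category.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(df[feature_cols], df["category"])

    # Stage 2: one regression model per category.
    regressors = {}
    for cat, group in df.groupby("category"):
        reg = GradientBoostingRegressor(random_state=0)
        reg.fit(group[feature_cols], group["interaction_quality"])
        regressors[cat] = reg
    return clf, regressors

def predicted_quality(clf, regressors, pair_features):
    """Score candidate client-advisor pairings with the per-category models."""
    cats = clf.predict(pair_features)
    return [regressors[c].predict(pair_features.iloc[[i]])[0]
            for i, c in enumerate(cats)]
```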
## Challenges we ran into
Our group member crucial to building a front-end could not make it, so our designs are not fully interactive. We also had much to code but not enough time to debug, which left the software not fully working. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as to rate clients' investment habits on a numerical scale. We assigned a numerical bonus to clients who consistently invest at a certain rate. The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea.
## Accomplishments that we're proud of
Learning a whole new machine learning framework using SageMaker and crafting custom, objective algorithms for measuring interaction quality and fully utilizing past interaction data during training by using an innovative approach to categorical model building.
## What we learned
Coding might not take that long, but making it fully work takes just as much time.
## What's next for Resonance
Finish building the model and possibly trying to incubate it. | ![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png)
# What is gitStarted?
GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules.
## Installation
To install:
```
npm install
```
## Usage
To run:
```
gulp
```
## Credits
Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)!
Made with :heart: in Kingston, Ontario for QHacks 2016 | ## Inspiration
The inspiration for the project was to design a model that could detect fake loan entries hidden amongst a set of real loan entries. Also, our group was eager to design a dashboard to help visualize these statistics - many similar services are good at identifying outliers in data but are unfriendly to the user. We wanted businesses to be able to see and understand fake data immediately, because it's important to recognize it quickly.
## What it does
Our project handles back-end and front-end tasks. Specifically, on the back-end, the project uses libraries like Pandas in Python to parse input data from CSV files. Then, after creating histograms and linear regression models that detect outliers in the given input, the data is passed to the front-end to display the histogram and present the outliers to the user for an easy experience.
## How we built it
We built this application using Python on the back-end. We utilized Pandas for efficiently storing data in DataFrames. Then, we used NumPy and scikit-learn for statistical analysis. On the server side, we built the website in HTML/CSS and used Flask and Django to handle events on the website and interaction with other parts of the code. This involved retrieving a CSV file from the user, parsing it into a string, running our back-end model, and displaying the results to the user.
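A condensed sketch of the outlier-detection step is below: fit a linear regression on two numeric columns and flag rows whose residuals sit far from the fitted line. The column names and the 3-standard-deviation threshold are assumptions, not the project's actual parameters.

```
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def flag_outliers(csv_path, x_col="income", y_col="loan_amount", z_thresh=3.0):
    df = pd.read_csv(csv_path)
    X = df[[x_col]].to_numpy()
    y = df[y_col].to_numpy()

    model = LinearRegression().fit(X, y)
    residuals = y - model.predict(X)

    # A row is suspicious if its residual is far outside the typical spread.
    z_scores = (residuals - residuals.mean()) / residuals.std()
    df["suspicious"] = np.abs(z_scores) > z_thresh
    return df[df["suspicious"]]
```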
## Challenges we ran into
There were many front-end and back-end issues, but they ultimately helped us learn. On the front-end, the biggest problem was using Django with the browser to bring this experience to the user. Also, on the back-end, we found using Keras to be an issue during the start of the process, so we had to switch our frameworks mid-way.
## Accomplishments that we're proud of
An accomplishment was being able to bring both sides of the development process together. Specifically, creating a UI with a back-end was a painful but rewarding experience. Also, implementing cool machine learning models that could actually find fake data was really exciting.
## What we learned
One of our biggest lessons was to use libraries more effectively to tackle the problem at hand. We started creating a machine learning model by using Keras in Python, which turned out to be ineffective to implement what we needed. After much help from the mentors, we played with other libraries that made it easier to implement linear regression, for example.
## What's next for Financial Outlier Detection System (FODS)
Eventually, we aim to use more sophisticated statistical tools to analyze the data. For example, a random forest could be used to identify key characteristics of the data, helping us decide on our linear regression models before building them. Also, one cool idea is to search for linearly dependent columns in the data. They would help find outliers and eliminate trivial or useless variables in new data quickly. | winning |
## Inspiration
The inspiration for merchflow was the Google form that PennApps sent out regarding shipping swag. We found the question regarding distribution on campuses particularly odd, but it made perfect sense after giving it a bit more thought. After all, shipping a few large packages is cheaper than shipping many small ones. But then we started considering the logistics of such an arrangement, particularly how the event organizers would have to manually figure out these shipments. Thus the concept of merchflow was born.
## What it does
Merchflow is a web app that allows event organizers (like for a hackathon) to easily determine the optimal shipping arrangement for swag (or, more generically, for any package) to event participants. Below is our design for merchflow.
First, the event organizer provides merchflow with the contact info (email) of the event participants. Merchflow will then send out emails on behalf of the organizer with a link to a form and an event-specific code.
The form will ask for information such as a shipping address, as well as whether they would be willing to distribute swag to other participants nearby. This information will be sent back to merchflow’s underlying database, Firestore, and updates the organizer’s dashboard in real time.
Once the organizer is ready to ship, merchflow will compute the best shipping arrangement based on the participant’s location and willingness to distribute. This will be done according to a shipping algorithm that we define to minimize the number of individual shipments required (which will in turn lower the overall shipping costs for the organizer).
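Because the matching algorithm is still being designed, the sketch below shows one plausible greedy approach rather than a finished implementation: ship a batch to each willing distributor, attach every unassigned participant within a cutoff radius, and ship individually to whoever is left. The distance cutoff and the data shape are assumptions.

```
from math import asin, cos, radians, sin, sqrt

def km(a, b):
    """Haversine distance in kilometers between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def plan_shipments(participants, radius_km=15):
    """participants: list of dicts with 'id', 'coords', 'willing_to_distribute'."""
    unassigned = {p["id"]: p for p in participants}
    shipments = []

    # Greedily build a batch around each willing distributor.
    for hub in [p for p in participants if p["willing_to_distribute"]]:
        if hub["id"] not in unassigned:
            continue
        batch = [pid for pid, p in unassigned.items()
                 if km(hub["coords"], p["coords"]) <= radius_km]
        for pid in batch:
            del unassigned[pid]
        shipments.append({"ship_to": hub["id"], "covers": batch})

    # Anyone left over gets an individual shipment.
    shipments += [{"ship_to": pid, "covers": [pid]} for pid in unassigned]
    return shipments
```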
## How we built it
Given the scope of PennApps and the limited time we had, we decided to focus on designing the concept of Merchflow and building out its front end experience. While there is much work to be done in the backend, we believe what we have so far provides a good visualization of its potential.
Merchflow is built using react.js and firebase (and related services such as Firestore and Cloud Functions). We ran into many issues with Firebase and ultimately were not able to fully utilize it; however, we were able to successfully deploy the web app to the provided host.
With React, we used Bootstrap, started off with the Airframe React template, and built our own dashboard, tabs, forms, tables, etc., customized to our design and expectations for merchflow. The dashboard and tabs are designed and built with responsiveness in mind, as well as an intention to pursue a minimalistic, clean style. For functionality where our backend isn’t operational yet, we used faker.js to populate it with data to simulate the real experience an event planner would have.
## Challenges I ran into
During the development of merchflow, we ran into many issues. The main one was that we were unable to get Firebase authentication working with our React app. We tried following several tutorials and documentation pages; however, it was just something that we were unable to resolve in the time span of PennApps. Therefore, we focused our energy on polishing up the front end and the design of the project so that we could convey our project concept well even without the backend being fully operational.
Another issue that we encountered was regarding Firebase deployment (while we weren’t able to connect to any Firebase SDKs, we were still able to register the web app as a Firebase app and could deploy to the provided hosted site). During deployment, we noticed that the color theme was not displaying properly compared to what we had locally. Since we specify the colors in node\_modules (a folder that we do not commit to Git), we thought that by moving the specific color variable .scss file out of node\_modules and changing the import paths, we would be able to fix it. And it did fix it, but it took quite some time to realize this because the browser had cached the site prior to this change and the fix didn’t propagate over immediately.
## Accomplishments that I'm proud of
We are very proud of the level of polish in our design and react front end. As a concept, we fleshed out merchflow quite extensively and considered many different aspects and features that would be required of an actual service that event organizers actually use. This includes dealing with authentication, data storage, and data security. Our diagram describes the infrastructure of merchflow quite well and clearly lays out the work ahead of us.
Likewise, we spent hours reading through how the airframe template was built in the first place before being able to customize and add on top of it, and in the process gained a lot of insight into how React projects should be structured and how each file and component connects with each other. Ultimately, we were able to turn what we dreamed of in our designs into reality that we can present to someone else.
## What I learned
As a team, we learned a lot about web development (which neither of us is particularly strong in), specifically regarding React and Firebase. For React, we didn’t know the full extent of what modularizing components could bring in terms of scale and clarity. We interacted with and learned the workings of SCSS and JavaScript, including the faker.js package, on the fly as we tried to build out merchflow’s front end.
## What's next for merchflow
While we are super excited about our front end, unfortunately, there are still a few more gaps to turn merchflow into an operational tool for event organizers to utilize, primarily dealing with the backend and Firebase. We need to resolve the Firebase connection issues that we were experiencing so we can actually get a backend working for merchflow.
After we are able to integrate Firebase into the react app, we can start connecting the fields and participant list to Firestore which will maintain these documents based on the event organizer’s user id (preventing unauthorized access and modification).
Once that is complete, we can focus on the two main features of merchflow: sending out emails and calculating the best shipping arrangement. Both of these features would be implemented via a Cloud Function and would work with the underlying data stored in Firestore. Sending out emails could be achieved using a library such as Twilio SendGrid using the emails the organizer has provided. Computing the best arrangement would require a bit more work to figure out an algorithm to work with. Regardless of algorithm, it will likely utilize Google Maps API (or some other map API) in order to calculate the distance between addresses (and thus determine viability for proxy distribution). We would also need to utilize some service to programmatically generate (and pay for) shipping labels. | ## Inspiration
iPonzi started off as a joke between us, but we decided that PennApps was the perfect place to make our dream a reality.
## What it does
The app requires the user to sign up using an email and/or social logins. After purchasing the application and creating an account, you can refer your friends to the app. For every person you refer, you are given $3, and the app costs $5. All proceeds will go to Doctors Without Borders. A leaderboard of the most successful recruiters and the total amount of money donated will be updated.
## How I built it
Google Polymer, service workers, javascript, shadow-dom
## Challenges I ran into
* Learning a new framework
* Live deployment to firebase hosting
## Accomplishments that I'm proud of
* Mobile-like experience offline
* App shell architecture and subsequent load times.
* Contributing to pushing the boundaries of web
## What I learned
* Don't put payment APIs into production in 2 days.
* DOM module containment
## What's next for iPonzi
* Our first donation
* Expanding the number of causes we support by giving the user a choice of where their money goes.
* Adding additional features to the app
* Production | ## Why We Created **Here**
As college students, one question that we catch ourselves asking over and over again is – “Where are you studying today?” One of the most popular ways for students to coordinate is through texting. But messaging people individually can be time consuming and awkward for both the inviter and the invitee—reaching out can be scary, but turning down an invitation can be simply impolite. Similarly, group chats are designed to be a channel of communication, and as a result, a message about studying at a cafe two hours from now could easily be drowned out by other discussions or met with an awkward silence.
Just as Instagram simplified casual photo sharing from tedious group-chatting through stories, we aim to simplify casual event coordination. Imagine being able to efficiently notify anyone from your closest friends to lecture buddies about what you’re doing—on your own schedule.
Fundamentally, **Here** is an app that enables you to quickly notify either custom groups or general lists of friends of where you will be, what you will be doing, and how long you will be there for. These events can be anything from an open-invite work session at Bass Library to a casual dining hall lunch with your philosophy professor. It’s the perfect dynamic social calendar to fit your lifestyle.
Groups are customizable, allowing you to organize your many distinct social groups. These may be your housemates, Friday board-game night group, fellow computer science majors, or even a mixture of them all. Rather than having exclusive group chat plans, **Here** allows for more flexibility to combine your various social spheres, casually and conveniently forming and strengthening connections.
## What it does
**Here** facilitates low-stakes event invites between users who can send their location to specific groups of friends or a general list of everyone they know. Similar to how Instagram lowered the pressure involved in photo sharing, **Here** makes location and event sharing casual and convenient.
## How we built it
UI/UX Design: Developed high fidelity mockups on Figma to follow a minimal and efficient design system. Thought through user flows and spoke with other students to better understand needed functionality.
Frontend: Our app is built on React Native and Expo.
Backend: We created a database schema and set up in Google Firebase. Our backend is built on Express.js.
All team members contributed code!
## Challenges
Our team consists of half first years and half sophomores. Additionally, the majority of us have never developed a mobile app or used these frameworks. As a result, the learning curve was steep, but eventually everyone became comfortable with their specialties and contributed significant work that led to the development of a functional app from scratch.
Our idea also addresses a simple problem which can conversely be one of the most difficult to solve. We needed to spend a significant amount of time understanding why this problem has not been fully addressed with our current technology and how to uniquely position **Here** to have real change.
## Accomplishments that we're proud of
We are extremely proud of how developed our app is currently, with a fully working database and custom frontend that we saw transformed from just Figma mockups to an interactive app. It was also eye opening to be able to speak with other students about our app and understand what direction this app can go into.
## What we learned
Creating a mobile app from scratch—from designing it to getting it pitch ready in 36 hours—forced all of us to accelerate our coding skills and learn to coordinate together on different parts of the app (whether that is dealing with merge conflicts or creating a system to most efficiently use each other’s strengths).
## What's next for **Here**
One of **Here’s** greatest strengths is the universality of its usage. After helping connect students with students, **Here** can then be turned towards universities to form a direct channel with their students. **Here** can provide educational institutions with the tools to foster intimate relations that spring from small, casual events. In a poll of more than sixty university students across the country, most students rarely checked their campus events pages, instead planning their calendars in accordance with what their friends are up to. With **Here**, universities will be able to more directly plug into those smaller social calendars to generate greater visibility over their own events and curate notifications more effectively for the students they want to target. Looking at the wider timeline, **Here** is perfectly placed at the revival of small-scale interactions after two years of meticulously planned agendas, allowing friends who have not seen each other in a while casually, conveniently reconnect.
The whole team plans to continue to build and develop this app. We have become dedicated to the idea over these last 36 hours and are determined to see just how far we can take **Here**! | partial |
## Inspiration
We wanted to make financial literacy a part of everyday life while also bringing in futuristic applications such as augmented reality to motivate people to learn about finance and business every day. We were looking for a fintech solution that doesn't make financial information accessible only to bankers or the investment community, but also to the young and curious, who can learn in an interesting way based on the products they use every day.
## What it does
Our mobile app looks at a company logo, identifies the company, and grabs its financial information, recent news, and financial statements, then displays the data in an augmented reality dashboard. Furthermore, we added speech recognition to help those unfamiliar with financial jargon better save and invest.
## How we built it
Built using the Wikitude SDK, which handles augmented reality for mobile applications, along with a mix of financial data APIs, Highcharts, and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something that none of us had prior experience with, which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what, in order to build something that we believe is truly cool and fun to use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
There is potential to build more charts, financials, and better speech/chatbot abilities into our application. We would also like to make the dashboard more interactive, using hand gestures to play around with it once we figure that part out.
Today Instagram has become a huge platform for social activism and encouraging people to contribute to different causes. I've donated to several of these causes but have always been met with a clunky UI that takes several minutes to fully fill out. With the donation inertia already so high, it makes sense to simplify the process and that's exactly what Activst does.
## What it does
It provides social media users to create a personalized profile of what causes they support and different donation goals. Each cause has a description of the rationale behind the movement and details of where the donation money will be spent. Then the user can specify how much they want to donate and finish the process in one click.
## How we built it
ReactJS, Firebase Hosting, Google Pay, Checkbook API, Google Cloud Functions (Python)
## Challenges we ran into
It's very difficult to facilitate payments directly to donation providers and create a one click process to do so as many of the donation providers require specific information from the donor. Using checkbook's API simplified this process as we could simply send a check to the organization's email. CORS.
## What's next for Activst
Add in full payment integration and find a better way to complete the donation process without needing any user engagement. Launch, beta test, iterate, repeat. The goal is to have Instagram users put an Activst URL in their Instagram bio.
## Inspiration
After reading about the possible projects, we decided to work with Velo by Wix on a food tech project. What are two things that we students never get tired of? Food and social media! We took some inspiration from Radish and GoodReads to throw together a platform for hungry students.
Have you ever wanted takeout but not been sure what you're in the mood for? turnip is here for you!
## What it does
turnip is a website that connects local friends with their favourite food takeout spots. You can leave reviews and share pictures, as well as post asking around for food recommendations. turnip also keeps track of your restaurant wishlist and past orders, so you never forget to check out that place your friend keeps telling you about. With integrated access to partnered restaurants, turnip would allow members to order right on the site seamlessly and get food delivered for cheap. Since the whole design is built around sharing (sharing thoughts, sharing secrets, sharing food), turnip would also allow users to place orders together, splitting the cost right at payment to avoid having to bring out the calculator and figure out who owes who what.
## How we built it
We used Velo by Wix for the entire project, with Carol leading the design of the website while Amir and Tudor worked on the functionality. We also used Wix's integrated "members" area and forum add-ons to implement the "feed".
## Challenges we ran into
One of the bigger challenges we had to face was that none of us had any experience developing full-stack, so we had to learn on the spot how to write a back-end and try to implement it into our website. It was honestly a lot of fun trying to "speedrun" learning the ins and outs of Javascript. Unfortunately, Wix made the project even more difficult to work on as it doesn't natively support multiple people working on it at the same time. As such, our plan to work concurrently fell through and we had to "pass the baton" when it came to working on the website and keep ourselves busy the rest of the time. Lastly, since we relied on Wix add-ons we were heavily limited in the functionality we could implement with Velo. We still created a few functions; however, much of it was already covered by the add-ons and what wasn't was made very difficult to access without rewriting the functionality of the modules from scratch. Given the time crunch, we made do with what we had and had to restrict the scope for McHacks.
## Accomplishments that we're proud of
We're super proud of how the design of the site came together, and all the art Carol drew really flowed great with the look we were aiming for. We're also very proud of what we managed to get together despite all the challenges we faced, and the back-end functionality we implemented.
## What we learned
Our team really learned about the importance of scope, as well as about the importance of really planning out the project before diving right in. Had we done some research to really familiarize ourselves with Wix and Velo we might have reconsidered the functionalities we would need to implement (and/or implemented them ourselves, which in hindsight would have been better), or chosen to tackle this project in a different way altogether!
## What's next for Turnip
We have a lot of features that we really wanted to implement but didn't quite have the time to.
A simple private messaging feature would have been great, as well as fully implementing the block feature (sometimes we don't get along with people, and that's okay!).
We love the idea that a food delivery service like Radish could implement some of our ideas, like the social media/recommendations/friends feature aspect of our project, and would love to help them do it.
Overall, we're extremely proud of the ideas we have come up with and what we have managed to implement, especially the fact that we kept in mind the environmental impact of meal deliveries with the order sharing. | winning |
## Inspiration
Lots of applications require you to visit their website or app for initial tasks such as signing up on a waitlist to be seen. What if these initial tasks could be performed at the convenience of the user on whatever platform they want to use (text, Slack, Facebook Messenger, Twitter, web app)?
## What it does
In a medical setting, it allows patients to sign up using platforms such as SMS or Slack to be enrolled on the waitlist. The medical advisor can go through this list one by one and have a video conference with each patient. When the medical advisor is ready to chat, a notification is sent out to the respective platform the patient signed up on.
## How I built it
I set up this whole app by running microservices on StdLib. There are multiple microservices responsible for different activities such as SMS interaction, database interaction, and Slack interaction. The two frontend Vue websites also run as microservices on StdLib. The endpoints on the database microservice connect to a MongoDB instance running on mLab. The endpoints on the SMS microservice connect to the MessageBird microservice. The video chat was implemented using TokBox. Each microservice was developed one by one and then connected one by one like building blocks.
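As a rough illustration of what the database microservice's endpoints do, here is a sketch of a waitlist service. The real services run on StdLib in Node; this Python/Flask version, along with its route names, field names, and connection string, is only an assumed illustration.

```
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
# Placeholder connection string; the project used an mLab-hosted instance.
waitlist = MongoClient("mongodb://localhost:27017")["clinic"]["waitlist"]

@app.post("/waitlist")
def enroll():
    # Called by the SMS/Slack microservices when a patient signs up.
    doc = {
        "name": request.json["name"],
        "contact": request.json["contact"],    # phone number or Slack ID
        "platform": request.json["platform"],  # "sms" or "slack"
        "status": "waiting",
    }
    waitlist.insert_one(doc)
    return jsonify({"position": waitlist.count_documents({"status": "waiting"})})

@app.post("/waitlist/next")
def next_patient():
    # The advisor pulls the next waiting patient; the caller then notifies
    # them on whichever platform they signed up from.
    doc = waitlist.find_one_and_update(
        {"status": "waiting"}, {"$set": {"status": "called"}})
    return jsonify({"contact": doc["contact"], "platform": doc["platform"]} if doc else {})
```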
## Challenges I ran into
Initially, getting the microservices to connect to each other, and then debugging microservices remotely.
## Accomplishments that I'm proud of
Connecting multiple endpoints to create a complex system more easily using microservice architecture.
## What's next for Please Health Me
Developing more features, such as showing your position in the queue, and integrating with more communication channels such as Facebook Messenger. This idea can also be expanded into different scenarios, such as business partners signing up for advice from a busy advisor, or fans signing up to connect with a social media influencer based on their message.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and a scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction (see the sketch after this list)
* We wrote our own Botfront database during the last day and night
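As context for the Botfront bullet above, Rasa-based bots expose a simple REST channel, so each user message can be forwarded and answered roughly as follows. The host, port, and sender handling here are assumptions based on Rasa's defaults, and the snippet is in Python for brevity even though the app's servers are Node.js.

```
import requests

RASA_URL = "http://localhost:5005/webhooks/rest/webhook"  # assumed default REST channel

def ask_bot(sender_id, text):
    """Forward one user message and return the bot's reply texts."""
    resp = requests.post(RASA_URL, json={"sender": sender_id, "message": text})
    resp.raise_for_status()
    # The REST channel answers with a list of messages, each of which may
    # contain "text", "image", "buttons", etc.
    return [m.get("text", "") for m in resp.json()]

if __name__ == "__main__":
    print(ask_bot("demo-user", "I forgot my online banking password"))
```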
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | ## Inspiration
In this era, with medicines being readily available for consumption, people take pills without even consulting a specialist to find out what diagnosis they have. We created this project to find out which specific illnesses a person may be diagnosed with, so that they can seek out the correct treatment instead of self-treating with pills that might, in turn, harm them in the long run.
## What it does
This is your personal medical assistant bot which takes in a set of symptoms you are experiencing and returns some illnesses that are most closely matched with that set of symptoms. It is powered by Machine learning which enables it to return more accurate data (tested and verified!) as to what issue the person might have.
## How we built it
We used React to build the front end. We used Python and its vast array of libraries to design the ML model. For building the model, we used scikit-learn, and we used pandas for the data processing. To connect the front end with the model, we used FastAPI. We used a Random Forest multi-label classification model to give the diagnosis. Since the model takes a string as input, we used scikit-learn's bag-of-words features to convert the text into numerical values.
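For illustration, here is a minimal scikit-learn sketch of that pipeline: bag-of-words features feeding a multi-label random forest. The tiny inline symptom/diagnosis lists are made up for the example and are not our training data:

```
# Minimal sketch of the described pipeline: bag-of-words text features into a
# multi-label random forest. The inline data is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

symptoms = [
    "fever cough sore throat",
    "chest pain shortness of breath",
    "headache nausea light sensitivity",
]
diagnoses = [["flu"], ["angina"], ["migraine"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(diagnoses)

model = make_pipeline(CountVectorizer(), RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(symptoms, y)

pred = model.predict(["cough and fever with a sore throat"])
print(mlb.inverse_transform(pred))  # e.g. [('flu',)]
```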
## Challenges we ran into
Since none of us had significant ML experience, we had to learn how to create an ML model (specifically, a multi-label classification model), train it, and get it deployed on time. Furthermore, FastAPI does not have good documentation, and we ran into numerous errors while configuring it and interfacing it between our front-end and back-end.
## Accomplishments that we're proud of
Creating a Full-Stack Application that would help the public to find a quick diagnosis for the symptoms they experience. Working on the Project as a team and brainstorming ideas for the proof of concept and how to get our app working.
We trained the model with our use cases, and it evaluated to 97% accuracy.
## What we learned
Working with Machine Learning and creating a full-stack App. We also learned how to coordinate with the team to work effectively. Reading documentation and tutorials to get an understanding of how the technologies we used work.
## What's next for Medical Chatbot
The first stage for the Medical Chatbot would be to run tests and validate that it works using different datasets. We also plan about adding more features in the front end such as authentication so that different users can register before using the feature. We can get inputs from professionals in healthcare to increase coverage and add more questions to give the correct prediction. | winning |
## Inspiration
I was walking down the streets of Toronto and noticed how there always seemed to be cigarette butts outside of any building. It felt disgusting to me, especially since they polluted the city so much. After reading a few papers on studies revolving around cigarette butt litter, I noticed that cigarette litter is actually the #1 most littered object in the world and is toxic waste. Here are some quick facts:
* About **4.5 trillion** cigarette butts are littered on the ground each year
* 850,500 tons of cigarette butt litter is produced each year. This is about **6 and a half CN towers** worth of litter which is huge! (based on weight)
* In the city of Hamilton, cigarette butt litter can make up to **50%** of all the litter in some years.
* The city of San Francisco spends up to $6 million per year on cleaning up cigarette butt litter
Thus our team decided that we should develop a cost-effective robot to rid the streets of cigarette butt litter
## What it does
Our robot is a modern-day Wall-E. The main objectives of the robot are to:
1. Safely drive around the sidewalks in the city
2. Detect and locate cigarette butts on the ground
3. Collect and dispose of the cigarette butts
## How we built it
Our basic idea was to build a robot with a camera that could find cigarette butts on the ground and collect those cigarette butts with a roller-mechanism. Below are more in-depth explanations of each part of our robot.
### Software
We needed a method to easily detect cigarette butts on the ground, so we used computer vision. We made use of this open-source project: [Mask R-CNN for Object Detection and Segmentation](https://github.com/matterport/Mask_RCNN) and [pre-trained weights](https://www.immersivelimit.com/datasets/cigarette-butts). We used a Raspberry Pi and a Pi Camera to take pictures of cigarettes, process the images with TensorFlow, and then output the coordinates of the cigarette's location for the robot. The Raspberry Pi then sends these coordinates to an Arduino over UART.
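As a hedged sketch of that hand-off, the snippet below assumes a hypothetical `detect_cigarette()` wrapper around the Mask R-CNN model and shows how the centre of a detected bounding box could be sent to the Arduino over UART with pyserial; the serial port, baud rate, and message format are assumptions:

```
# Hedged sketch of the Raspberry Pi side: get a detection, compute its centre,
# and hand the coordinates to the Arduino over UART.
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # port/baud are assumptions

def detect_cigarette(frame):
    """Placeholder for the Mask R-CNN inference; returns (x1, y1, x2, y2) or None."""
    return None

def send_target(box):
    # Send the centre of the bounding box as "x,y\n" so the Arduino can parse it.
    cx = (box[0] + box[2]) // 2
    cy = (box[1] + box[3]) // 2
    arduino.write(f"{cx},{cy}\n".encode())

def on_new_frame(frame):
    box = detect_cigarette(frame)
    if box is not None:
        send_target(box)
```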
### Hardware
The Arduino controls all the hardware on the robot, including the motors and roller-mechanism. The basic idea of the Arduino code is:
1. Drive a pre-determined path on the sidewalk
2. Wait for the Pi Camera to detect a cigarette
3. Stop the robot and wait for a set of coordinates from the Raspberry Pi to be delivered with UART
4. Travel to the coordinates and retrieve the cigarette butt
5. Repeat
We use sensors such as a gyro and accelerometer to detect the speed and orientation of our robot to know exactly where to travel. The robot uses an ultrasonic sensor to avoid obstacles and make sure that it does not bump into humans or walls.
### Mechanical
We used Solidworks to design the chassis, roller/sweeper-mechanism, and mounts for the camera of the robot. For the robot, we used VEX parts to assemble it. The mount was 3D-printed based on the Solidworks model.
## Challenges we ran into
1. Distance: Working remotely made designing, working together, and transporting supplies challenging. Each group member worked on independent sections and drop-offs were made.
2. Design Decisions: We constantly had to find the most realistic solution based on our budget and the time we had. This meant that we couldn't cover a lot of edge cases, e.g. what happens if the robot gets stolen, what happens if the robot is knocked over ...
3. Shipping Complications: Some desired parts would not have shipped until after the hackathon. Alternative choices were made and we worked around shipping dates
## Accomplishments that we're proud of
We are proud of being able to efficiently organize ourselves and create this robot even though we worked remotely. We are also proud of being able to create something that contributes to our environment and helps keep our Earth clean.
## What we learned
We learned about machine learning and Mask-RCNN. We never dabbled with machine learning much before so it was awesome being able to play with computer-vision and detect cigarette-butts. We also learned a lot about Arduino and path-planning to get the robot to where we need it to go. On the mechanical side, we learned about different intake systems and 3D modeling.
## What's next for Cigbot
There is still a lot to do for Cigbot. Below are some following examples of parts that could be added:
* Detecting different types of trash: It would be nice to be able to gather not just cigarette butts, but any type of trash such as candy wrappers or plastic bottles, and to also sort them accordingly.
* Various Terrains: Though Cigbot is made for the sidewalk, it may encounter rough terrain, especially in Canada, so we figured it would be good to add some self-stabilizing mechanism at some point
* Anti-theft: Cigbot is currently small and can easily be picked up by anyone. This would be dangerous if we left the robot in the streets since it would easily be damaged or stolen (eg someone could easily rip off and steal our Raspberry Pi). We need to make it larger and more robust.
* Environmental Conditions: Currently, Cigbot is not robust enough to handle more extreme weather conditions such as heavy rain or cold. We need a better encasing to ensure Cigbot can withstand extreme weather.
## Sources
* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397372/>
* <https://www.cbc.ca/news/canada/hamilton/cigarette-butts-1.5098782>
* [https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:~:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years](https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:%7E:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years). | ## Inspiration
The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw away recyclable items in the trash. Additionally, the sheer amount of restrictions related to what items can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recycling objects with a machine learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability, offering an additional source of motivation to recycle.
## What it does
RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid in the direction of one compartment or another depending on whether the AI model determines the object as recyclable or not. Once the object slides into the compartment, the lid will re-align itself and prepare for proceeding waste. Ultimately, RecyclAIBle autonomously helps people recycle as much as they can and waste less without them doing anything different.
## How we built it
The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64 and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and inputs it into a TensorFlow convolutional neural network to identify whether the object is recyclable or not. This data is then stored in an SQLite database and returned back to the hardware. Based on the AI model's analysis, the servo motor controlled by the Raspberry Pi flips the lid one way or the other, allowing the waste item to slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind.css, and React. This interface provides the user with insight into their current recycling statistics and how they compare to the nationwide averages of recycling.
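A minimal sketch of what the base64-to-prediction endpoint could look like is shown below; the route name, model file, and input size are assumptions rather than our exact implementation:

```
# Hedged sketch of the classification endpoint: decode the base64 image sent by the
# Raspberry Pi, run it through the CNN, and return a recyclable/trash label.
import base64, io
import numpy as np
from PIL import Image
from flask import Flask, request, jsonify
import tensorflow as tf

app = Flask(__name__)
model = tf.keras.models.load_model("recyclable_cnn.h5")  # placeholder for the trained model

@app.route("/classify", methods=["POST"])
def classify():
    raw = base64.b64decode(request.json["image"])
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img) / 255.0, axis=0)
    prob_recyclable = float(model.predict(batch)[0][0])  # assumes a single sigmoid output
    return jsonify({"label": "recycle" if prob_recyclable > 0.5 else "trash",
                    "confidence": prob_recyclable})

if __name__ == "__main__":
    app.run()
```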
## Challenges we ran into
The prototype model had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid to be spun by a single Servo motor and getting the Logitech camera to be propped up to get a top view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources and research. Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble.
## Accomplishments that we're proud of
We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help clean our environment clean.
## What we learned
First and foremost, we learned just how big of a problem under-recycling is in America and throughout the world, and how important recycling is to the environment. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to recycling. The hackathon motivated us to learn a lot more about our respective technologies - whether it was new errors or desired functions, new concepts and ideas had to be introduced to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals.
## What's next for RecyclAIble
RecycleAIble has a lot of potential as far as development goes. RecyclAIble's AI can be improved with further generations of images of more varied items of trash, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality, like dates, tracking features, trends, and the weights of trash, that expand on the existing information and capabilities offered. And we’re already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware/sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come. | ## Inspiration
How many times have you been walking around the city and seen trash on the ground, sometimes just centimetres away from a trash can? It can be very frustrating to see people who either have no regard for littering, or just have horrible aim. This is what inspired us to create TrashTalk: trash talk for your trash shots.
## What it does
When a piece of garbage is dropped on the ground within the camera’s field of vision, a speaker loudly hurls insults until the object is picked up. Because what could motivate people to pick up after themselves more than public shaming? Perhaps the promise of a compliment: once the litter is picked up, the trash can will deliver praise, designed to lift the pedestrian’s heart.
The ultrasonic sensor attached to the rim of the can will send a ping to the server when the trash can becomes full, thus reducing litter by preventing overfilling, as studies have shown that programmed encouragement as opposed to regular maintenance can reduce littering by as much as 25%. On the website, one can view the current 'full' status of the trash can, how much trash is currently inside and outside the can in a bar graph, and how many pieces of trash have been scanned total. This quantifies TrashTalk's design to drastically reduce littering in public areas, with some nice risk and reward involved for the participant.
## How we built it
We built this project using Next.js, Python, MongoDB, and the Express library, integrated together using HTTP requests to send data to and from the Arduino, the computer, and the end user.
Our initial idea was made quite early on, but as we ran into challenges, the details of the project changed over time in order to reflect what we could realistically accomplish in one hackathon.
We split up our work so we could cover more ground: Abeer would cover trash detection using AI models that could be run on a Raspberry Pi, Kersh would handle the MongoDB interaction, Vansh would help create the Arduino Logic, and Matias would structure the project together.
## Challenges we ran into
We ran into *quite* a few challenges making TrashTalk, and a lot of them had to do with the APIs that we were using for OpenCV. The first major issue was that we were not able to get Raspberry Pi running, so we migrated all the code onto one of our laptops.
Furthermore, none of the pretrained computer vision models we tried to use to recognize trash would work. We realized with the help of one of the mentors that we could simply use an object detection algorithm, and it was smooth sailing from there.
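As a hedged sketch of the kind of detection we ended up with, the snippet below flags new objects in the camera feed using OpenCV background subtraction; the contour-area threshold and camera index are assumptions:

```
# Hedged sketch of simple litter detection via OpenCV background subtraction:
# a sufficiently large new contour in the scene is treated as possible litter.
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=40)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small specks of noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 800 for c in contours):
        print("Possible litter detected, time for some trash talk")
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break
cap.release()
```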
## Accomplishments that we're proud of
* Getting a final working product together
* Being able to demo to people at the hackathon
* Having an interactive project
## What we learned
We learned so many things during this hackathon due to the varying experience levels in our team. Some members learned how to integrate GitHub with VSCode, while others learned how to use Next.js (SHOUTOUT TO FREDERIC) and motion detection with OpenCV.
## What's next for TrashTalk
The next steps for TrashTalk would be to have more advanced analytics being run on each trash can. If we aim to reduce litter through the smart placement of trashcans along with auditory reminders, having a more accurate kit of sensors, such as GPS, weight sensor, etc. would allow us to have a much more accurate picture of the trash can's usage. The notification of a trash can being full could also be used to alert city workers to optimize their route and empty more popular trash cans first, increasing efficiency. | winning |
OUR VIDEO IS IN THE COMMENTS!! THANKS FOR UNDERSTANDING (WIFI ISSUES)
## Inspiration
As a group of four students having completed 4 months of online school, going into our second internship and our first fully remote internship, we were all nervous about how our internships would transition to remote work. When reminiscing about pain points that we faced in the transition to an online work term this past March, the one pain point that we all agreed on was a lack of connectivity and loneliness. Trying to work alone in one's bedroom after experiencing life in the office where colleagues were a shoulder's tap away for questions about work, and the noise of keyboards clacking and people zoned into their work is extremely challenging and demotivating, which decreases happiness and energy, and thus productivity (which decreases energy and so on...). Having a mentor and steady communication with our teams is something that we all valued immensely during our first co-ops. In addition, some of our workplaces had designated exercise times, or even pre-planned one-on-one activities, such as manager-coop lunches, or walk breaks with company walking groups. These activities and rituals bring structure into a sometimes mundane day, which allows the brain to recharge and return to work fresh and motivated. Upon the transition to working from home, we've all found that some days we'd work through lunch without even realizing it and some days we would be endlessly scrolling through Reddit, as there would be no one there to check in on us and make sure that we were not blocked. Our once much-too-familiar workday structure seemed to completely disintegrate when there was no one there to introduce structure, hold us accountable, and gently enforce proper, suggested breaks into it. We took these gestures for granted in person, but now they seemed like a luxury - almost impossible to attain.
After doing research, we noticed that we were not alone:
A 2019 Buffer survey asked users to rank their biggest struggles working remotely. Unplugging after work and loneliness were the most common (22% and 19% respectively)
<https://buffer.com/state-of-remote-work-2019>
We set out to create an application that would allow us to facilitate that same type of connection between colleagues and make remote work a little less lonely and socially isolated. We were also inspired by our own online term recently, finding that we had been inspired and motivated when we were made accountable by our friends through usage of tools like shared Google Calendars and Notion workspaces.
As one of the challenges we'd like to enter for the hackathon, the 'RBC: Most Innovative Solution' in the area of helping address a pain point associated with working remotely in an innovative way truly encaptured the issue we were trying to solve perfectly.
Therefore, we decided to develop aibo, a centralized application which helps those working remotely stay connected, accountable, and maintain relationships with their co-workers, all of which improve a worker's mental health (which in turn has a direct positive effect on their productivity).
## What it does
Aibo, meaning "buddy" in Japanese, is a suite of features focused on increasing the productivity and mental wellness of employees. We focused on features that allow genuine connections in the workplace and help to motivate employees. First and foremost, Aibo uses a matching algorithm to match compatible employees together based on career goals, interests, roles, and time spent at the company, following the completion of a quick survey. These matchings occur multiple times over a customized timeframe selected by the company's host (likely the People Operations team), to ensure that employees receive a wide range of experiences in this process. Once you have been matched, you are assigned weekly meet-ups with your partner to build that connection. Using Aibo, you can video call your partner and create a shared to-do list; by developing this list together, you can bond over common tasks even if your roles seem very different. Partners have 2 meetings a day: one in the morning, where they go over to-do lists and goals for the day, and one in the evening, to track progress over the course of that day and the tasks that need to be transferred over to the following day.
## How We built it
This application was built with React, Javascript and HTML/CSS on the front-end along with Node.js and Express on the back-end. We used the Twilio chat room API along with Autocode to store our server endpoints and enable a Slack bot notification that POSTs a message in your specific buddy Slack channel when your buddy joins the video calling room.
In total, we used **4 APIs/ tools** for our project.
* Twilio chat room API
* Autocode API
* Slack API for the Slack bots
* Microsoft Azure to work on the machine learning algorithm
When we were creating our buddy app, we wanted to find an effective way to match partners together. After looking over a variety of algorithms, we decided on the K-means clustering algorithm. This algorithm is simple in its ability to group similar data points together and discover underlying patterns. K-means looks for a set number of clusters within the data set. This was my first time working with machine learning, but luckily, through Microsoft Azure, I was able to create a working training and inference pipeline. The dataset marked the user's role and preferences and created n/2 clusters, where n is the number of people searching for a match. This API was then deployed and tested on a web server. Although we weren't able to actively test this API on incoming data from the back-end, this is something that we are looking forward to implementing in the future. Working with ML was mainly trial and error, as we had to experiment with a variety of algorithms to find the optimal one for our purposes.
Upon working with Azure for a couple of hours, we decided to pivot towards leveraging another clustering algorithm in order to group employees together based on their answers to the form they fill out when they first sign up on the aibo website. We looked into PuLP, a Python LP modeler, and then looked into hierarchical clustering. This seemed similar to our initial approach with Azure, and after looking into the advantages of this algorithm over others for our purpose, we decided to choose this one for clustering the form responders. Some pros of hierarchical clustering include:
1. Do not need to specify the number of clusters required for the algorithm- the algorithm determines this for us which is useful as this automates the sorting through data to find similarities in the answers.
2. Hierarchical clustering was quite easy to implement as well in a Spyder notebook.
3. the dendrogram produced was very intuitive and helped me understand the data in a holistic way
The type of hierarchical clustering used was agglomerative clustering, or AGNES. It's known as a bottom-up algorithm: it starts with each object as its own singleton cluster, and pairs of clusters are then successively merged until everything has been merged into one big cluster containing all objects. In order to decide which clusters should be combined, we need a method for measuring the similarity between objects; I used Euclidean distance to calculate this (dis)similarity information.
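A minimal SciPy sketch of this clustering step is shown below; the toy feature matrix stands in for the encoded survey answers and is purely illustrative:

```
# Hedged sketch of the matching step: agglomerative (bottom-up) clustering with
# Euclidean distance on encoded survey answers. The toy matrix is illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

# Each row is one employee's encoded answers (role, interests, tenure, ...).
answers = np.array([
    [0, 1, 3, 2],
    [0, 1, 2, 2],
    [3, 0, 1, 5],
    [3, 1, 1, 4],
])

merges = linkage(answers, method="ward", metric="euclidean")
groups = fcluster(merges, t=2, criterion="maxclust")  # cut the dendrogram into 2 groups
print(groups)  # e.g. [1 1 2 2] -> employees 0 & 1 and 2 & 3 get paired

# dendrogram(merges) plots the merge tree used to eyeball the data
```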
This project was designed solely using Figma, with the illustrations and product itself designed on Figma. These designs required hours of deliberation and research to determine the customer requirements and engineering specifications, to develop a product that is accessible and could be used by people in all industries. In terms of determining which features we wanted to include in the web application, we carefully read through the requirements for each of the challenges we wanted to compete within and decided to create an application that satisfied all of these requirements.
After presenting our original idea to a mentor at RBC, we learned more about remote work at RBC and, having not yet completed an online internship ourselves, about the pain points and problems faced by remote workers, such as:
1. Isolation
2. Lack of feedback
From there, we were able to select the features to integrate including: Task Tracker, Video Chat, Dashboard, and Matching Algorithm which will be explained in further detail later in this post.
Technical implementation for AutoCode:
Using Autocode, we were able to easily and successfully link popular APIs like Slack and Twilio to ensure the productivity and functionality of our app. The Autocode source code is linked below:
Autocode source code here: <https://autocode.com/src/mathurahravigulan/remotework/>
**Creating the slackbot**
```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request event
* @returns {object} result Your return value
*/
module.exports = async (context) => {
console.log(context.params)
if (context.params.StatusCallbackEvent === 'room-created') {
await lib.slack.channels['@0.7.2'].messages.create({
channel: `#buddychannel`,
text: `Hey! Your buddy started a meeting! Hop on in: https://aibo.netlify.app/ and enter the room code MathurahxAyla`
});
} // do something
let result = {};
// **THIS IS A STAGED FILE**
// It was created as part of your onboarding experience.
// It can be closed and the project you're working on
// can be returned to safely - or you can play with it!
result.message = `Welcome to Autocode! 😊`;
return result;
};
```
**Connecting Twilio to autocode**
```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const twilio = require('twilio');
const AccessToken = twilio.jwt.AccessToken;
const { VideoGrant } = AccessToken;
const generateToken =() => {
return new AccessToken(
process.env.TWILIO_ACCOUNT_SID,
process.env.TWILIO_API_KEY,
process.env.TWILIO_API_SECRET
);
};
const videoToken = (identity, room) => {
let videoGrant;
if (typeof room !== 'undefined') {
videoGrant = new VideoGrant({ room });
} else {
videoGrant = new VideoGrant();
}
const token = generateToken();
token.addGrant(videoGrant);
token.identity = identity;
return token;
};
/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request event
* @returns {object} result Your return value
*/
module.exports = async (context) => {
console.log(context.params)
const identity = context.params.identity;
const room = context.params.room;
const token = videoToken(identity, room);
return {
token: token.toJwt()
}
};
```
From the product design perspective, it is possible to explain certain design choices:
<https://www.figma.com/file/aycIKXUfI0CvJAwQY2akLC/Hack-the-6ix-Project?node-id=42%3A1>
1. As shown in the prototype, the user has full independence to move through the designs as one would in a typical website and this supports the non sequential flow of the upper navigation bar as each feature does not need to be viewed in a specific order.
2. As Slack is a common productivity tool in remote work and we're participating in the Autocode Challenge, we chose to use Slack as an alerting feature as sending text messages to phone could be expensive and potentially distract the user and break their workflow which is why Slack has been integrated throughout the site.
3. The to-do list that is shared between the pairing has been designed in a simple and dynamic way that allows both users to work together (building a relationship) to create a list of common tasks, and duplicate this same list to their individual workspace to add tasks that could not be shared with the other (such as confidential information within the company)
In terms of the overall design decisions, I made an effort to create each illustration from hand simply using Figma and the trackpad on my laptop! Potentially a non-optimal way of doing so, but this allowed us to be very creative in our designs and bring that individuality and innovation to the designs.
The website itself relies on consistency in terms of colours, layouts, buttons, and more - and by developing these components to be used throughout the site, we've developed a modern and coherent website.
## Challenges We ran into
Some challenges that we ran into were:
* Using data science and machine learning for the very first time ever! We were definitely overwhelmed by the different types of algorithms out there but we were able to persevere with it and create something amazing.
* React was difficult for most of us to use at the beginning as only one of our team members had experience with it. But by the end of this, we all felt like we were a little more confident with this tech stack and front-end development.
+ Lack of time - there were a ton of features that we were interested in (like user authentication and a Google calendar implementation) but for the sake of time we had to abandon those functions and focus on the more pressing ones that were integral to our vision for this hack. These, however, are features I hope that we can complete in the future. We learned how to successfully scope a project and deliver upon the technical implementation.
## Accomplishments that We're proud of
* Created a fully functional end-to-end full stack application incorporating both the front-end and back-end to enable to do lists and the interactive video chat that can happen between the two participants. I'm glad I discovered Autocode which made this process simpler (shoutout to Jacob Lee - mentor from Autocode for the guidance)
* Solving an important problem that affects an extremely large number of individuals - according to investmentexecutive.com:
StatsCan reported that five million workers shifted to home working arrangements in late March. Alongside the 1.8-million employees who already work from home, the combined home-bound employee population represents 39.1% of workers.
<https://www.investmentexecutive.com/news/research-and-markets/statscan-reports-numbers-on-working-from-home/>
* From doing user research we learned that people can feel isolated when working from home and miss the social interaction and accountability of a desk buddy. We're solving two problems in one, tackling social problems and increasing worker mental health while also increasing productivity as their buddy will keep them accountable!
* Creating a working matching algorithm for the first time in a time crunch and learning more about Microsoft Azure's capabilities in Machine Learning
* Creating all of our icons/illustrations from scratch using Figma!
## What We learned
* How to create and trigger Slack bots from React
* How to have a live video chat on a web application using Twilio and React hooks
* How to use a hierarchical clustering algorithm (agglomerative clustering algorithms) to create matches based on inputted criteria
* How to work remotely in a virtual hackathon, and what tools would help us work remotely!
## What's next for aibo
* We're looking to improve on our pairing algorithm. I learned that 36 hours is not enough time to create a new Tinder algorithm, and that over time these pairings can be improved and perfected.
* We're looking to code more screens and add user authentication to the mix, and integrate more test cases in the designs rather than using Figma prototyping to prompt the user.
* It is important to consider the security of the data as well, and that not all teams can discuss tasks at length due to specificity. That is why we encourage users to create a simple to do list with their partner during their meeting, and use their best judgement to make it vague. In the future, we hope to incorporate machine learning algorithms to take in the data from the user knowing whether their project is NDA or not, and if so, as the user types it can provide warnings for sensitive information.
* Add a dashboard! As can be seen in the designs, we'd like to integrate a dashboard per user that pulls data from different components of the website such as your match information and progress on your task tracker/to-do lists. This feature could be highly effective to optimize productivity as the user simply has to click on one page and they'll be provided a high level explanation of these two details.
* Create our own Slackbot to deliver individualized Kudos to a co-worker, and pull this data onto a Kudos board on the website so all employees can see how their coworkers are being recognized for their hard work which can act as a motivator to all employees. | ## Inspiration
Social interaction with peers is harder than ever in our world today where everything is online. We wanted to create a setting that will mimic organic encounters the same way as if they would occur in real life -- in the very same places that you’re familiar with.
## What it does
Traverse a map of your familiar environment with an avatar, and experience random encounters like you would in real life! A Zoom call will initiate when two people bump into each other.
## Use Cases
Many students entering their first year at university have noted the difficulty in finding new friends because few people stick around after zoom classes, and with cameras off, it’s hard to even put a name to the face. And it's not just first years too - everybody is feeling the [impact](https://www.mcgill.ca/newsroom/channels/news/social-isolation-causing-psychological-distress-among-university-students-324910).
Our solution helps students meet potential new friends and reunite with old ones in a one-on-one setting in an environment reminiscent of the actual school campus.
Another place where organic communication is vital is in the workplace. [Studies](https://pyrus.com/en/blog/how-spontaneity-can-boost-productivity) have shown that random spontaneous meetings between co-workers can help to inspire new ideas and facilitate connections. With indefinite work from home, this simply doesn't happen anymore. Again, Bump fills this gap of organic conversation between co-workers by creating random happenstances for interaction - you can find out which of your co-workers also likes to hang out in the (virtual) coffee room!
## How we built it
Webapp built with Vue.js for the main structure, firebase backend
Video conferencing integrated with Zoom Web SDK. Original artwork was created with Illustrator and Procreate.
## Major Challenges
Major challenges included implementing the character-map interaction and implementing the queueing process for meetups based on which area of the map each person’s character was in across all instances of the Bump client. In the prototype, queueing is achieved by writing the user id of the waiting client in documents located at area-specific paths in the database and continuously polling for a partner, and dequeuing once that partner is found. This will be replaced with a more elegant implementation down the line.
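The prototype does this in JavaScript against Firebase, but as a language-agnostic sketch of the matching logic, here is a Python version with an in-memory dict standing in for the area-specific database paths:

```
# Language-agnostic sketch of the prototype's matching logic; a plain dict stands in
# for the area-specific Firebase paths that the web client writes to and polls.
waiting = {}  # zone name -> list of user ids currently waiting there

def enter_zone(zone, user_id):
    """Called when an avatar walks into a zone; returns a matched partner or None."""
    queue = waiting.setdefault(zone, [])
    if queue:
        return queue.pop(0)   # dequeue: someone was already waiting, start the call
    queue.append(user_id)     # otherwise wait (the real client keeps polling)
    return None

def leave_zone(zone, user_id):
    if user_id in waiting.get(zone, []):
        waiting[zone].remove(user_id)
```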
## What's next for bump
* Auto-map generation: give our app the functionality to create a map with zones just by uploading a map or floor plan (using OCR and image recognition technologies)
* Porting it over to mobile: change arrow key input to touch for apps
* Schedule mode: automatically move your avatar around on the map, following your course schedule. This makes it more likely to bump into classmates in the gap between classes.
## Notes
This demo is a sample of BUMP for a single community - UBC. In the future, we plan on adding the ability for users to be part of multiple communities. Since our login authentication uses email addresses, these communities can be kept secure by only allowing @ubc.ca emails into the UBC community, for example. This ensures that you aren’t just meeting random strangers on the Internet - rather, you’re meeting the same people you would have met in person if COVID wasn’t around. | ## **CoLab** makes exercise fun.
In August 2020, **53%** of US adults reported that their mental health has been negatively impacted due to worry and stress over coronavirus. This is **significantly higher** than the 32% reported in March 2020.
That being said, there is no doubt that Coronavirus has heavily impacted our everyday lives. Quarantine has us stuck inside, unable to workout at our gyms, practice with our teams, and socialize in classes.
Doctor’s have suggested we exercise throughout lockdown, to maintain our health and for the release of endorphins.
But it can be **hard to stay motivated**, especially when we’re stuck inside and don’t know the next time we can see our friends.
Our inspiration comes from this, and we plan to solve these problems with **CoLab.**
## What it does
CoLab enables you to workout with others, following a synced YouTube video or creating a custom workout plan that can be fully dynamic and customizable.
## How we built it
Our technologies include: Twilio Programmable Video API, Node.JS and React.
## Challenges we ran into
At first, we found it difficult to resize the Video References for local and remote participants. Luckily, we were able to resize and set the correct ratios using Flexbox and Bootstrap's grid system.
We also needed to find a way to mute audio and disable video, as these are core functionalities in any video-sharing application. We were lucky enough to find that someone else had the same issue on [Stack Overflow](https://stackoverflow.com/questions/41128817/twilio-video-mute-participant), which we were able to use to help build our solution.
## Accomplishments that we're proud of
When the hackathon began, our team started brainstorming a ton of goals like real-time video, customizable workouts, etc. It was really inspiring and motivating to see us tackle these problems and accomplish most of our planned goals one by one.
## What we learned
This sounds cliché, but we learned how important it was to have strong chemistry within our team. One of the many reasons why I believe our team was able to complete most of our goals was that we were all very communicative, helpful, and efficient. We knew that we joined together to have a good time, but we also joined because we wanted to develop our skills as developers. It helped us grow as individuals, and we are now more competent in using new technologies like Twilio's Programmable Video API!
## What's next for CoLab
Our team will continue developing the CoLab platform and polishing it until we deem it acceptable for publishing. We really believe in the idea of CoLab and want to pursue the idea further. We hope you share that vision and our team would like to thank you for reading this verbose project story! | partial |
## Inspiration
A team member's father works in the medical field, and he presented the problem to us.
## What it does
Allows users to create requests for air ambulances (medically equipped helicopters) and automatically prioritizes and dispatches the helicopters. Displays where the helicopters will be flying and how long it will take.
## How we built it
Java, Firebase Realtime Database, Android Studio, and the Google Maps API for locations
## What we learned
This was our first time integrating Google Maps into an Android app, which was interesting. Firebase has some strange asynchronous issues that took a lot of time to fix. Android is great for building a quick and dirty UI.
Redbull + a mentor = bug fixes | ## Inspiration
Originally, we wanted to think of various ways drone delivery could be used to solve problems, and decided to create an app for emergency medicine delivery as there are certainly situations in which someone might not be able to make it to the hospital or pharmacy. Drones could deliver life-saving products like insulin and inhalers.
Building upon that, I don't think many people enjoy having to drive to CVS or Walgreens to pick up a prescription medicine, so the drones could be used for all sorts of pharmaceutical deliveries as well.
## What it does
This is a user-facing web app that allows the user to pick a delivery location and follow the delivery through real-time tracking. The app also provides an ETA.
## How we built it
The app is built on CherryPy and the styling is done with HTML/CSS/SCSS/JS (jQuery). In terms of APIs, we used Mapbox to set and view location for tracking and ordering. This app was built off of DroneKit's API and drone delivery example, and we used DroneKit's drone simulator to test it.
## Challenges we ran into
We really wanted to add SMS notifications that would alert the user when the package had arrived but ran into issues implementing the Twilio API without a server using only jQuery, as most solutions utilized PHP. It was also our first time working with CherryPy, so that was a challenging learning experience in terms of picking up a new framework.
## Accomplishments that we're proud of
I'm proud of figuring out how to calculate ETA given coordinates, learning a lot more Python than I'd previously ever known, and integrating nice styling with the bare bones website. I'm also proud of the photoshopped Pusheen background.
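For the curious, a minimal sketch of an ETA from two coordinate pairs looks like the snippet below: the haversine formula gives the great-circle distance, and the cruise speed is an assumed placeholder:

```
# Hedged sketch of an ETA calculation: great-circle (haversine) distance divided by
# an assumed cruise speed. The 10 m/s speed and the sample coordinates are placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius in metres

def eta_minutes(start, target, speed_m_s=10.0):
    return haversine_m(*start, *target) / speed_m_s / 60

print(round(eta_minutes((37.4275, -122.1697), (37.4419, -122.1430)), 1), "minutes")
```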
## What we learned
I learned how to work with new APIs, since I hadn't had much prior experience using them. I also learned more Python in the context of web development and jQuery.
## What's next for Flight Aid
I really want to figure out how to add notifications so I can flesh out more features of the user-facing app. I would in the future want to build a supplier-facing app that would give the supplier analytics and alarms based on the drone's sensor data. | # BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
+ Real time question generation based on (real-time) user journaling (and the rest of their writing)!
+ Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
+ Question-corpus matching - we search for good questions about the user's current topics
* NLP on previous journal entries for sentiment analysis
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan | partial |
This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were motivated by a desire to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built it
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some real pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | ## Problem
In these times of isolation, many of us developers are stuck inside which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue, we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm.
## About
Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface.
We made it one page to have access to all the tools on one screen and transition between them easier.
We identify this page as a study room where users can collaborate and join with a simple URL.
Everything is Synced between users in real-time.
## Features
Our platform allows multiple users to enter one room and access tools for watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our inbuilt browser IDE, all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
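The real-time layer is Node-based, but as a language-agnostic sketch of the room-scoped syncing idea, here is a hedged python-socketio version; the event names and payload fields are assumptions:

```
# Hedged sketch of room-scoped syncing with Socket.IO (python-socketio flavour):
# every update a client emits is re-broadcast to the other people in the same room.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join_room(sid, data):
    sio.enter_room(sid, data["room"])

@sio.event
def todo_update(sid, data):
    # Forward the change to everyone else in the study room, skipping the sender.
    sio.emit("todo_update", data, room=data["room"], skip_sid=sid)

@sio.event
def whiteboard_stroke(sid, data):
    sio.emit("whiteboard_stroke", data, room=data["room"], skip_sid=sid)
```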
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under time constraints. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
Adding more relevant tools and widgets, and expanding into other fields of work to grow our user demographic.
Including interface customization options to allow users to personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out! | ## Inspiration
All of us have gone through the painstaking and difficult process of onboarding as interns and making sense of huge repositories with many layers of folders and files. We hoped to shorten this or remove it completely through the use of Code Flow.
## What it does
Code Flow exists to speed up onboarding and make code easy to understand for non-technical people such as Project Managers and Business Analysts. Once the user has uploaded the repo, it has 2 main features. First, it can visualize the entire repo by showing how different folders and files are connected and providing a brief summary of each file and folder. It can also visualize a file by showing how the different functions are connected for a more technical user. The second feature is a specialized chatbot that allows you to ask questions about the entire project as a whole or even specific files. For example, "Which file do I need to change to implement this new feature?"
## How we built it
We used React to build the front end. Any folders uploaded by the user through the UI are stored using MongoDB. The backend is built using Python-Flask. If the user chooses a visualization, we first summarize what every file and folder does and display that in a graph data structure using the library pyvis. We analyze whether files are connected in the graph based on an algorithm that checks features such as the functions imported, etc. For the file-level visualization, we analyze the file's code using an AST and figure out which functions are interacting with each other. Finally for the chatbot, when the user asks a question we first use Cohere's embeddings to check the similarity of the question with the description we generated for the files. After narrowing down the correct file, we use its code to answer the question using Cohere generate.
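As a hedged sketch of the matching step, the snippet below embeds the question and the generated file descriptions with the Cohere Python SDK and picks the file with the highest cosine similarity; the API key and data structure are placeholders:

```
# Hedged sketch of the file-matching step: embed the question and the file
# descriptions, then return the file whose description is most similar.
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def best_file(question, file_descriptions):
    """file_descriptions: dict mapping file path -> generated summary."""
    names = list(file_descriptions)
    texts = [question] + [file_descriptions[n] for n in names]
    vectors = np.array(co.embed(texts=texts).embeddings)
    q, docs = vectors[0], vectors[1:]
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
    return names[int(np.argmax(sims))]
```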
## Challenges we ran into
We struggled a lot with narrowing down which file to use to answer the user's questions. We initially thought to simply use Cohere generate to reply with the correct file but knew that it isn't specialized for that purpose. We decided to use embeddings and then had to figure out how to use those numbers to actually get a valid result. We also struggled with getting all of our tech stacks to work as we used React, MongoDB and Flask. Making the API calls seamless proved to be very difficult.
## Accomplishments that we're proud of
This was our first time using Cohere's embeddings feature and accurately analyzing the result to match the best file. We are also proud of being able to combine various different stacks and have a working application.
## What we learned
We learned a lot about NLP, how embeddings work, and what they can be used for. In addition, we learned how to problem solve and step out of our comfort zones to test new technologies.
## What's next for Code Flow
We plan on adding annotations for key sections of the code, possibly using a new UI so that the user can quickly understand important parts without wasting time. | winning |
## Inspiration
We love cooking and watching food videos. From the Great British Baking Show to Instagram reels, we are foodies in every way. However, with the 119 billion pounds of food that is wasted annually in the United States, we wanted to create a simple way to reduce waste and try out new recipes.
## What it does
lettuce enables users to create a food inventory using a mobile receipt scanner. It then alerts users when a product approaches its expiration date and prompts them to notify their network if they possess excess food they won't consume in time. In such cases, lettuce notifies fellow users in their network that they can collect the surplus item. Moreover, lettuce offers recipe exploration and automatically checks your pantry and any other food shared by your network's users before suggesting new purchases.
## How we built it
lettuce uses React and Bootstrap for its frontend and uses Firebase for the database, which stores information on all the different foods users have stored in their pantry. We also use a pre-trained image-to-text neural network that enables users to inventory their food by scanning their grocery receipts. We also developed an algorithm to parse receipt text to extract just the food from the receipt.
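A minimal sketch of the receipt-parsing idea is below: strip prices and codes from each OCR line and keep the words that match a food vocabulary. The tiny vocabulary here is illustrative, not the database we actually map against:

```
# Hedged sketch of receipt-text parsing: drop prices/codes from each OCR line and
# keep lines containing words from a food vocabulary (illustrative list only).
import re

FOOD_WORDS = {"banana", "lettuce", "milk", "bread", "chicken", "apple"}

def extract_food_items(receipt_lines):
    items = []
    for line in receipt_lines:
        cleaned = re.sub(r"[\d$.*%]+", " ", line.lower())  # strip prices, codes, quantities
        words = re.findall(r"[a-z]+", cleaned)
        matches = [w for w in words if w in FOOD_WORDS]
        if matches:
            items.append(" ".join(matches))
    return items

print(extract_food_items(["ORG BANANA 4011 $0.79", "GV 2% MILK 1GAL $3.49", "TAX $0.12"]))
# -> ['banana', 'milk']
```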
## Challenges we ran into
One big challenge was finding a way to map the receipt food text to the actual food item. Receipts often use annoying abbreviations for food, and we had to find databases that allow us to map the receipt item to the food item.
## Accomplishments that we're proud of
lettuce has a lot of work ahead of it, but we are proud of our idea and teamwork to create an initial prototype of an app that may contribute to something meaningful to us and the world at large.
## What we learned
We learned that there are many things to account for when it comes to sustainability, as we must balance accessibility and convenience with efficiency and efficacy. Not having food waste would be great, but it's not easy to finish everything in your pantry, and we hope that our app can help find a balance between the two.
## What's next for lettuce
We hope to improve our recipe suggestion algorithm as well as the estimates for when food expires. For example, a green banana will have a different expiration date compared to a ripe banana, and our scanner has a universal deadline for all bananas. | ## Inspiration
Food waste is a huge issue globally. Overall, we throw out about 1/3 of all of the food we produce ([FAO](https://www.fao.org/3/mb060e/mb060e00.pdf)), and that number is even higher at up to 40% in the U.S. ([Gunders](https://www.nrdc.org/sites/default/files/wasted-food-IP.pdf)). Young adults throw away an even higher proportion of their food than other age groups ([University of Illinois](https://www.sciencedaily.com/releases/2018/08/180822122832.htm)).
All of us on the team have had problems with buying food and then forgetting about it. It's been especially bad in the last couple of years because the pandemic has pushed us to buy more food less often. The potatoes will be hiding behind some other things, and by the time we remember them, they're almost potato plants.
## What it does
Foodpad is an application to help users track what food they have at home and when it needs to be used by. Users simply add their groceries and select how they're planning to store the item (fridge, pantry, freezer), and the app suggests an expiry date. The app even suggests the best storage method for the type of grocery. The items are sorted so that the soonest expiry date is at the top. As the items are used, the user removes them from the list. At any time, the user can access recipes for the ingredients.
## How we built it
We prototyped the application in Figma and built a proof-of-concept version with React. We use API calls to the open-source TheMealDB, which has recipes for given ingredients.
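For illustration, here is a hedged sketch of the kind of call the app makes against TheMealDB's public ingredient filter, written in Python for brevity even though the app itself calls it from React:

```
# Hedged sketch of the recipe lookup; the endpoint follows TheMealDB's public v1 API.
import requests

def recipes_for(ingredient):
    url = "https://www.themealdb.com/api/json/v1/1/filter.php"
    data = requests.get(url, params={"i": ingredient}, timeout=10).json()
    return [meal["strMeal"] for meal in (data.get("meals") or [])]

print(recipes_for("potato")[:5])
```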
## Challenges we ran into
Only one of us had ever used JavaScript before, so it was tough to figure out how to use that, especially to get it to look nice. None of us had ever used Figma either, and it was tricky at first, but it's a really lovely tool and we'll definitely use it again in the future!
## Accomplishments that we're proud of
* We think it's a really cool idea that would be helpful in our own lives and would also be useful for other people.
* We're all more hardware/backend coders, so we're really proud of the design work that went into this and just pushing ourselves outside of our comfort zones.
## What we learned
* how to prioritize tasks in a project over a very short timeframe for an MVP
* how to code in JS and use React
* how to design an application to look nice
* how to use Figma
## What's next for foodPad
* release it!
* make the application's UI match the design more closely
* expanding the available food options
* giving users the option of multiple recipes for an ingredient
* selecting recipes that use many of the ingredients on the food list
* send push notifications to the user if the product is going to expire in the next day
* if a certain food keeps spoiling, suggest to the user that they should buy less of an item | ## Inspiration:
Every year, the world wastes about 2.5 billion tons of food, with the United States alone discarding nearly 60 million tons. This staggering waste inspired us to create **eco**, an app focused on food sustainability and community generosity.
## What it does:
**eco** leverages advanced Computer Vision technology, powered by YOLOv8 and OpenCV, to detect fruits, vegetables, and groceries while accurately predicting their expiry dates. The app includes a Discord bot that notifies users of impending expirations and alerts them about unused groceries. Users can easily generate delicious recipes using OpenAI's API, utilizing ingredients from their fridge. Additionally, **eco** features a Shameboard to track and highlight instances of food waste, encouraging community members to take responsibility for their consumption habits.
## How we built it:
For the frontend, we chose React, Typescript, and TailwindCSS to create a sleek and responsive interface. The database is powered by Supabase Serverless, ensuring reliable and scalable data management. The heart of **eco** is its advanced Computer Vision model, developed with Python, OpenCV, and YOLOv8, allowing us to accurately predict expiry dates for fruits, vegetables, and groceries. We leveraged OpenAI's API to generate recipes based on expiring foods, providing users with practical and creative meal ideas. Additionally, we integrated a Discord bot using JavaScript for seamless communication and alerts within our Discord server.
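As a rough sketch of the detection step, the ultralytics YOLOv8 API can be called as below; the shelf-life table is a made-up illustration, and the real app predicts expiry dates from its own trained model rather than this lookup:

```python
from datetime import date, timedelta
from ultralytics import YOLO

# Hypothetical shelf-life table (days), for illustration only.
SHELF_LIFE_DAYS = {"banana": 5, "apple": 14, "broccoli": 7}

model = YOLO("yolov8n.pt")           # pretrained checkpoint; ours was fine-tuned on grocery images
results = model("fridge_photo.jpg")  # run detection on a single image

for box in results[0].boxes:
    name = results[0].names[int(box.cls[0])]
    conf = float(box.conf[0])
    days = SHELF_LIFE_DAYS.get(name)
    expiry = date.today() + timedelta(days=days) if days else None
    print(f"{name} ({conf:.2f}) -> estimated expiry: {expiry}")
```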
## Challenges we ran into:
During development, we encountered significant challenges with WebSockets and training the Computer Vision model. These hurdles ignited our passion for problem-solving, driving us to think creatively and push the boundaries of innovation. Through perseverance and ingenuity, we not only overcame these obstacles but also emerged stronger, armed with newfound skills and a deepened resolve to tackle future challenges head-on.
## Accomplishments that we're proud of:
We take pride in our adaptive approach, tackling challenges head-on to deliver a fully functional app. Our successful integration of Computer Vision, Discord Bot functionality, and recipe generation showcases our dedication and skill in developing **eco**.
## What we learned:
Building **eco** was a transformative journey that taught us invaluable lessons in teamwork, problem-solving, and the seamless integration of technology. We immersed ourselves in the intricacies of Computer Vision, Discord bot development, and frontend/backend development, elevating our skills to new heights. These experiences have not only enriched our project but have also empowered us with a passion for innovation and a drive to excel in future endeavors.
**Eco** is not just an app; it's a movement towards a more sustainable and generous community. Join us in reducing food waste and fostering a sense of responsibility towards our environment with eco. | winning |
## Inspiration
1. Affordable pet doors with simple "flap" mechanisms are not secure
2. Potty-trained pets require the door to be manually opened (e.g. ring a bell, scratch the door)
## What it does
The puppy *(or cat, we don't discriminate)* can exit without approval as soon as the sensor detects an object within the threshold distance. When entering back in, the ultrasonic sensor will trigger a signal that something is at the door and the camera will take a picture and send to the owner's phone through a web app. The owner may approve or deny the request depending on the photo. If the owner approves the request, the door will open automatically.
## How we built it
Ultrasonic sensors relay the distance from the sensor to an object to the Arduino, which sends this signal to the Raspberry Pi. The Raspberry Pi program handles the stepper motor movement (rotating ~90 degrees CW and CCW) to open and close the door and relays information to the Flask server to take a picture using the Kinect camera. This photo is displayed on the web application, where approving the request will open the door.
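A simplified sketch of the Raspberry Pi glue code is below; the GPIO pins, serial port, step count, and the idea that the Flask server replies synchronously with the owner's decision are all assumptions for illustration (in the real build the approval arrives later through the web app):

```python
import time
import requests
import serial              # pyserial: reads the distance values the Arduino prints
import RPi.GPIO as GPIO

STEP_PINS = [17, 18, 27, 22]                         # assumed ULN2003-style driver inputs
FULL_STEP = [(1, 0, 0, 1), (1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)]
FLASK_URL = "http://localhost:5000/request-entry"    # placeholder endpoint that triggers the Kinect capture
THRESHOLD_CM = 30

GPIO.setmode(GPIO.BCM)
for pin in STEP_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=0)

def rotate(steps: int, delay: float = 0.003) -> None:
    """Step the motor to swing the door open (positive) or closed (negative)."""
    seq = FULL_STEP if steps > 0 else FULL_STEP[::-1]
    for _ in range(abs(steps)):
        for pattern in seq:
            for pin, value in zip(STEP_PINS, pattern):
                GPIO.output(pin, value)
            time.sleep(delay)

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
while True:
    line = arduino.readline().decode(errors="ignore").strip()
    if line.isdigit() and int(line) < THRESHOLD_CM:
        # Something is at the door: ask the server to take a photo and wait for a decision.
        approved = requests.post(FLASK_URL, timeout=60).json().get("approved", False)
        if approved:
            rotate(128)      # roughly a quarter turn; the exact count depends on the motor
            time.sleep(5)
            rotate(-128)
```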
## Challenges we ran into
1. Connecting everything together (Arduino, Raspberry Pi, frontend, backend, Kinect camera) despite each component working well individually
2. Building cardboard prototype with limited resources = lots of tape & poor wire management
3. Using multiple different streams of I/O and interfacing with each concurrently
## Accomplishments that we're proud of
This was super rewarding as it was our first hardware hack! The majority of our challenges lay in the camera component, as we were unfamiliar with the Kinect, but we came up with a hack-y solution and nothing had to be hardcoded.
## What we learned
Hardware projects require a lot of troubleshooting because the sensors will sometimes interfere with each other or the signals are not processed properly when there is too much noise. Additionally, with multiple different pieces of hardware, we learned how to connect all the subsystems together and interact with the software components.
## What's next for PetAlert
1. Better & more consistent photo quality
2. Improve frontend notification system (consider push notifications)
3. Customize 3D prints to secure components
4. Use thermal instead of ultrasound
5. Add sound detection | ## Inspiration
The failure of a certain project using Leap Motion API motivated us to learn it and use it successfully this time.
## What it does
Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks.
## How we built it
We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors.
## Challenges we ran into
Learning the Leap Motion API and debugging was the toughest challenge to our group. Hot glue dangers and complications also impeded our progress.
## Accomplishments that we're proud of
All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it.
## What we learned
The Leap Motion API is more difficult than expected, while communication between Python programs and Arduino programs is simpler than expected.
## What's next for Toaster Secure
-Wireless Connections
-Sturdier Building Materials
-User-friendly interface | ## Inspiration
## What it does
It searches for a water bottle!
## How we built it
We built it using a Roomba, a Raspberry Pi with a PiCamera, Python, and Microsoft's Custom Vision.
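A rough sketch of the recognition step on the Pi is below; the prediction URL and key are placeholders that would come from the Custom Vision portal, and the tag name is assumed:

```python
import requests
from picamera import PiCamera

# Placeholders: both values come from the Custom Vision portal for your project.
PREDICTION_URL = "https://<region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/classify/iterations/<iteration>/image"
PREDICTION_KEY = "<prediction-key>"

def looks_like_water_bottle(threshold: float = 0.7) -> bool:
    camera = PiCamera()
    camera.capture("/tmp/frame.jpg")
    camera.close()
    with open("/tmp/frame.jpg", "rb") as f:
        resp = requests.post(
            PREDICTION_URL,
            headers={"Prediction-Key": PREDICTION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f,
        )
    resp.raise_for_status()
    best = max(resp.json()["predictions"], key=lambda p: p["probability"])
    return best["tagName"] == "water bottle" and best["probability"] >= threshold
```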
## Challenges we ran into
Attaining wireless communications between the pi and all of our PCs was tricky. Implementing Microsoft's API was troublesome as well. A few hours before judging we exceeded the call volume for the API and were forced to create a new account to use the service. This meant re-training the AI and altering code, and losing access to our higher accuracy recognition iterations during final tests.
## Accomplishments that we're proud of
Actually getting the wireless networking to work consistently. Surpassing challenges like framework indecision between IBM and Microsoft. We are proud of making things that lacked proper documentation, like the pi camera, work.
## What we learned
How to use GitHub, train an AI object recognition system using Microsoft APIs, and write clean drivers.
## What's next for Cueball's New Pet
Learn to recognize other objects. | winning
## Inspiration
We aim to bridge the communication gap between hearing-impaired individuals and those who don't understand sign language.
## What it does
This web app utilizes the webcam to capture hand gestures, recognizes the corresponding sign language symbols using machine learning models, and displays the result on the screen.
### Features
* Real-time hand gesture recognition
* Supports standard ASL
* Intuitive user interface
* Cross-platform compatibility (iOS and Android via web browsers)
## How we built it
We use the [Hand Pose Detection Model](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection) and [Fingerpose](https://github.com/andypotato/fingerpose) to detect the hand and its corresponding gesture from the webcam. For the frontend, we use ReactJS with Vite as the build tool and serve our website on Netlify. There is no backend since we embed the models on the client side.
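Our pipeline runs entirely in the browser with TensorFlow.js and Fingerpose; purely for illustration, roughly the same landmarks-then-rules idea can be sketched in Python with MediaPipe Hands (a different library, and the "open palm" rule below is a toy stand-in for Fingerpose's per-finger curl and direction descriptions):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip landmark ids
FINGER_PIPS = [6, 10, 14, 18]   # the joint below each fingertip

def is_open_palm(landmarks) -> bool:
    """Toy rule: every fingertip sits above its middle joint (image y grows downward)."""
    return all(landmarks[t].y < landmarks[p].y for t, p in zip(FINGER_TIPS, FINGER_PIPS))

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            label = "open palm" if is_open_palm(lm) else "other gesture"
            cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("gesture demo", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```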
## Challenges we ran into
### Model Choice
We originally used a [Google web ML model](https://teachablemachine.withgoogle.com/train/image) trained with data that contained a lot of noise, such as the person and the background. This meant that the prediction was heavily biased toward objects that are not hands. Next, we thought about labeling the images to emphasize the hand. That led to another bias where some hand gestures were weighted more, so the model tended to predict certain gestures even when we did not pose them that way. Later, we discovered we could use hand landmarks to recognize only the hand and joint poses to recognize the gesture, which gives much better predictions.
### Streaming
In order to make the webcam work for all devices (mobile and desktop) with different specs and screen ratios, we struggled to find a way to enable full screen on all of them. Our first solution was to hard-code the width and height for one device, but that was hard to adjust and limited. During trial and error, another issue arose: the width and height are applied horizontally on mobile devices, so to work on both mobile and desktop we dynamically check the user's screen ratio. To solve the full-screen issue, we used a progressive web app to capture the device window.
## Accomplishments that we're proud of
* Active and accurate hand tracking from webcam streaming
* Finding ways to translate different gestures from ASL to English
* Being able to use across mobile and desktop
* Intuitive yet functional design
## What we learned
* Learned about American Sign Language
* How to deploy a progressive web app
* How a machine learning model takes inputs and makes predictions
* How to stream webcam frames as inputs to our model
* Variations of machine learning models and how to fine-tune them
## What's next for Signado
Next step, we plan to add support for two-handed and motion-based gestures, since many words in sign language require these two properties. Also, to improve model accuracy, we can use the latest [handpose](https://blog.tensorflow.org/2021/11/3D-handpose.html) model that transforms the hand into a 3D mesh. This will provide more possibility and variation in the gestures that can be performed. | ## **1st Place!**
## Inspiration
Sign language is a universal language which allows many individuals to exercise their intellect through common communication. Many people around the world live with hearing loss or mutism and need sign language to communicate. Even those who do not experience these conditions may still require sign language in certain circumstances. We plan to expand our company to be known worldwide to address the lack of a virtual sign language learning tool that is accessible to everyone, everywhere, for free.
## What it does
Here at SignSpeak, we create an encouraging learning environment that provides computer vision sign language tests to track progression and to perfect sign language skills. The UI is built around simplicity and useability. We have provided a teaching system that works by engaging the user in lessons, then partaking in a progression test. The lessons will include the material that will be tested during the lesson quiz. Once the user has completed the lesson, they will be redirected to the quiz which can result in either a failure or success. Consequently, successfully completing the quiz will congratulate the user and direct them to the proceeding lesson, however, failure will result in the user having to retake the lesson. The user will retake the lesson until they are successful during the quiz to proceed to the following lesson.
## How we built it
We built SignSpeak on React with Next.js. For our sign recognition, we used TensorFlow with a MediaPipe model to detect points on the hand, which were then compared with preassigned gestures.
## Challenges we ran into
We ran into multiple roadblocks mainly regarding our misunderstandings of Next.js.
## Accomplishments that we're proud of
We are proud that we managed to come up with so many ideas in such little time.
## What we learned
Throughout the event, we participated in many workshops and created many connections. We engaged in many conversations that involved certain bugs and issues that others were having and learned from their experience using javascript and react. Additionally, throughout the workshops, we learned about the blockchain and entrepreneurship connections to coding for the overall benefit of the hackathon.
## What's next for SignSpeak
SignSpeak is seeking to continue services for teaching people to learn sign language. For future use, we plan to implement a suggestion box for our users to communicate with us about problems with our program so that we can work quickly to fix them. Additionally, we will collaborate and improve companies that cater to phone audible navigation for blind people. | ## What it does
What our project does is introduce ASL letters, words, and numbers to you in a flashcard manner.
## How we built it
We built our project with React, Vite, and TensorFlowJS.
## Challenges we ran into
Some challenges we ran into included issues with git commits and merging. Over the course of our project we made mistakes while resolving merge conflicts which resulted in a large part of our project being almost discarded. Luckily we were able to git revert back to the correct version but time was misused regardless. With our TensorFlow model we had trouble reading the input/output and getting webcam working.
## Accomplishments that we're proud of
We are proud of the work we got done in the time frame of this hackathon with our skill level. Out of the workshops we attended I think we learned a lot and can't wait to implement them in future projects!
## What we learned
Over the course of this hackathon we learned that it is important to clearly define our project scope ahead of time. We spent a lot of our time on day 1 thinking about what we could do with the sponsor technologies and should have looked into them more in depth before the hackathon.
## What's next for Vision Talks
We would like to train our own ASL image detection model so that people can practice at home in real time. Additionally we would like to transcribe their signs into plaintext and voice so that they can confirm what they are signing. Expanding our project scope beyond ASL to other languages is also something we wish to do. | winning |
## Inspiration
As computer science students, our writing skills are not as strong as our technical abilities, so we want a motivating platform to improve them.
## What it does
Fix My Mistake prompts users with a grammatically incorrect sentence that they must fix. Depending on the accuracy of their attempt, experience points are rewarded to them which is used to level up.
## How we built it
Fix My Mistake was built in a team of four using Google Firebase, HTML, CSS, JavaScript, React, React Router DOM, Material UI, JSON API, and Advice Slip API.
## Challenges we ran into
It was difficult to find an efficient open-source API that could randomly generate sentences for the user to read.
## Accomplishments that we're proud of
Our team is proud that we were able to complete a useful and functional product within the deadline to share with others.
## What we learned
The team behind Fix My Mistake got to experience the fundamentals of web development and web scraping, in addition to sharpening our skills in communication and collaboration.
## What's next for Fix My Mistake
The next level for Fix My Mistake includes better grammar detection and text generation systems, multiple gamemodes, and an improved user-interface. | ## Inspiration
Patients often have trouble with following physiotherapy routines correctly which can often lead to worsening the affects of existing issues.
## What it does
Our project gives patients audio tips on where they are going wrong in real time and responds to voice commands to control the flow of their workout. Data from these workouts is then uploaded to the cloud for physiotherapists to track the progress of their patients!
## How we built it
Using the Kinect C# SDK we extracted a human wireframe from the sensors and performed calculations on the different limbs to detect improper form. We also used the .NET speech libraries to create an interactive "trainer" experience.
## Challenges we ran into
Analyzing movements over time is pretty hard due to temporal stretching and temporal misalignment. We solved this by opting to code particular rules for different exercises, which were far more robust. This also allowed us to be more inclusive of all body sizes and shapes (everyone has different limb sizes relative to their entire body).
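Because the rules compare joint angles rather than absolute positions, they are naturally independent of limb length. Here is a minimal sketch of the kind of check we mean (in Python for brevity, our build is C# on the Kinect SDK, and the 90-degree squat threshold is just an illustrative number):

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at joint b (in degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def squat_feedback(hip, knee, ankle) -> str:
    angle = joint_angle(hip, knee, ankle)
    return "good depth" if angle <= 90 else "go lower"

# Made-up 3D joint coordinates from one skeleton frame:
print(squat_feedback(hip=(0, 1.0, 0), knee=(0, 0.5, 0.1), ankle=(0, 0.0, 0)))
```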
## Accomplishments that we're proud of
Creating a truly helpful trainer experience. Believe it or not we have actually gotten better at squatting over the course of this hackathon 😂😂
## What we learned
Xbox kinect is SUPER COOL and there is so much more that can be done with it.
## What's next for Can we change this later?
Many improvements to the patient management platform. We figured this wasn't super important since fullstack user management applications have been created millions of times over.
## Note
We also have prototypes for what the front-end would look like from the patient and physiotherapist's perspective, to be presented during judging! | ## Inspiration
On social media, most of the things that come up are success stories. We've seen a lot of our friends complain that there are platforms where people keep bragging about what they've been achieving in life, but not a single one showing their failures.
We realized that there's a need for a platform where people can share their failure episodes for open and free discussion. So we have now decided to take matters in our own hands and are creating Failed-In to break the taboo around failures! On Failed-in, you realize - "You're NOT alone!"
## What it does
* It is a no-judgment platform to learn to celebrate failure tales.
* Enabled User to add failure episodes (anonymously/non-anonymously), allowing others to react and comment.
* Each episode on the platform has #tags associated with it, which helps filter out the episodes easily. A user's recommendation is based on the #tags with which they usually interact
* Implemented sentiment analysis to predict the sentiment score of a user from the episodes and comments posted.
* We have a motivational bot to lighten the user's mood.
* Allowed the users to report the episodes and comments for
+ NSFW images (integrated ML check to detect nudity)
+ Abusive language (integrated ML check to classify texts)
+ Spam (Checking the previous activity and finding similarities)
+ Flaunting success (Manual checks)
## How we built it
* We used Node for building the REST API and MongoDB as the database.
* For the client side, we used Flutter.
* We also used the TensorFlow.js library and its built-in models for NSFW, abusive text checks, and sentiment analysis.
## Challenges we ran into
* While brainstorming on this particular idea, we weren't sure how to present it not to be misunderstood. Mental health issues from failure are serious, and using Failed-In, we wanted to break the taboo around discussing failures.
* It was the first time we tried using Flutter-beta instead of React with MongoDB and node. It took a little longer than usual to integrate the server-side with the client-side.
* Finding the versions of tensorflow and other libraries which could integrate with the remaining code.
## Accomplishments that we're proud of
* During the 36 hours, we were able to ideate and build a prototype.
* From fixing bugs to resolving merge conflicts the whole experience is worth remembering.
## What we learned
* Team collaboration
* how to remain calm and patient during the 36 hours
* Remain up on caffeine.
## What's next for Failed-In
* Improve the model of sentiment analysis to get more accurate results so we can understand the users and recommend them famous failure to success stories using web scraping.
* Create separate discussion rooms for each #tag, facilitating users to communicate and discuss their failures.
* Also provide the option to follow/unfollow a user. | losing |
## Inspiration: The BarBot team came together with such diverse skill sets. We wanted to create a project that would highlight each member's various expertise. This expertise includes Internet of Things, hardware, and web development. After a long ideation session, we came up with BarBot. This robotic bartender will serve as a great addition to forward-seeing bars, where the drink dispensing and delivery process is automated.
## What it does: Barbot is an integrated butler robot that allows the user to order a drink through a touch screen ordering station. A highly capable drink dispensary system will perform desired option chosen from the ordering station. The beverage options that will be dispensed through the dispensary system include Redbull or Soylent. An additional option in the ordering station named "Surprise Me" is available. This particular option takes a photograph of the user and runs a Microsoft Emotions API. Running this API will allow BarBot to determine the user's current mood and decide which drink is suitable for this user based on the photograph taken. After the option is determined by the user (Redbull or Soylent) or by BarBot ("Surprise Me" running Microsoft emotion API), BarBot's dispensary system will dispense the determined beverage onto the glass on the BarBot. This robot will then travel to the user in order to deliver the drink. Barbot will only return to its original position (under the dispensary station) when the cup has been lifted.
## How we built it: The BarBot team allocated different tasks to group members with various expertise. Our frontend specialist, Sabrina Smai, built the mobile application for ordering station that has now had a touchscreen. Our hardware specialist, Lucas Moisuyev, built the BarBot itself along with the dispensary system with the assistance of Tony Cheng. Our backend specialist, Ben Weinfeld, built the ordering station by programming raspberry pi and the touchscreen. Through our collaboration, we were able to revolutionize the bartending process.
## Challenges we ran into: The most reoccurring issue we encountered was a lack of proper materials for specific parts of our hack. When we were building our pouring mechanism, we did not have proper tubing for transferring our beverages, so we had to go out and purchase materials. After buying more tubing materials, we then ran into the issue of not having strong enough servos or motors to turn the valves of the dispensers. This caused us to totally change the original design of the pouring mechanism. In addition, we underestimated the level of difficulty that came with creating a communication system among all of our parts.
## Accomplishments that we're proud of: Despite our challenges, we are proud to have been able to create a functional product within the limited amount of time. We needed to learn new skills and improvise hardware components but never gave up.
## What we learned: During this hackathon, we learned to program the Particle Photon Raspberry Pi, build web apps, and leap over the hurdles of creating a hardware hack with very limited supplies.
## What's next for BarBot: The BarBot team is very passionate about this project and we will continue to work on BarBot after this Hackathon. We plan to integrate more features that will incorporate more Microsoft APIs. An expansion of the touch ordering station will be considered as more variety of drink options will be required. | ## Inspiration
We often want to read certain manga because we are interested in their stories, but are completely barred from doing so purely because there is no translation for them in English. As a result, we decided to make our own solution, in which we take the pages of the manga, translate them, and rewrite them for you.
## What it does
It takes the image of the manga page and sends it to a backend. The backend first uses a neural network trained with thousands of cases of actual manga to detect the areas of writing and text on the manga. Then, the server clears that area out of the image. Using Google Cloud Services, we then take the written Japanese and translate it into English. Lastly, we rewrite that English in its corresponding positions on the original image to complete the manga page.
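A simplified sketch of the overlay step with Pillow is below; the bubble coordinates are made up, and the translation function is a stub standing in for the Google Cloud Translation request the real backend makes:

```python
from PIL import Image, ImageDraw, ImageFont

def translate_to_english(japanese_text: str) -> str:
    # Canned output; the real backend calls Google Cloud Translation here.
    return "Hello!"

def redraw_bubble(page: Image.Image, box: tuple, japanese_text: str) -> Image.Image:
    """Clear one detected text region and write the English translation back in."""
    x0, y0, x1, y1 = box                  # box would come from the text-detection network
    draw = ImageDraw.Draw(page)
    draw.rectangle(box, fill="white")     # blank out the original Japanese
    font = ImageFont.load_default()
    words, line, y = translate_to_english(japanese_text).split(), "", y0 + 4
    for word in words:                    # naive word wrap to stay inside the bubble
        trial = (line + " " + word).strip()
        if draw.textlength(trial, font=font) > (x1 - x0 - 8):
            draw.text((x0 + 4, y), line, fill="black", font=font)
            line, y = word, y + 14
        else:
            line = trial
    draw.text((x0 + 4, y), line, fill="black", font=font)
    return page

page = Image.open("manga_page.jpg").convert("RGB")
redraw_bubble(page, (120, 80, 260, 180), "こんにちは").save("manga_page_en.jpg")
```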
## How we built it
We used python and flask with a bit of html and css for the front end Web server. We used Expo to create a mobile front end as well. We wrote the backend in python.
## Challenges we ran into
One of the challenges was properly using Expo, a service/platform new to us, to fit our many needs. There were some functionalities we wanted that Expo didn't have. However, we found manual work-arounds.
## Accomplishments that we're proud of
We are proud of successfully creating this project, especially because it was a difficult task. The fact that we successfully completed a working product that we can consider using ourselves makes this accomplishment even better.
## What we learned
We learned a lot about how to use Expo, since it was our first time using it. We also learned about how to modify images through python along the way.
## What's next for Kero
Kero's front-end can be expanded to look nicer and have more functionality, like multiple images at once for translating a whole chapter of a manga. | ## Inspiration
My friend Pablo used to throw me the ball when playing beer pong; he moved away, so I replaced him with a much better robot.
## What it does
It tosses you a ping pong ball right when you need it; you just have to show it your hand.
## How we built it
With love, sweat, tears, and lots of energy drinks.
## Challenges we ran into
Getting OpenCV and the Arduino to communicate.
## Accomplishments that we're proud of
Getting the Arduino to communicate with Python.
## What we learned
OpenCV.
## What's next for P.A.B.L.O (pong assistant beats losers only)
use hand tracking to track the cups and actually play and win the game | partial |
## Inspiration
According to Statistics Canada, nearly 48,000 children are living in foster care. In the United States, there are ten times as many. Teenagers aged 14-17 are the most at risk of aging out of the system without being adopted. Many choose to opt-out when they turn 18. At that age, most youths like our team are equipped with a lifeline back to a parent or relative. However, without the benefit of a stable and supportive home, fostered youths, after emancipation, lack the consistent security for their documents, tacit guidance for practical tasks, and moral aid in building meaningful relationships through life’s ups and downs.
Despite the success possible during foster care, there is overwhelming evidence that shows how our conventional system alone inherently cannot guarantee the necessary support to bridge a foster youth’s path into adulthood once they exit the system.
## What it does
A virtual, encrypted, and decentralized safe for essential records. There is a built-in scanner function and a resource of contacts who can mentor and aid the user. Alerts can prompt the user to tasks such as booking the annual doctors' appointments and tell them, for example, about openings for suitable housing and jobs. Youth in foster care can start using the app at age 14 and slowly build a foundation well before they plan for emancipation.
## How we built it
The essential decentralized component of this application, which stores images on an encrypted blockchain, was built on the Internet Computer Protocol (ICP) using Node JS and Azle. Node JS and React were also used to build our user-facing component. Encryption and decryption were done using CryptoJS.
## Challenges we ran into
ICP turned out to be very difficult to work with - attempting to publish the app to a local but discoverable device was nearly impossible. Apart from that, working with such a novel technology through an unfamiliar library caused many small yet significant mistakes that we wouldn't be able to resolve without the help of ICP mentors. There were many features we worked on that were put aside to prioritize, first and foremost, the security of the users' sensitive documents.
## Accomplishments that we're proud of
Since this was the first time any of us worked on blockchain, having a working application make use of such a technology was very satisfying. Some of us also worked with react and front-end for the first time, and others worked with package managers like npm for the first time as well. Apart from the hard skills developed throughout the hackathon, we're also proud of how we distributed the tasks amongst ourselves, allowing us to stay (mostly) busy without overworking anyone.
## What we learned
As it turns out, making a blockchain application is easier than expected! The code was straightforward and ICP's tutorials were easy to follow. Instead, we spent most of our time wrangling with our coding environment, and this experience gave us a lot of insight into computer networks, blockchain organization, CORS, and methods of accessing blockchain applications through code run in standard web apps like React.
## What's next for MirrorPort
Since the conception of MirrorPort, it has always been planned to become a safe place for marginalized youths. Often, they would also lose contact with adults who have mentored or housed them. This app will provide this information to the user, with the consent of the mentor. Additionally, alerts will be implemented to prompt the user to tasks such as booking the annual doctors' appointments and tell them, for example, about openings for suitable housing and jobs. It could also be a tool for tracking progress against their aspirations and providing tailored resources that map out a transition plan. We're looking to migrate the dApp to mobile for more accessibility and portability. 2FA would be implemented for login security. Adding a document translation feature would also make the dApp work well with immigrant documents across borders. | ## Inspiration
In 2020, Canada received more than 200,000 refugees and immigrants. The more immigrants and BIPOC individuals I spoke to, the more I realized, they were only aiming for employment opportunities as cab drivers, cleaners, dock workers, etc. This can be attributed to a discriminatory algorithm that scraps their resumes, and a lack of a formal network to engage and collaborate in. Corporate Mentors connects immigrants and BIPOC individuals with industry professionals who overcame similar barriers as mentors and mentees.
This promotion of inclusive and sustainable economic growth has the potential to create decent jobs and significantly improve living standards, and can also aid in their seamless transition into Canadian society, thereby ensuring that no one gets left behind.
## What it does
We tackle the global rise of unemployment and the increasing barriers to mobility for marginalized BIPOC communities and immigrants, caused by racist and discriminatory machine learning algorithms and a lack of networking opportunities, by creating an innovative web platform that enables people to receive professional mentorship and access job opportunities that are available through networking.
## How we built it
The software architecture model being used is the three-tiered architecture, where we are specifically using the MERN stack. MERN stands for MongoDB, Express, React, and Node, after the four key technologies that make up the stack: React(.js) makes up the top (client-side/frontend) tier, Express and Node make up the middle (application/server) tier, and MongoDB makes up the bottom (database) tier. The system decomposition explains the relationship better below, and the software architecture diagram details the interaction of the varying components in the system.
## Challenges we ran into
The mere fact that we didn't have a UX/UI designer on the team made us realize how difficult it was to create an easy-to-navigate user interface.
## Accomplishments that we're proud of
We are proud of the matching algorithm we created to match mentors with mentees based on their educational qualifications, corporate experience, and desired industry. Additionally, we would also be able to monetize the website utilizing the Freemium subscription model we developed if we stream webinar videos using Accedo.
## What's next for Corporate Mentors
1) The creation of a real mentor pool with experienced corporate professionals is the definite next step.
2) Furthermore, the development of the freemium model (4 hrs of mentoring every month) @ $60 per 6 months or $100 per 12 months.
3) Paid Webinars (price determined by the mentor with 80% going to them) and 20% taken as platform maintenance fee.
4) Create a chat functionality between mentor and mentee using Socket.io and add authorization in the website to limit access to the chats from external parties
5) Create an area for the mentor and mentee to store and share files | ## Inspiration
In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs. This adds up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model is passed back into the website where it is displayed through graphs and other means of data visualization. This created dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and insure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and data-basing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results were shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, python script, and website were all commendable achievements for us. These components on their own are fairly useless, our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using sha256 we securely passed each user a unique and near impossible to reverse hash to allow them to check the status of their evaluation.
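A minimal sketch of that hashing step is below; the exact fields and salting scheme we combined are assumptions here, not the production recipe:

```python
import hashlib
import secrets

def status_token(username: str, infant_id: str, timestamp: str) -> str:
    """Derive a hard-to-reverse lookup token from user data plus a random salt."""
    salt = secrets.token_hex(16)  # stored server-side alongside the record
    payload = f"{username}|{infant_id}|{timestamp}|{salt}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

print(status_token("parent42", "infant-007", "2020-02-15T10:30:00"))
```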
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set-up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application in IOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status. | partial |
## Inspiration
To do our part to spread awareness and inspire the general public to make adjustments that will improve everyone's air quality. Also wanted to demonstrate that these adjustments are not as challenging and in our simulator, it shows that frequent small top ups go a long way.
## What it does
Our website includes information about EVs and a simulation game where you have to drive past EV charging stations for quick top-ups; otherwise the vehicle will slow down to a crawl. EV stations come up fairly frequently, whether it be a regular wall socket or a supercharger station.
## How we built it
Our website was built on repl.it, where one of us worked on a game while the other used HTML/CSS to create the website. After a domain was chosen from domain.com, we started to learn how to create a website using HTML. For some parts, code was taken from free HTML templates and was later modified in an HTML editor. Afterwards, Google Cloud was used to host the website, forcing us to learn how to use servers.
## Challenges we ran into
For starters, almost everything was new for all of us, from learning HTML to learning how to host off of a server. As new coders, we had to spend many hours learning how to code before we could do anything. Once that happened, we had to spend many hours testing code to see if it produced the wanted result. After all that was over, we had to learn how to use Google Cloud, our first experience with servers.
## Accomplishments that we're proud of
Actually having a working website, and having the website be hosted.
## What we learned
HTML, CSS, JS, Server hosting.
## What's next for EVolving Tech
We want to add destinations to give our simulation more complexity and context. This will allow the users to navigate between points of interest in their home city to get a feel of how range measures up to level of charge. | ## Inspiration
One charge of the average EV's battery uses as much electricity as a house uses every 2.5 days. This puts a huge strain on the electrical grid: people usually plug in their car as soon as they get home, during what is already peak demand hours. At this time, not only is electricity the most expensive, but it is also the most carbon-intensive; as much as 20% generated by fossil fuels, even in Ontario, which is not a primarily fossil-fuel dependent region. We can change this: by charging according to our calculated optimal time, not only will our users save money, but save the environment.
## What it does
Given an interval in which the user can charge their car (ex., from when they get home to when they have to leave in the morning), ChargeVerte analyses live and historical data of electricity generation to calculate an interval in which electricity generation is the cleanest. The user can then instruct their car to begin charging at our recommended time, and charge with peace of mind knowing they are using sustainable energy.
## How we built it
ChargeVerte was made using a purely Python-based tech stack. We leveraged various libraries, including requests to make API requests, pandas for data processing, and Taipy for front-end design. Our project pulls data about the electrical grid from the Electricity Maps API in real-time.
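At its core, the recommendation is a sliding-window minimum over forecast carbon intensity. A simplified pandas sketch follows (the hourly numbers are made up; the real values come from the Electricity Maps API):

```python
import pandas as pd

def best_charging_window(intensity: pd.Series, hours_needed: int) -> pd.Timestamp:
    """Return the start time of the lowest-average-carbon window of the given length."""
    rolling = intensity.rolling(window=hours_needed).mean()
    end = rolling.idxmin()                      # rolling mean is labelled at the window's end
    return end - pd.Timedelta(hours=hours_needed - 1)

# Hourly carbon intensity (gCO2eq/kWh) between plug-in and departure, made up for the demo.
index = pd.date_range("2024-01-01 18:00", periods=14, freq="h")
carbon = pd.Series([210, 220, 230, 190, 150, 120, 110, 115, 170, 160, 180, 200, 205, 215], index=index)

print(best_charging_window(carbon, hours_needed=4))   # -> 2024-01-01 22:00
```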
## Challenges we ran into
Our biggest challenges were primarily learning how to handle all the different libraries we used within this project, many of which we had never used before, but were eager to try our hand at. One notable challenge we faced was trying to use the Flask API and React to create a Python/JS full-stack app, which we found was difficult to make API GET requests with due to the different data types supported by the respective languages. We made the decision to pivot to Taipy in order to overcome this hurdle.
## Accomplishments that we're proud of
We built a functioning predictive algorithm, which, given a range of time, finds the timespan of electricity with the lowest carbon intensity.
## What we learned
We learned how to design critical processes related to full-stack development, including how to make API requests, design a front-end, and connect a front-end and backend together. We also learned how to program in a team setting, and the many strategies and habits we had to change in order to make it happen.
## What's next for ChargeVerte
A potential partner for ChargeVerte is power-generating companies themselves. Generating companies could package ChargeVerte and a charging timer, such that when a driver plugs in for the night, ChargeVerte will automatically begin charging at off-peak times, without any needed driver oversight. This would reduce costs significantly for the power-generating companies, as they can maintain a flatter demand line and thus reduce the amount of expensive, polluting fossil fuels needed. | As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool which provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible and anyone can learn information about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction.
We deployed our web app using Microsoft Azure, and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system, where we send users a text whenever the air quality warrants action such as staying indoors or wearing a P2 mask.
We have taken many approaches to improving our platform's scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batch. We used Azure's functions in order to achieve this.
Azure Notebook and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia.
Our web-app was designed using React, Python, and d3js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible, sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a color-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include this feature, however, we would hope to implement this in further stages of Phoenix. | partial |
## Inspiration
school
## What it does
school
## How we built it
school
## Challenges we ran into
school
## Accomplishments that we're proud of
school
## What we learned
school
## What's next for Aula
school | ## Inspiration
For University of Toronto students, the campus can be so large that it can often be difficult to find quiet spaces to study or louder spaces to spend some time with friends. We created UofT SoundSpace to help students find quiet and loud spaces around campus, reducing conflicts between those who want to study and those who want to talk.
## What it does
SoundSpace relies on having many students record small audio samples at different times to understand when spaces are the quietest or the loudest. Then, when a student needs to find a quiet place to work, or a loud environment to relax in, they can use the app to figure out what locations would be the best choice.
## How we built it
We used Javascript, React Native, Python, GraphQL, and Flask. Our frontend was built with React Native for it to be cross-platform, and we sought to use Python to perform audio analysis to get key details about the audio average amplitude, volume spikes, or reverberation. These details would inform the user on what locations they would prefer.
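A minimal sketch of the kind of analysis we mean is below, assuming 16-bit mono WAV clips; the spike rule is just an illustrative heuristic:

```python
import wave
import numpy as np

def clip_loudness(path: str):
    """Return the average RMS amplitude and a crude count of volume spikes in a WAV clip."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
        rate = wf.getframerate()
    samples = samples.astype(np.float64) / 32768.0   # normalise to [-1, 1]
    frame = rate // 10                               # 100 ms analysis frames
    frames = samples[: len(samples) - len(samples) % frame].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    spikes = int((rms > 3 * rms.mean()).sum())
    return float(rms.mean()), spikes

avg, spikes = clip_loudness("sample_recording.wav")
print(f"average RMS: {avg:.4f}, spikes: {spikes}")
```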
## Challenges we ran into
Many of the difficulties arose from the collaboration between the front-end and back-end. The front-end of the system works great and looks nice, and we figured out how to perform audio analysis using Python, but we ran into problems transferring the audio files from the front-end to the back-end for processing.
In addition, we ran into problems getting geo-location to work with React Native, as we kept getting errors from extraneous files that we weren't using.
## Accomplishments that we're proud of
Learning new technologies and moving away from our comfort zones to try and build a practical project that can be accessible to anyone. Most of us had little experience with React Native and back-end development, so it was a valuable experience to work through our issues and put together a working product.
## What's next for UofT SoundSpace
We want to have more functionalities that expand to more buildings, specific floors, and rooms. This can ultimately expand to more post-secondary institutions.
Here is a list of the things that we wanted to get done, but couldn't in the time span:
* Login/Register Authentication
* Connection between front-end and back-end
* Getting the current geo-location of the user
* Uploading our data to a database
* Viewing all of the data on the map
* Using Co:here to help the user figure out what locations are best (ex. Where is the quietest place near me? --- The quietest place near you is Sidney Smith.)
Sadly, our group ran into many bugs and installation issues with React Native that halted our progression early on, so we were not able to complete a lot of these features before the deadline.
However, we hope that our idea is inspiring, and believe that our app would serve as a solid foundation for this project given more time! | ## Inspiration 💥
Our inspiration is to alter the way humans have acquired knowledge and skills over the last hundred years. Instead of reading or writing, we devised of a method fo individuals to teach others through communication and mentoring. A way that not only benefits those who learn but also helps them achieve their goals.
## What it does 🌟
Intellex is a diverse skill swapping platform for those eager to learn more. In this era, information is gold. Your knowledge is valuable, and people want it. For the price of a tutoring session, you can receive back a complete and in depth tutorial on whatever you want. Join one on one video calls with safe and rated teachers, and be rewarded for learning more.
We constantly move away from agencies and the government and thus Intellex strives to decentralize education. Slowly, the age old classroom is changing. Intellex presents a potential step towards education decentralization by incentivizing education with NFT rewards which include special badges and a leaderboard.
## How we built it 🛠️
We began with planning out our core features and determining what technologies we would use. Later, we created a Figma design to understand what pages we would need for our project, planning our backend integration to store and fetch data from a database.
We used Next.js to structure the project, which uses React internally. We used TypeScript for type safety across our project, which was a major help when it came to debugging. Tailwind CSS was leveraged for its easy-to-use classes. We also utilized Framer Motion for the landing page animations.
## Challenges we ran into 🌀
The obstacles we faced were coming up with a captivating idea, which caused us to lose productivity. We've also faced difficult obstacles in languages we're unfamiliar with, and some of us are also beginners which created much confusion during the event. Time management was really difficult to cope with because of the many changes in plans, but overall we have improved our knowledge and experience.
## Accomplishments that we're proud of 🎊
We are proud of building a very clean, functional, and modern-looking user interface for Intellex, allowing users to experience an intuitive and interactive educational environment. This aligns seamlessly with our future use of Whisper AI to enhance user interactions.
To ensure optimized site performance, we're implementing Next.js with Server-Side Rendering (SSR), providing an extremely fast and responsive feel when using the app. This approach not only boosts efficiency but also improves the overall user experience, crucial for educational applications.
In line with the best practices of React, we're focusing on using client-side rendering at the most intricate points of the application, integrating it with mock data initially. This setup is in preparation for later fetching real-time data from the backend, including interactive whiteboard sessions and peer ratings. Our aim is to create a dynamic, adaptive learning platform that is both powerful and easy to use, reflecting our commitment to pioneering in the educational technology space.
## What we learned 🧠
Besides the technologies that were listed above, we as a group learned an exceptional amount of information in regards to full stack web applications. This experience marked the beginning of our full stack journey and we took it approached it with a cautious approach, making sure we understood all aspects of a website, which is something that a lot of people tend to overlook. We learned about the planning process, backend integration, REST API's, etc. Most importantly, we learned about the importance of having cooperative and helpful team that will have your back in building out these complex apps on time.
## What's next for Intellex ➡️
We fully plan to build out the backend of Intellex to allow for proper functionality using Whisper AI. This innovative technology will enhance user interactions and streamline the learning process. Regarding the product itself, there are countless educational features that we want to implement, such as an interactive whiteboard for real-time collaboration and a comprehensive rating system to allow peers to see and evaluate each other's contributions. These features aim to foster a more engaging and interactive learning environment. Additionally, we're exploring the integration of adaptive learning algorithms to personalize the educational experience for each user. This is a product we've always wanted to pursue in some form, and we look forward to bringing it to life and seeing its positive impact on the educational community. | losing |
Need to filter my messages based on date, sender and content
I looked on the Play Store; not many apps were there, and most of them were paid.
So I made a free version for all. | ## Inspiration
Exercising is very important to living a healthy and happy life. A lack of motivation and consistency is a key factor that prevents people from reaching their fitness goals. There are many apps that try to help motivate aspiring athletes, but they often just pit people against each other and focus on raw performance. People, however, are not equally athletic and therefore the progress should not be based on absolute performances. Ananke uses a new approach to encourage people to improve their fitness.
## What it does
Ananke does not determine your progress based on absolute miles ran or the time spent on a road bike, but more on invested efforts instead. If a 2 mile run is exhausting for you, that is fine! And if you managed to run 3 miles the next day, we reward your progress. That's how Ananke will continuously empower you to achieve more - every single day. The strongest competitor is always yourself!
To suggest the optimal workouts for you that suit your performance level, Ananke takes your fitness history into account. Ananke will provide you with a suggestion for a light, medium and challenging workout, and it is up to you to choose the preferable workout depending on your mood and well-being. Whatever workout you choose, it is going to be an advancement and propel you forward on your journey to a blossomed life.
Our app-architecture has a functional, minimalistic design. By completing suggested workouts and pushing yourself to your limits, you will grow plants. The more you exercise, the greener your profile becomes. Ananke will analyse the fitness data corresponding to the accomplished workout and determine an intensity-score based on factors other than pure data - we use an algorithm to determine how hard you have worked. This score will have an influence on the growth of your plants. Not only do more efforts make your plants grow faster, but also make you happier!
We also want to incentivize rest and community, so we built a friendship system where friends can help water your plants and encourage you to keep working hard.
## How we built it
Ananke is a (web-)app which has been mainly built in React. We use an API provided by TerraAPI which enables us to draw fitness data from mobile wearables. In our case, we integrated a FitBit as well as an Apple Watch as our data sources. Once the fitness data is extracted from the API it is transferred to a web-hook which works as the interface between the local and cloud server of TerraAPI. The data is then processed on a server and subsequently passed to an artificial intelligence application called OdinAI which is also a product of TerraAPI. The AI will determine suitable workout suggestions based on the API data. Ultimately, the output is presented on the frontend application.
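A stripped-down sketch of the webhook receiver is below; the payload field names and the effort formula are placeholders for illustration, not Terra's actual schema or our production scoring:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/terra-webhook", methods=["POST"])
def terra_webhook():
    """Receive activity payloads pushed to the webhook and turn them into an effort score."""
    payload = request.get_json(force=True)
    avg_hr = payload.get("heart_rate_avg", 0)          # placeholder field names
    duration_min = payload.get("duration_minutes", 0)
    # Toy scoring: longer and harder sessions grow the plant faster.
    effort = round(duration_min * max(avg_hr - 60, 0) / 100, 1)
    return jsonify({"effort_score": effort, "plant_growth": min(effort / 10, 1.0)})

if __name__ == "__main__":
    app.run(port=5000)
```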
## Challenges we ran into
The API provided us with A LOT of data, so we encountered many challenges with data processing. Also the server architecture, as well as the communication between front- and backend posed some challenges for us to overcome.
## Accomplishments that we're proud of
We managed to create a beautiful frontend design while maintaining ultimate functionality in the code. The server architecture is stable and works suitably for our purposes. Most importantly we are most proud of our team work: Everybody contributed to different components of our application and we worked very efficiently within our team. We are located across the world from each other and half the team hadn't met prior, yet we were able to work together very well and create an app we are very proud of.
## What we learned
The Devpost submission form doesn't auto-save :/
## What's next for Ananke
To determine the intensity-score of a workout it is advantageous to integrate an AI-driven tool which can recognise tendencies i.e. in your heart-rate dynamic and thereby indicate progress more precisely. If your heart-rate decreases while other factors like distance, pace, etc. are taken as constant, it would indicate an extended endurance. This is important for further workout suggestions. We would also like to incorporate more of TerraAPI's data access tools. For example, one interesting feature that could be pushed in the future is the ability to start a workout from the web app. TerraAPI has a function that can start workouts remotely by providing activity and time. This would further integrate the user into the app and allow them to start their suggested workouts easily. We'd also like to integrate a more robust community and messaging system, as well as more rewards like customizing plants. | ## Inspiration
In one calendar year, approximately 1 in 6 children are sexually victimized within the United States. Unfortunately, technology has enabling instant messaging and social media has been identified as a large source of where these grave events trace back. With this information, we knew that helping the efforts of undermining sexual predators was a must, and one that could additionally be helped with through machine learning and blockchain technologies, combined with an easy-to-use user interface.
## What it does
This app basically can take as an input a sentence, phrase or a few words from a conversation, and using Text analysis and machine learning can determine whether the dialogue in the conversation may be potentially considered harassment. If so, the input transcript is stored on a blockchain which can then generate a report that can be reviewed and signed by authorities to verify the harassment claim, and therefore this becomes a proof of any subsequent claim of abuse or harassment.
## How we built it
1: scraping the web for dialogue and conversation data
2: extracting raw chat logs using STDLib from perverted justice (to catch a predator NBC series) archives which resulted in actual arrests and convictions (600+ convictions)
3; curating scraped and extracted data into a labelled dataset
4: building a neural network (3 layers, 40 neurons)
5: using the nltk toolkit to extract keywords, stems and roots from the corpus
6: sanitizing input data
7: training neural network
8: evaluating neural network and retraining with modified hyperparameters
9: curating and uploading dataset to google containers
10: setup automl instance on google cloud
11: train a batch of input corpora with automl
12: evaluate model, update overall corpus and retrain automl model
13: create a blockchain to store immutable and verified copies of the transcript along with author
14: wrap machine learning classifiers around with flask server
15: attach endpoints of blockchain service as pipelines from classifiers.
16: setup frontend for communication and interfacing
## Challenges we ran into
extracting and curating raw conversation data is slow, tedious and cumbersome. To do this well, a ton of patience is required.
the ARK blockchain does not have smart contracts fully implemented yet. we used some shortcuts and hacky tricks, but ideally the harassment reports would be generated using a solidity-like contract on the blockchain
Google's AutoML, although promising, takes a very long time to train a model (~7 hours for one model)
There is a serious paucity of publicly available social media interaction dialogue corpora, especially for one to one conversations. Those that are publicly available often have many labeling, annotation and other errors which are challenging to sanitize.Google cloud SDK libraries, especially for newer products like AutoML often have conflicts with earlier versions of the google cloud SDK (atleast from what we saw using the python sdk)
## Accomplishments that we're proud of
cross validation gave our model a very high score using the test set. However, there needs to me much more data from a generic (non-abuse/harassment) conversation corpus as it seems the model is "eagerly" biased towards harassment label.
tl.dr: the model works for almost all phrases we considered as "harassment".
The scraper and curating code for the perverted justice transcripts are now publicly available functions on STDLib. these can be used for future research and development work
## What we learned
Scraping, extracting and curating data actually consumes most of the time in a machine learning project.
## What's next for To Blockchain a Predator
integration with current chat interfaces like Facebook messenger, WhatsApp, Instagram etc. An immutable record of possible harassing messages, especially to children using these platforms is a very useful tool to have, especially with the increasing prevalence of sexual predators using social media to interact with potential victims.
## Video Link
<https://splice.gopro.com/v?id=bJ2xdG> | losing |
The simple pitch behind ResQ: disaster recovery and evacuation are hard.
People crowd the streets making it difficult to quickly evacuate an area. Additionally, for those who choose not to evacuate, they face the possibility of being difficult to find for rescuers. What we've built is a 3 pronged approach.
1. ResQ Responder: An android application for rescuers that presents them with a triage list prepared by the ResQ ML Engine so as to attempt to save the most lives in the most efficient order. We also provide the ability to view this triage list in AR, making it easy to spot evacuees stuck on roofs or in hard to see places.
2. ResQ: The user-facing application. Simplicity is the goal. Only asking for user information to create a medical ID and rescue profile we use the application to record their GPS coordinates for rescue, as well as present them with push notifications about impending rescue. An evacuee can also use the application to broadcast his/her location to others.
3. ResQ ML Engine: The algorithms behind the ResQ platform. These allow us to effectively rank, triage and save victims while minimizing loss of life. | # Doctors Within Borders
### A crowdsourcing app that improves first response time to emergencies by connecting city 911 dispatchers with certified civilians
## 1. The Challenge
In Toronto, ambulances get to the patient in 9 minutes 90% of the time. We all know
that the first few minutes after an emergency occurs are critical, and the difference of
just a few minutes could mean the difference between life and death.
Doctors Within Borders aims to get the closest responder within 5 minutes of
the patient to arrive on scene so as to give the patient the help needed earlier.
## 2. Main Features
### a. Web view: The Dispatcher
The dispatcher takes down information about an ongoing emergency from a 911 call, and dispatches a Doctor with the help of our dashboard.
### b. Mobile view: The Doctor
A Doctor is a certified individual who is registered with Doctors Within Borders. Each Doctor is identified by their unique code.
The Doctor can choose when they are on duty.
On-duty Doctors are notified whenever a new emergency occurs that is both within a reasonable distance and the Doctor's certified skill level.
## 3. The Technology
The app uses *Flask* to run a server, which communicates between the web app and the mobile app. The server supports an API which is used by the web and mobile app to get information on doctor positions, identify emergencies, and dispatch doctors. The web app was created in *Angular 2* with *Bootstrap 4*. The mobile app was created with *Ionic 3*.
Created by Asic Chen, Christine KC Cheng, Andrey Boris Khesin and Dmitry Ten. | ## Inspiration
In times of disaster, the capacity of rigid networks like cell service and internet dramatically decreases at the same time demand increases as people try to get information and contact loved ones. This can lead to crippled telecom services which can significantly impact first responders in disaster struck areas, especially in dense urban environments where traditional radios don't work well. We wanted to test newer radio and AI/ML technologies to see if we could make a better solution to this problem, which led to this project.
## What it does
Device nodes in the field network to each other and to the command node through LoRa to send messages, which helps increase the range and resiliency as more device nodes join. The command & control center is provided with summaries of reports coming from the field, which are visualized on the map.
## How we built it
We built the local devices using Wio Terminals and LoRa modules provided by Seeed Studio; we also integrated magnetometers into the devices to provide a basic sense of direction. Whisper was used for speech-to-text with Prediction Guard for summarization, keyword extraction, and command extraction, and trained a neural network on Intel Developer Cloud to perform binary image classification to distinguish damaged and undamaged buildings.
## Challenges we ran into
The limited RAM and storage of microcontrollers made it more difficult to record audio and run TinyML as we intended. Many modules, especially the LoRa and magnetometer, did not have existing libraries so these needed to be coded as well which added to the complexity of the project.
## Accomplishments that we're proud of:
* We wrote a library so that LoRa modules can communicate with each other across long distances
* We integrated Intel's optimization of AI models to make efficient, effective AI models
* We worked together to create something that works
## What we learned:
* How to prompt AI models
* How to write drivers and libraries from scratch by reading datasheets
* How to use the Wio Terminal and the LoRa module
## What's next for Meshworks - NLP LoRa Mesh Network for Emergency Response
* We will improve the audio quality captured by the Wio Terminal and move edge-processing of the speech-to-text to increase the transmission speed and reduce bandwidth use.
* We will add a high-speed LoRa network to allow for faster communication between first responders in a localized area
* We will integrate the microcontroller and the LoRa modules onto a single board with GPS in order to improve ease of transportation and reliability | winning |
## Inspiration
In today's fast-paced world, we're all guilty of losing ourselves in the vast digital landscape. The countless hours we spend on our computers can easily escape our attention, as does the accumulating stress that can result from prolonged focus. Screen Sense is here to change that. It's more than just a screen time tracking app; it's your personal guide to a balanced digital life. We understand the importance of recognizing not only how much time you spend on your devices but also how these interactions make you feel.
## What it does
Screen Sense is a productivity tool that monitors a user's emotions while tracking their screen time on their device. Our app analyzes visual data using the Hume API and presents it to the user in an simple graphical interface that displays their strongest emotions along with a list of the user's most frequented digital activities.
## How we built it
Screen Sense is a desktop application built with the React.js, Python, and integrates the Hume AI API. Our project required us to execute of multiple threads to handle tasks such as processing URL data from the Chrome extension, monitoring active computer programs, and transmitting image data from a live video stream to the Hume AI.
## Challenges we ran into
We encountered several challenges during the development process. Initially, our plan involved creating a Chrome extension for its cross-platform compatibility. However, we faced security obstacles that restricted our access to the user's webcam. As a result, we pivoted to developing a desktop application capable of processing a webcam stream in the background, while also serving as a fully compatible screen time tracker for all active programs.
Although nearly half of the codebase was written in Python, none of our group members had ever coded using that language. This presented a significant learning curve, and we encountered a variety of issues. The most complex challenge we faced was Python's nature as a single-threaded language.
## Accomplishments that we're proud of
As a team of majority beginner coders, we were able to create a functional productivity application that integrates the Hume AI, especially while coding in a language none of us were familiar with beforehand.
## What we learned
Throughout this journey, we gained valuable experience in adaptability, as demonstrated by how we resolved our HTTP communication issues and successfully shifted our development platform when necessary.
## What's next for Screen Sense
Our roadmap includes refining the user interface and adding more visual animations to enhance the overall experience, expanding our reach to mobile devices, and integrating user notifications to proactively alert users when their negative emotions reach unusually high levels. | ## Table Number
358
## Inspiration
As international students, many of us on the TheraAI team have traveled thousands of miles to build a future for ourselves. In the process, we often become so busy that we can't spend as much time with our parents, who miss us deeply as they grow older. This personal experience inspired us to create a solution using what we do best: coding. We wanted to help people like our parents, who need constant companionship and emotional support.
## What it does
TheraAI enables users to interact with an AI companion that uses advanced facial and audio recognition technologies to assess their emotions and respond accordingly, improving their mood over time.
## How we built it
On the frontend, we used React, Streamlit, and Figma for webpage creation and data visualization. On the backend, Flask handled API requests, OpenCV provided visual inputs for the HUME API to identify emotions, and Pandas and NumPy managed data on the Streamlit dashboard.
## Challenges we ran into
We faced challenges in integrating audio and visual inputs simultaneously, dynamically updating the dashboard with real-time data, and pipelining data between Streamlit and React.
## Accomplishments that we're proud of
Two of our team members participated in their first hackathon, where they worked on the front end and rapidly picked up React and Figma to produce the best user experience possible. We were also proud to have used OpenCV in conjunction with HUME to detect emotions in real time. Finally, despite the fact that none of the members of our team had previously met, we worked incredibly well together.
## What we learned
Some team members learned React whilst others learned Streamlit for the first time. We also learned how to smoothly integrate OpenCV, the HUME API, and backend-frontend connections.
## What's next for TheraAI - Personal Therapist
Our next steps include addressing the challenges we faced and deploying TheraAI to the public to support the emotional well-being of elderly individuals. | A long time ago (last month) in a galaxy far far away (literally my room) I was up studying late for exams and decided to order some hot wings. With my food being the only source of joy that night you can imaging how devastated I was to find out that they were stolen from the front lobby of my building! That's when the idea struck to create a secure means of ordering food without the stress of someone else stealing it.
## What it does
Locker as a service is a full hardware and software solution that intermediates a the food exchange between seller and buyer. Buyers can order food from our mobile app, the seller receives this notification on their end of the app, fills the box with its contents, and locks the box. The buyer is notified that the order is ready and using face biometrics receives permission to open the box and safely. The order can specify whether the food needs to be refrigerated or heated and the box's temperature is adjusted accordingly. Sounds also play at key moments in the exchange such as putting in a cold or hot item as well as opening the box.
## How we built it
The box is made out of cardboard and uses a stepper motor to open and close the main door, LED's are in the top of the box to indicate it's content status and temperature. A raspberry pi controls these devices and is also connected to a Bluetooth speaker which is also inside of the box playing the sounds. The frontend was developed using Flutter and IOS simulator. Commands from the front end are sent to Firebase which is a realtime cloud database which can be connected to the raspberry pi to send all of the physical commands. Since the raspberry pi has internet and Bluetooth access, it can run wirelessly (with the exception of power to the pi)
## Challenges we ran into
A large challenge we ran into was having the raspberry-pi run it's code wirelessly. Initially we needed to connect to VNC Viewer to via ethernet to get a GUI. Only after we developed all the python code to control the hardware seamlessly could we disconnect the VNC viewer and let the script run autonomously. Another challenge we ran into was getting the IoS simulated app to run on a real iphone, this required several several YouTube tutorials and debugging could we get it to work.
## Accomplishments that we're proud of
We are proud that we were able to connect both the front end (flutter) and backend (raspberry pi) to the firebase database, it was very satisfying to do so.
## What we learned
Some team members learned about mobile development for the first time while others learned about control systems (we had to track filled state, open state, and led colour for 6 stages of the cycle)
## What's next for LaaS | losing |
## Inspiration
Why not? We wanted to try something new. Lip reading and the Symphonic Labs API was something we hadn't seen before. **We wanted to see how far we could push it!**
## What it does
Like singing? LLR is for you! Don’t like singing? LLR IS STILL FOR YOU! Fight for the top position in a HackTheNorth lip-syncing challenge. How does it work? Very simple, very demure:
1. Choose a song. Harder songs -> more points.
2. Lip sync to a 10-15 second clip of the song. (Don’t mumble!)
3. LLR reads your lips to determine your skill of lip-syncing to popular songs!
4. Scan your HTN QR code to submit your score and watch as you rise in ranking.
## How we built it
LLR is a web app built with Next.js, enabling the rapid development of reactive apps with backend capability. The Symphonic API is at the core of LLR, powering the translation from lip movement to text. We’re using OpenAI’s embedding models to determine the lip sync’s accuracy/similarity and MongoDB as the database for score data.
## Challenges we ran into
While the Symphonic API is super cool, we found it slow sometimes. We found that a 10-second video took around 5 seconds to upload and 30 seconds to translate. This just wasn’t fast enough for users to get immediate feedback. We looked at Symphonic Lab’s demo of Mamo, and it was much faster. We delved deeper into Mamo’s network traffic, we found that it used a much faster web socket API. By figuring out the specifications of this newfound API, we lowered our latency from 30 seconds to 7 seconds with the same 10-second clip.
## Accomplishments that we're proud of
The friends we made along the way.
## What we learned
Over the course of this hackathon, we learned from workshops and our fellow hackers. We learned how to quickly create an adaptive front end from the RWD workshop and were taught how to use network inspection to reverse engineer API processes.
## What's next for LipsLips Revolution
We hope to integrate with the Spotify API or other music services to offer a larger variety of songs. We also wish to add a penalty system based on the amount of noise made. It is, after all, lip-syncing and not just singing. We do hope to turn this into a mobile app! It’ll be the next TikTok, trust… | ## Inspiration
We've all been there - racing to find an empty conference room as your meeting is about to start, struggling to hear a teammate who's decided to work from a bustling coffee shop, or continuously muting your mic because of background noise.
As the four of us all conclude our internships this summer, we’ve all experienced these over and over. But what if there is a way for you to simply take meetings in the middle of the office…
## What it does
We like to introduce you to Unmute, your solution to clear and efficient virtual communication. Unmute transforms garbled audio into audible speech by analyzing your lip movements, all while providing real-time captions as a video overlay. This means your colleagues and friends can hear you loud and clear, even when you’re not. Say goodbye to the all too familiar "wait, I think you're muted".
## How we built it
Our team built this application by first designing it on Figma. We built a React-based frontend using TypeScript and Vite for optimal performance. The frontend captures video input from the user's webcam using the MediaRecorder API and sends it to our Flask backend as a WebM file. On the server side, we utilized FFmpeg for video processing, converting the WebM to MP4 for wider compatibility. We then employed Symphonic's API to transcribe visual cues.
## Challenges we ran into
Narrowing an idea was one of the biggest challenges. We had many ideas, including a lip-reading language course, but none of them had a solid use case. It was only after we started thinking about problems we encountered in our daily lives did we find our favorite project idea.
Additionally, there were many challenges on the technical side with using Flask and uploading and processing videos.
## Accomplishments that we're proud of
We are proud that we were able to make this project come to life.
## Next steps
Symphonic currently does not offer websocket functionality, so our vision of making this a real-time virtual meeting extension is not yet realizable. However, when this is possible, we are excited for the improvements this project will bring to meetings of all kinds. | ## Inspiration
We wanted to make a natural language processing app that inferred sentiment from vocal performance, while incorporating all of this within a game.
## What it does
You can upload a YouTube video that will be parsed into text, and then challenge friends to make a rendition of what the text entails. Live scores are always changing via the realtime database.
## How we built it
Anthony used Ionic 3 to make a mobile app that connects with Firebase to send challenge and user data, while Steven and Roy developed a REST API in Node.js that handles the transcription processing and challenge requests.
## Challenges we ran into
Heroku is terribly incompatible with FFMPEG, the only reasonable API for media file conversion. On top of that, Heroku was the only online server option to provide a buildpack for FFMPEG.
## Accomplishments that we're proud of
The UI runs smoothly and Firebase loads and transmits the data rapidly and correctly.
## What we learned
If you want to get a transcript for a natural language, you should do so in-app before processing the data on a REST API.
## What's next for Romeo
We are excited to implement invitation features that rely on push notifications. For instance, I can invite my friends via push to compete against my best rendition of a scene from Romeo and Juliet. | partial |
## Inspiration
Survival from out-of-hospital cardiac arrest remains unacceptably low worldwide, and it is the leading cause of death in developed countries. Sudden cardiac arrest takes more lives than HIV and lung and breast cancer combined in the U.S., where survival from cardiac arrest averages about 6% overall, taking the lives of nearly 350,000 annually. To put it in perspective, that is equivalent to three jumbo jet crashes every single day of the year.
For every minute that passes between collapse and defibrillation survival rates decrease 7-10%. 95% of cardiac arrests die before getting to the hospital, and brain death starts 4 to 6 minutes after the arrest.
Yet survival rates can exceed 50% for victims when immediate and effective cardiopulmonary resuscitation (CPR) is combined with prompt use of a defibrillator. The earlier defibrillation is delivered, the greater chance of survival. Starting CPR immediate doubles your chance of survival. The difference between the current survival rates and what is possible has given rise to the need for this app - IMpulse.
Cardiac arrest can occur anytime and anywhere, so we need a way to monitor heart rate in realtime without imposing undue burden on the average person. Thus, by integrating with Apple Watch, IMpulse makes heart monitoring instantly available to anyone, without requiring a separate device or purchase.
## What it does
IMpulse is an app that runs continuously on your Apple Watch. It monitors your heart rate, detecting for warning signs of cardiac distress, such as extremely low or extremely high heart rate. If your pulse crosses a certain threshold, IMpulse captures your current geographical location and makes a call to an emergency number (such as 911) to alert them of the situation and share your location so that you can receive rapid medical attention. It also sends SMS alerts to emergency contacts which users can customize through the app.
## How we built it
With newly-available access to Healthkit data, we queried heart sensor data from the Apple Watch in real time. When these data points are above or below certain thresholds, we capture the user's latitude and longitude and make an HTTPRequest to a Node.js server endpoint (currently deployed to heroku at <http://cardiacsensor.herokuapp.com>) with this information. The server uses the Google Maps API to convert the latitude and longitude values into a precise street address. The server then makes calls to the Nexmo SMS and Call APIs which dispatch the information to emergency services such as 911 and other ICE contacts.
## Challenges we ran into
1. There were many challenges testing the app through the XCode iOS simulators. We couldn't find a way to simulate heart sensor data through our laptops. It was also challenging to generate Location data through the simulator.
2. No one on the team had developed in iOS before, so learning Swift was a fun challenge.
3. It was challenging to simulate the circumstances of a cardiac arrest in order to test the app.
4. Producing accurate and precise geolocation data was a challenge and we experimented with several APIs before using the Google Maps API to turn latitude and longitude into a user-friendly, easy-to-understand street address.
## Accomplishments that we're proud of
This was our first PennApps (and for some of us, our first hackathon). We are proud that we finished our project in a ready-to-use, demo-able form. We are also proud that we were able to learn and work with Swift for the first time. We are proud that we produced a hack that has the potential to save lives and improve overall survival rates for cardiac arrest that incorporates so many different components (hardware, data queries, Node.js, Call/SMS APIs).
## What's next for IMpulse
Beyond just calling 911, IMpulse hopes to build out an educational component of the app that can instruct bystanders to deliver CPR. Additionally, with the Healthkit data from Apple Watch, IMpulse could expand to interact with a user's pacemaker or implantable cardioverter defibrillator as soon as it detects cardiac distress. Finally, IMpulse could communicate directly with a patient's doctor to deliver realtime heart monitor data. | ## Inspiration
The world is going through totally different times today, and there is more need to give back and help each other during these uncertain times. Volassis was born during the COVID-19 pandemic when so much volunteering was happening but not a single system was available to enable and motivate individuals to give back to the community. The project started ideation during my Sophomore year when I was volunteering with an organization for senior care called "Sunrise Senior Living", and all their records were manual. I started to automate the attendance system, and in a matter of few months many new features required for full online volunteering were added during the COVID-19 times.
## What it does
The system provides the youth with a completely automated one-stop platform for finding volunteer opportunities, volunteering and get approved volunteer hours in matter of few clicks. The system was developed out of a need to improve the existing archaic email based manual systems, which make this process very cumbersome and time-consuming.
## Why I built it
Volassis is a centralized system developed by Lakshya Gupta from Tompkins High School in Katy, Texas, who recognized the need for such a system while volunteering himself. Lakshya says, "I have the knack to recognize limitations in existing systems, and I feel almost irresistible drive to fix and improve them. With my passion in Computer Science and a good hold on several software technologies, I wanted to create an enhanced and easy-to-use volunteering system that not only made finding opportunities easier but also provided one-stop platform for all volunteer needs."
## How I built it
I built it by starting off with designing the database tables in MYSQL. I designed several tables to track the users logging and give an analysis based off of this. Then, I started developing the REST API since this would be the backend of my project, allowing me to call functions to view the users logs and give analysis from the database content through the REST API function calls. After this, I starting developing the react native app and called the REST API functions to be able to keep track of the user entered data and view the database content through the app. And finally, I made a website using mostly HTML, Javascript, and Typescript in order to allow the user to see their logged in hours on the app through the website. The link to the website is volassis.com. Github repository link is <https://github.com/LakshyaGupta/Volassis-REST-API/> and <https://github.com/LakshyaGupta/Volassis-Mobile-Application/>.
## Challenges I ran into
Some challenges I ran into were initially getting started with the API, building an API is very tough in the sense that many errors occur when you first run the API. Another challenge I ran into was efficiently several new languages in a short time frame and being able to effectively deploy the project in a timely manner. I believe the toughest challenges of the project were being able to finalize the program, make the website, design the database table, and running REST API.
## Accomplishments that I'm proud of
Volassis is a centralized system developed by Lakshya Gupta from Tompkins High School in Katy, Texas, who recognized the need for such a system while volunteering himself. Lakshya says, "I have the knack to recognize limitations in existing systems, and I feel almost irresistible drive to fix and improve them. With my passion in Computer Science and a good hold on several software technologies, I wanted to create an enhanced and easy-to-use volunteering system that not only made finding opportunities easier but also provided one-stop platform for all volunteer needs."
## What I learned
I learned several new languages such as React Native, Typescript, and js, which I believe will be truly beneficial to me when pursuing computer science in college and later in a job. Through this hackathon, my passion for computer science has greatly increased.
## What's next for Volassis
Currently, 4 organizations are using my system for their volunteering needs, and I am in the process of contacting more organizations to assist them during these difficult times. | ## Inspiration
Recently, we came across a BBC article reporting a teenager having seizure was saved by online gamer - 5,000 miles away in Texas, via alerting medical services. Surprisingly, the teenager's parents weren't aware of his condition despite being in the same home.
## What it does
Using remote photoplethysmography to measure one's pulse rate using webcam, and alerting a friend/guarding via SMS if major irregularities are detected.
## How we built it
We created a React app that implements some openCV libraries in order to control the webcam and detect the user's face to try to detect the heart rate. We deployed onto Google Cloud Engine and made use of StdLib to handle sending SMS messages.
## Challenges we ran into
Java > Javascript
## Accomplishments that we're proud of
Cloud deployment, making a React app, using Stdlib, using Figma.
## What we learned
## What's next for ThumpTech | partial |
# Relive and Relearn
*Step foot into a **living photo album** – a window into your memories of your time spent in Paris.*
## Inspiration
Did you know that 70% of people worldwide are interested in learning a foreign language? However, the most effective learning method, immersion and practice, is often challenging for those hesitant to speak with locals or unable to find the right environment. We sought out to try and solve this problem by – even for experiences you yourself may not have lived; While practicing your language skills and getting personalized feedback, enjoy the ability to interact and immerse yourself in a new world!
## What it does
Vitre allows you to interact with a photo album containing someone else’s memories of their life! We allow you to communicate and interact with characters around you in those memories as if they were your own. At the end, we provide tailored feedback and an AI backed DELF (Diplôme d'Études en Langue Française) assessment to quantify your French capabilities. Finally, it allows for the user to make learning languages fun and effective; where users are encouraged to learn through nostalgia.
## How we built it
We built all of it on Unity, using C#. We leveraged external API’s to make the project happen.
When the user starts speaking, we used ChatGPT’s Whisper API to transform speech into text.
Then, we fed that text into co:here, with custom prompts so that it could role play and respond in character.
Meanwhile, we are checking the responses by using co:here rerank to check on the progress of the conversation, so we knew when to move on from the memory.
We store all of the conversation so that we can later use co:here classify to give the player feedback on their grammar, and give them a level on their french.
Then, using Eleven Labs, we converted co:here’s text to speech and played it for the player to simulate a real conversation.
## Challenges we ran into
VR IS TOUGH – but incredibly rewarding! None of our team knew how to use Unity VR and the learning curve sure was steep. C# was also a tricky language to get our heads around but we pulled through! Given that our game is multilingual, we ran into challenges when it came to using LLMs but we were able to use and prompt engineering to generate suitable responses in our target language.
## Accomplishments that we're proud of
Figuring out how to build and deploy on Oculus Quest 2 from Unity
Getting over that steep VR learning curve – our first time ever developing in three dimensions
Designing a pipeline between several APIs to achieve desired functionality
Developing functional environments and UI for VR
## What we learned
* 👾 An unfathomable amount of **Unity & C#** game development fundamentals – from nothing!
* 🧠 Implementing and working with **Cohere** models – rerank, chat & classify
* ☎️ C# HTTP requests in a **Unity VR** environment
* 🗣️ **OpenAI Whisper** for multilingual speech-to-text, and **ElevenLabs** for text-to-speech
* 🇫🇷🇨🇦 A lot of **French**. Our accents got noticeably better over the hours of testing.
## What's next for Vitre
* More language support
* More scenes for the existing language
* Real time grammar correction
* Pronunciation ranking and rating
* Change memories to different voices
## Credits
We took inspiration from the indie game “Before Your Eyes”, we are big fans! | ## Inspiration
A chatbot is often described as one of the most advanced and promising expressions of interaction between humans and machines. For this reason we wanted to create one in order to become affiliated with Natural Language Processing and Deep-Learning through neural networks.
Due to the current pandemic, we are truly living in an unprecedented time. As the virus' spread continues, it is important for all citizens to stay educated and informed on the pandemic. So, we decided to give back to communities by designing a chatbot named Rona who a user can talk to, and get latest information regarding COVID-19.
(This bot is designed to function similarly to ones used on websites for companies such as Amazon or Microsoft, in which users can interact with the bot to ask questions they would normally ask to a customer service member, although through the power of AI and deep learning, the bot can answer these questions for the customer on it's own)
## What it does
Rona answers questions the user has regarding COVID-19.
More specifically, the training data we fed into our feed-forward neural network to train Rona falls under 5 categories:
* Deaths from COVID-19
* Symptoms of COVID-19
* Current Cases of COVID-19
* Medicines/Vaccines
* New Technology/Start-up Companies working to fight coronavirus
We also added three more categories of data for Rona to learn, those being greetings, thanks and goodbyes, so the user can have a conversation with Rona which is more human-like.
## How we built it
First, we had to create my training data. Commonly referred to as 'intentions', the data we used to train Rona consisted of different phrases that a user could potentially ask. We split up all of my intentions into 7 categories, which we listed above, and these were called 'tags'. Under our sub-branch of tags, we would provide Rona several phrases the user could ask about that tag, and also gave it responses to choose from to answer questions related to that tag. Once the intentions were made, we put this data in a json file for easy access in the rest of the project.
Second, we had to use 3 artificial-intelligence, natural language processing, techniques to process the data, before it was fed into our training model. These were 'bag-of-words', 'tokenization' and 'stemming'. First, bag-of-words is a process which took a phrase, which were all listed under the tags, and created an array of all the words in that phrase, making sure there are no repeats of any words. This array was assigned to an x-variable. A second y-variable delineated which tag this bag-of-words belonged to. After these bags-of-words were created, tokenization was applied through each bag-of-words and split them up even further into individual words, special characters (like @,#,$,etc.) and punctuation. Finally, stemming created a crude heuristic, i.e. it chopped off the ending suffixes of the words (organize and organizes both becomes organ), and replaced the array again with these new elements. These three steps were necessary, because the training model is much more effective when the data is pre-processed in this way, it's most fundamental form.
Next, we made the actual training model. This model was a feed-forward neural network with 2 hidden layers. The first step was to create what are called hyper-parameters, which is a standard procedure for all neural networks. These are variables that can be adjusted by the user to change how accurate you want your data to be. Next, the network began with 3 layers which were linear, and these were the layers which inputted the data which was pre-processed earlier. After, these were passed on into what are called activation functions. Activation functions output a small value for small inputs, and a larger value if its inputs exceed a threshold. If the inputs are large enough, the activation function "fires", otherwise it does nothing. In other words, an activation function is like a gate that checks that an incoming value is greater than a critical number.
The training was completed, and the final saved model was saved into a 'data.pth' file using pytorch's save method.
## Challenges we ran into
The most obvious challenge was simply time constraints. We spent most of our time trying to make sure the training model was efficient, and had to search up several different articles and tutorials on the correct methodology and API's to use. Numpy and pytorch were the best ones.
## Accomplishments that we're proud of
This was our first deep-learning project so we are very proud of completing at least the basic prototype. Although we were aware of NLP techniques such as stemming and tokenization, this is our first time actually implementing them in action. We have created basic neural nets in the past, but also never a feed-forward one which provides an entire model as its output.
## What we learned
We learned a lot about deep learning, neural nets, and how AI is trained for communication in general. This was a big step up for us in Machine Learning.
## What's next for Rona: Deep Learning Chatbot for COVID-19
We will definitely improve on this in the future by updating the model, providing a lot more types of questions/data related to COVID-19 for Rona to be trained on, and potentially creating a complete service or platform for users to interact with Rona easily. | ## Inspiration
After Apple announced their first ARDevKit, our group knew we wanted to tinker with the idea. Having an experienced group who won previous hackathons with mobile apps, we were excited to delve head-first into the world of Augmented Reality.
## What it does
It calculates the instantaneous/average velocity of an object.
## How we built it
Using Swift in Xcode, we incorporated Calculus concepts into the development of the AR.
## Challenges we ran into
To calculate instantaneous velocity, we had to get very small time increments that approach infinitesimally small changes in time. Processing as many position values per second becomes important to improving the accuracy. However, this can be CPU-intensive. So we created an efficient and optimised program.
## Accomplishments that we're proud of
Creating a full-functioning app in less than 24 hours after concept.
## What we learned
Working as a cohesive unit and the potential AR has.
Managing time properly.
## What's next for SpeedAR
Adding a slider to change the accuracy of the velocity.
Adding a low-power mode to further save battery and limit processing clock speeds.
Individual object-tracking to remove the need to manually pan the camera to trace an object's movement. | winning |
## Members
Keith Khadar, Alexander Salinas, Eli Campos, Gabriella Conde
## Inspiration
Last year, Keith had a freshman roommate named Cayson who had played high school football until a knee injury sidelined him. While his condition improved in college enough that he could walk, he still couldn't run. Keith remembered how Cayson often had to make a 30-minute walk to his physical therapist. It was through witnessing his struggle, through Keith's experience working on medical devices in Dream Team Engineering (a club at the University of Florida dedicated to improving patient care), and through our curiosity to work on real-world problems with AI that we began to think about this issue.
## What it does
Our device tracks an injured athlete's movements and provides personalized advice, comparable to that of a world-class physical therapist, ensuring the patient recovers effectively and safely. It lets users perform physical therapy exercises at home while AI analyzes their movements to confirm they stay within their expected range-of-motion and effort values.
## How we built it
We use web technologies (Angular, Python, Tune) and a microcontroller setup (a flex sensor + ESP32) to track movement, give insights, and show improvement over time.
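As a rough illustration, here is a minimal sketch of the kind of backend processing this enables, assuming the ESP32 streams raw flex-sensor readings that we map to joint angles (the calibration values, rep threshold, and function names are illustrative, not our production code):

```python
from statistics import mean

# Hypothetical calibration: raw ADC counts at full extension / full flexion.
RAW_STRAIGHT, RAW_BENT = 1800, 3200
ANGLE_STRAIGHT, ANGLE_BENT = 0.0, 90.0

def raw_to_angle(raw: int) -> float:
    """Linearly map a raw flex-sensor reading to an approximate joint angle."""
    span = (raw - RAW_STRAIGHT) / (RAW_BENT - RAW_STRAIGHT)
    return ANGLE_STRAIGHT + span * (ANGLE_BENT - ANGLE_STRAIGHT)

def summarize_session(raw_samples: list[int]) -> dict:
    """Turn one exercise session's readings into simple progress metrics."""
    angles = [raw_to_angle(r) for r in raw_samples]
    return {
        "max_flexion_deg": max(angles),
        "range_of_motion_deg": max(angles) - min(angles),
        "mean_angle_deg": mean(angles),
        # Count upward crossings of a flexion threshold as completed reps.
        "reps": sum(1 for prev, cur in zip(angles, angles[1:]) if prev < 60 <= cur),
    }
```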
## Challenges we ran into
Bluetooth Implementation: Establishing a reliable and efficient Bluetooth connection between the microcontroller and our web application proved more complex than anticipated.
Sleeve Assembly: Designing and constructing a comfortable, functional sleeve that accurately houses our sensors while maintaining flexibility was a delicate balance.
Data Interpretation: Translating raw sensor data into meaningful, actionable insights for users required extensive algorithm development and testing.
Cross-platform Compatibility: Ensuring our web application functioned seamlessly across various devices and browsers presented unexpected complications, particularly around differences in browser support.
## Accomplishments that we're proud of
Seamless Bluetooth Integration: We successfully implemented robust Bluetooth communication between our hardware and software components, enabling real-time data transfer.
Real Time Digital Signal Processing: Our team developed sophisticated algorithms to analyze data from our sensors, providing a comprehensive view of the user's movements and progress.
Intuitive User Interface: We created a user-friendly interface that clearly presents complex data and personalized recommendations in an easily digestible format.
Rapid Prototyping: Despite time constraints, we produced a fully functional prototype that demonstrates the core capabilities of our concept.
Tune AI Integration: We are proud of our connection to Tune AI and of using their Llama model to provide insights into the patient's movements.
## What we learned
Full-Stack Development: We gained valuable experience in integrating frontend and backend technologies, particularly in using Python for backend operations and Angular for the frontend.
Interdisciplinary Collaboration: We learned the importance of effective communication and teamwork when combining expertise from various fields (e.g., software development, hardware engineering, and physical therapy).
Real-world Problem Solving: This experience reinforced the value of addressing genuine societal needs through innovative technological solutions.
## What's next for Glucose
Enhanced Sensor Array: Integrate additional sensors (e.g., accelerometers, gyroscopes) for more comprehensive movement tracking and analysis.
Machine Learning Integration: Implement more advanced ML algorithms to improve personalization and predictive capabilities of our advice engine.
Clinical Trials: Conduct rigorous testing with physical therapists and patients to validate and refine our system's effectiveness.
Mobile App Development: Create dedicated iOS and Android apps to increase accessibility and user engagement.
Expanding Use Cases: Explore applications beyond athletic injuries, such as rehabilitation for stroke patients or elderly care. Another use case is helping correct diagnostic errors: since we will have a lot of data on how the patient moves, we can train machine learning models to analyze that data and confirm the diagnosis the doctor gave.
One of our team members, Aditya, has been in physical therapy (PT) for the last year after a wrist injury on the tennis court. He describes his experience with PT as expensive and inconvenient. Every session meant a long drive across town, followed by an hour of therapy and then the journey back home. On days he was sick or traveling, he would have to miss his PT sessions.
Another team member, Adarsh, saw his mom rushed to the hospital after suffering from a third degree heart block. In the aftermath of her surgery, in which she was fitted with a pacemaker, he noticed how her vital signs monitors, which were supposed to aid in her recovery, inhibited her movement and impacted her mental health.
These insights together provided us with the inspiration to create TherapEase.ai. TherapEase.ai uses AI-enabled telehealth to bring **affordable and effective PT** and **contactless vital signs monitoring services** to consumers, especially among the **elderly and disabled communities**. With virtual sessions, individuals can receive effective medical care from home with the power of pose correction technology and built-in heart rate, respiratory, and SpO2 monitoring. This evolution of telehealth flips the traditional narrative of physical development: the trainee is in more control of their body positioning, granting them greater levels of autonomy.
## What it does
The application consists of the following features:
* Pose Detection and Similarity Tracking
* Contactless Vital Signs Monitoring
* Live Video Feed with Trainer
* Live Assistant Trainer Chatbot
Once a PT Trainer or Medical Assistant creates a specific training room, the user is free to join said room. Immediately, the user’s body positioning will be highlighted and compared to that of the trainer. This way the user can directly mimic the actions of the trainer and use visual stimuli to better correct their position. Once the trainer and the trainee are aligned, the body position highlights will turn blue, indicating the correct orientation has been achieved.
The application also includes a live assistant trainer chatbot to provide useful tips for the user, especially when the user would like to exercise without the presence of the trainer.
Finally, on the side of the video call, the user can monitor their major vital signs: heart rate, respiratory rate, and blood oxygen levels without the need for any physical sensors or wearable devices. All three are estimated using remote Photoplethysmography: a technique in which fluctuations in camera color levels are used to predict physiological markers.
## How we built it
We began with heart rate detection. At a high level, the remote photoplethysmography (rPPG) technique works by analyzing the amount of green light absorbed by the trainee's face. This serves as a useful proxy because when the heart is expanded there is less blood in the face, which means less green light is absorbed; the opposite is true when the heart is contracted. By magnifying these fluctuations using Eulerian Video Magnification, we can isolate the heart rate by applying a Fast Fourier Transform to the green signal.
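A simplified sketch of that final step, assuming we already have the mean green-channel value of the face region for each frame (face detection, magnification, and filtering details omitted; the 0.7 to 4 Hz band corresponds to roughly 42 to 240 BPM):

```python
import numpy as np

def estimate_heart_rate(green_means: np.ndarray, fps: float) -> float:
    """Estimate BPM from the per-frame mean green value of the face region."""
    signal = green_means - np.mean(green_means)          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))               # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)    # bin frequencies in Hz
    band = (freqs >= 0.7) & (freqs <= 4.0)               # plausible pulse band
    peak_freq = freqs[band][np.argmax(spectrum[band])]   # strongest periodicity
    return peak_freq * 60.0                              # Hz -> beats per minute
```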
Once the heart rate detection software was developed, we integrated PoseNet's pose estimation algorithm, which draws 17 keypoints on the trainee in the video feed. This led to the development of two-way video communication using WebRTC, which carries the interaction between the trainer and the trainee. With both the trainer's and the trainee's poses being estimated, we built the weighted similarity comparison function of our application, which shows clearly when the user has matched the trainer's position.
At this stage, we incorporated the final details of the application: the LLM assistant trainer and the additional vital signs detection algorithms. We integrated **Intel's Prediction Guard** into our chatbot to increase the speed and robustness of the LLM. For respiratory rate and blood oxygen levels, we integrated algorithms that build on the same rPPG signal to determine these two metrics.
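For context, a simplified sketch of the kind of extensions involved: respiratory rate can be read from the low-frequency band of the same signal, and one common camera-based SpO2 approximation compares the pulsatile (AC) and steady (DC) components of two color channels. Our implementation differs in the details, and the calibration constants below are placeholders:

```python
import numpy as np

def estimate_respiratory_rate(green_means: np.ndarray, fps: float) -> float:
    """Breaths per minute from the ~0.1-0.5 Hz band of the facial color signal."""
    signal = green_means - np.mean(green_means)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.5)               # ~6-30 breaths per minute
    return freqs[band][np.argmax(spectrum[band])] * 60.0

def estimate_spo2(red_means: np.ndarray, blue_means: np.ndarray,
                  a: float = 100.0, b: float = 5.0) -> float:
    """Ratio-of-ratios approximation; a and b are placeholder calibration constants."""
    ratio = (np.std(red_means) / np.mean(red_means)) / (np.std(blue_means) / np.mean(blue_means))
    return a - b * ratio
```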
## Challenges we ran into (and solved!)
We are particularly proud of being able to implement the two-way video communication that underlies the interaction between a patient and specialist on TherapEase.ai. There were many challenges associated with establishing this communication. We spent many hours building an understanding of WebRTC, WebSockets, and the HTTP protocol. Our biggest ally in this process was Chrome's developer tools, which we used to analyze network traffic and ensure the right information was being sent.
We are also proud of the cosine similarity algorithm which we use to compare the body pose of a specialist/trainer with that of a patient. A big challenge associated with this was finding a way to prioritize certain points (from PoseNet) over others (e.g. an elbow joint should be given more importance than an eye point in determining how far apart two poses are). After hours of mathematical and programming iteration, we devised an algorithm that weights certain joints more than others, leading to much more accurate results when comparing poses on the two-way video stream. Another challenge was finding a way to efficiently compute and compare two pose vectors in real time (since we are dealing with a live video stream). Rather than having a data store, for this hackathon we compute our cosine similarity in the browser.
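The production version runs in the browser in JavaScript; the Python sketch below just illustrates the weighted-cosine idea, with illustrative weights and keypoints assumed to be normalized (e.g. relative to the person's bounding box) beforehand:

```python
import numpy as np

# Illustrative per-keypoint weights in PoseNet order: joints that matter for form
# (shoulders, elbows, hips, knees) count more than facial landmarks.
WEIGHTS = np.array([0.2, 0.2, 0.2, 0.2, 0.2,        # nose, eyes, ears
                    1.0, 1.0, 1.5, 1.5, 1.0, 1.0,   # shoulders, elbows, wrists
                    1.0, 1.0, 1.5, 1.5, 1.0, 1.0])  # hips, knees, ankles

def weighted_pose_similarity(trainer: np.ndarray, trainee: np.ndarray) -> float:
    """Cosine similarity between two (17, 2) normalized keypoint arrays, per-joint weighted."""
    w = np.repeat(WEIGHTS, 2)                        # one weight per (x, y) coordinate
    a, b = trainer.flatten() * w, trainee.flatten() * w
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```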
## What's next for TherapEase.ai
We are all very excited about the future of this application. We believe the following next steps would take it to the next level.
* Peak Enhancement for Respiratory Rate and SpO2
* Blood Pressure Contactless Detection
* Multi-channel video Calling
* Increasing Security | ## what we do:
we give a platform for local business
## Inspiration:
We saw a lot of local businesses struggle during COVID and were motivated to help these businesses continue to run.
## What it does:
It is a platform where businesses can post their "business cards" in a free gallery that anyone can browse.
## How we built it:
Python, Flask, HTML, CSS
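A stripped-down sketch of the core idea, assuming a simple card table (the schema, field names, and inline template are placeholders for illustration, not our exact code):

```python
from flask import Flask, request, redirect, render_template_string
import sqlite3

app = Flask(__name__)
DB = "gallery.db"  # illustrative storage; the real app uses its own schema

# Create the placeholder table so the sketch runs end to end.
with sqlite3.connect(DB) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS cards (name TEXT, description TEXT, contact TEXT)")

PAGE = "<h1>Gallery</h1>{% for c in cards %}<div>{{ c[0] }}: {{ c[1] }} ({{ c[2] }})</div>{% endfor %}"

@app.route("/")
def gallery():
    """Show every submitted business card in the public gallery."""
    with sqlite3.connect(DB) as conn:
        cards = conn.execute("SELECT name, description, contact FROM cards").fetchall()
    return render_template_string(PAGE, cards=cards)

@app.route("/submit", methods=["POST"])
def submit():
    """Let a local business post its 'business card' to the gallery."""
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "INSERT INTO cards (name, description, contact) VALUES (?, ?, ?)",
            (request.form["name"], request.form["description"], request.form["contact"]),
        )
    return redirect("/")
```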
## Challenges:
Database/backend work and frontend design (making it responsive)
## Accomplishments:
Design, Data storage, Integration, Git/Github, Logo
## What we learned:
Flask, Backend, Frontend
## Next for sb gallery:
Improve the backend and make it more responsive | partial |
## Inspiration
We wanted to create an interactive desktop that brings back the concept of physical space, just as in the days before computers became a common workplace tool. We went for a futuristic approach where icons and files can be grabbed and interacted with in virtual reality.
## What it does
It places the user in a 3D virtual reality environment and provides icons that can be interacted with via hand gestures. Different gestures manipulate the icons in different ways, and the gesture-to-action mapping is handled by scripts added to Unity.
## How I built it
We attached a Leap Motion sensor to the front of an Oculus Rift and ran the program through Unity with C# scripts. The sensor is responsible for input from the arms, and the Rift itself renders the environment.
## Challenges I ran into
We ran into major hardware compatibility issues that were compounded by SDK conflicts between the devices. On our rented Alienware laptop, we couldn't install all the SDKs until later in the project because we didn't have administrative rights. Furthermore, the documentation pages and tutorials were at times inconsistent, with small changes to function names that we had to figure out ourselves.
## Accomplishments that I'm proud of
- Linking the hardware
- Figuring out gesture controls
- Designing the landscape
- Getting icons to interact
## What I learned
- Never give up on an idea
- VR is cool
## What's next for INvironment
- Fleshing out further ideas
- Adding more features
- Smoothing and improving interaction
* In Canada, every year there are 5.3 million people who feel they need some form of help for their mental health! But ordinary therapy is unfortunately boring and can be ineffective :( Having to deal with numerous patients every day, a mental health professional may also find it difficult to build the deeper connection with their patient that allows the patient to heal and improve mentally, and in turn physically.
* Therefore, we built TheraVibe.VR! A portable professional that is tailored to understand you! TheraVibe significantly improves patients' mental health by gamifying therapy sessions so that patients can heal wherever and with whomever they can imagine!
## 🤖 What it does
* TheraVibe provides professional psychological guidance powered by Cohere's API with the assistance of a RAG (retrieval-augmented generation) pipeline!
* It is powered by Starknet for its private and decentralized approach to storing patient information!
* To help reach more people, TheraVibe also uses Starknet to reward patients with cryptocurrency over a decentralized network, incentivizing consistency in attending our unique "therapy sessions"!
## 🧠 How we built it
* On a base of C# and the Unity Engine, we used blockchain technology via the beautiful Starknet API to create and deploy smart contracts that ensure safe storage of a "doctor's" evaluation of a patient's health condition, as well as blockchain transactions made to the patient in a gamified manner to incentivize future participation and maximize healing!
* For the memory-import Next.js web app, we incorporated Auth0 for the security of our users and hosted it with a GoDaddy domain!
* The verbal interaction between the therapist and the user is powered by ElevenLabs and AssemblyAI! The cognitive process of the therapist is driven by Cohere's API and a RAG (see the sketch after this list)!
* To implement the VR project, we developed animation in Unity with C#, and used the platform to build and run our VR project!
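At a high level, one conversational turn runs through the pipeline sketched below. This is a hedged illustration written against the services' Python SDKs; the real app drives these calls from Unity/C#, and the preamble, document fields, and helper names are assumptions rather than our exact code:

```python
import assemblyai as aai
import cohere
from elevenlabs import generate, save  # assumed helpers from the elevenlabs SDK

aai.settings.api_key = "ASSEMBLYAI_KEY"
co = cohere.Client("COHERE_KEY")

def therapy_turn(audio_path: str, retrieved_memories: list[dict]) -> str:
    # 1. Speech-to-text: transcribe what the patient just said in the headset.
    patient_text = aai.Transcriber().transcribe(audio_path).text

    # 2. Cognition: Cohere chat grounded on the patient's imported memories (the RAG step).
    reply = co.chat(
        message=patient_text,
        documents=retrieved_memories,  # e.g. [{"text": "memory: hiking with grandma ..."}]
        preamble="You are a warm, professional therapist inside a VR session.",
    ).text

    # 3. Text-to-speech: give the therapist a voice; Unity then plays the clip.
    save(generate(text=reply), "reply.mp3")
    return reply
```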
## 🧩 Challenges we ran into
* Auth0 helped reveal a cache problem in our program, and so we had to deal with server-side rendering issues in Next.js last-minute!
* We managed to deal with a support issue when hosting the domain name on GoDaddy and linking it to our Vercel deployment!
* Deploying the C# Unity app on the Meta Quest 2 took 24 hours of our development time!
## 🏆 Accomplishments that we're proud of
* First time deploying on MetaQuest2 and building a project with Unity!
* Integrating multiple APIs (AssemblyAI for speech transcription, ElevenLabs for speech generation, and Cohere), plus the implementation of a RAG, into a complex pipeline with minimal latency!
## 🌐What we learned
* How Auth0 requires the HTTPS protocol (it's always fascinating that we don't have to reinvent the wheel for authenticating users!)
* Our first time hosting on GoDaddy (and with an especially cool project domain!)
* Building and running production-fast pipelines that have minimal latency to maximize user experience!
## 🚀What's next for TheraVibe.VR
* In the exciting journey ahead, TheraVibe.VR aspires to revolutionize personalized therapy by reducing latency, expanding our immersive scenes, and introducing features like virtual running. Our future goals include crafting an even more seamless and enriching experience and pushing the boundaries of therapeutic possibilities for all our users. | ## Inspiration
We believe current reCAPTCHA v.3 has few problems. First, it is actually hard to prove myself to be not robot. It is because Machine Learning is advancing everyday, and ImageToText's (Computer Vision) accuracy is also skyrocketing. Thus, CAPTCHA question images have to be more difficult and vague. Second, the dataset used for current CAPTCHA is limited. It becomes predictable as it's repeating its questions or images (All of you should have answered "check all the images with traffic lights"). In this regard, several research paper has been published through Black Hat using Machine learning models to break CAPTCHA.
## What it does
Therefore, we decided to build a CAPTCHA system that would generate a totally non-sensical picture, and making humans to select the description for that AI-created photo of something 'weird'. As it will be an image of something that is non-existent in this world, machine learning models like ImageToText will have to idea what the matching prompt would be. However, it will be very clear for human even though the images might not be 100% accurate of the description, it's obvious to tell which prompt the AI try to draw. Also, it will randomly create image from scratch every time, we don't need a database having thousands of photos and prompts. Therefore, we will be able to have non-repeating 'im not a robot' question every single time -> No pattern, or training data for malicious programs.
Very easy and fun 'Im not a robot' challenge.
## How we built it
We used AI-painting model called 'Stable Diffusion', which takes a prompt as an input, and creates an image of the prompt. The key of our CAPTCHA is that the prompt that we feed in to this model is absurd and non-existent in real world. We used NLP APIs provided by Cohere in order to generate this prompts. Firstly, we gathered 4,000 English sentences and clustered them to groups based on the similarity of topics using Cohere's embed model. Then, from each clusters, we extracted on key words and using that keywords generated a full sentence prompt using Cohere's generate model. And with that prompt, we created an image using stable diffusion.
## Challenges we ran into
As stable-diffusion is a heavy computation and for sure needed GPU power, we needed to use a cloud GPU. However, cloud GPU that we used from paperspace had its own firewall, which prevented us to deploy server from the environment that we were running tests.
## Accomplishments that we're proud of
We incorporated several modern machine learning techniques to tackle a real world problem and suggested a possible solution. CAPTCHA is especially a security protocol that basically everyone who uses internet encounters. By making it less-annoying and safer, we think it could have a positive impact in a large scale, and are proud of that.
## What we learned
We learned about usability of Cohere APIs and stable diffusion. Also learned a lot about computer vision and ImageToText model, a possible threat model for all CAPTCHA versions. Additionally, we learned a lot about how to open a server and sending arguments in real-time.
## What's next for IM NOT A ROBOT - CAPTCHA v.4
As not everyone can run stable diffusion on their local computer, we need to create a server, which the server does the calculation and creation for the prompt and image. | losing |
Memento is an all in one IoT application that provides therapy and care for dementia patients. It runs on two separate platforms that communicate their data with each other. The first part of our solution is a mobile Android Application. In this application, the user can enter Admin mode and create their settings for voice control and right and wrong answers. Then the user or their family members can create stimuli that will then be used in their game. The game then will show the user pictures of people, places, or memories, and ask them what the picture is of. The user can then use voice commands to tell the application what is in that picture. If the user gets the answer right, he scores a point and continue playing the game. This data is then sent to the second part of this solution. But, there is an additional part of the mobile application.
## Location Services for Tracking the Patient
One important concern for dementia patients is that they begin to wander and don't remember where they were going or how they got there, and most importantly, they don't remember how to get home. So in our mobile application, we have incorporated location services that keep track of the user in real time. The user can go and check how far he has strayed from home by simply pressing a single button, and this will send data to our web application as well. In addition, with the integration of the Uber API, the user can simply press a single button to call for an uber to bring them home using the location they are currently at.
## Web Application
The second portion of this hack, is the web application hosted by Microsoft Azure. The web application allows the user to log into their dashboard and access information about themselves in the case that they begin to forget things from their history. We focused on the personal aspect of this mental illness, such that the patient could trigger their memories or answer the questions the ask over and over again by viewing this web page. So, once they log in, they will have access to the most basic of information which includes their name, birthday, phone number, address, and a link to 911 in case of an emergency. They will see links to "Family", "Friends", "Memories", or "Home" with corresponding pictures. If the user clicks on the **Family** or **Friends** tab, they will be brought to their contacts page, where they have access to a database of family and friends' contact information along with corresponding pictures of the contact. This page shows the contact's picture, their relation to the user, their phone number, address, and email. This way, the user can contact them however they'd like.
If the user were to click on the **Home** button, they would be brought to a map page. Using the Google Maps API, we integrated a map that provides them with interactive access to past events in their life as well as their current location. They can scan over the map to see where they were born and on what date. Other examples of events that could potentially go on the map include the date of marriage or when their first child was born.
If the user were to click on the **Charts** tab, they would be brought to a page full of data visualizations for various things regarding Dementia in general as well as their own personal data. The first graph they would see is how well they are performing in the mobile application on a day to day basis. Then, they'd have access to a pie chart providing awareness on the most common forms of Dementia, and lastly, they could see how much time they are spending within their home by gathering data from the location tracking aspect of the mobile application. This data can be used to see how long they have wandered for to see how far away they might have gone, or track daily habits on how much time they typically spend in their homes on a day to day basis. These data visualizations were made using Morris and Flot.
Finally, if the user were to click on the **Memories** tab, they would be brought to an interactive time line that is easily customizable. They can add new events or delete old ones. This time line will show them the date, time, place, information, and a picture of the event taking place at the current time. And the user can swipe through events to recall what he or she has done throughout their life as well as view pictures of the event. This is really important to the user because this will provide them answers to a lot of questions, such as who was at which event and what exactly took place or when the event took place and what the importance was. It provides the user not only with information about the date, but with information on when it happened and pictures for further memory recollection.
## Purpose
As some of our team member have witness how hard it is for Dementia patients first hand, we thought this series of applications would come in great use not only for the patient themselves, but for their families and care takers. We put everything a patient would need in one easy to access place. They can view their memories, call for an Uber if they get lost, play memory games and track their progress, they have access to a database of contacts, and they have location services so their caretaker can see how far from home they are. | ## Inspiration
The summer before college started, all of us decided to take trips to different parts of the world to live our last few days before moving out to the fullest. During these trips, we all figured that there were so many logistics and little things that we needed to worried about in order to have a successful trip, and it created much unneeded stress for all of us. We then came up with the idea to create an app that would streamline the process of planning trips and creating itineraries through simple and convenient features.
## What it does
It is a travel companion app with functionality such as itinerary generation, nearby search, translation, and currency exchange. The itinerary generator will weigh preferences given by the user, and generate a list of points of interest accordingly. The translator will allow the user to type any phrase in their preferred starting language and output in the language of the country they are in. The user is also allowed to save phrases whenever they would like for quick access. Finally, the currency exchange allows the user to see the exchange rate from their currency to the currency of whichever country they are in, and they are also able to convert between the two currencies.
## How we built it
We built the front-end using Android Studio. We built the back-end using StdLib, which also made use of other APIs including Google Places, Google Places Photos, Countries API, Fixer.io and Google Translate. The front end utilizes the HERE.com Android SDK to get the location of our device from GPS coordinates.
## Challenges we ran into
We were all relatively inexperienced with Android Studio, and thus we spent a lot of time figuring out how to use it but we eventually managed to figure out its ins and outs. There was also an issue with Standard Library and compiling one of our dependencies to work.
## Accomplishments that we're proud of
We are proud of creating a functional app that is on the verge of being a super powerful traveling tool for people to use when seeing the world. We're also proud of aggregating all the APIs needed to make this hack possible as well as synthesizing all of them within Android Studio.
## What we learned
We definitely learned a lot more about utilizing Android Studio, since our hack mostly revolved around its use. Increased experience in Java, including managing asynchronous calls and interactions with the internet were among some of the most valuable lessons.
## What's next for Wanderful
Better design is certainly a priority, however functionally the app can be improved within each aspect, such as allowing the user to generate their own itinerary entries, tailoring nearby search to user specifics, increased translating capability, and increased personalization for each user. | ## What it does
MemoryLane is an app designed to support individuals coping with dementia by aiding in the recall of daily tasks, medication schedules, and essential dates.
The app personalizes memories through its reminisce panel, providing a contextualized experience for users. Additionally, MemoryLane ensures timely reminders through WhatsApp, facilitating adherence to daily living routines such as medication administration and appointment attendance.
## How we built it
The back end was developed using Flask and Python and MongoDB. Next.js was employed for the app's front-end development. Additionally, the app integrates the Google Cloud speech-to-text API to process audio messages from users, converting them into commands for execution. It also utilizes the InfoBip SDK for caregivers to establish timely messaging reminders through a calendar within the application.
## Challenges we ran into
An initial hurdle we encountered involved selecting a front-end framework for the app. We transitioned from React to Next due to the seamless integration of styling provided by Next, a decision that proved to be efficient and time-saving. The subsequent challenge revolved around ensuring the functionality of text messaging.
## Accomplishments that we're proud of
The accomplishments we have achieved thus far are truly significant milestones for us. We had the opportunity to explore and learn new technologies that were previously unfamiliar to us. The integration of voice recognition, text messaging, and the development of an easily accessible interface tailored to our audience is what fills us with pride.
## What's next for Memory Lane
We aim for MemoryLane to incorporate additional accessibility features and support integration with other systems for implementing activities that offer memory exercises. Additionally, we envision MemoryLane forming partnerships with existing systems dedicated to supporting individuals with dementia. Recognizing the importance of overcoming organizational language barriers in healthcare systems, we advocate for the formal use of interoperability within the reminder aspect of the application. This integration aims to provide caregivers with a seamless means of receiving the latest health updates, eliminating any friction in accessing essential information. | partial |
## Inspiration
During the past summer, we experienced the struggles of finding subletters for our apartment. Ads were posted in various locations, ranging from Facebook to WeChat. Our feeds were filled with other people looking for people to sublet as well. As a result, we decided to create Subletting Made Easy. We envision a platform where the process for students looking for a place to stay as well as students looking to rent their apartment out is as simple as possible.
## What it does
Our application provides an easy-to-use interface for both students looking for subletters, and studentsseeking sublets to find the right people/apartments.
## Challenges we ran into
Aside from building a clean UI and adding correct functionality, we wanted to create an extremely secure platform for each user on our app. Integrating multiple authentication tools from the Firebase and Docusign API caused various roadblocks in our application development. Additionally, despite working earlier in the morning, we ran into an Authentication Error when trying to access the HTTP Get REST API call within Click API, thus inhibiting our ability to verify the registration status of users.
## What we learned
We learned a lot about the process of building an application from scratch, from front-end/UI design to back-end/database integration.
## What's next
We built a functional MVP during this hackathon, but we want to expand our app to include more features such as adding secure payments and more methods to search and filter results. There's tons of possibilities for what we can add for the future to help students around the globe find sublets and subleters. | ## Inspiration
In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol.
## What it does
Our app allows for users to search a “hub” using a Google Map API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating.
## How I built it
We collaborated using Github and Android Studio, and incorporated both a Google Maps API as well integrated Firebase API.
## Challenges I ran into
Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working with 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of!
## Accomplishments that I'm proud of
We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity.
## What I learned
Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane.
## What's next for SafeHubs
Our next steps for SafeHubs include personalizing user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic. | ## Being a university student during the pandemic is very difficult. Not being able to connect with peers, run study sessions with friends and experience university life can be challenging and demotivating. With no present implementation of a specific data base that allows students to meet people in their classes and be automatically put into group chats, we were inspired to create our own.
## Our app allows students to easily setup a personalized profile (school specific) to connect with fellow classmates, be automatically put into class group chats via schedule upload and be able to browse clubs and events specific to their school. This app is a great way for students to connect with others and stay on track of activities happening in their school community.
## We built this app using an open-source mobile application framework called React Native and a real-time, cloud hosted database called Firebase. We outlined the GUI with the app using flow diagrams and implemented an application design that could be used by students via mobile. To target a wide range of users, we made sure to implement an app that could be used on android and IOS.
## Being new to this form of mobile development, we faced many challenges creating this app. The first challenge we faced was using GitHub. Although being familiar to the platform, we were unsure how to use git commands to work on the project simultaneously. However, we were quick to learn the required commands to collaborate and deliver the app on GitHub. Another challenge we faced was nested navigation within the software. Since our project highly relied on a real-time database, we also encountered difficulties with implementing the data base framework into our implementation.
## An accomplishment we are proud of is learning a plethora of different frameworks and how to implement them. We are also proud of being able to learn, design and code a project that can potentially help current and future university students across Ontario enhance their university lifestyles.
## We learned many things implementing this project. Through this project we learned about version control and collaborative coding through Git Hub commands. Using Firebase, we learned how to handle changing data and multiple authentications. We were also able to learn how to use JavaScript fundamentals as a library to build GUI via React Native. Overall, we were able to learn how to create an android and IOS application from scratch.
## What's next for USL- University Student Life!
We hope to further our expertise with the various platforms used creating this project and be able to create a fully functioning version. We hope to be able to help students across the province through this application. | partial |
## Inspiration
Ever sit through a long and excruciating video like a lecture or documentary? Is 2x speed too slow for youtube? TL;DW
## What it does
Just put in the link to the YouTube video you are watching, then wait as our Revlo and NLTK powered backend does natural language processing to give you the GIFs from GIPHY that best reflect the video!
## How I built it
The webapp takes in a link to a youtube video. We download the youtube video with pytube and convert the video into audio mp3 with ffmpeg. We upload the audio to Revspeech API to transcribe the video. Then, we used NLTK (natural language toolkit) for python in order to process the text. We first perform "part of speech" tagging and frequency detection of different words in order to identify key words in the video. In addition, we we identify key words from the title of the video. We pool these key words together in order to search for gifs on GIPHY. We then return these results on the React/Redux frontend of our app.
## Challenges I ran into
We experimented with different NLP algorithms to extract key words to search for gifs. One of which was RAKE keyword extraction. However, the algorithm relied on identifying uncommonly occurring words in the text, which did not line up well in finding relevant gifs.
tf-idf also did not work as well for our task because we had one document from the transcript rather than a library.
## Accomplishments that I'm proud of
We are proud of accomplishing the goal we set out to do. We were able to independently create different parts of the backend and frontend (NLP, flask server, and react/redux) and unify them together in the project.
## What I learned
We learned a lot about natural language processing and the applications it has with video. From the Rev API, we learned about how to handle large file transfer through multipart form data and to interface with API jobs.
## What's next for TLDW
Summarizing into 7 gifs (just kidding). We've discussed some of the limitations and bottlenecks of our app with the Rev team, who have told us about a faster API or a streaming API. This would be very useful to reduce wait times because our use case does not need to prioritize accuracy so much. We're also looking into a ranking system for sourced GIFs to provide funnier, more specific GIFs. | ## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different types of file format, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used Javascript, HTML and CSS for the website, and used it to communicate to a Flask backend that can run our python scripts involving API calls and such. We have API calls to openAI text embeddings, to cohere's xlarge model, to GPT-3's API, OpenAI's Whisper Speech-to-Text model, and several modules for getting an mp4 from a youtube link, a text from a pdf, and so on.
## Challenges we ran into
We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answer complex questions on it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own costs (pennies, but we wish to avoid it becoming many dollars without our awareness of it.
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using json for data transfer, and aws services to store Mbs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; api calls to GPT3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper. | ## Inspiration
I like looking at things. I do not enjoy bad quality videos . I do not enjoy waiting. My CPU is a lazy fool. He just lays there like a drunkard on new years eve. My poor router has a heart attack every other day so I can stream the latest Kylie Jenner video blog post, or has the kids these days call it, a 'vlog' post.
CPU isn't being effectively leveraged to improve video quality. Deep learning methods are in their own world, concerned more with accuracy than applications. We decided to develop a machine learning application to enhance resolution while developing our models in such a way that they can effective run without 10,000 GPUs.
## What it does
We reduce your streaming bill. We let you stream Kylie's vlog in high definition. We connect first world medical resources to developing nations. We make convert an unrecognizeable figure in a cop's body cam to a human being. We improve video resolution.
## How I built it
Wow. So lots of stuff.
Web scraping youtube videos for datasets of 144, 240, 360, 480 pixels. Error catching, thread timeouts, yada, yada. Data is the most import part of machine learning, and no one cares in the slightest. So I'll move on.
## ML stuff now. Where the challenges begin
We tried research papers. Super Resolution Generative Adversarial Model [link](https://arxiv.org/abs/1609.04802). SRGAN with an attention layer [link](https://arxiv.org/pdf/1812.04821.pdf). These were so bad. The models were to large to hold in our laptop, much less in real time. The model's weights alone consisted of over 16GB. And yeah, they get pretty good accuracy. That's the result of training a million residual layers (actually *only* 80 layers) for months on GPU clusters. We did not have the time or resources to build anything similar to these papers. We did not follow onward with this path.
We instead looked to our own experience. Our team had previously analyzed the connection between image recognition and natural language processing and their shared relationship to high dimensional spaces [see here](https://arxiv.org/abs/1809.05286). We took these learnings and built a model that minimized the root mean squared error as it upscaled from 240 to 480 px.
However, we quickly hit a wall, as this pixel based loss consistently left the upscaled output with blurry edges. In order to address these edges, we used our model as the Generator in a Generative Adversarial Network. However, our generator was too powerful, and the discriminator was lost.
We decided then to leverage the work of the researchers before us in order to build this application for the people. We loaded a pretrained VGG network and leveraged its image embeddings as preprocessing for our discriminator. Leveraging this pretrained model, we were able to effectively iron out the blurry edges while still minimizing mean squared error.
Now model built. We then worked at 4 AM to build an application that can convert videos into high resolution.
## Accomplishments that I'm proud of
Building it good.
## What I learned
Balanced approaches and leveraging past learning
## What's next for Crystallize
Real time stream-enhance app. | partial |
## Inspiration
Many people on our campus use an app called When2Meet to schedule meetings, but their UI is terrible, their features are limited, and overall we thought it could be done better. We brainstormed what would make When2Meet better and thought the biggest thing would be a simple new UI as well as a proper account system to see all the meetings you have.
## What it does
Let's Meet is an app that allows people to schedule meetings effortlessly. "Make an account and make scheduling a breeze." A user can create a meeting and share it with others. Then everyone with access can choose which times work best for them.
## How we built it
We used a lot of Terraform! We really wanted to go with a serverless microservice architecture on AWS and thus chose to deploy via AWS. Since we were already using lambdas for the backend, it made sense to add Amplify for the frontend, Cognito for logging in, and DynamoDB for data storage. We wrote over 900 lines of Terraform to get our lambdas deployed, api gateway properly configured, permissions correct, and everything else we do in AWS configured. Other than AWS, we utilized React with Ant Design components. Our lambdas ran on Python 3.12.
## Challenges we ran into
The biggest challenge we ran into was a bug with AWS. For roughly 5 hours we fought intermittent 403 responses. Initially we had an authorizer on the API gateway, but after a short time we removed it. We confirmed it was deleting by searching the CLI for it. We double checked in the web console because we thought it may be the authorizer but it wasn't there anyway. This ended up requiring everything to be manually deleted around the API gate way and everything have to be rebuilt. Thanks to Terraform it made restoring everything relatively easy.
Another challenge was using Terraform and AWS itself. We had almost no knowledge of it going in and coming out we know there is so much more to learn, but with these skills we feel confident to set up anything in AWS.
## Accomplishments that we're proud of
We are so proud of our deployment and cloud architecture. We think that having built a cloud project of this scale in this time frame is no small feat. Even with some challenges our determination to complete the project helped us get through. We are also proud of our UI as we continue to strengthen our design skills.
## What we learned
We learned that implementing Terraform can sometimes be difficult depending on the scope and complexity of the task. This was our first time using a component library for frontend development and we now know how to design, connect, and build an app from start to finish.
## What's next for Let's Meet
We would add more features such as syncing the meetings to a Google Calendar. More customizations and features such as location would also be added so that users can communicate where to meet through the web app itself. | ## Inspiration
Since the advent of cloud based computing, personal computers have started to become less and less powerful, to the point where they have started to become little more than web viewers. While this has lowered costs, and consequently more people have access to computers, people have less freedom to run the programs that they want to and are limited to using applications that large companies, who are usually very disconnected from their users, decide they can run.
This is where we come in.
## What it does
Our project allows people to to connect to a wifi network, but instead of getting access to just the internet, they also get access to a portal where they can run code on powerful computers
For example, a student can come to campus and connect to the network, and they instantly have a way to run their projects, or train their neural networks with much more power than their laptop can provide.
## How we built it
We used used Django and JavaScript for the the interface that the end user accesses. We used python and lots of bash scripts to get stuff working on our servers, on both the low cost raspberry pis, and the remote computer that does most of the procesing.
## Challenges we ran into
We had trouble sand boxing code and setting limits on how much compute time one person has access to. We also had issues with lossy compression.
## Accomplishments that we're proud of
Establishing asynchronous connections between 3 or more different computers at once.
Managed to gain access to our server after I disabled passwords but forgot to copy over my ssh keys.
## What we learned
How to not mess up permissions, how to manage our very limited time even though we're burn't out.
## What's next for Untitled Compute Power Sharing Thing
We intend to fix a few small security issues and add support for more programming languages. | ## Inspiration
After experiencing multiple online meetings and courses, a constant ending question that arose from the meeting hosts was always a simple "How are we feeling?" or "Does everybody understand?" These questions are often followed by a simple nod from the audience despite their true comprehension of the information presented. Ultimately, the hosts will move on from the content as from what they know, the audience has understood the content. However, for many of us, this is not the case because of the intense Zoom fatigue that overcomes us and ends up hindering our full comprehension of all the material. It is extremely important to allow teachers to gain a better understanding of the more realistic "vibe" of the audience in the meeting, and thus improve the overall presentation method of future meetings.
## What it does
Our website plays a role in allowing meeting hosts to analyze the audiences receptiveness to the content. The host would upload any meeting recording as a .mp4 file on our website. Our application will output a table with each individual’s name and the most occurring “emotion” for each individual during the meeting. Based on the results, the host would know how to acknowledge his/her group's concerns in the next meeting.
## How we built it
We utilized the Hume.AI API to allow us to do an analysis on the emotions of the individuals in the meeting. Utilizing the data the Hume.AI provided us we ran an analysis on the average emotions each meeting participant felt throughout the meeting. That data was processed in Python and sent to our frontend using Flask. Our frontend was built using React.js. We stored the uploaded video in Google Cloud.
## Challenges we ran into
From our team, two members had no experience in HTML, CSS, and JavaScript, so they spent a lot of time practicing web development. They faced issues along the way with the logic and implementation of the code for the user interface of our website. This was also our first time using the Hume.AI API and also our first time playing with Google Cloud.
## Accomplishments that we're proud of
Every team member successfully learned from each and learned a great deal from this hackathon. We were a team that had fun hacking together and built a reasonable MVP. The highlight was definitely learning since for half the team it was their first hackathon and they had very little prior coding exposure.
## What we learned
From our team, two of the members had very minimal experience with web development. By the end of the hackathon, they learned how to develop a website and eventually built our final website using ReactJS. The other 2 team members, relatively new to AI, explored and applied the HumeAI API for the first time, and learned how it can be applied to detect individual facial expressions and emotions from a video recording. We also were able to successfully connect frontend to backend for the first time using Flask and also used Google cloud storage for the first time. This hackathon marked a lot of firsts for the team!
## What's next for BearVibeCheck
We hope to further improve upon the UI and make our algorithm faster and more scalable. Due to the nature of the hackathon a lot of pieces were hard-coded. Our goal is to provide this resource to Cal Professors who teach in a hybrid or fully online setting to allow them to gauge how students are feeling about certain content material. | partial |
## Inspiration
The motivation behind creating catPhish arises from the unfortunate reality that many non-tech-savvy individuals often fall victim to phishing scams. These scams can result in innocent people losing nearly their entire life savings due to these deceptive tactics employed by cybercriminals. By leveraging both AI technology and various APIs, this tool aims to empower users to identify and prevent potential threats. It serves as a vital resource in helping users recognize whether a website is reputable and trusted, thereby contributing in the prevention of financial and personal data loss.
## What it does
catPhish integrates multiple APIs, including the OpenAI API, to combat phishing schemes effectively. Designed as a user friendly Chrome extension, catPhish unites various services into a single tool. With just a simple click, users can diminish their doubts or avoid potential mistakes, making it an accessible solution for users of all levels of technical expertise.
## How we built it
CatPhish was developed using React for the user interface visible in the browser, while Python and JavaScript were employed for the backend operations. We integrated various tools to enhance its effectiveness in combating phishing attempts. These tools include the Google Safe Browsing API, which alerts users about potentially harmful websites, virus total, Exerra Anti-Phish, which specializes in detecting phishing threats. In addition, we incorporated OpenAI to leverage advanced technology for identifying malicious websites. To assess the credibility of websites, we employed the IP Quality Score tool, which evaluates factors like risk level. For managing user authentication and data storage, we relied on Firebase, a comprehensive platform that facilitates secure user authentication and data management. By combining these components, CatPhish emerges as a sturdy solution for safeguarding users against online scams, offering enhanced security and peace of mind during web browsing.
## Challenges we ran into
Throughout the development process, we came across various permissions and security related challenges essential to the project. Issues such as CORS (Cross-Origin Resource Sharing) and web-related security hurdles posed a significant amount of obstacles. While there were no straightforward solutions to these challenges, we adopted a proactive approach to address them effectively. One of the strategies we employed involved leveraging Python's Flask CORS to navigate around permission issues arising from cross origin requests. This allowed us to facilitate communication between different domains. Additionally, we encountered security issues such as unauthorized routing, however through careful analysis, we patched up these vulnerabilities to ensure the integrity and security of the application. Despite the complexity of the challenges, our team remained resilient and resourceful, allowing us to overcome them through critical thinking and innovative problem solving techniques. One noteworthy challenge we faced was the limitation of React browser routing within Chrome extensions. We discovered that traditional routing methods didn't work as expected within this environment, which allowed us to explore alternative solutions. Through research and experimentation, we learned about MemoryBrowsing, one of React's components. Implementing this approach enabled us to get around the limitations of Chrome's native routing restrictions.
## Accomplishments that we're proud of
We take great pride in our ability to successfully integrate several functionalities into a single project, despite facing several complexities and challenges along the way. Our team's collaborative effort, resilience, and support for one another have been extremely resourceful in overcoming obstacles and achieving our goals. By leveraging our expertise and working closely together, we were able to navigate through many technical issues, implement sophisticated features, and deliver a solid solution that addresses the critical need for enhanced security against phishing attacks. We take pride in the teamwork and trust among our team members.
## What we learned
Our journey with this project has been an extremely profound learning experience for all of us. As a team, it was our first venture into building a browser extension, which provided valuable insights into the complexity of extension development. We navigated through the process, gaining a deeper understanding of extension architecture and functionality. One of the significant learning points was integrating Python with TypeScript to facilitate communication between different parts of the project. This required us to manage API requests and data fetching efficiently, enhancing our skills in backend/frontend integration. Furthermore, diving into routing mechanisms within the extension environment expanded our knowledge base, with some team members developing a stronger grasp of routing concepts and implementation. The use of Tailwind CSS for styling purposes presented another learning opportunity. We explored its features and capabilities, improving our skills in responsive design and UI development. Understanding how extensions operate and interact with web browsers was another enlightening aspect of the project as it actually differed from how a web application operates. It provided practical insights into the inner workings of browser extensions and their functionalities. Additionally, our hands-on experience with Firebase empowered us to practice database implementation. Leveraging Firebase's user friendly interface, we gained experience in managing and storing data securely. The project also afforded us the chance to integrate multiple APIs using both Python and JavaScript, strengthening our understanding of API integration. Implementing these APIs within the React framework, coupled with TypeScript, improved our ability to build sturdy and scalable applications. Overall, our journey with this project has been marked by continuous learning and growth, furnishing us with valuable skills and insights that will undoubtedly benefit us in future endeavors.
## What's next for catPhish
What's next for CatPhish
The future holds exciting possibilities for CatPhish as we continue to enhance its capabilities and expand some of its offerings. One of our key objectives is to integrate additional trusted APIs to increase its phishing detection capabilities further. By leveraging a huge range of API services, we aim to further CatPhish's ability to identify and raduce phishing threats. We were also exploring the development of a proprietary machine learning model trained specifically on phishing attempts. This dedicated model will allow CatPhish to evolve and adapt to emerging phishing techniques. As the cybersecurity realm grows, on the other hand, cybercriminals are using effective and advanced skills such as MiTM (Man In the Middle) Attacks through advanced use of phishing pages and such. In addition to refining our machine learning capabilities, we plan to enhance the functionality of the OpenAI API chat GPT bot. By using advanced features such as web browsing using Bing and expanding its conversational abilities, we see ourselves creating a more comprehensive and intuitive user experience. | ## Inspiration
Having grown frustrated with the sheer volume of spam, fraudulent, and malicious unsolicited messages being sent over the internet, we wanted to develop a reliable way to distinguish the authentic from the spurious. We identified that the elderly and those who are not as technologically literate are at greater risk of falling victim to these social engineering attempts. These vulnerable sections of the population, typically overlooked by technology services, should be provided with more resources for protecting them from malicious actors.
Although current technology is capable of searching for specific keywords, it is not always capable of identifying the more advanced attacks that are becoming increasingly common. With recent advancements in Natural Language Processing models, however, we realized the potential that they had in the space of recognizing, identifying, and analyzing human-readable text for malicious content. We decided to use these new tools to identify higher-level and abstract features of potentially harmful messages, and use these analyses to better filter out those messages.
## What it does
We first collect text-based messages from different sources, such as messages and emails. Using these messages and existing threat knowledge, we engineered prompts for OpenAI to detect features that are common to modern attacks, including concrete topics (such as Finance) but also abstract concepts (such as friendliness or perceived closeness). We pass this information into a classifier, which generates a predictive score about the likelihood that this message is malicious or fraudulent.
We filter out highly suspicious messages and allow through very safe messages. We recognize that human involvement is important to developing an evolving tool, both through catching edge cases and data generation. Therefore, we have a dashboard for trained analysts, to which we provide information for both oversight and processing messages with less-clear classification.
## Ethics
The power behind Pistchio’s ability to detect potentially fraudulent behavior comes from a highly advanced Natural Language Processing tool. This means that the messages received by the user are scanned and analyzed before a final safety rating can be determined. With this in mind, it is most important that Pistachio recognizes and upholds the user’s data security and privacy. Additionally, the team behind Pistachio recognizes the existence of potential ethical risks associated with AI and large-scale data analytics products.
Some areas of concern have been addressed by our team in the following ways:
**Privacy:** Our users will be made aware of the way in which our Natural Language Processing tool analyzes their information, including how their data will be stored, and who it can be accessed by. We require that users are fully informed of how we use their data before giving their consent.
One major issue with our platform is that senders of messages cannot give informed consent before sending their initial message. We acknowledge that people expect privacy in their communications, but message gathering is essential to our platform. To better address this step in the future, we will research and carefully follow the laws and regulations around third party collection and viewing of communication.
Pistachio will need to collect sensitive information or private communications that the user may not want to reveal to others. Therefore, we decided to develop an option for users to not filter out certain known forms of communication. This feature, enabled by default and configurable by users, will allow them to protect their private communications, though at the slight expense of product effectiveness. We also hope to develop personal information filters and message scrambling services to ensure that our analysts do not have intrusive levels of access to personal data.
We also try to avoid data security issues by storing only data that we need to provide our services. This includes deleting the raw messages as soon as processing them is complete. Only portions of messages flagged as highly likely to contain malicious content will be stored for the purpose of justification and training.
**Bias:** The nature of Natural Language Processing models requires a large amount of data to be trained on. Since our model is trained in English, it may be the case that text containing minor grammatical, spelling, or syntactic errors will be flagged as potentially fraudulent. This means that some users for whom English is not their primary language of communication may experience mistakenly labeled fraudulent messages more frequently compared to native English speakers. This may also extend to those from other underrepresented backgrounds, whose language usage patterns may trigger more investigation.
To address these issues, future training will use data collected from a diverse set of sources and reflect the different way in which all people communicate. We can also give extra attention to sources that may be underrepresented, which would allow us to provide a more equitable analysis. Finally, we need to make sure that our models are equitable through thorough testing on this equitable data before deployment.
**Reliance on technology:** We recognize that technology is fallible and that something as important as security should always have human oversight. Therefore, we designed our technological tools to support, rather than replace our analysts, who are essential to ensuring user safety.
The use of technology to identify fraudulent activity may also result in a reliance on technology over human judgment, which could lead to an erosion of critical thinking skills in detecting fraudulent behavior. It is important for Pistachio to educate users on how to identify and handle suspicious messages themselves, rather than solely relying on technology to do so.
## What's next for Pistachio
We want to continue to develop the most advanced and effective form of fraud protection for our customers. Some improvements that we want to develop to achieve this goal include forming better methods of gathering data to train our model to identify fraud and providing support for more mediums including phone calls and social media messaging. In order to keep up with the advancing sophistication of scamming techniques, we want to be able to rapidly iterate new and improved analysis patterns to stay ahead of the curve.
## How we built it
Palantir Foundry, Python code, and OpenAI API
## Challenges we ran into
New technology - We were new to Palantir Foundry and accessing the OpenAI API. There were a few configuration issues we ran into, we really appreciated all the help from the Palantir team to get us through these!
Lack of training data - Publicly accessible data is quite old and not relevant. The only data we had was the messages and spam from our own devices, which was limited.
## Accomplishments that we're proud of
1. Quickly learning how to use Palantir Foundry.
2. Solving a real world problem using cutting edge technology. | ## Inspiration
We are tired of being forgotten and not recognized by others for our accomplishments. We built a software and platform that helps others get to know each other better and in a faster way, using technology to bring the world together.
## What it does
Face Konnex identifies people and helps the user identify people, who they are, what they do, and how they can help others.
## How we built it
We built it using Android studio, Java, OpenCV and Android Things
## Challenges we ran into
Programming Android things for the first time. WiFi not working properly, storing the updated location. Display was slow. Java compiler problems.
## Accomplishments that we're proud of.
Facial Recognition Software Successfully working on all Devices, 1. Android Things, 2. Android phones.
Prototype for Konnex Glass Holo Phone.
Working together as a team.
## What we learned
Android Things, IOT,
Advanced our android programming skills
Working better as a team
## What's next for Konnex IOT
Improving Facial Recognition software, identify and connecting users on konnex
Inputting software into Konnex Holo Phone | losing |
## Inspiration
NFTs, or Non-Fungible Tokens, are a new form of digital asset stored on blockchains. One particularly popular use of NFTs is to record ownership of digital art. NFTs offer several advantages over traditional forms of art, including:
1. The ledger of record is a globally distributed database, meaning there is persistent, incorruptible verification of who the actual owner is
2. The art can be transferred electronically and stored digitally, saving storage and maintenance costs while simultaneously providing a memetic vehicle that can be seen by billions of people over the internet
3. Royalties can be programmatically paid out to the artist whenever the NFT is transferred between parties, leading to fairer compensation and better funding for the creative industry
These advantages helped [the total value of NFTs reach $41 billion by the end of 2021](https://markets.businessinsider.com/news/currencies/nft-market-41-billion-nearing-fine-art-market-size-2022-1). Clearly, there is a huge market for NFTs.
However, many people do not know the first thing about creating an NFT, and the process can be quite technically complex. Artists often hire developers to turn their art into NFTs, and [entire businesses have been created just to help people create NFTs](https://synapsereality.io/services/synapse-new-nft-services/).
## What it does
SimpleMint is a web app that lets anyone create an NFT in a few clicks. All it requires is for the user to upload an image and give the NFT a name. Upon clicking ‘mint now’, an NFT is created with the image stored on IPFS and automatically deposited into the creator's blockchain wallet. The underlying blockchain is [Hedera](https://hedera.com/), a carbon-negative, enterprise-grade blockchain trusted by companies like Google and Boeing.
## How we built it
* React app
* IPFS for storage of uploaded images
* Hedera blockchain to create, mint, and store the NFTs (a rough sketch of the mint flow is shown below)
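To show how these pieces fit together, here is a minimal sketch of the mint flow (not our exact code): it pins the uploaded image to IPFS with `ipfs-http-client`, then creates and mints a one-of-one NFT with the Hedera JavaScript SDK, storing the IPFS CID as the token metadata. The account credentials, the `ART` symbol, and the IPFS endpoint are placeholders.

```javascript
// Minimal sketch of the mint flow (placeholder credentials, no error handling)
const { create } = require("ipfs-http-client");
const {
  Client,
  AccountId,
  PrivateKey,
  TokenCreateTransaction,
  TokenMintTransaction,
  TokenType,
  TokenSupplyType,
} = require("@hashgraph/sdk");

async function mintNft(imageBytes, tokenName) {
  // 1. Pin the uploaded image to IPFS and keep its content identifier (CID)
  const ipfs = create({ url: "https://ipfs.infura.io:5001" }); // placeholder endpoint
  const { cid } = await ipfs.add(imageBytes);

  // 2. Connect to Hedera as the creator (placeholder environment variables)
  const operatorId = AccountId.fromString(process.env.HEDERA_ACCOUNT_ID);
  const operatorKey = PrivateKey.fromString(process.env.HEDERA_PRIVATE_KEY);
  const client = Client.forTestnet().setOperator(operatorId, operatorKey);

  // 3. Create a one-of-one non-fungible token class with the creator as treasury
  const createTx = await new TokenCreateTransaction()
    .setTokenName(tokenName)
    .setTokenSymbol("ART")
    .setTokenType(TokenType.NonFungibleUnique)
    .setSupplyType(TokenSupplyType.Finite)
    .setMaxSupply(1)
    .setTreasuryAccountId(operatorId)
    .setSupplyKey(operatorKey.publicKey)
    .freezeWith(client)
    .sign(operatorKey);
  const createReceipt = await (await createTx.execute(client)).getReceipt(client);
  const tokenId = createReceipt.tokenId;

  // 4. Mint serial #1, storing the IPFS CID as the NFT's metadata
  const mintTx = await new TokenMintTransaction()
    .setTokenId(tokenId)
    .setMetadata([Buffer.from(`ipfs://${cid}`)])
    .freezeWith(client)
    .sign(operatorKey);
  await (await mintTx.execute(client)).getReceipt(client);

  return { tokenId: tokenId.toString(), cid: cid.toString() };
}
```

In the full app, the minted token ends up in the creator's connected wallet as described above; the sketch glosses over the token association and transfer steps.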
## Challenges we ran into
* Figuring out how to use the IPFS JS SDK to programmatically store & retrieve image files
* Figuring out wallet authentication after the Chrome Web Store listing for the HashPack app went down, which broke the one-click wallet-connection flow. We had to check Hedera’s Discord to find an alternative solution
## Accomplishments that we're proud of
* Building a working MVP in a day!
## What we learned
* How IPFS works
* How to build on Hedera with the javascript SDK
## What's next for SimpleMint
We hope that both consumers and creators will be able to conveniently turn their images into NFTs to create art that will last forever and partake in the massive financial upside of this new technology. | ## Inspiration
Medical hospitals often conduct “discharge interviews” with healing patients deemed healthy enough to leave the premises. The purpose of this test is to determine what accommodations patients will require post-hospital-admittance. For instance, elderly patients who live in multi-level houses might require extra care and attention. The issue, however, is that doctors and nurses must conduct such interviews and then spend about 30 to 40 minutes afterwards documenting the session. This is an inefficient use of time. A faster, streamlined system would allow medical professionals to spend their time on more pressing matters, such as examining or interviewing more patients.
## What it does
Airscribe is a smart, automated interview transcriber that is also able to do a cursory examination of the exchange and give recommendations to the patient. (We don’t intend for this to be the sole source of recommendations, however. Doctors will likely review the documents afterwards and follow up accordingly.)
## How we built it
Speech-to-text: built on IBM Watson.
Text-to-better-text: Interviewer and patient’s comments are processed with a Python script that breaks text into Q & A dialogue. Algorithm evaluates patient’s response to find key information, which is displayed in easy-to-read, standardized survey format. Based on patient’s responses to questions, suggestions and feedback for the patient after hospital discharge are generated.
## Challenges we ran into
Getting speech recognition to work, recognizing question types, recognizing answer types, formatting them into the HTML.
## Accomplishments that we're proud of
After talking with health professionals we decided this idea was a better direction than our first idea and completed it in 8 hours.
## What we learned
Teamwork! Javascript! Python!
Web development for mobile is difficult (\* cough \* phonegap) !
## What's next for Airscribe
* Smarter, more flexible natural language processing.
* Draw from a database and use algorithms to generate better feedback/suggestions.
* A more extensive database of questions. | ## Inspiration
With NFTs, Crypto, and Blockchain becoming more mainstream, online marketplaces like digital art platforms have never been more relevant. However, until recently, life has been very difficult for artists to portray and sell their art in art galleries. Our team saw this as an opportunity to not only introduce artists to crypto space and enable them to sell digital art online but also donate to modern social causes.
## What it does
1. Through our platform artists can upload their digital art and create an NFT for the art.
2. Anyone on the platform can buy the artists art (NFT) using crypto.
3. When a consumer buys the NFT, the ownership will transfer from seller to buyer.
4. Ten percent or more (if the artist chooses) of the proceeds will go to a charitable organization of the artists choice.
5. Depending on the amount of money donated towards a cause will determine the artist's position on our leaderboard.
## How we built it
1. Blockchain, Vue.js, Hedera Token Service, NFT.
2. React, Node.js, Docker.
3. Frontend are using HTML, CSS, and Javascript.
4. We also used Figma .
## Challenges we ran into
Most of our team members barely knew anything about cryptocurrencies and blockchains. We took on this project knowing full on that we will have to do a lot of work (and suffer), and we are proud that we were able to build this project. We ran into errors on front-end and back-end parts but by searching on Google and reading through documentations, we figured them out.
## Accomplishments that we're proud of
1. Creating an interface that creates NFT's
2. Creating a frontend that properly represents our platform
## What we learned
A lot about digital collectibles, NFT, Crypto currency, Block chain, Front-end development, Figma, React and many more technologies.
## What's next for Influenza
In future we hope to expand on the website, adding more functionality to the components and ensuring security of transactions and streamlining the NFT creation process. | partial |
## Inspiration
We're computer science students, need we say more?
## What it does
"Single or Nah" takes in the name of a friend and predicts if they are in a relationship, saving you much time (and face) in asking around. We pull relevant Instagram data including posts, captions, and comments to drive our Azure-powered analysis. Posts are analyzed for genders, ages, emotions, and smiles -- with each aspect contributing to the final score. Captions and comments are analyzed for their sentiment, which give insights into one's relationship status. Our final product is a hosted web-app that takes in a friend's Instagram handle and generate a percentage denoting how likely they are to be in a relationship.
## How we built it
Our first problem was obtaining Instagram data. The tool we use is a significantly improved version of an open-source Instagram scraper API (<https://github.com/rarcega/instagram-scraper>). The tool originally ran as a Python command line argument, which was impractical to use in a WebApp. We modernized the tool, giving us increased flexibility and allowing us to use it within a Python application.
We run Microsoft's Face-API on the target friend's profile picture to guess their gender and age -- this will be the age range we are interested in. Then, we run through their most recent posts, using Face-API to capture genders, ages, emotions, and smiles of people in those posts to finally derive a sub-score that will factor into the final result. We guess that the more happy and more pictures with the opposite gender, you'd be less likely to be single!
We take a similar approach to captions and comments. First, we used Google's Word2vec to generate semantically similar words to certain keywords (love, boyfriend, girlfriend, relationship, etc.) as well as assign weights to those words. Furthermore, we included Emojis (is usually a good giveaway!) into our weighting scheme[link](https://gist.github.com/chrisfischer/144191eae03e64dc9494a2967241673a). We use Microsoft's Text Analytics API on this keywords-weight scheme to obtain a sentiment sub-score and a keyword sub-score.
Once we have these sub-scores, we aggregate them into a final percentage, denoting how likely your friend is single. It was time to take it live. We integrated all the individual calculations and aggregations into a Django., then hosted all necessary computation using Azure WebApps. Finally, we designed a simple interface to allow inputs as well as to display results with a combination of HTML, CSS, JavaScript, and JQuery.
## Challenges we ran into
The main challenge was that we were limited by our resources. We only had access to basic accounts for some of the software we used, so we had to be careful how on often and how intensely we used tools to prevent exhausting our subscriptions. For example, we limited the number of posts we analyzed per person. Also, our Azure server uses the most basic service, meaning it does not have enough computing power to host more than a few clients.
The application only works on "public" Instagram ideas, so we were unable to find a good number of test subjects to fine tune our process. For the accounts we did have access to, the application produced a reasonable answer, leading us to believe that the app is a good predictor.
## Accomplishments that we're proud of
We proud that we were able to build this WebApp using tools and APIs that we haven't used before. In the end, our project worked reasonably well and accurately. We were able to try it on people and get a score which is an accomplishment in that. Finally, we're proud that we were able to create a relevant tool in today's age of social media -- I mean I know I would use this app to narrow down who to DM.
## What we learned
We learned about the Microsoft Azure API (Face API, Text Analytics API, and web hosting), NLP techniques, and full stack web development. We also learned a lot of useful software development techniques such as how to better use git to handle problems, creating virtual environments, as well as setting milestones to meet.
## What's next for Single or Nah
The next steps for Single or Nah is to make the website and computations more scalable. More scalability allows more people to use our product to find who they should DM -- and who doesn't want that?? We also want to work on accuracy, either by adjusting weights given more data to learn from or by using full-fledged Machine Learning. Hopefully more accuracy would save "Single or Nah" from some awkward moments... like asking someone out... who isn't single... | ## Inspiration
Companies lack insight into their users, audiences, and marketing funnel.
This is an issue I've run into on many separate occasions. Specifically,
* while doing cold marketing outbound, need better insight onto key variables of successful outreach
* while writing a blog, I have no idea who reads it
* while triaging inbound, which users do I prioritize
Given a list of user emails, Cognito scrapes the internet finding public information about users and the companies they work at. With this corpus of unstructured data, Cognito allows you to extract any relevant piece of information across users. An unordered collection of text and images becomes structured data relevant to you.
## A Few Example Use Cases
* Startups going to market need to identify where their power users are and their defining attributes. We allow them to ask questions about their users, helping them define their niche and better focus outbound marketing.
* SaaS platforms such as Modal have trouble with abuse. They want to ensure people joining are not going to abuse it. We provide more data points to make better judgments such as taking into account how senior of a developer a user is and the types of companies they used to work at.
* VCs such as YC have emails from a bunch of prospective founders and highly talented individuals. Cognito would allow them to ask key questions such as what companies are people flocking to work at and who are the highest potential people in my network.
* Content creators such as authors on Substack looking to monetize their work have a much more compelling case when coming to advertisers with a good grasp on who their audience is.
## What it does
Given a list of user emails, we crawl the web, gather a corpus of relevant text data, and allow companies/creators/influencers/marketers to ask any question about their users/audience.
We store these data points and allow for advanced querying in natural language.
[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)
## How we built it
we orchestrated 3 ML models across 7 different tasks in 30 hours
* search results person info extraction
* custom field generation from scraped data
* company website details extraction
* facial recognition for age and gender
* NoSQL query generation from natural language
* crunchbase company summary extraction
* email extraction
This culminated in a full-stack web app with batch processing via async pubsub messaging. Deployed on GCP using Cloud Run, Cloud Functions, Cloud Storage, PubSub, Programmable Search, and Cloud Build.
## What we learned
* how to be really creative about scraping
* batch processing paradigms
* prompt engineering techniques
## What's next for Cognito
1. predictive modeling and classification using scraped data points
2. scrape more data
3. more advanced queries
4. proactive alerts
[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0) | ## Inspiration
The inspiration for building this project likely stemmed from the desire to empower students and make learning coding more accessible and engaging. It combines AI technology with education to provide tailored support, making it easier for students to grasp coding concepts. The goal may have been to address common challenges students face when learning to code, such as doubts and the need for personalized resources. Overall, the project's inspiration appears to be driven by a passion for enhancing the educational experience and fostering a supportive learning environment.
## What it does
The chatbot project is designed to cater to a range of use cases, with a clear hierarchy of priorities. At the highest priority level, the chatbot serves as a real-time coding companion, offering students immediate and accurate responses and explanations to address their coding questions and doubts promptly. This ensures that students can swiftly resolve any coding-related issues they encounter. Moving to the medium priority use case, the chatbot provides personalized learning recommendations. By evaluating a student's individual skills and preferences, the chatbot tailors its suggestions for learning resources, such as tutorials and practice problems. This personalized approach aims to enhance the overall learning experience by delivering materials that align with each student's unique needs. At the lowest priority level, the chatbot functions as a bridge, facilitating connections between students and coding mentors. When students require more in-depth assistance or guidance, the chatbot can help connect them with human mentors who can provide additional support beyond what the chatbot itself offers. This multi-tiered approach reflects the project's commitment to delivering comprehensive support to students learning to code, spanning from immediate help to personalized recommendations and, when necessary, human mentorship.
## How we built it
The development process of our AI chatbot involved a creative integration of various Language Models (**LLMs**) using an innovative technology called **LangChain**. We harnessed the capabilities of LLMs like **Bard**, **ChatGPT**, and **PaLM**, crafting a robust pipeline that seamlessly combines all of them. This integration forms the core of our powerful AI bot, enabling it to efficiently handle a wide range of coding-related questions and doubts commonly faced by students. By unifying these LLMs, we've created a chatbot that excels in providing accurate and timely responses, enhancing the learning experience for students.
Moreover, our project features a **centralized database** that plays a pivotal role in connecting students with coding mentors. This database serves as a valuable resource, ensuring that students can access the expertise and guidance of coding mentors when they require additional assistance. It establishes a seamless mechanism for real-time interaction between students and mentors, fostering a supportive learning environment. This element of our project reflects our commitment to not only offer AI-driven solutions but also to facilitate meaningful human connections that further enrich the educational journey.
In essence, our development journey has been marked by innovation, creativity, and a deep commitment to addressing the unique needs of students learning to code. By integrating advanced LLMs and building a robust infrastructure for mentorship, we've created a holistic AI chatbot that empowers students and enhances their coding learning experience.
## Challenges we ran into
Addressing the various challenges encountered during the development of our AI chatbot project involved a combination of innovative solutions and persistent efforts. To conquer integration complexities, we invested substantial time and resources in research and development, meticulously fine-tuning different Language Models (LLMs) such as Bard, ChatGPT, and Palm to work harmoniously within a unified pipeline. Data quality and training challenges were met through an ongoing commitment to curate high-quality coding datasets and an iterative training process that continually improved the chatbot's accuracy based on real-time user interactions and feedback.
For real-time interactivity, we optimized our infrastructure, leveraging cloud resources and employing responsive design techniques to ensure low-latency communication and enhance the overall user experience. Mentor matching algorithms were refined continuously, considering factors such as student proficiency and mentor expertise, making the pairing process more precise. Ethical considerations were addressed by implementing strict ethical guidelines and bias audits, promoting fairness and transparency in chatbot responses.
User experience was enhanced through user-centric design principles, including usability testing, user interface refinements, and incorporation of user feedback to create an intuitive and engaging interface. Ensuring scalability involved the deployment of elastic cloud infrastructure, supported by regular load testing and optimization to accommodate a growing user base.
Security was a paramount concern, and we safeguarded sensitive data through robust encryption, user authentication protocols, and ongoing cybersecurity best practices, conducting regular security audits to protect user information. Our collective dedication, collaborative spirit, and commitment to excellence allowed us to successfully navigate and overcome these challenges, resulting in a resilient and effective AI chatbot that empowers students in their coding education while upholding the highest standards of quality, security, and ethical responsibility.
## Accomplishments that we're proud of
Throughout the development and implementation of our AI chatbot project, our team has achieved several accomplishments that we take immense pride in:
**Robust Integration of LLMs:** We successfully integrated various Language Models (LLMs) like Bard, ChatGPT, and Palm into a unified pipeline, creating a versatile and powerful chatbot that combines their capabilities to provide comprehensive coding assistance. This accomplishment showcases our technical expertise and innovation in the field of natural language processing.
**Real-time Support**: We achieved the goal of providing real-time coding assistance to students, ensuring they can quickly resolve their coding questions and doubts. This accomplishment significantly enhances the learning experience, as students can rely on timely support from the chatbot.
**Personalized Learning Recommendations**: Our chatbot excels in offering personalized learning resources to students based on their skills and preferences. This accomplishment enhances the effectiveness of the learning process by tailoring educational materials to individual needs.
**Mentor-Matching Database**: We established a centralized database for coding mentors, facilitating connections between students and mentors when more in-depth assistance is required. This accomplishment emphasizes our commitment to fostering meaningful human connections within the digital learning environment.
**Ethical and Bias Mitigation**: We implemented rigorous ethical guidelines and bias audits to ensure that the chatbot's responses are fair and unbiased. This accomplishment demonstrates our dedication to responsible AI development and user fairness.
**User-Centric Design**: We created an intuitive and user-friendly interface that simplifies the interaction between students and the chatbot. This user-centric design accomplishment enhances the overall experience for students, making the learning process more engaging and efficient.
**Scalability**: Our chatbot's architecture is designed to scale efficiently, allowing it to accommodate a growing user base without compromising performance. This scalability accomplishment ensures that our technology remains accessible to a broad audience.
**Security Measures**: We implemented robust security protocols to protect user data, ensuring that sensitive information is safeguarded. Regular security audits and updates represent our commitment to user data privacy and cybersecurity.
These accomplishments collectively reflect our team's dedication to advancing education through technology, providing students with valuable support, personalized learning experiences, and access to coding mentors. We take pride in the positive impact our AI chatbot has on the educational journey of students and our commitment to responsible and ethical AI development.
## What we learned
The journey of developing our AI chatbot project has been an enriching experience, filled with valuable lessons that have furthered our understanding of technology, education, and teamwork. Here are some of the key lessons we've learned:
**Complex Integration Requires Careful Planning**: Integrating diverse Language Models (LLMs) is a complex task that demands meticulous planning and a deep understanding of each model's capabilities. We learned the importance of a well-thought-out integration strategy.
**Data Quality Is Paramount**: The quality of training data significantly influences the chatbot's performance. We've learned that meticulous data curation and continuous improvement are essential to building an accurate AI model.
**Real-time Interaction Enhances Learning**: The ability to provide real-time coding assistance has a profound impact on the learning experience. We learned that prompt support can greatly boost students' confidence and comprehension.
**Personalization Empowers Learners**: Tailoring learning resources to individual students' needs is a powerful way to enhance education. We've discovered that personalization leads to more effective learning outcomes.
**Mentorship Matters**: Our mentor-matching database has highlighted the importance of human interaction in education. We learned that connecting students with mentors for deeper assistance is invaluable.
Ethical AI Development Is Non-Negotiable: Addressing ethical concerns and bias in AI systems is imperative. We've gained insights into the importance of transparent, fair, and unbiased AI interactions.
**User Experience Drives Engagement**: A user-centric design is vital for engaging students effectively. We've learned that a well-designed interface improves the overall educational experience.
**Scalability Is Essential for Growth**: Building scalable infrastructure is crucial to accommodate a growing user base. We've learned that the ability to adapt and scale is key to long-term success.
**Security Is a Constant Priority**: Protecting user data is a fundamental responsibility. We've learned that ongoing vigilance and adherence to best practices in cybersecurity are essential.
**Teamwork Is Invaluable**: Collaborative and cross-disciplinary teamwork is at the heart of a successful project. We've experienced the benefits of diverse skills and perspectives working together.
These lessons have not only shaped our approach to the AI chatbot project but have also broadened our knowledge and understanding of technology's role in education and the ethical responsibilities that come with it. As we continue to develop and refine our chatbot, these lessons serve as guideposts for our future endeavors in enhancing learning and supporting students through innovative technology.
## What's next for ~ENIGMA
The journey of our AI chatbot project is an ongoing one, and we have ambitious plans for its future:
**Continuous Learning and Improvement**: We are committed to a continuous cycle of learning and improvement. This includes refining the chatbot's responses, expanding its knowledge base, and enhancing its problem-solving abilities.
**Advanced AI Capabilities**: We aim to incorporate state-of-the-art AI techniques to make the chatbot even more powerful and responsive. This includes exploring advanced machine learning models and technologies.
**Expanded Subject Coverage**: While our chatbot currently specializes in coding, we envision expanding its capabilities to cover a wider range of subjects and academic disciplines, providing comprehensive educational support.
**Enhanced Personalization**: We will invest in further personalization, tailoring learning resources and mentor matches even more closely to individual student needs, preferences, and learning styles.
**Multi-Lingual Support**: We plan to expand the chatbot's language capabilities, enabling it to provide support to students in multiple languages, making it accessible to a more global audience.
**Mobile Applications**: Developing mobile applications will enhance the chatbot's accessibility, allowing students to engage with it on their smartphones and tablets.
**Integration with Learning Management Systems**: We aim to integrate our chatbot with popular learning management systems used in educational institutions, making it an integral part of formal education.
**Feedback Mechanisms**: We will implement more sophisticated feedback mechanisms, allowing users to provide input that helps improve the chatbot's performance and user experience.
**Research and Publication**: Our team is dedicated to advancing the field of AI in education. We plan to conduct research and contribute to academic publications in the realm of AI-driven educational support.
**Community Engagement**: We are eager to engage with the educational community to gather insights, collaborate, and ensure that our chatbot remains responsive to the evolving needs of students and educators.
In essence, the future of our project is marked by a commitment to innovation, expansion, and a relentless pursuit of excellence in the realm of AI-driven education. Our goal is to provide increasingly effective and personalized support to students, empower educators, and contribute to the broader conversation surrounding AI in education. | winning |
As a team, we collectively understand the struggles of being forgetful. With needing to remake an Apple ID password on every login, it gets frustrating, and that’s only a fraction of what patients with Alzheimer's experience. With this in mind, we built ActivEar.
ActivEar is an application that continuously records, transfers, and transcribes data from the user. Utilizing the microphone, it transforms audio into text that is then stored in a vector database. When the user asks a question, ActivEar retrieves and searches the relevant data, using an API to provide helpful answers.
Originally, we had planned for a major hardware component that consisted of an ESP32-based microcontroller, a microphone, a speaker, and a battery. The goal was to have an inexpensive, portable method of access to ActivEar's services. Unfortunately, we ran across multiple unrecoverable issues. After we had tested each component and subsequently soldered the circuit together, we discovered a short that proved fatal to the microcontroller. Despite this, we carried on and loaned an alternative microcontroller. After a minor redesign and reassembly, we later discovered that some of the crucial libraries we had been using were no longer compatible and there were no functional equivalents. Defeated, sleep-deprived, and with 9 hours remaining, we went back to the drawing board to see how we could salvage what we had. Most of the software backend had been completed at this point, so we made the difficult decision of dropping the hardware component completely in favour of a multi-platform application. With only 8 hours remaining, we successfully put together a working browser demo as shown.
Despite facing so many challenges, we never gave up and continued to work past them. We learned the importance of working together to push through challenges and what could be achieved when we do so. Every project has its challenges, it is just a matter of working through them.
As young students, it is common for us to be overlooked and underestimated. Building this product, which is fully functional (even after our hiccups), is a huge achievement for all of us. Between building the hardware model from scratch using our resources, to designing the software, we accomplished more than we expected. | ## Inspiration
To students, notes are an academic gold standard. Many work tirelessly to create immaculate lecture transcriptions and course study guides - but not everyone writes in a style that is accessible to everyone else, making the task of sharing notes sometimes fruitless. Moreover, board notes are often messy, or unreliable. Wouldn't it be nice for a crowdsourced bank of lecture notes to be minimized to just the bare essentials? But moreover, essentials tailored to your chosen style of reading and learning?
## What it does
The hope was to build a web app for students to share class notes, and be able to merge those notes into their own custom file, which would contain their favourite parts of everyone else's notes. In reality, we made an app that authenticates a user, and not much else.
## How we built it
We built a MERN stack app - MongoDB, Express.js, and Node.js for the back end, while React for the front end - and powered the AI that we were able to put together using Google Cloud.
## Challenges we ran into
We found user registration and authentication to be a huge hurdle to our development, stunting nearly all of our wished-for features. Moreover, the lack of a solid plan and framework made fitting the back-end and the front-end together rather difficult, as these pieces were all developed seperately
## Accomplishments that we're proud of
We're proud that we all learned a great deal about web development, front-end frameworks, databasing, and why we should use Flask. We're also incredibly proud of our designer, who put together the most impressive parts of our app. | ## Inspiration
In a world where a tweet out of context can cost you your career, it is increasingly important to be in the right, but this rigidity alienates a productive and proud group of people in the world--the impulsive. Politically Correct is a solution for those who would risk a slap for a laugh and who would make light of a dark situation.
The question of whether we have stepped too far over the line often comes into our minds, sparking the endless internal debate of "Should I?" or "Should I not?" Politically Correct, leveraging both artificial and natural intelligence, gives its users the opportunity to get safe and effective feedback to end these constant internal dialogues.
## What it does
Through a carefully integrated messaging backend, this application utilizes Magnet's API to send the text that the user wants to verify to a randomly selected group of users. These user express their opinion of the anonymous user's statement, rating it as acceptable or unacceptable. This application enhances the user's experience with a seamless graphical interface with a "Feed" giving the user messages from others to judge and "My Questions" allowing users to receive feedback. The machine learning component, implemented and ready to be rolled out in "My Questions", will use an Azure-based logistic regression to automatically classify text as politically correct or incorrect.
## How I built it
Blood, sweat, tears, and Red Bull were the fuel that ignited Politically Correct. Many thanks to the kind folks from Magnet and Azure (Microsoft) for helping us early in the morning or late at night. For the build we utilized the Magnet SDK to enable easy in-app messaging and receiving between users and a random sample of users. With the messages, we added and triggered a message 'send-event' based on the click of a judgement button or an ask button. When a message was received we sorted the message (either a message to be judged or a message that is a judgement). To ensure that all judgement messages corresponded to the proper question messages we used special hash ids and stored these ids in serialized data. We updated the Feed and the MyQuestions tab on every message receive.
For Azure we used logistic regression and a looooooooonnnnnnnnggggggg list of offensive and not offensive phrases. Then after training the set to create a model, we set up a web api that will be called by Politically Correct to get an initial sentiment analysis of the message.
## Challenges We ran into
Aside from the multiple attempts of putting foot in mouth, the biggest challenges came from both platforms:
**Azure**: *Perfectionism*
While developing a workflow for the app the question of "How do I accurately predict the abuse in a statement?" often arose. As this challenge probably provokes similar doubts from Ph.Ds we would like to point to perfectionism as the biggest challenge with Azure.
**Magnet:** *Impatience*
Ever the victims, we like to blame companies for putting a lot of words in their tutorials because it makes it hard for us to skim through (we can't be bothered with learning we want to DO!!). The tutorials and documentation provided the support and gave all of us the ability to learn to sit down and iterate through a puzzle until we understood the problem.
It was difficult to figure out the format in which we would communicate the judgement of the random sample of users.
## Accomplishments that I'm proud of
We are very proud of the fact that we have a fully integrated Magnet messaging API, and the perfect implementation of the database backend.
## What I learned
Aside from the two virtues of "good enough" and "patience", we learned how to work together, how to not work together, and how to have fun (in a way that sleep deprivation can allow). In the context of technical expertise (which is what everyone is going to be plugging right here), we gained a greater depth of knowledge on the Magnet SDK, and how to create a work flow and api on Azure.
## What's next for Politically Correct
The future is always amazing, and the future for Politically Correct is better (believe it or not). The implementation for Politically Correct enjoys the partial integration of two amazing technologies Azure and Magnet, but the full assimilation (we are talking Borg level) would result in the fulfillment of two goals:
1) Dynamically train the offense of specific language by accounting for the people's responses to the message.
2) Allow integrations with various multimedia sites (i.e. Facebook and Twitter) to include an automatic submission/decline feature when there is a consensus on the statement. | losing |
## Inspiration
## What it does
The leap motion controller tracks hand gestures and movements like what an actual DJ would do (raise/lower volume, cross-fade, increase/decrease BPM, etc.) which translate into the equivalent in the VirtualDJ software. Allowing the user to mix and be a DJ without touching a mouse or keyboard. Added to this is a synth pad for the DJ to use.
## How we built it
We used python to interpret gestures using Leap Motion and translating them into how a user in VirtualDJ would do that action using the keyboard and mouse. The synth pad we made using an Arduino and wiring it to 6 aluminum "pads" that make sounds when touched.
## Challenges we ran into
Creating all of the motions and make sure they do not overlap was a big challenge. The synth pad was challenging to create also because of lag problems that we had to fix by optimizing the C program.
## Accomplishments that we're proud of
Actually changing the volume in the VirtualDJ using leap motion. That was the first one we made work.
## What we learned
Using the Leap Motion, learned how to wire an arduino to create a MIDI synthesizer.
## What's next for Tracktive
Sell to DJ Khaled! Another one. | ## Motivation
Coding skills are in high demand and will soon become a necessary skill for nearly all industries. Jobs in STEM have grown by 79 percent since 1990, and are expected to grow an additional 13 percent by 2027, according to a 2018 Pew Research Center survey. This provides strong motivation for educators to find a way to engage students early in building their coding knowledge.
Mixed Reality may very well be the answer. A study conducted at Georgia Tech found that students who used mobile augmented reality platforms to learn coding performed better on assessments than their counterparts. Furthermore, research at Tufts University shows that tangible programming encourages high-level computational thinking. Two of our team members are instructors for an introductory programming class at the Colorado School of Mines. One team member is an interaction designer at the California College of the Art and is new to programming. Our fourth team member is a first-year computer science at the University of Maryland. Learning from each other's experiences, we aim to create the first mixed reality platform for tangible programming, which is also grounded in the reality-based interaction framework. This framework has two main principles:
1) First, interaction **takes place in the real world**, so students no longer program behind large computer monitors where they have easy access to distractions such as games, IM, and the Web.
2) Second, interaction behaves more like the real world. That is, tangible languages take advantage of **students’ knowledge of the everyday, non-computer world** to express and enforce language syntax.
Using these two concepts, we bring you MusicBlox!
## What is is
MusicBlox combines mixed reality with introductory programming lessons to create a **tangible programming experience**. In comparison to other products on the market, like the LEGO Mindstorm, our tangible programming education platform **cuts cost in the classroom** (no need to buy expensive hardware!), **increases reliability** (virtual objects will never get tear and wear), and **allows greater freedom in the design** of the tangible programming blocks (teachers can print out new card/tiles and map them to new programming concepts).
This platform is currently usable on the **Magic Leap** AR headset, but will soon be expanded to more readily available platforms like phones and tablets.
Our platform is built using the research performed by Google’s Project Bloks and operates under a similar principle of gamifying programming and using tangible programming lessons.
The platform consists of a baseboard where students must place tiles. Each of these tiles is associated with a concrete world item. For our first version, we focused on music. Thus, the tiles include a song note, a guitar, a piano, and a record. These tiles can be combined in various ways to teach programming concepts. Students must order the tiles correctly on the baseboard in order to win the various levels on the platform. For example, on level 1, a student must correctly place a music note, a piano, and a sound in order to reinforce the concept of a method. That is, an input (song note) is fed into a method (the piano) to produce an output (sound).
Thus, this platform not only provides a tangible way of thinking (students are able to interact with the tiles while visualizing augmented objects), but also makes use of everyday, non-computer world objects to express and enforce computational thinking.
## How we built it
Our initial version is deployed on the Magic Leap AR headset. There are four components to the project, which we split equally among our team members.
The first is image recognition, which Natalie worked predominantly on. This required using the Magic Leap API to locate and track various image targets (the baseboard, the tiles) and rendering augmented objects on those tracked targets.
The second component, which Nhan worked on, involved extended reality interaction. This involved both Magic Leap and Unity to determine how to interact with buttons and user interfaces in the Magic leap headset.
The third component, which Casey spearheaded, focused on integration and scene development within Unity. As the user flows through the program, there are different game scenes they encounter, which Casey designed and implemented. Furthermore, Casey ensured the seamless integration of all these scenes for a flawless user experience.
The fourth component, led by Ryan, involved project design, research, and user experience. Ryan tackled user interaction layouts to determine the best workflow for children to learn programming, concept development, and packaging of the platform.
## Challenges we ran into
We faced many challenges with the nuances of the Magic Leap platform, but we are extremely grateful to the Magic Leap mentors for providing their time and expertise over the duration of the hackathon!
## Accomplishments that We're Proud of
We are very proud of the user experience within our product. This feels like a platform that we could already begin testing with children and getting user feedback. With our design expert Ryan, we were able to package the platform to be clean, fresh, and easy to interact with.
## What We learned
Two of our team members were very unfamiliar with the Magic Leap platform, so we were able to learn a lot about mixed reality platforms that we previously did not. By implementing MusicBlox, we learned about image recognition and object manipulation within Magic Leap. Moreover, with our scene integration, we all learned more about the Unity platform and game development.
## What’s next for MusicBlox: Tangible Programming Education in Mixed Reality
This platform is currently only usable on the Magic Leap AR device. Our next big step would be to expand to more readily available platforms like phones and tablets. This would allow for more product integration within classrooms.
Furthermore, we only have one version which depends on music concepts and teaches methods and loops. We would like to expand our versions to include other everyday objects as a basis for learning abstract programming concepts. | ## 💡Inspiration
Gaming is often associated with sitting for long periods of time in front of a computer screen, which can have negative physical effects. In recent years, consoles such as the Kinect and Wii have been created to encourage physical fitness through games such as "Just Dance". However, these consoles are simply incompatible with many of the computer and arcade games that we love and cherish.
## ❓What it does
We came up with Motional at HackTheValley wanting to create a technological solution that pushes the boundaries of what we’re used to and what we can expect. Our product, Motional, delivers on that by introducing a new, cost-efficient, and platform-agnostic solution to universally interact with video games through motion capture, and reimagining the gaming experience.
Using state-of-the-art machine learning models, Motional can detect over 500 features on the human body (468 facial features, 21 hand features, and 33 body features) and use these features as control inputs to any video game.
Motional operates in 3 modes: using hand gestures, face gestures, or full-body gestures. We ship certain games out-of-the-box such as Flappy Bird and Snake, with predefined gesture-to-key mappings, so you can play the game directly with the click of a button. For many of these games, jumping in real-life (body gesture) /opening the mouth (face gesture) will be mapped to pressing the "space-bar"/"up" button.
However, the true power of Motional comes with customization. Every simple possible pose can be trained and clustered to provide a custom command. Motional will also play a role in creating a more inclusive gaming space for people with accessibility needs, who might not physically be able to operate a keyboard dexterously.
## 🤔 How we built it
First, a camera feed is taken through Python OpenCV. We then use Google's Mediapipe models to estimate the positions of the features of our subject. To learn a new gesture, we first take a capture of the gesture and store its feature coordinates generated by Mediapipe. Then, for future poses, we compute a similarity score using euclidean distances. If this score is below a certain threshold, we conclude that this gesture is the one we trained on. An annotated image is generated as an output through OpenCV. The actual keyboard presses are done using PyAutoGUI.
We used Tkinter to create a graphical user interface (GUI) where users can switch between different gesture modes, as well as select from our current offering of games. We used MongoDB as our database to keep track of scores and make a universal leaderboard.
## 👨🏫 Challenges we ran into
Our team didn't have much experience with any of the stack before, so it was a big learning curve. Two of us didn't have a lot of experience in Python. We ran into many dependencies issues, and package import errors, which took a lot of time to resolve. When we initially were trying to set up MongoDB, we also kept timing out for weird reasons. But the biggest challenge was probably trying to write code while running on 2 hours of sleep...
## 🏆 Accomplishments that we're proud of
We are very proud to have been able to execute our original idea from start to finish. We managed to actually play games through motion capture, both with our faces, our bodies, and our hands. We were able to store new gestures, and these gestures were detected with very high precision and low recall after careful hyperparameter tuning.
## 📝 What we learned
We learned a lot, both from a technical and non-technical perspective. From a technical perspective, we learned a lot about the tech stack (Python + MongoDB + working with Machine Learning models). From a non-technical perspective, we worked a lot working together as a team and divided up tasks!
## ⏩ What's next for Motional
We would like to implement a better GUI for our application and release it for a small subscription fee as we believe there is a market for people that would be willing to invest money into an application that will help them automate and speed up everyday tasks while providing the ability to play any game they want the way they would like. Furthermore, this could be an interesting niche market to help gamify muscle rehabilition, especially for children. | winning |
>
> <very-low.tech>
>
>
>
# Why we did
---
Ever since the creation of online video sharing, high bandwidth usages have always been a major issue. We wanted to develop a service that allowed for low data usage video sharing, because even now some areas in the world have limited bandwidth making it very difficult to do video calls. This would even have an effect on when we would try and call our families overseas. Video adds a layer of depth that in unattainable by voice calling only.
# How we did
---
ASKEY was developed to be a solution to all of these issues. By converting the video to an Ascii string before transmission and then displaying the video through Ascii, we end up sending a significantly smaller amount of data. However, we managed to maintain a reasonable amount of video quality despite going through this process.
# What we did
---
In the end, we built a functional web application that allows you to chat and facetime people with audio at ultra-low bandwidth requirement (Whooping ~250KB/s **RAW** uncompressed). As a comparison, a 480p **RAW** video requires 44 times the bandwidth at 11MB/s! Imagine what we could achieve in the future if we work on actual compression!
We hope using this new technology we can break the locks that divide the world, one asKEY at a time.
Our url is **very-low.tech** (Pun intended) | ## What it does
MusiCrowd is an interactive democratic music streaming service that allows individuals to vote on what songs they want to play next (i.e. if three people added three different songs to the queue the song at the top of the queue will be the song with the most upvotes). This system was built with the intentions of allowing entertainment venues (pubs, restaurants, socials, etc.) to be inclusive allowing everyone to interact with the entertainment portion of the venue.
The system has administrators of rooms and users in the rooms. These administrators host a room where users can join from a code to start a queue. The administrator is able to play, pause, skip, and delete and songs they wish. Users are able to choose a song to add to the queue and upvote, downvote, or have no vote on a song in queue.
## How we built it
Our team used Node.js with express to write a server, REST API, and attach to a Mongo database. The MusiCrowd application first authorizes with the Spotify API, then queries music and controls playback through the Spotify Web SDK. The backend of the app was used primarily to the serve the site and hold an internal song queue, which is exposed to the front-end through various endpoints.
The front end of the app was written in Javascript with React.js. The web app has two main modes, user and admin. As an admin, you can create a ‘room’, administrate the song queue, and control song playback. As a user, you can join a ‘room’, add song suggestions to the queue, and upvote / downvote others suggestions. Multiple rooms can be active simultaneously, and each room continuously polls its respective queue, rendering a sorted list of the queued songs, sorted from most to least popular. When a song ends, the internal queue pops the next song off the queue (the song with the most votes), and sends a request to Spotify to play the song. A QR code reader was added to allow for easy access to active rooms. Users can point their phone camera at the code to link directly to the room.
## Challenges we ran into
* Deploying the server and front-end application, and getting both sides to communicate properly.
* React state mechanisms, particularly managing all possible voting states from multiple users simultaneously.
* React search boxes.
* Familiarizing ourselves with the Spotify API.
* Allowing anyone to query Spotify search results and add song suggestions / vote without authenticating through the site.
## Accomplishments that we're proud of
Our team is extremely proud of the MusiCrowd final product. We were able to build everything we originally planned and more. The following include accomplishments we are most proud of:
* An internal queue and voting system
* Frontloading the development & working hard throughout the hackathon > 24 hours of coding
* A live deployed application accessible by anyone
* Learning Node.js
## What we learned
Garrett learned javascript :) We learned all about React, Node.js, the Spotify API, web app deployment, managing a data queue and voting system, web app authentication, and so so much more.
## What's next for Musicrowd
* Authenticate and secure routes
* Add IP/device tracking to disable multiple votes for browser refresh
* Drop songs that are less than a certain threshold of votes or votes that are active
* Allow tv mode to have current song information and display upcoming queue with current number of votes | ## Inspiration
Currently, there are over **360 million people** across the globe who suffer from disabling hearing loss. When these individuals want to watch videos or movies online, they have no choice but to read the live captions.
Studies have shown that individuals who are hearing impaired generally **prefer having a sign-language interpreter** rather than reading closed captioning or reading subtitles. However, ASL interpreting technology available to the public is minimal.
**This is what inspired the creation of SignStream**. We identified a need for this technology and built what we believe is an **innovative technological solution.**
## What it does
**SignStream** creates a \**virtual ASL interpreter \** that translates speech from online videos in real-time. Users play videos or audio and an ASL translation is returned.
## How we built it
The initial design of the website layout was made on **Figma**. Unique graphics to SignStream were created on **Photoshop** and **Adobe-Illustrator**.
The frontend components including references to pages and interactive features were built through **React.js**.
To build the backend, **Python** and **Beautiful Soup** were first used to web scrape and store jpg/mp4 files of ASL vocabulary.
The **AssemblyAI API** was used to create the portion of the program that receives audio input and returns text.
Finally, the backend was written in **JavaScript** and interpreted by the website using **Node.js**.
## Challenges we ran into
The main challenge we faced was displaying the jpg/mp4 files onto our website in real time as audio was being input. This was primarily due to the fact that not all words can be displayed in ASL. This delay was created due to the run-time of sorting through the text output to identify words. However, by taking time to improve our code, we were able to reduce the delay.
## Accomplishments that we're proud of
As a team of first year students, we had very limited prior knowledge of web development. Elements such as coding the backend of our website and figuring out how to use the AssemblyAI API within 36 hours proved to be quite the challenge. However, after hard work and collaboration we were able to successfully implement the back end and feature a real-time streaming page.
## What we learned
During our QHacks weekend, every team member gained precious knowledge about web development and problem solving.
Our frontend developers increased their ability to build a website using React.js. For some of us, this was our first time working with the library. However, by the end of the weekend we created a fully-functioning, responsive website.
The backend developers learned a lot about working with APIs and Node.js. We learned the importance of persevering when faced with challenges and collaborating with fellow developers.
## What's next for SignStream
In the future we are looking to expand SignStream to provide an optimal all around user experience. There are a few steps we think could benefit the user experience of SignStream. First, we would like to expand our database of ASL vocabulary. There are over 10 000 signs in the ASL language and expanding our database would allow for clearer interpretation of audio for users.
We are also looking to implement SignStream as a Chrome/Web extension. This extension would have the virtual interpreter appear as a smaller window, and reduce the amount of the display required to use the program. | winning |
## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend and used OpenCV, TensorFlow, and Google Cloud speech to text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.
![Image](https://i.imgur.com/aehDk3L.gif)
Tracks movement of hands and face to provide real-time analysis on expressions and body-language.
![Image](https://i.imgur.com/tZAM0sI.gif) | ## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React leveraging Material UI, React-Motion, Socket IO and ChartJS. The backed was built on Node (with Express) as well as python for some computational tasks. We used GRPC, Docker and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-text, Google Diarization, Stanford Empath, SKLearn and Glove (for word-to-vec).
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized but we used trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging as we ran up against some limitations of javascript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, too much about computer store numbers (:p) and doing a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/> | ## Inspiration
In high school, a teacher of ours used to sit in the middle of the discussion to draw lines from one person to another on paper to identify the trends of the discussion. It's a very meaningful activity as we could see how balanced the discussion is, allowing us to highlight to people who had less chance to express their ideas. It could also be used by teacher in earlier education, to identify social challenges such as anxiety, speech disorder in children.
## What it does
The app initially is trained on a short audio from each of the member of the discussion. Using transfer learning, it will able to recognize the person talking. And, during discussion, very colorful and aesthetic lines will began drawing from person to another on REAL-TIME!
## How we built it
On the front-end, we used react and JavaScript to create a responsive and aesthetic website. Vanilla css (and a little bit of math, a.k.a, Bézier curve) to create beautiful animated lines, connecting different profiles.
On the back-end, python and tensorflow was used to train the AI model. First, the audios are pre-processed into smaller 1-second chunks of audio, before turning them into a spectrogram picture. With this, we performed transfer learning with VGG16, to extract features from the spectrograms. Then, the features are used to fit a SVM model, using scikit-learn. Subsequently, the back-end opens a web-socket with the front-end to receive stream of data, and return label of the person talking. This is also done with multi-threading to ensure all the data is being processed quickly.
## Challenges we ran into
As it's our first time with deep-learning or training an AI for that matter, it was very difficult to get started. Despite the copious amount of resources and projects done, it was hard to identify a suitable source. Different processing techniques were also needed to know, before the model could be trained. In additions, finding a platform (such as Google Colab) was also necessary to train the model in appropriate time. Finally, it was fairly hard to incorporate the project with the rest of the project. It needs to be able to process the data in real-time, while keeping the latency low.
Another major challenge that we ran into was connecting the back-end with the front-end. As we wanted it to be real-time, we had to be able to stream the raw data to the back-end. But, there were problems reconstructing the binary files into appropriate format, because we were unsure what format RecordRTC uses to record audio. There was also a problem of how much data or how frequent the data should be sent over due to our high latency of predicting (~500ms). It's a problem that we couldn't figure out in time
## Accomplishments that we're proud of
The process of training the model was really cool!!! We could never think of training a voice recognition model similar to how you would to to an image/face recognition. It was a very out-of-the-box method, that we stumbled up online. It really motivated us to get out there and see what else. We were also fairly surprised to get a proof-of-concept real-time processing with local audio input from microphone. We had to utilize threading to avoid overflowing the audio input buffer. And if you get to use threading, you know it's a cool project :D.
## What we learned
Looking back, the project was quite ambitious. BUT!! That's how we learned. We learned so much about training machine learning as well as different connection protocols over the internet. Threading was also a "wet" dream of ours, so it was really fun experimenting with the concept in python.
## What's next for Hello world
The app would be much better on mobile. So, there are plans to port the entire project to mobile (maybe learning React Native?). We're also planning on retrain the voice recognition model with different methods, and improve the accuracy as well as confidence level. Lastly, we're planning on deploying the app and sending it back to our high school teacher, who was the inspiration for this project, as well as teachers around the world for their classrooms.
## Sources
These two sources helped us tremendously in building the model:
<https://medium.com/@omkarade9578/speaker-recognition-using-transfer-learning-82e4f248ef09>
<https://towardsdatascience.com/automatic-speaker-recognition-using-transfer-learning-6fab63e34e74> | winning |
## Inspiration
The inspiration behind Matcha stemmed from our collective frustration with the overwhelming nature of email overload. We envisioned a tool that could streamline the email sorting process.
## What it does
Matcha utilizes LLMs to automatically categorize and prioritize incoming emails based on their content and context. Matcha intelligently sorts emails into relevant folders, and identifies summaries.
## How we built it
We had malvyn learn react to build the frontend, while ayush and justin worked on the backend and making a flask server to serve requests.
## Challenges we ran into
It was really hard to get the llm's working and we ended up not having enough tokens to actually do anything useful which sucked becuase it broke for the demo. also using the gmail api was hard
## Accomplishments that we're proud of
in the end we were really happy to get the gmail api working, that was really cool and just working on making a good looking slick user interface
What's next for Matcha: Email Client
we wanted too make it even better and actually integrate the llm's properly into everything. | ## Inspiration
I was inspired to create this personalized affirmation app because sometimes I feel like I’m not good enough, and looking at my achievements helps me stay motivated.
## What it does
Our application searches the user's email and calendar history, for documents it can use to uplift the user. It proves the self-deprecating users wrong and affirms people of their self worth.
## How we built it
Backend and AI Integration: Using Flask as the backend framework, we connected the frontend with AI functionalities. We integrated LangChain and Google Gemini API to enable a Retrieval-Augmented Generation (RAG) model that processes context and provides personalized responses. Google Calendar API and Gmail API were used to gather user-specific context, such as past events and emails.
Data Processing and Retrieval: We used LangChain’s text splitting capabilities to handle large amounts of data, splitting the context into manageable chunks. Chroma and Google Generative AI embeddings allowed us to efficiently store and retrieve relevant information from this context, ensuring quick and relevant responses.
Frontend: The frontend was built using HTML, CSS, and JavaScript, providing a smooth, responsive interface where users can interact with the chatbot in real-time. Flask handled all communication between the frontend and backend, ensuring the user’s requests were processed and responded to efficiently.
## Challenges we ran into
Frontend-Backend Integration: Since it was our first time connecting the frontend (HTML, CSS, and JavaScript) with the backend using Flask, we faced difficulties in properly implementing the communication between the two. Getting the data flow and requests to work smoothly took time and troubleshooting.
Code Structure Issues: As we kept adding new features and functionalities, we realized that our initial code structure wasn’t scalable. This caused confusion and delays, as we had to reorganize and refactor large portions of the codebase to ensure everything functioned correctly.
RAG Model Challenges: Our Retrieval-Augmented Generation (RAG) setup wasn’t working initially. The LLM responses were not properly using the context from the Calendar and Gmail APIs. We had to dive deep into debugging, ensuring that the context was passed correctly to the model for relevant, personalized outputs.
CORS Issues in Flask: Setting up cross-origin resource sharing (CORS) was a significant challenge for us. Since Flask was handling backend requests, we needed to enable CORS to ensure the frontend could communicate with the backend without issues, which took extra time to configure properly.
## Accomplishments that we're proud of
We are happy that we could integrate the back-end and front-end while using different frameworks in this project. Even though some of our members were new to the technology, the final product worked. Despite being fairly new to the Gemini 1.5 API, our team successfully implemented it in our project to get the responses.
## What we learned
Working on this project exposed us to different technologies, and the workshops throughout the event gave us key insights into the latest technologies being used in the industry. Working in a time-crunch environment taught us to stay focused and continue pushing despite receiving errors and bugs. Overall, we learned the importance of teamwork and planning required to succeed in a project.
## What's next for UpliftMe | ## Inspiration
Recently we have noticed an influx in elaborate spam calls, email, and texts. Although for a native English and technologically literate person in Canada, these phishing attempts are a mere inconvenience, to susceptible individuals falling for these attempts may result in heavy personal or financial loss. We aim to reduce this using our hack.
We created PhishBlock to address the disparity in financial opportunities faced by minorities and vulnerable groups like the elderly, visually impaired, those with limited technological literacy, and ESL individuals. These groups are disproportionately targeted by financial scams. The PhishBlock app is specifically designed to help these individuals identify and avoid scams. By providing them with the tools to protect themselves, the app aims to level the playing field and reduce their risk of losing savings, ultimately giving them the same financial opportunities as others.
## What it does
PhishBlock is a web application that leverages LLMs to parse and analyze email messages and recorded calls.
## How we built it
We leveraged the following technologies to create a pipeline classify potentially malicious email from safe ones.
Gmail API: Integrated reading a user’s email.
Cloud Tech: Enabled voice recognition, data processing and training models.
Google Cloud Enterprise (Vertex AI): Leveraged for secure cloud infrastructure.
GPT: Employed for natural language understanding and generation.
numPy, Pandas: Data collection and cleaning
Scikit-learn: Applied for efficient model training
## Challenges we ran into
None of our team members had worked with google’s authentication process and the gmail api, so much of saturday was devoted to hashing out technical difficulties with these things. On the AI side, data collection is an important step in training and fine tuning. Assuring the quality of the data was essential
## Accomplishments that we're proud of
We are proud of coming together as a group and creating a demo to a project in such a short time frame
## What we learned
The hackathon was just one day, but we realized we could get much more done than we initially intended. Our goal seemed tall when we planned it on Friday, but by Saturday night all the functionality we imagined had fallen into place. On the technical side, we didn’t use any frontend frameworks and built interactivity the classic way and it was incredibly challenging. However, we discovered a lot about what we’re capable of under extreme time pressures!
## What's next for PhishBlock
We used closed source OpenAI API to fine tune a GPT 3.5 Model. This has obvious privacy concerns, but as a proof of concept it demonstrate the ability of LLMs to detect phishing attempts. With more computing power open source models can be used. | losing |
## Inspiration
We wanted to draw awareness and highlight the disparity in access to quality healthcare.
## What it does
The dataset combines demographic data by county, whether it is a medically underserved community, and the average insurance data per county.
## How we built it
We looked on the internet for datasets and used R to combine them and create visualizations.
## Challenges we ran into
We had tried to fit a neural net model to the data and hopefully be able to visualize implicit bias in the model, but we ran into issues of time and had to stop there.
## Accomplishments that we're proud of
We implemented knowledge we learned from our classes this year, such as R, in order to make the datasets and the visualizations.
## What we learned
An interesting graph we have is one that graphs black percentage vs premium and white percentage vs premium. Premiums increase at a much larger rate when the percentage of black people increases.
## What's next for Visualizing Health Insurance Inequality
We want to create a library that can automatically spot implicit racial biases in models. Models need to become "race-aware" in order to prevent implicit biases. | ## Inspiration
The Among Us Game has recently become extremely popular as friends and family are trying to stay connected - from a distance. Our bot enhances these interactions with easy-to-use, fun commands.
## What it does
!mute: mutes all players in your voice channel
!unmute : unmutes all players for inputted length (generally the length of the discussion and vote time); to unmute for an indefinite amount of time, just skip the number of seconds arg
!map : displays inputted map with vent passages highlighted. Our available maps include Skeld, Polus, and Mira.
!randomize: generate some random game settings for a fun time!
!roulette: roll a random Among Us character; collect as many as you can!
!collection: view the characters that you have collected
## How we built it
We built our bot in Visual Studio Code using discord.js
## Challenges we ran into
This was three of our teammates first time coding in javascript which presented us with a bit of a learning curve.
## Accomplishments that we're proud of
We are proud to have a functioning bot that integrates well with the actual Among Us game.
## What we learned
We learned basic javascript syntax as well as how to implement a functioning bot on discord.
## What's next for Among Us Bot
There are a few improvements that could still be made to the roulette command such as only allowing for one player to claim a character per each roll. | # Stegano
## End-to-end steganalysis and steganography tool
#### Demo at <https://stanleyzheng.tech>
Please see the video before reading documentation, as the video is more brief: <https://youtu.be/47eLlklIG-Q>
A technicality, GitHub user RonanAlmeida ghosted our group after committing react template code, which has been removed in its entirety.
### What is steganalysis and steganography?
Steganography is the practice of concealing a message within a file, usually an image. It can be done one of 3 ways, JMiPOD, UNIWARD, or UERD. These are beyond the scope of this hackathon, but each algorithm must have its own unique bruteforce tools and methods, contributing to the massive compute required to crack it.
Steganoanalysis is the opposite of steganography; either detecting or breaking/decoding steganographs. Think of it like cryptanalysis and cryptography.
### Inspiration
We read an article about the use of steganography in Al Qaeda, notably by Osama Bin Laden[1]. The concept was interesting. The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest.
Another curious case was its use by Russian spies, who communicated in plain sight through images uploaded to public websites hiding steganographed messages.[2]
Finally, we were utterly shocked by how difficult these steganographs were to decode - 2 images sent to the FBI claiming to hold a plan to bomb 11 airliners took a year to decode. [3] We thought to each other, "If this is such a widespread and powerful technique, why are there so few modern solutions?"
Therefore, we were inspired to do this project to deploy a model to streamline steganalysis; also to educate others on stegography and steganalysis, two underappreciated areas.
### What it does
Our app is split into 3 parts. Firstly, we provide users a way to encode their images with a steganography technique called least significant bit, or LSB. It's a quick and simple way to encode a message into an image.
This is followed by our decoder, which decodes PNG's downloaded from our LSB steganograph encoder. In this image, our decoder can be seen decoding a previoustly steganographed image:
![](https://i.imgur.com/dge0fDw.png)
Finally, we have a model (learn more about the model itself in the section below) which classifies an image into 4 categories: unstegographed, MiPOD, UNIWARD, or UERD. You can input an image into the encoder, then save it, and input the encoded and original images into the model, and they will be distinguished from each other. In this image, we are inferencing our model on the image we decoded earlier, and it is correctly identified as stegographed.
![](https://i.imgur.com/oa0N8cc.png)
### How I built it (very technical machine learning)
We used data from a previous Kaggle competition, [ALASKA2 Image Steganalysis](https://www.kaggle.com/c/alaska2-image-steganalysis). This dataset presented a large problem in its massive size, of 305 000 512x512 images, or about 30gb. I first tried training on it with my local GPU alone, but at over 40 hours for an Efficientnet b3 model, it wasn't within our timeline for this hackathon. I ended up running this model on dual Tesla V100's with mixed precision, bringing the training time to about 10 hours. We then inferred on the train set and distilled a second model, an Efficientnet b1 (a smaller, faster model). This was trained on the RTX3090.
The entire training pipeline was built with PyTorch and optimized with a number of small optimizations and tricks I used in previous Kaggle competitions.
Top solutions in the Kaggle competition use techniques that marginally increase score while hugely increasing inference time, such as test time augmentation (TTA) or ensembling. In the interest of scalibility and low latency, we used neither of these. These are by no means the most optimized hyperparameters, but with only a single fold, we didn't have good enough cross validation, or enough time, to tune them more. Considering we achieved 95% of the performance of the State of the Art with a tiny fraction of the compute power needed due to our use of mixed precision and lack of TTA and ensembling, I'm very proud.
One aspect of this entire pipeline I found very interesting was the metric. The metric is a weighted area under receiver operating characteristic (AUROC, often abbreviated as AUC), biased towards the true positive rate and against the false positive rate. This way, as few unstegographed images are mislabelled as possible.
### What I learned
I learned about a ton of resources I would have never learned otherwise. I've used GCP for cloud GPU instances, but never for hosting, and was super suprised by the utility; I will definitely be using it more in the future.
I also learned about stenography and steganalysis; these were fields I knew very little about, but was very interested in, and this hackathon proved to be the perfect place to learn more and implement ideas.
### What's next for Stegano - end-to-end steganlaysis tool
We put a ton of time into the Steganalysis aspect of our project, expecting there to be a simple steganography library in python to be easy to use. We found 2 libraries, one of which had not been updated for 5 years; ultimantely we chose stegano[4], the namesake for our project. We'd love to create our own module, adding more algorithms for steganography and incorporating audio data and models.
Scaling to larger models is also something we would love to do - Efficientnet b1 offered us the best mix of performance and speed at this time, but further research into the new NFNet models or others could yeild significant performance uplifts on the modelling side, but many GPU hours are needed.
## References
1. <https://www.wired.com/2001/02/bin-laden-steganography-master/>
2. <https://www.wired.com/2010/06/alleged-spies-hid-secret-messages-on-public-websites/>
3. <https://www.giac.org/paper/gsec/3494/steganography-age-terrorism/102620>
4. <https://pypi.org/project/stegano/> | losing |
## Inspiration
Due to inadequate sleep, many teens and adolescents have trouble staying awake while working, often resulting in poor performance. At least once a week, more than a quarter of high school students fall asleep in class, and 22% of students fall asleep during homework.
## What it does
To detect drowsiness, No Doze would use the following sensors:
PPG (To monitor heart rate)
Thermostat (To monitor body temperature)
GSR (To check for alertness)
BMI160 (To monitor body movement)
As an individual progresses into sleep, his/her heart rate, body temperature, and alertness will decline. When No Doze detects drowsiness, the Pebble will continue to vibrate until the user starts to feel more awake.
## How we built it
We used C to program the Pebble and Cloudpebble to deploy our app.
## Challenges we ran into
Lack of knowledge on C and hardware, namely the Pebble 2.
## Accomplishments that we're proud of
Being able to make our first app!
## What I learned
Some C programming and what happens to the body when it is sleepy.
## What's next for No Doze
Many college students use their laptops during class. Implementing Microsoft Cognitive Services can help enhance the accuracy drowsy detection by using the laptop's camera to track and analyze the student's face. | ## Inspiration
We have all had that situation where our alarm goes off and we wake up super drowsy despite getting at least 8 hours of sleep. It turns out, this drowsiness is due to sleep inertia, which is at its worst when we are woken up in REM or deep sleep.
## What it does
Our device uses an 3-axis accelerometer and a noise sensor to detect when a user is likely in light sleep, which is often indicated by movement and snoring. We eliminate the use of a phone by placing all of the electronics within a stuffed bird that can be easily placed on the bed. The alarm only goes off if light sleep is detected within 30 minutes of a user-defined alarm time. The alarm time is set by the user through an app on their phone, and analytics and patterns about their sleep can be viewed on a phone app or through our web application.
## How we built it
We first started with the hardware components of the system. All of the sensors are sourced from the MLH Hardware store, so we then needed to take the time to figure out how to properly integrate all the sensors with the Arduino/Genuino 101. Following this, we began development of the Arduino code, which involved some light filtering and adjustment for ambient conditions. A web application was simultaneously developed using React. After some feedback from mentors, we determined that putting our electronics in a watch wasn't the best approach and a separate object would be the most ideal for comfort. We determined that a stuffed animal would be the best option. At the time, we did not have a stuffed animal, however we figured out that we could win one from Hootsuite. We managed to get one of these animals, which we then used as housing for our electronics. Unfortunately, this was the farthest we could get with our technical progress, so we only had time to draw some mock-ups for the mobile application.
## Challenges we ran into
We had some issues getting the sensors to work, which were fixed by using the datasheets provided at Digikey. Additionally, we needed to find a way to eliminate some of the noise from the sensors, which we eventually fixed using some low-pass, software filtering. A lot of time was invested in attempting to connect the web application to the electronics, unfortunately we did not have enough time to finish this.
## Accomplishments that we're proud of
We're proud that we were able to integrate all of the electronics into a small stuffed bird that is self-powered.
## What we learned
We learned quite a bit about the impact of sleep cycle on sleep inertia, and how this impacts daily activities and long-term health. We also learned a lot about software filtering and how to properly identify the most important information in a datasheet.
## What's next for Sleep Sweet
We would first develop the connection between the electronics and the web application, and eventually the mobile application. After this, we would prefer to get more robust electronics that are smaller in size. Lastly, we would like to integrate the electronics into a pillow or mattress which is more desirable for adults. | ## Inspiration
We were inspired by Katie's 3-month hospital stay as a child when she had a difficult-to-diagnose condition. During that time, she remembers being bored and scared -- there was nothing fun to do and no one to talk to. We also looked into the larger problem and realized that 10-15% of kids in hospitals develop PTSD from their experience (not their injury) and 20-25% in ICUs develop PTSD.
## What it does
The AR iOS app we created presents educational, gamification aspects to make the hospital experience more bearable for elementary-aged children. These features include:
* An **augmented reality game system** with **educational medical questions** that pop up based on image recognition of given hospital objects. For example, if the child points the phone at an MRI machine, a basic quiz question about MRIs will pop-up.
* If the child chooses the correct answer in these quizzes, they see a sparkly animation indicating that they earned **gems**. These gems go towards their total gem count.
* Each time they earn enough gems, kids **level-up**. On their profile, they can see a progress bar of how many total levels they've conquered.
* Upon leveling up, children are presented with an **emotional check-in**. We do sentiment analysis on their response and **parents receive a text message** of their child's input and an analysis of the strongest emotion portrayed in the text.
* Kids can also view a **leaderboard of gem rankings** within their hospital. This social aspect helps connect kids in the hospital in a fun way as they compete to see who can earn the most gems.
## How we built it
We used **Xcode** to make the UI-heavy screens of the app. We used **Unity** with **Augmented Reality** for the gamification and learning aspect. The **iOS app (with Unity embedded)** calls a **Firebase Realtime Database** to get the user’s progress and score as well as push new data. We also use **IBM Watson** to analyze the child input for sentiment and the **Twilio API** to send updates to the parents. The backend, which communicates with the **Swift** and **C# code** is written in **Python** using the **Flask** microframework. We deployed this Flask app using **Heroku**.
## Accomplishments that we're proud of
We are proud of getting all the components to work together in our app, given our use of multiple APIs and development platforms. In particular, we are proud of getting our flask backend to work with each component.
## What's next for HealthHunt AR
In the future, we would like to add more game features like more questions, detecting real-life objects in addition to images, and adding safety measures like GPS tracking. We would also like to add an ALERT button for if the child needs assistance. Other cool extensions include a chatbot for learning medical facts, a QR code scavenger hunt, and better social interactions. To enhance the quality of our quizzes, we would interview doctors and teachers to create the best educational content. | losing |
## Inspiration
I personally like to journal a lot. It's instrumental in helping my understand my own mental state during times of highs and especially times of lows. When I flip through my journal in the past, it's only then that I start to realize the patterns that developed and the big picture emotional swings I was in. Thus, I thought: why doesn't there exist a way for me to journal while having easy view and access to my mental state?
## What it does
Vibes allows you to write down your thoughts like in a traditional, but gives you further tools to help understand yourself and help you stay on track to a better mental state. When you complete a journal entry, it adds that entry to the database along with a calculated sentiment analysis value. This ranges from 0 to 1, normalized, and changes the calendar view to help you see this change immediately. Green days indicate positive days while red days indicate negative days. You can see this change immediately, and can take action if necessary.
One of those action items is to text you a message depending on how it seems your week is going. It has three distinct messages you can send yourself as a pick me up to try to motivate you further in taking care of yourself.
## How I built it
I build this through the use of stdlib's apis--airtable and sms--and ml-sentiment library for sentiment analysis. I used nodejs for server application communications with the front end.
## Challenges I ran into
I struggled with using the sentiment analysis a lot and using it, but finally figured out how to use ml-sentiment analysis and how to normalize it into a hex value for the calendar
## Accomplishments that I'm proud of
Learning how to use stdlib's airtable api to populate a database was a huge win and very motivating. It was also a test to myself to see myself grow as a developer from freshman year. I came from a place where I felt major imposter syndrome to showing myself that I was able to build this from beginning to end.
## What I learned
I have a better grasp on nodejs now along with an interesting new service, stdlib, that I see amazing amounts of potential for in my future applications.
## What's next for Vibe Journal
If this project were to be taken further, it would be developed into a full journaling application with normal text editor functions. It would also automatically connect and advertise other mental health services near you if it determines that you might be in need of one. Additionally, it would be generalized to multiple users, have authentication, and have multiple months throughout the year, and allow more options in editing abilities.
I believe this tool can help people know when they need to seek outside help if taken further! | ## Inspiration
Inspired by the sucky social life of students in McGill!
## What it does:
Meggie take care of you !
Basically, you get a virtual social life (barely) so Concordia students won't have a chance to post memes about how lonely is McGill, cool right ?
## How I built it:
Android, NLU, Naunce Mix and so many other random sh\*ts !
## Challenges I ran into:
Trying to capture the cheerful Meggie in an App!
## Accomplishments that I'm proud of:
Meggie loves everyone including herself!
## What I learned:
NLU is messy and hard to control, Android Dev need time, I need to bring a pillow next time!
## What's next for Meggie:
Future Goals:
1) The app becomes the official virtual recruiter for McGill!
2) Work on the front-end to make the app more user-friendly and add some animations (these take a lot of time)
3) Become the personal assistant for students for other universities too!
4) Meggie get an undergrad degree in communication!
Our Emails:
[andree\_kaba93@hotmail.com](mailto:andree_kaba93@hotmail.com)
[jiapeng.wu@mail.mcgill.ca](mailto:jiapeng.wu@mail.mcgill.ca)
[parth.khanna@mail.mcgill.ca](mailto:parth.khanna@mail.mcgill.ca)
[it-elias@hotmail.com](mailto:it-elias@hotmail.com) | ## 💡Inspiration
* 2020 US Census survey showed that adults were 3x more likely to screen positive for depression or anxiety in 2020 vs 2019
* A 2019 review of 18 papers summarized that wearable data could help identify depression, and coupled with behavioral therapy can help improve mental health
* 1 in 5 americans owns wearables now, and this adoption is projected to grow 18% every year
* Pattrn aims to turn activity and mood data into actionable insights for better mental health.
## 🤔 What it does
* Digests activity monitor data and produces bullet point actionable summary on health status
* Allows users to set goals on health metrics, and provide daily, weekly, month review against goals
* Based on user mood rating and memo entry, deduce activities that correlates with good and bad days
[![Screen-Shot-2022-10-16-at-1-09-40-PM.jpg](https://i.postimg.cc/MZhjdqRw/Screen-Shot-2022-10-16-at-1-09-40-PM.jpg)](https://postimg.cc/bd9JvX3V)
[![Fire-Shot-Capture-060-Pattrn-localhost.png](https://i.postimg.cc/zBQpx6wQ/Fire-Shot-Capture-060-Pattrn-localhost.png)](https://postimg.cc/bDQQJ6B0)
## 🦾 How we built it
* Frontend: ReactJS
* Backend: Flask, Google Cloud App Engine, Intersystems FHIR, Cockroach Labs DB, Cohere
## 👨🏻🤝👨🏽 Challenges / Accomplishments
* Ideating and validating took up a big chunk of this 24 hour hack
* Continuous integration and deployment, and Github collaboration for 4 developers in this short hack
* Each team member pushing ourselves to try something we have never tried before
## 🛠 Hack for Health
* Pattrn currently is able to summarize actionable steps for users to take towards a healthy lifestyle
* Apart from health goal setting and reviewing, pattrn also analyses what activities have historically correlated with "good" and "bad" days
## 🛠 Intersystems Tech Prize
* We paginated a GET and POST request
* Generated synthetic data and pushed it in 2 different time resolution (Date, Minutes)
* Endpoints used: Patient, Observation, Goals, Allergy Intolerance
* Optimized API calls in pushing payloads through bundle request
## 🛠 Cockroach Labs Tech Prize
* Spawned a serverless Cockroach Lab instance
* Saved user credentials
* Stored key mapping for FHIR user base
* Stored sentiment data from user daily text input
## 🛠 Most Creative Use of GitHub
* Implemented CICD, protected master branch, pull request checks
## 🛠 Cohere Prize
* Used sentiment analysis toolkit to parse user text input, model human languages and classify sentiments with timestamp related to user text input
* Framework designed to implement a continuous learning pipeline for the future
## 🛠 Google Cloud Prize
* App Engine to host the React app and Flask observer and linked to Compute Engine
* Hosted Cockroach Lab virtual machine
## What's next for Pattrn
* Continue working on improving sentiment analysis on user’s health journal entry
* Better understand pattern between user health metrics and daily activities and events
* Provide personalized recommendations on steps to improve mental health
* Provide real time feedback eg. haptic when stressful episode are predicted
Temporary login credentials:
Username: [norcal2@hacks.edu](mailto:norcal2@hacks.edu)
Password: norcal | losing |
## Inspiration
We’re huge believers that every individual can make a stand against climate change. We realised that there are many apps that give nutritional information regarding the food you consume, but there exists no tool that shows you the environmental impact.
We wanted to make a difference.
## What it does
EcoScan allows you to scan barcodes found on food packaging and will bring up alternative products, disposal methods, and a letter grade to represent the product’s carbon footprint. In addition, we’ve added authentication and history (to see previously scanned products). We also “reward” users with NFT coupons/savings every time they scan three products with a great “EcoScore” rating.
## How we built it
We made use of frontend, backend, blockchain (NFT), and external APIs to build EcoScan.
* ReactJS for the frontend
* QuaggaJS to scan barcodes
* Hedera to mint NFTs as coupons
* OpenFoodFacts API to get information regarding food products given a barcode
## Accomplishments that we're proud of
The proudest moment was deployment. Having worked hard overnight, the levels of serotonin when finally seeing our hard work on the World Wide Web was insane.
## What we learned
For some context, our team consisted of one frontend engineer, one jack-of-all trades, one backend engineer, and one person who had never programmed before.
Our learnings were far and wide - from learning CSS for the first time, to minting our first NFTs with Hedera. All in all, we learned how to work together, and we learned about each other - building deep bonds in the process.
## What's next for Eco-Scan
Get feedback from real users/people. We want to talk to people and get a real understanding of the problem we're solving - there are also always improvements to make for UI/UX, better data, etc. | ## Inspiration
There are very small but impactful ways to be eco-conscious 🌱 in your daily life, like using reusable bags, shopping at thrift stores, or carpooling. We know one thing for certain; people love rewards ✨. So we thought, how can we reward people for eco-conscious behaviour such as taking the bus or shopping at sustainable businesses?
We wanted a way to make eco-consciousness simple, cost-effective, rewarding, and accessible to everyone.
## What it does
Ecodes rewards you for every sustainable decision you make. Some examples are: shopping at sustainable partner businesses, taking the local transit, and eating at sustainable restaurants. Simply scanning an Ecode at these locations will allow you to claim EcoPoints that can be converted into discounts, coupons or gift cards to eco-conscious businesses. Ecodes also sends users text-based reminders when acting sustainably is especially convenient (ex. take the bus when the weather is unsafe for driving). Furthermore, sustainable businesses also get free advertising, so it's a win-win for both parties! See the demo [here](https://drive.google.com/file/d/1suT7tPila3rz4PSmoyl42G5gyAwrC_vu/view?usp=sharing).
## How we built it
We initially prototyped UI/UX using Figma, then built onto a React-Native frontend and a Flask backend. QR codes were generated for each business via python and detected using a camera access feature created in React-Native. We then moved on to use the OpenWeatherMaps API and the Twilio API in the backend to send users text-based eco-friendly reminders.
## Challenges we ran into
Implementing camera access into the app and actually scanning specific QR codes that corresponded to a unique business and number of EcoPoints was a challenge. We had to add these technical features to the front-end seamlessly without much effort from the user but also have it function correctly. But after all, there's nothing a little documentation can't solve! In the end, we were able to debug our code and successfully implement this key feature.
## Accomplishments that we're proud of
**Kemi** is proud that she learned how to implement new features such as camera access in React Native. 😙
**Akanksha** is proud that she learnt Flask and interfacing with Google Maps APIs in python. 😁
**Vaisnavi** is proud that she was able to generate multiple QR codes in python, each with a unique function. 😝
**Anna** is proud to create the logistics behind the project and learnt about frontend and backend development. 😎
Everyone was super open to working together as a team and helping one another out. As as a team, we learnt a lot from each other in a short amount of time, and the effort was worth it!
## What we learned
We took the challenge to learn new skills outside of our comfort zone, learning how to add impressive features to an app such as camera access, QR code scanning, counter updates, and aesthetic UI. Our final hack turned out to be better than we anticipated, and inspired us to develop impactful and immensely capable apps in the future :)
## What's next for Ecodes
Probably adding a location feature to send users text-based reminders to the user, informing them that an Ecode is nearby. We can use the Geolocation Google Maps API and Twilio API to implement this. Additionally, we hope to add a carpooling feature which enables users to earn points together by carpooling with one another!! | ## Inspiration
We love cooking and watching food videos. From the Great British Baking Show to Instagram reels, we are foodies in every way. However, with the 119 billion pounds of food that is wasted annually in the United States, we wanted to create a simple way to reduce waste and try out new recipes.
## What it does
lettuce enables users to create a food inventory using a mobile receipt scanner. It then alerts users when a product approaches its expiration date and prompts them to notify their network if they possess excess food they won't consume in time. In such cases, lettuce notifies fellow users in their network that they can collect the surplus item. Moreover, lettuce offers recipe exploration and automatically checks your pantry and any other food shared by your network's users before suggesting new purchases.
## How we built it
lettuce uses React and Bootstrap for its frontend and uses Firebase for the database, which stores information on all the different foods users have stored in their pantry. We also use a pre-trained image-to-text neural network that enables users to inventory their food by scanning their grocery receipts. We also developed an algorithm to parse receipt text to extract just the food from the receipt.
## Challenges we ran into
One big challenge was finding a way to map the receipt food text to the actual food item. Receipts often use annoying abbreviations for food, and we had to find databases that allow us to map the receipt item to the food item.
## Accomplishments that we're proud of
lettuce has a lot of work ahead of it, but we are proud of our idea and teamwork to create an initial prototype of an app that may contribute to something meaningful to us and the world at large.
## What we learned
We learned that there are many things to account for when it comes to sustainability, as we must balance accessibility and convenience with efficiency and efficacy. Not having food waste would be great, but it's not easy to finish everything in your pantry, and we hope that our app can help find a balance between the two.
## What's next for lettuce
We hope to improve our recipe suggestion algorithm as well as the estimates for when food expires. For example, a green banana will have a different expiration date compared to a ripe banana, and our scanner has a universal deadline for all bananas. | partial |
## 💡Inspiration
In a time of an emergency when a patient is unconscious, how can we be so sure that the first aid responders know what painkillers they can give, what condition they might've fallen ill to, and what their medical history is? Many years ago, there was an incident where a patient was allergic to the narcotics that were given to them. We want to change that and create a new standard when it comes to a new place for health records.
Patients are also more entitled to more privacy as blockchain is more secure and safe as possible. With new and more secure technologies, people can use blockchain to be confident that their information is immutable and protected with encryption and a private key that only their wearable/device has.
We give healthcare professionals access to data for their personal healthcare information ONLY when the patient has fallen ill.
## 🔍What it does
Emergenchain provides three primary uses. Firstly, we offer a secure and organized way to store one's medical records. This provides a convenient way for doctors to access a patient's medical history. Additionally, we also offer a certificate for vaccines and immunizations. This way people have an easy way to access their proof of vaccination for pandemics and other necessary immunizations. Furthermore, we offer an emergency summary sheet compiled from the information on their patient's medical history. This includes known health conditions and their risk. Finally, we have a QR code that displays the emergency information tab when scanned. This acts as a precaution for when someone is found unconscious, as first aid responders/medics can scan their QR code and immediately find details about the patient's health conditions, history, emergency contact information, and treatment methods.
## ⚙️How we built it
We designed our front end using Figma and coded it on React. For our navbar, we used react-router-dom, and for styling, we used Tailwind CSS to decorate our elements and used framer motion to add animation. All of the medical records are stored DeSo blockchain as posts, and all of our certificates are NFTs minted with DeSo. We also used DeSo login, to implement our login. We minted NFTs on Georli testnet and can be seen using the contract <https://goerli.etherscan.io/address/0x7D157EFe11FadC50ef28A509b6958F7320A6E6f9#writeContract> all on DeSo.
## 🚧Challenges we ran into
Throughout Hack the Valley, we ran into various challenges. Our biggest challenge is definitely that we did not have a programmer proficient with the back end in our team. This was a huge challenge, as we had to learn back-end programming from the basics. Additionally, this was our first time straying from liveshare, and using GitHub to its fullest. It was a challenge to learn to coordinate with each other through branches, pull requests, and code issues. Finally, we are proud to say that we have successfully coded a fully functional project in under 36 hours!
## ✔️Accomplishments that we're proud of
We are proud of surmounting the seemingly endless streams of obstacles. Specifically, learning the fundamentals of back-end programming, utilizing it in a real-world project, and learning how to integrate it with our front end. Furthermore, we are proud to have successfully coordinated our project with each other through Github, in a more organized fashion than liveshare, with properly documented source control. Finally, we are proud of ourselves for creating a fully functional program that tackles a severe issue our world faces, changing the world step by line!
## 📚What we learned
We learned many things about the fundamentals of back-end programming such as POST and GET requests, as well as interpreting and implementing algorithms through their documentation. Furthermore, we learned a lot about the DeSo Protocol library from posting records onto the blockchain, to minting NFTs, to implementing a Login system. Additionally, we learned many new features regarding Github. Specifically, how to collaborate with each other by utilizing many tools including branches, pull requests, merges, code reviews, and many more!
## 🔭What's next for Health Hub
We want to be the company that revolutionizes the world by storm and creates a new mass adoption through healthcare data. We believe that blockchain and crypto could be used to revolutionize the healthcare industry and not only create an actual handheld device but also partner with the government to have ambulances and first-aid responders check our chip or code if anyone falls unconscious to see if they have any healthcare data on them.
## 🌐 Best Domain Name from Domain.com
As a part of our project, we registered callamed.tech using Domain.com! | ## Inspiration
Our inspiration for this project was the technological and communication gap between healthcare professionals and patients, restricted access to both one’s own health data and physicians, misdiagnosis due to lack of historical information, as well as rising demand in distance-healthcare due to the lack of physicians in rural areas and increasing patient medical home practices. Time is of the essence in the field of medicine, and we hope to save time, energy, money and empower self-care for both healthcare professionals and patients by automating standard vitals measurement, providing simple data visualization and communication channel.
## What it does
What eVital does is that it gets up-to-date daily data about our vitals from wearable technology and mobile health and sends that data to our family doctors, practitioners or caregivers so that they can monitor our health. eVital also allows for seamless communication and monitoring by allowing doctors to assign tasks and prescriptions and to monitor these through the app.
## How we built it
We built the app on iOS using data from the health kit API which leverages data from apple watch and the health app. The languages and technologies that we used to create this are MongoDB Atlas, React Native, Node.js, Azure, Tensor Flow, and Python (for a bit of Machine Learning).
## Challenges we ran into
The challenges we ran into are the following:
1) We had difficulty narrowing down the scope of our idea due to constraints like data-privacy laws, and the vast possibilities of the healthcare field.
2) Deploying using Azure
3) Having to use Vanilla React Native installation
## Accomplishments that we're proud of
We are very proud of the fact that we were able to bring our vision to life, even though in hindsight the scope of our project is very large. We are really happy with how much work we were able to complete given the scope and the time that we have. We are also proud that our idea is not only cool but it actually solves a real-life problem that we can work on in the long-term.
## What we learned
We learned how to manage time (or how to do it better next time). We learned a lot about the health care industry and what are the missing gaps in terms of pain points and possible technological intervention. We learned how to improve our cross-functional teamwork, since we are a team of 1 Designer, 1 Product Manager, 1 Back-End developer, 1 Front-End developer, and 1 Machine Learning Specialist.
## What's next for eVital
Our next steps are the following:
1) We want to be able to implement real-time updates for both doctors and patients.
2) We want to be able to integrate machine learning into the app for automated medical alerts.
3) Add more data visualization and data analytics.
4) Adding a functional log-in
5) Adding functionality for different user types aside from doctors and patients. (caregivers, parents etc)
6) We want to put push notifications for patients' tasks for better monitoring. | ## Inspiration
Many people around the world do not know what health insurance is and how it can benefit them. For example, during the COVID-19 pandemic, people in India did not have access to health insurance which led to many deaths. It also caused many families to have debt due to expensive medical bills. Our hope is to educate people about health insurance and provide greater accessibility towards those resources so everyone can lead happier and healthier lives.
## What it does
A web application allows users to interact and learn about health insurance as well as various health topics. The personalized account feature lets users view and edit their health data. Users are also able to search for hospitals nearby. These resources allow users to get the information they need all in one convenient place.
## How we built it
We implemented a MERN stack for our web application. The front-end integrated React, Node. js, and Express to create a functional UI. The back-end uses MongoDB for database management and Google API Keys for the hospital mapping feature of our website. Javascript and HTML/CSS were also used to stylize the UI.
## Challenges we ran into
One of the biggest challenges was connecting the backend through a server and querying the data to be used within the application. There was also going to be a 24-hour AI Chatbot that could answer questions but due to time constraints it was not fully completed. There were challenges getting API Keys from Google which is why the backend does not have all structural features.
## Accomplishments that we're proud of
Even though there were obstacles, everyone in the team learned something new. The team’s spirit and the willingness to persevere helped everyone contribute towards the project. We are proud of the Figma prototype showcasing our idea. We are also happy to have built a web application from a basic understanding of web development. Also, being able to collaborate through Git was a huge achievement for the entire team.
## What we learned
Everyone learned how to create a full stack website using a MERN stack. This allowed us all to learn the key differences between frontend and backend development. Some of the team members learned the basics of large language models and how they could be used to train an AI Chatbot. The beginners on the team also learned how to code in Javascript to create a web application.
## What's next for Assured Health
The main goal is to create a more elegant user interface and have the backend fully functional. We hope to build a website that is useful and accessible to all as we continue on our mission to educate people about health insurance. | partial |
## Inspiration 💭
With the current staffing problem in hospitals due to the lingering effects of COVID-19, we wanted to come up with a solution for the people who are the backbone of healthcare. **Personal Support Workers** (or PSWs) are nurses who travel between private homes and nursing homes to help the elderly and disabled complete daily tasks such as bathing and eating. As the population ages over the coming years, PSWs will only become more needed and these at-home caregivers even more sought after.
## What it does 🙊
Navcare is our solution to improve the scheduling and traveling experience for Personal Support Workers in Canada. It features an optimized shift schedule that shows where and when each appointment is happening, arranged so that travel stays within a reasonable distance. Patients are assigned to nurses such that a nurse never has to travel more than 30 minutes outside of their home radius to treat a patient. It also features a map that lets a nurse see every appointment location for the day and pull up the address of each one so they can easily travel there.
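As a rough illustration of that assignment rule, a check along these lines can filter which nurses a new patient may be matched with — the 40 km/h speed assumption, the dictionary field names, and the use of straight-line (haversine) distance instead of real driving time are all simplifying assumptions on our part, not the exact production logic:

```python
from math import radians, sin, cos, asin, sqrt

AVG_SPEED_KMH = 40    # assumed average driving speed, used to convert distance into minutes
MAX_TRAVEL_MIN = 30   # a nurse should not travel more than 30 minutes from home

def travel_minutes(lat1, lng1, lat2, lng2):
    """Rough travel time between two points, based on the haversine distance."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    km = 2 * 6371 * asin(sqrt(a))
    return km / AVG_SPEED_KMH * 60

def eligible_nurses(patient, nurses):
    """Return only the nurses whose home is within a 30-minute trip of the patient."""
    return [
        n for n in nurses
        if travel_minutes(n["home_lat"], n["home_lng"], patient["lat"], patient["lng"]) <= MAX_TRAVEL_MIN
    ]
```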
## How we built it 💪
A Django backend and a ReactJS frontend, integrated with the Google Maps API.
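For a sense of how the pieces fit together, a Django REST Framework endpoint roughly like the sketch below could serve each nurse's appointments (including coordinates) as JSON for the React map to plot — the `Appointment` model and its field names here are hypothetical placeholders rather than our exact code:

```python
# appointments/views.py — hypothetical sketch, not the exact production code
from rest_framework import serializers, viewsets
from .models import Appointment   # assumed model: patient_name, start_time, address, lat, lng, nurse

class AppointmentSerializer(serializers.ModelSerializer):
    class Meta:
        model = Appointment
        fields = ["id", "patient_name", "start_time", "address", "lat", "lng"]

class AppointmentViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = AppointmentSerializer

    def get_queryset(self):
        # only return the logged-in nurse's appointments, ordered by start time
        return Appointment.objects.filter(nurse=self.request.user).order_by("start_time")
```

The React side then fetches this JSON and drops a Google Maps marker at each (lat, lng) pair.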
## Challenges we ran into 😵
Many, many challenges. To start off, we struggled to properly connect our backend API to our frontend, which was essential for passing data along and displaying it in the UI. This was resolved through extensive reading of the documentation and experimentation. Next, while integrating the Google Maps API, we repeatedly hit dependency issues and had to resolve further problems with fetching data through our Django REST API. Since it was our first time implementing an infrastructure like this, and to this extent, we initially struggled to find our footing and to correctly wire up the pieces between the frontend and backend. However, after experimenting with the process and testing different approaches, we found a combination that worked!
## Accomplishments that we're proud of 😁
We made it! We all felt as though we have learned a tremendous amount. This weekend, we really stepped out of our comfort zones with our assignments and worked on new things that we didn't think we would work on. Despite our shortcomings in our knowledge, we were still able to create an adequately functioning app with a sign-in feature, the ability to make API requests, and some of our own visuals to make the app stand out. If given a little more time, we could have definitely built an industry-level app that could be used by PSWs anywhere. The fact we were able to solve a breadth of challenges in such little time gives us hope that we BELONG in STEM!
## What's next for Navcare 😎
Hopefully, we can keep working on Navcare and add/change features based on testing with actual PSWs. Some features include easier input and tracking of information from previous visits, as well as a more robust infrastructure to support more PSWs. | ## Inspiration
Our project is inspired by the sister of one our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT’s Accessible Routes interactive map to see where the wheelchair friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations.
We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files.
We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio.
After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map.
The next step was learning how to change the color and image associated with each marker, which was very time intensive, but led us to having our custom logo for each of the markers.
Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map.
Then we combined both of the apps together atop of the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add.
A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together.
We are proud of how effectively we were able to split up our team’s roles based on everyone’s unique skills. In this way, we were able to be maximally productive and play to our strengths.
We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adopt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap
If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world. | ## Inspiration
As children of immigrants, we have seen how the elderly in our respective communities struggle to communicate and find suitable PSWs that can provide the care that they need in their homes.
Currently, finding a personal support worker in Canada is not as easy as it should be. Connecting PSWs with patients based on different criteria ensures better satisfaction for both PSWs and patients. Ultimately, we want to eliminate barriers so that all Canadians can get the full care that they are comfortable with.
## What it does
We thought of creating this web app to match patients based on various attributes, including language, patient needs, and location. The app matches PSWs with patients who may communicate better and provide better care, as well as ensuring that PSWs don't need to travel great distances.
## How we built it
We built this web app using React, HTML, and CSS for the front-end, and JavaScript processing for the back-end. This was built entirely on Repl.it, allowing for easy team collaboration and debugging.
## Challenges we ran into
We encountered some challenges with the form functionality of the web app, including obtaining and processing the data using `states`. With some help from a mentor, we were able to process form data in the back-end according to the app requirements.
## Accomplishments that we're proud of
We created a full-stack website using React.
## What we learned
We learned many React functions, states, and its use in the processing of data.
## What's next for CareConnect
For scalability, we could implement a database to store user data and PSW data.
Further, we can also implement the entire process dynamically from the perspectives of the PSWs, allowing a similar user experience for them. | winning |
## Inspiration
Not wanting to keep moving my stuff around all the time while moving between SF and Waterloo, Canada.
## What it does
It will call a Postmate to pick up your items, which will then be delivered to our secure storage facility. The Postmate will be issued a one time use code for the lock to our facility, and they will store the item. When the user wants their item back, they will simply request it and it will be there in minutes.
## How I built it
The stack is Node+Express and the app is on Android. It is hosted on Azure. We used the Postmate and Here API
## Challenges I ran into
## Accomplishments that I'm proud of
A really sleek and well built app! The API is super clean, and the Android interface is sexy
## What I learned
## What's next for Stockpile
Better integrations with IoT devices and better item management. | ## Inspiration
## What it does
The SmartBox Delivery Service allows independent couriers and their customers to easily deliver and securely receive packages. The delivery person starts by scanning the package, and the system opens and reserves a SmartBox for them, for every package they scan. When they deliver the package in the SmartBox, an indicator shows its presence, and they can mark it as delivered to lock the box. The customer can then scan the SmartBox to unlock it and retrieve their package. Amazon Alexa is also used to notify the customer of different events throughout the delivery.
## How we built it
Our system is comprised of 3 main components: the client application, the backend service, and the smart boxes themselves. We built it by leveraging Solace as a messaging service to enable instant communication between each component. This allows for instant notifications to the Amazon Echo, and perfect feedback when reserving, locking, and unlocking the SmartBox.
## Challenges we ran into
We faced several challenges during development. The first was that the node.js client for Solace was difficult to use. After spending several hours trying to use it, we had to switch to another library and so, lots of time was lost. The second major challenge was that we first tried to use React Native to build our client application. We found it to be very tedious to setup and use as well. More specifically, we spent a lot of time trying to implement NFC in our app, but React Native did not seem to have any reasonable way of reading the tags. We therefore had to make a web-app instead, and switch to QR codes.
## Accomplishments that we're proud of
We're proud of building a fully working hack that includes a good portion of hardware, which was difficult to get right. We're also happy we managed to leverage Solace, as IoT is a perfect application of their software, and empowers our hack.
## What we learned
We learned about Solace and Publisher/Subscriber architecture, as well as making reactive hardware technology using arduino. | ## Inspiration
We are inspired by...
## What it does
Doc Assist is a physician tool used to help automate and help expedite the patient diagnosis process. The Doc Assist MVP specifically will take information about a users symptoms and imaging records such as chest Xray images to predict the chance of pneumonia.
## How we built it
The front end was made with React and MUI to create the checkboxes, text and image uploading fields. Python was used for the backend, for the visualization, preprocessing, data augmentation, and training of the CNN model that predicted the chance of pneumonia based on chest X ray images. Flask was used to connect the front end to the backend.
## Challenges we ran into
* Data collection
* Integration of front end components with backend
## Accomplishments that we're proud of
* Getting an MVP completed
## What's next for DocAssist
* Include capability to diagnose other diseases
* Add ability to insert personal data
* Create more accurate predictions with classification models trained on personal data and symptoms, and add more CNN models to predict diagnosis for other types of imaging other than Xrays | partial |
## Inspiration
Relevant related news stories can be difficult to obtain on the average news website. We wanted a better way to visualize the world news and how each story was related to the others through time.
## What it does
Our application shows suggested news related news stories in time to the one the user is looking at. It displays a short blurb of each article, a main article picture and visually compares other news stories in time to the one the user is looking at.
## How we built it
We built it using d3 java, html, css3 and processing.
## Challenges we ran into
None of our team members have a background in coding, and we had to use d3 to accomplish our goal, which took a real concerted effort to learn something new.
## Accomplishments that we're proud of
The first moment our application returned a result we knew it had all been time well spent. There were trials and we moved through them, we didn't let setbacks or a previous lack of knowledge prevent us from completing out dream.
## What we learned
Learning to try and try again, learning is the persistence of a dream, and we realized that dream this weekend. We came in with an idea and no real idea how to make it, and together we figured worked until we got it.
## What's next for TWOFOLD
We are all in the same program at school, and as of now would like to continue working on the product. | ## Inspiration
As high school seniors, we are all taking a government course this year, and a key part of the class is staying up to date on current events in politics. However, in recent years, with the country being more politically polarized than ever, it's not always easy to find an unbiased source of news. On top of this, fake news has become increasingly prevalent, especially in a world where much of the media we consume is online. However, one day during a discussion of current events in class, our vice principal interrupted to share that he used several apps to remain well educated about American politics and policy. He had anything from Breitbart to MSNBC, and numerous other news applications for the sole purpose of receiving a mixed pool of ideas and viewpoints, and after hearing from all the sides he was able to develop his own informed beliefs. This practice became the inspiration for Debrief.
## What it does
Debrief is a news aggregator and analyzer. The application is entirely independent, capable of tracking current trends, and retrieving and analyzing news articles related to these topics. At its core, Debrief uses cutting edge natural language processing technology and machine learning to discover the sentiment behind each article and locate a wide set of key words and entities. Using these, Debrief builds summaries of each article and collects them all under one roof to be presented to to the reader (**thats you!**). On our website where the news is listed, we also have visual aides (like graphs) to display the variety of news each source produces, display trends between multiple providers, and help you to see which articles will provide the most diverse viewpoints.
## How we built it
We created a Node.js server that pulls articles about politics from a variety of sources, finds overarching trends, and performs natural language processing with the help of Google's Cloud API. These articles are assessed for sentiment toward people, places, and things in the article and are shown in the website in a way that readers can easily which news sources are biased towards which things.
## Challenges we ran into
Challenges we ran into was how to show to a common reader the difference in sentiment expressed by the different articles. We overcame this challenge by creating helpful graphics that explained the difference.
## Accomplishments that we are proud of
We are very proud of creating an application that we think is very topical in today's political climate. With so many sources of news and many different perspectives, we think that this website will help some Americans become more informed about politics.
## What we learned
We learned a lot about natural language processing and how to make interesting graphics that would appeal to and inform the common reader.
## What's next for debrief
Next for debrief is to take our natural language processing even further and create totally unbiased news by synthesizing the articles from our many different sources. | ## Inspiration
When reading news articles, we're aware that the writer has bias that affects the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?"
## What it does
*News Report* collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well!
## How we built it
First, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. Once we have our articles collected, our AI algorithms compare what is said in each article using a KL Sum, aggregating what is reported from all outlets to form a summary of these resources. The summary is about 5 sentences long—digested by the user with ease, with quick access to the resources that were used to create it!
## Challenges we ran into
We were really nervous about taking on a NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies that we haven't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application that’s content varied so drastically in size and available content.
## Accomplishments that we're proud of
We’re so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and iOS mobile client. On top of that we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately building a project with a large impact.
## What we learned
We learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollView’s work and interact with other UI components. We also got to worked with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience!
## What's next for News Report
We’d love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. After that we also want to be able to analyze and remove fake news sources from our spider's crawl. | losing |
## Inspiration
Some things can only be understood through experience, and Virtual Reality is the perfect medium for providing new experiences. VR allows for complete control over vision, hearing, and perception in a virtual world, allowing our team to effectively alter the senses of immersed users. We wanted to manipulate vision and hearing in order to allow players to view life from the perspective of those with various disorders such as colorblindness, prosopagnosia, deafness, and other conditions that are difficult to accurately simulate in the real world. Our goal is to educate and expose users to the various names, effects, and natures of conditions that are difficult to fully comprehend without first-hand experience. Doing so can allow individuals to empathize with and learn from various different disorders.
## What it does
Sensory is an HTC Vive Virtual Reality experience that allows users to experiment with different disorders from Visual, Cognitive, or Auditory disorder categories. Upon selecting a specific impairment, the user is subjected to what someone with that disorder may experience, and can view more information on the disorder. Some examples include achromatopsia, a rare form of complete colorblindness, and prosopagnosia, the inability to recognize faces. Users can combine these effects, view their surroundings from new perspectives, and educate themselves on how various disorders work.
## How we built it
We built Sensory using the Unity Game Engine, the C# Programming Language, and the HTC Vive. We imported a rare few models from the Unity Asset Store (All free!)
## Challenges we ran into
We chose this project because we hadn't experimented much with visual and audio effects in Unity and in VR before. Our team has done tons of VR, but never really dealt with any camera effects or postprocessing. As a result, there are many paths we attempted that ultimately led to failure (and lots of wasted time). For example, we wanted to make it so that users could only hear out of one ear - but after enough searching, we discovered it's very difficult to do this in Unity, and would've been much easier in a custom engine. As a result, we explored many aspects of Unity we'd never previously encountered in an attempt to change lots of effects.
## What's next for Sensory
There's still many more disorders we want to implement, and many categories we could potentially add. We envision this people a central hub for users, doctors, professionals, or patients to experience different disorders. Right now, it's primarily a tool for experimentation, but in the future it could be used for empathy, awareness, education and health. | ## Inspiration
Shashank Ojha, Andreas Joannou, Abdellah Ghassel, Cameron Smith
#
![](https://drive.google.com/uc?export=view&id=1griTlDOUhpmhqq7CLNtwrQnRGaBXGn72)
Clarity is an interactive smart glass that uses a convolutional neural network, to notify the user of the emotions of those in front of them. This wearable gadget has other smart glass abilities such as the weather and time, viewing daily reminders and weekly schedules, to ensure that users get the best well-rounded experience.
## Problem:
As mental health raises barriers inhibiting people's social skills, innovative technologies must accommodate everyone. Studies have found that individuals with developmental disorders such as Autism and Asperger’s Syndrome have trouble recognizing emotions, thus hindering social experiences. For these reasons, we would like to introduce Clarity. Clarity creates a sleek augmented reality experience that allows the user to detect the emotion of individuals in proximity. In addition, Clarity is integrated with unique and powerful features of smart glasses including weather and viewing daily routines and schedules. With further funding and development, the glasses can incorporate more inclusive features straight from your fingertips and to your eyes.
![](https://drive.google.com/uc?export=view&id=1eVZFYgQIm7vu5UOjp5tvgFOxvf3kv4Oj)
![](https://drive.google.com/uc?export=view&id=1L-5w9jzwKG0dLdwe-OCMUa6S2HnZeaFo)
![](https://drive.google.com/uc?export=view&id=1LP7bI9jAupQDQcfbQIszs9igVEFSuqDb)
## Mission Statement:
At Clarity, we are determined to make everyone’s lives easier, specifically to help facilitate social interactions for individuals with developmental disorders. Everyone knows someone impacted by mental health or cognitive disabilities and how meaningful those precious interactions are. Clarity wants to leap forward to make those interactions more memorable, so they can be cherished for a lifetime.
![](https://drive.google.com/uc?export=view&id=1qJgJIAwDI0jxhs1Q59WyaGAvFg5fysTt)
![](https://drive.google.com/uc?export=view&id=1AY5zbgfUB4c_4feWVVrQcuOGtn_yGc99)
We are first-time Makeathon participants who are determined to learn what it takes to make this project come to life and to impact as many lives as possible. Throughout this Makeathon, we have challenged ourselves to deliver a well-polished product that, with the purpose of doing social good. We are second-year students from Queen's University who are very passionate about designing innovative solutions to better the lives of everyone. We share a mindset to give any task our all and obtain the best results. We have a diverse skillset and throughout the hackathon, we utilized everyone's strengths to work efficiently. This has been a great learning experience for our first makeathon, and even though we have some respective experiences, this was a new journey that proved to be intellectually stimulating for all of us.
## About:
### Market Scope:
![](https://drive.google.com/uc?export=view&id=10LWCDhgfDPp1scpVI1GSAGIWrjprQtOY)
Although the main purpose of this device is to help individuals with mental disorders, the applications of Clarity are limitless. Other integral market audiences to our device include:
• Educational Institutions can use Clarity to help train children to learn about emotions and feelings at a young age. Through exposure to such a powerful technology, students can be taught fundamental skills such as sharing, and truly caring by putting themselves in someone else's shoes, or lenses in this case.
• The interview process for social workers can benefit from our device to create a dynamic and thorough experience to determine the ideal person for a task. It can also be used by social workers and emotional intelligence researchers to have better studies and results.
• With further development, this device can be used as a quick tool for psychiatrists to analyze and understand their patients at a deeper level. By assessing individuals in need of help at a faster level, more lives can be saved and improved.
### Whats In It For You:
![](https://drive.google.com/uc?export=view&id=1XbrcnIEc3eAYDmkopmwGbSew11GQv91v)
The first stakeholder to benefit from Clarity is our users. This product provides accessibility right to the eye for almost 75 million users (number of individuals in the world with developmental disorders). The emotion detection system is accessible at a user's disposal and makes it easy to recognize anyone's emotions. Whether one watching a Netflix show or having a live casual conversation, Clarity has got you covered.
Next, Qualcomm could have a significant partnership in the forthcoming of Clarity, as they would be an excellent distributor and partner. With professional machining and Qualcomm's Snapdragon processor, the model is guaranteed to have high performance in a small package.
Due to the various applications mentioned of this product, this product has exponential growth potential in the educational, research, and counselling industry, thus being able to offer significant potential in profit/possibilities for investors and researchers.
## Technological Specifications
## Hardware:
At first, the body of the device was a simple prism with an angled triangle to reflect the light at 90° from the user. The initial intention was to glue the glass reflector to the outer edge of the triangle to complete the 180° reflection. This plan was then scrapped in favour of a more robust mounting system, including a frontal clip for the reflector and a modular cage for the LCD screen. After feeling confident in the primary design, a CAD prototype was printed via a 3D printer. During the construction of the initial prototype, a number of challenges surfaced including dealing with printer errors, component measurement, and manufacturing mistakes. One problem with the prototype was the lack of adhesion to the printing bed. This resulted in raised corners which negatively affected component cooperation. This issue was overcome by introducing a ring of material around the main body. Component measurements and manufacturing mistakes further led to improper fitting between pieces. This was ultimately solved by simplifying the initial design, which had fewer points of failure. The evolution of the CAD files can be seen below.
![](https://drive.google.com/uc?export=view&id=1vDT1gGyfM7FgioSRr71yBSysGntOfiFC)
The material chosen for the prototypes was PLA plastic for its strength to weight ratio and its low price. This material is very lightweight and strong, allowing for a more comfortable experience for the user. Furthermore, inexpensive plastic allows for inexpensive manufacturing.
Clarity runs on a Raspberry Pi Model 4b. The RPi communicates with the OLED screen using the I2C protocol. It additionally powers and communicates with the camera module and outputs a signal to a button to control the glasses. The RPi handles all the image processing, to prepare the image for emotion recognition and create images to be output to the OLED screen.
### Optics:
Clarity uses two reflections to project the image from the screen to the eye of the wearer. The process can be seen in the figure below. First, the light from the LCD screen bounces off the mirror which has a normal line oriented at 45° relative to the viewer. Due to the law of reflection, which states that the angle of incidence is equal to the angle of reflection relative to the normal line, the light rays first make a 90° turn. This results in a horizontal flip in the projected image. Then, similarly, this ray is reflected another 90° against a transparent piece of polycarbonate plexiglass with an anti-reflective coating. This flips the image horizontally once again, resulting in a correctly oriented image. The total length that the light waves must travel should be equivalent to the straight-line distance required for an image to be discernible. This minimum distance is roughly 25 cm for the average person. This led to shifting the screen back within the shell to create a clearer image in the final product.
![](https://drive.google.com/uc?export=view&id=1dOHIXN2L045LHh7rCoD0iTrW_IVKf7dz)
## Software:
![](https://drive.google.com/uc?export=view&id=1DzqhM4p5y729deKQQkTw5isccUeZRCP8)
The emotion detection capabilities of Clarity smart glasses are powered by Google Cloud Vision API. The glasses capture a photo of the people in front of the user, runs the photo through the Cloud Vision model using an API key, and outputs a discrete probability distribution of the emotions. This probability distribution is analyzed by Clarity’s code to determine the emotion of the people in the image. The output of the model is sent to the user through the OLED screen using the Pillow library.
The additional features of the smart glasses include displaying the current time, weather, and the user’s daily schedule. These features are implemented using various Python libraries and a text file-based storage system. Clarity allows all the features of the smart glasses to be run concurrently through the implementation of asynchronous programming. Using the asyncio library, the user can iterate through the various functionalities seamlessly.
The glasses are interfaced through a button and the use of Siri. Using an iPhone, Siri can remotely power on the glasses and start the software. From there, users can switch between the various features of Clarity by pressing the button on the side of the glasses.
The software is implemented using a multi-file program that calls functions based on the current state of the glasses, acting as a finite state machine. The program looks for the rising edge of a button impulse to receive inputs from the user, resulting in a change of state and calling the respective function.
## Next Steps:
The next steps include integrating a processor/computer inside the glasses, rather than using raspberry pi. This would allow for the device to take the next step from a prototype stage to a mock mode. The model would also need to have Bluetooth and Wi-Fi integrated, so that the glasses are modular and easily customizable. We may also use magnifying lenses to make the images on the display bigger, with the potential of creating a more dynamic UI.
## Timelines:
As we believe that our device can make a drastic impact in people’s lives, the following diagram is used to show how we will pursue Clarity after this Makathon:
![](https://drive.google.com/uc?export=view&id=1m85rTMVAqIIK5VRbjqESn1Df-H0Pilx8)
## References:
• <https://cloud.google.com/vision>
• Python Libraries
### Hardware:
All CADs were fully created from scratch. However, inspiration was taken from conventional DIY smartglasses out there.
### Software:
### Research:
• <https://www.vectorstock.com/royalty-free-vector/smart-glasses-vector-3794640>
• <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781897/>
• <https://www.google.com/search?q=how+many+people+have+autism&rlz=1C1CHZN_enCA993CA993&oq=how+many+people+have+autism+&aqs=chrome..69i57j0i512l2j0i390l5.8901j0j9&sourceid=chrome&ie=UTF-8>
• (<http://labman.phys.utk.edu/phys222core/modules/m8/human_eye.html>)
• <https://mammothmemory.net/physics/mirrors/flat-mirrors/normal-line-and-two-flat-mirrors-at-right-angles.html> | ## Inspiration
This generation of technological innovation and human factor design focuses heavily on designing for individuals with disabilities. As such, the inspiration for our project was an application of object detection (Darknet YOLOv3) for visually impaired individuals. This user group in particular has limited visual modality, which the project aims to provide.
## What it does
Our project aims to provide the visually impaired with sufficient awareness of their surroundings to maneuver. We created a head-mounted prototype that provides the user group real-time awareness of their surroundings through haptic feedback. Our smart wearable technology uses a computer-vision ML algorithm (convolutional neural network) to help scan the user’s environment to provide haptic feedback as a warning of a nearby obstacle. These obstacles are further categorized by our algorithm to be dynamic (moving, live objects), as well as static (stationary) objects. For our prototype, we filtered through all the objects detected to focus on the nearest object to provide immediate feedback to the user, as well as providing a stronger or weaker haptic feedback if said object is near or far respectively.
## Process
While our idea is relatively simple in nature, we had no idea going in just how difficult the implementation was.
Our main goal was to meet a minimum deliverable product that was capable of vibrating based on the position, type, and distance of an object. From there, we had extra goals like distance calibration, optimization/performance improvements, and a more complex human interface.
Originally, the processing chip (the one with the neural network preloaded) was intended to be the Huawei Atlas. With the additional design optimizations unique to neural networks, it was perfect for our application. After 5 or so hours of tinkering with no progress however, we realized this would be far too difficult for our project.
We turned to a Raspberry Pi and uploaded Google’s pre-trained image analysis network. To get the necessary IO for the haptic feedback, we also had this hooked up to an Arduino which was connected to a series of haptic motors. This showed much more promise than the Huawei board and we had a functional object identifier in no time. The rest of the night was spent figuring out data transmission to the Arduino board and the corresponding decoding/output.
With only 3 hours to go, we still had to finish debugging and assemble the entire hardware rig.
## Key Takeaways
In the future, we all learned just how important having a minimum deliverable product (MDP) was. Our solution could be executed with varying levels of complexity and we wasted a significant amount of time on unachievable pipe dreams instead of focusing on the important base implementation.
The other key takeaway of this event was to be careful with new technology. Since the Huawei boards were so new and relatively complicated to set up, they were incredibly difficult to use. We did not even use the Huawei Atlas in our final implementation meaning that all our work was not useful to our MDP.
## Possible Improvements
If we had more time, there are a few things we would seek to improve.
First, the biggest improvement would be to get a better processor. Either a Raspberry Pi 4 or a suitable replacement would significantly improve the processing framerate. This would make it possible to provide more robust real-time tracking instead of tracking with significant delays.
Second, we would expand the recognition capabilities of our system. Our current system only filters for a very specific set of objects, particular to an office/workplace environment. Our ideal implementation would be a system applicable to all aspects of daily life. This means more objects that are recognized with higher confidence.
Third, we would add a robust distance measurement tool. The current project uses object width to estimate the distance to an object. This is not always accurate unfortunately and could be improved with minimal effort. | winning |
## Inspiration
We noticed a lot of stress among students around midterm season and wanted to utilize our programming skills to support them both mentally and academically. Our implementation was profoundly inspired by Jerry Xu's Simply Python Chatbot repository, which was built on a different framework called Keras. Through this project, we hoped to build a platform where students can freely reach out and find help whenever needed.
## What it does
Students can communicate their feelings, seek academic advice, or say anything else that is on their mind to the eTA. The eTA will respond with words of encouragement, point to helpful resources relevant to the student's coursework, or even make light conversation.
## How we built it
Our team used python as the main programming language including various frameworks, such as PyTorch for machine learning and Tkinter for the GUI. The machine learning model was trained by a manually produced dataset by considering possible user inputs and creating appropriate responses to given inputs.
## Challenges we ran into
It was difficult to fine tune the number of epochs of the machine learning algorithm in a way that it yielded the best final results. Using many of the necessary frameworks and packages generally posed a challenge as well.
## Accomplishments that we're proud of
We were impressed by the relative efficacy and stability of the final product, taking into account the fast-paced and time-sensitive nature of the event. We are also proud of the strong bonds that we have formed among team members through our collaborative efforts.
## What we learned
We discovered the versatility of machine learning algorithms but also their limitations in terms of accuracy and consistency under unexpected or ambiguous circumstances. We believe, however, that this drawback can be addressed with the usage of a more complex model, allotment of more resources, and a larger supply of training data.
## What's next for eTA
We would like to accommodate a wider variety of topics in the program by expanding the scope of the dataset--potentially through the collection of more diverse user inputs from a wider sample population at Berkeley. | ## Inspiration
Our inspiration for this project came from the issue we had in classrooms where many students would ask the same questions in slightly different ways, causing the teacher to use up valuable time addressing these questions instead of more pertinent and different ones.
Also, we felt that the bag of words embedding used to vectorize sentences does not make use of the sentence characteristics optimally, so we decided to create our structure in order to represent a sentence more efficiently.
## Overview
Our application allows students to submit questions onto a website which then determines whether this question is either:
1. The same as another question that was previously asked
2. The same topic as another question that was previously asked
3. A different topic entirely
The application does this by using the model proposed by the paper "Bilateral Multi-Perspective Matching for Natural Language Sentences" by Zhiguo et. al, with a new word structure input which we call "sentence tree" instead of a bag-of-words that outputs a prediction of whether the new question asked falls into one of the above 3 categories.
## Methodology
We built this project by splitting the task into multiple subtasks which could be done in parallel. Two team members worked on the web app while the other two worked on the machine learning model in order to our expertise efficiently and optimally. In terms of the model aspect, we split the task into getting the paper's code work and implementing our own word representation which we then combined into a single model.
## Challenges
Majorly, modifying the approach presented in the paper to suit our needs was challenging. On the web development side, we could not integrate the model in the web app easily as envisioned since we had customized our model.
## Accomplishments
We are proud that we were able to get accuracy close to the ones provided by the paper and for developing our own representation of a sentence apart from the classical bag of words approach.
Furthermore, we are excited to have created a novel system that eases the pain of classroom instructors a great deal.
## Takeaways
We learned how to implement research papers and improve on the results from these papers. Not only that, we learned more about how to use Tensorflow to create NLP applications and the differences between Tensorflow 1 and 2.
Going further, we also learned how to use the Stanford CoreNLP toolkit. We also learned more about web app design and how to connect a machine learning backend in order to run scripts from user input.
## What's next for AskMe.AI
We plan on finetuning the model to improve its accuracy and to also allow for questions that are multi sentence. Not only that, we plan to streamline our approach so that the tree sentence structure could be seamlessly integrated with other NLP models to replace bag of words and to also fully integrate the website with the backend. | ## Inspiration
As students, we understand the stress that builds up in our lives. Furthermore, we know how important it is to reflect on the day and plan how to improve for tomorrow. It might be daunting to look for help from others, but wouldn't an adorable dino be the perfect creature to talk to your day about? Cuteness has been scientifically proven to increase happiness, and our cute dino will always be there to cheer you up! We want students to have a way to talk about their day, get cheered up by a cute little dino buddy, and have suggestions on what to focus on tomorrow to increase their productivity! DinoMind is your mental health dino companion to improve your mental wellbeing!
## What it does
DinoMind uses the power of ML models, LLMs, and of course, cute dinos (courtesy of DeltaHacks of course <3)!! Begin your evening by opening DinoMind and clicking the **Record** button, and tell your little dino friend all about your day! A speech-to-text model will transcribe your words, and save the journal entry in the "History" tab. We then use an LLM to summarize your day for you in easy to digest bullet points, allowing you to reflect on what you accomplished. The LLM then creates action items for tomorrow, allowing you to plan ahead and have some sweet AI-aided productivity! Finally, your dino friend gives you an encouraging message if they notice you're feeling a bit down thanks to our sentiment analysis model!
## How we built it
Cloudflare was our go-to for AI/ML models. These model types used were:
1. Text generation
2. Speech-to-text
3. Text classification (in our case, it was effectively used for sentiment analysis)
We used their AI Rest API, and luckily the free plan allowed for lots of requests!
Expo was the framework we used for our front-end, since we wanted some extra oomph to our react native application.
## Challenges we ran into
A small challenge was that we really really wanted to use the Deltahacks dino mascots for this year in our application (they're just so cute!!). But there wasn't anything with each one individually online, so we realized we could take photos of the shirts and extra images of the dinos from that!!
As for the biggest challenges, that was integrating our Cloudflare requests with the front-end. We had our Cloudflare models working fully with test cases too! But once we used the recording capabilities of react native and tried sending that to our speech-to-text model, everything broke. We spent far too long adding `console.log` statements everywhere, checking the types of the variables, the data inside, hoping somewhere we'd see what the difference was in the input from our test cases and the recorded data. That was easily our biggest bottleneck, because once we moved past it, we had the string data from what the user said and were able to send it to all of our Cloudflare models.
## Accomplishments that we're proud of
We are extremely proud of our brainstorming process, as this was easily one of the most enjoyable parts of the hackathon. We were able to bring our ideas from 10+ to 3, and then developed these 3 ideas until we decided that the mental health focused journaling app seemed the most impactful, especially when mental health is so important.
We are also proud of our ability to integrate multiple AI/ML models into our application, giving each one a unique and impactful purpose that leads to the betterment of the user's productivity and mental wellbeing. Furthermore, majority of the team had never used AI/ML models in an application before, so seeing their capabilities and integrating them into a final product was extremely exciting!
Finally, our perseverance and dedication to the project carried us through all the hard times, debugging, and sleepless night (singular, because luckily for our sleep deprived bodies, this wasn't a 2 night hackathon). We are so proud to present the fruits of our labour and dedication to improving the mental health of students just like us.
## What we learned
We learned that even though every experience we've had shows us how hard integrating the back-end with the front-end can be, nothing ever makes it easier. However, your attitude towards the challenge can make dealing with it infinitely easier, and enables you to create the best product possible.
We also learned a lot about integrating different frameworks and the conflicts than can arise. For example, did you know that using expo (and by extension, react native), you make it impossible to use certain packages?? We wanted to use the `fs` package for our file systems, but it was impossible! Instead, we needed to use the `FileSystem` from `expo-file-system` :sob:
Finally, we learned about Cloudflare and Expo since we'd never used those technologies before!
## What's next for DinoMind
One of the biggest user-friendly additions to any LLM response is streaming, and DinoMind is no different. Even ChatGPT isn't always that fast at responding, but it looks a lot faster when you see each word as it's produced! Integrating streaming into our responses would make it a more seamless experience for users as they are able to immediately see a response and read along as it is generated.
DinoMind also needs a lot of work in finding mental health resources from professionals in the field that we didn't have access to during the hackathon weekend. With mental wellbeing at the forefront of our design, we need to ensure we have professional advice to deliver the best product possible! | losing |
## Inspiration
* "How do animals perceive the world compared to humans?"
* "Given that dogs have fewer cone cells, how different is their color vision?"
* "What's the visual experience of a person with color blindness?"
* "How have animals' visual adaptations influenced their survival and behavior?"
* Drawing from Carvalho et al. 2017's study on primate color vision and MIT's Byrne & Hilbert's insights on color science, we studied and realized the profound implications of color perception on species' lifestyles and evolutionary trajectories. The biological investigation inspired us for this project.
## What it does
# 1. Transformation of Webpage Visuals:
* Simulate deuteranopia by shifting red (570-750nm) to brownish-green and (380-495nm) to greyish-blue.
* Acknowledge the diverse color perceptions stemming from differences in retinal cone cells in humans and animals.
# 2. Educational & Awareness Drive:
* Offer interactive insights into diverse color perceptions, enhancing user understanding of animal behavior.
* Foster greater awareness and empathy for the richness of biodiversity, animal welfare, and protection.
* Highlight visual impairments, fostering a deeper societal empathy.
## How we built it
* HTML, JS, CSS, PHP
* Prototype with Sketch and Procreate
## Challenges we ran into
* No APIs, have to code everything by ourselves
* Extremely hard to extract loads of pictures, modify them, and replace them in a high speed.
## Accomplishments that we're proud of
-Learnt and discussed comprehensive issues related to the visually impaired
-explored various ways of image processing
* The front end works
* We learned a lot from peers
## What we learned
* This is not a beginner-friendly project :(
## What's next for Doggle(VisionVerse) Chrome Extension
1. Species-Specific Filters: Differentiate visuals based on animal classes like marine life, birds, and primates.
2. Knowledge Hub: Incorporate a backend database detailing color explanations and extend a reference section for color scheme insights.
3. Inclusive Graph Interpretation: Introduce alerts for graphics, especially scientific diagrams, that might confuse visually impaired users by assessing RGB similarity.
4. Optimization & Enhancement: Debug current functionalities and enhance the user interface for a more intuitive experience. | ## Inspiration
We wanted to create an AI-based web app that would allow visually impaired people to understand the content of photographs.
## What it does
This project uses an encoder-decoder architecture to caption images, which are then output as audio.
## How we built it
We used a ResNet50 model to create image encodings and an LSTM to predict words of the caption from the image and the already partially-generated sequence. We use Flask to create the web app.
## Challenges we ran into
* Integrating the image captioning model into a web applications
* Converting the generated caption to audio and have that outputted
## Accomplishments that we're proud of
* Successful and functioning web application that is able to caption new images and produce audio with moderate accuracy
* First time competing in a hackathon solo
## What we learned
* Flask as a framework for web application development
* The drastic differences in conditioning on partial input sequences alone vs. both the image and input sequence
## What's next for CaptionToLife
* Implementing a beam search decoder as opposed to a greedy decoder for better captioning results
* Allowing camera functionality that enables users to take a picture and have it captioned in real-time | ## Inspiration
There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life.
## What it does
We utilized a smartphone camera to analyze the surrounding and warn visually impaired people about obstacles that were in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with and we stored their name and facial attributes that can be recalled later. An Alzheimer's patient can utilize the face recognition software to remind the patient of who the person is, and when they last saw him.
## How we built it
We built our app around **Azure's APIs**, we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it.
## Challenges we ran into
Through the process of coding and developing our idea we ran into several technical difficulties. Our first challenge was to design a simple UI, so that the visually impaired people could effectively use it without getting confused. The next challenge we ran into was attempting to grab visual feed from the camera, and running them fast enough through the Azure services to get a quick response. Another challenging task that we had to accomplish was to create and train our own neural network with relevant data.
## Accomplishments that we're proud of
We are proud of several accomplishments throughout our app. First, we are especially proud of setting up a clean UI with two gestures, and voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network, that was capable of identifying faces and objects.
## What we learned
We learned how to implement **Azure Custom Vision and Azure Face APIs** into **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for the rest of us to learn about neural networking.
## What's next for BlindSpot
In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device. | losing |
## Inspiration
Since the days of yore, we have heard how an army of drones is going to bridge the last-mile accessibility gap in the supply chain for people around the world. Zipline, already finding success with this model in Rwanda and Ghana, is a good example. But while the quest to bridge the last-mile gap is well on its way to being completed, significant challenges elsewhere in the supply chain have seen no evolution at all. As technology gets more advanced, people grow more disillusioned by the possible use cases that are still not being explored.
The inspiration for this hack stems from the fact that the Treehacks Hardware Lab provided us with a drone that came with no SDK and no developer support. So I decided to build a new approach to voice recognition and instant payments, one that addresses this growing disillusionment with technology and gives drones, a technology that is no longer *"unimaginable"* or *"out of this world"*, two use cases that have not yet been explored: *refunds* and *accessibility*.
## What it does
With Dronations, you can ask a voice-activated drone to:
* `TAKEOFF`
* `TOUCHDOWN`
* `TAKE A PICTURE` (using gesture)
* `TRANSFER MONEY TO PERSON IN FRAME`
Now you can receive an instant refund for an item you have returned after buying it, and people with limited mobility can reliably send payments to people they trust using simple voice commands delivered to a drone.
## How I built it
Since the Snaptain drone provided to me did not come with an SDK, I had no choice but to find a way to hack into it. Fortunately, the flimsy drone offered little protection against adversarial attacks. After fiddling with it for a few hours, it was clear that the best way to break in, so that a computer or server could control it, was through the unsecured WiFi it uses to connect to the companion app on my phone.
I spent another few hours reverse engineering the drone protocol by sniffing packets on the channel and frequency the drone uses to receive commands from my phone. For this I used Airmon-NG on a computer running Kali Linux (thankfully provided by my senior project lab at Stanford). Figuring out how to send basic takeoff, touchdown and image commands by piggybacking on its WiFi gave me an immense amount of control over the drone, and it was time to integrate voice commands.
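To give a concrete sense of what the replay step looked like, here is a minimal sketch of firing one of the recovered commands back at the drone over UDP. The control port and payload bytes are placeholders rather than the drone's real protocol, and the shell lines in the comment are just the standard monitor-mode setup I used for sniffing.

```python
# Minimal sketch of the replay step. The control port and payload bytes are
# placeholders; the real values came out of the packet capture.
#
# Shell setup used for sniffing on Kali Linux:
#   airmon-ng start wlan0               # put the wireless card into monitor mode
#   airodump-ng wlan0mon                # find the drone's channel and BSSID
#   tcpdump -i wlan0mon -w drone.pcap   # capture the control traffic

import socket
import time

DRONE_IP = "192.168.0.1"        # the drone acts as the WiFi access point
CONTROL_PORT = 8080             # placeholder: use the port seen in the capture
TAKEOFF_PAYLOAD = bytes.fromhex("668080808099")   # placeholder command bytes

def send_command(payload: bytes, repeat: int = 20, interval: float = 0.05) -> None:
    """Replay a sniffed control payload. Cheap drones tend to expect the
    command repeated over a short window rather than sent once."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(repeat):
            sock.sendto(payload, (DRONE_IP, CONTROL_PORT))
            time.sleep(interval)
    finally:
        sock.close()

if __name__ == "__main__":
    send_command(TAKEOFF_PAYLOAD)
```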
Out of all the voice recognition platforms on offer, Houndify came through as my top choice, both because of how intuitive it is to add domains for extra functionality and because the mentors were super helpful in answering questions and helping us debug. I built three custom commands: takeoff, touchdown, and sending money.
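The dispatch from a recognized utterance to a drone action looked roughly like the sketch below, assuming Houndify's Python SDK (`TextHoundClient`). The response-parsing path and the three action stubs are illustrative placeholders, not the production code.

```python
# Sketch of the voice-command dispatch, assuming Houndify's Python SDK.
# The response-parsing path and the action stubs are placeholders.
import houndify

CLIENT_ID = "YOUR_HOUNDIFY_CLIENT_ID"
CLIENT_KEY = "YOUR_HOUNDIFY_CLIENT_KEY"

client = houndify.TextHoundClient(CLIENT_ID, CLIENT_KEY, "dronations-user", {})

def takeoff():
    print("replaying TAKEOFF payload over UDP")    # stub for the real command

def touchdown():
    print("replaying TOUCHDOWN payload over UDP")

def transfer_money():
    print("calling the Checkbook API")

def dispatch(spoken_text: str) -> None:
    response = client.query(spoken_text)
    # Assumption about the response shape: the first result carries whatever
    # intent was attached to the matched custom command in the dashboard.
    result = response.get("AllResults", [{}])[0]
    intent = str(result.get("Result", {}).get("intent", "")).upper()
    if intent == "TAKEOFF":
        takeoff()
    elif intent == "TOUCHDOWN":
        touchdown()
    elif intent == "SEND_MONEY":
        transfer_money()

dispatch("take off")
```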
Finally, I integrated Checkbook's API for sending money. The API didn't quite work as expected, and it took a while to finagle it into doing what I wanted, but eventually everything came together. The result is an end-to-end solution for instant payments using a voice-activated drone.
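For illustration, the payout call looked roughly like this sketch, which assumes Checkbook's v3 digital-check endpoint and its key:secret Authorization header; the exact path and payload fields should be double-checked against Checkbook's current docs.

```python
# Sketch of the payout request. The endpoint path, header scheme, and payload
# fields reflect my reading of Checkbook's v3 "digital check" API and should
# be verified against their current documentation.
import requests

CHECKBOOK_KEY = "YOUR_API_KEY"
CHECKBOOK_SECRET = "YOUR_API_SECRET"

def send_payment(recipient_email: str, name: str, amount: float, memo: str) -> dict:
    resp = requests.post(
        "https://api.checkbook.io/v3/check/digital",
        headers={
            "Authorization": f"{CHECKBOOK_KEY}:{CHECKBOOK_SECRET}",
            "Content-Type": "application/json",
        },
        json={
            "recipient": recipient_email,
            "name": name,
            "amount": amount,
            "description": memo,
        },
        timeout=10,
    )
    resp.raise_for_status()   # this is where most of the finagling happened
    return resp.json()

# Example: send_payment("customer@example.com", "Jane Doe", 20.00, "Instant refund")
```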
## Challenges I ran into
* WiFi reliability issues on the Snaptain drone's cheap hardware
* Checkbook API's refusal to accept my POST requests
* Sleep Deprivation?
* The endless bad humor of the team sitting next to me
## Accomplishments that I'm proud of
* Literally hacking into a drone with little to no prior experience hacking an embedded system
* Sniffing packets over the air to reverse engineer drone protocol
* Going through tens of videos showcasing obscure details of packet sniffing
* Making a voice-activated drone
* Integrating with Checkbook's API
* Making new friends in the process of this hackathon
## What I learned
* How to sniff packets on an unsecured HTTP connection
* How to write server-level Node.js commands that communicate directly with the terminal
* How to spoof my MAC address
* How to spoof my identity to gain unauthorized access into a device
* How to build an app with Houndify
* How to integrate voice commands to trigger Checkbook's API
* If it exists, there is an AR/VR project of it
## What's next for Dronations: Enabling Instant Refunds for Purchase Returns
* Integrate more features from Checkbook's API to provide an end-to-end payment solution
* Understand more edge use cases for drones - I have barely scratched the surface
* Do user research with people with low accessibility to understand if this can become a viable alternative to send payments reliably to people they trust
* Getting in touch with Snaptain and letting them know how easy it is to hack their drones | ## Inspiration
DeliverAI was inspired by the current shift we are seeing in the automotive and delivery industries. Driver-less cars are slowly but surely entering the space, and we thought driverless delivery vehicles would be a very interesting topic for our project. While drones are set to deliver packages in the near future, heavier packages would be much more fit for a ground base vehicle.
## What it does
DeliverAI has three primary components. The physical prototype is a reconfigured RC car that was hacked together with a raspberry pi and a whole lot of motors, breadboards and resistors. Atop this monstrosity rides the package to be delivered in a cardboard "safe", along with a front facing camera (in an Android smartphone) to scan the faces of customers.
The journey begins on the web application, at [link](https://deliverai.github.io/dAIWebApp/). To sign up, a user submits webcam photos of themselves for authentication when their package arrives. They then select a parcel from the shop, and await its arrival. This alerts the car that a delivery is ready to begin. The car proceeds to travel to the address of the customer. Upon arrival, the car will text the customer to notify them that their package has arrived. The customer must then come to the bot, and look into the camera on its front. If the face of the customer matches the face saved to the purchasing account, the car notifies the customer and opens the safe.
## How we built it
As mentioned prior, DeliverAI has three primary components, the car hardware, the android application and the web application.
### Hardware
The hardware is built from a "repurposed" remote control car. It is wired to a raspberry pi which has various python programs checking our firebase database for changes. The pi is also wired to the safe, which opens when a certain value is changed on the database.
\_ note:\_ a micro city was built using old cardboard boxes to service the demo.
### Android
The onboard android device is the brain of the car. It texts customers through Twilio, scans users faces, and authorizes the 'safe' to open. Facial recognition is done using the Kairos API.
### Web
The web component, built entirely using HTML, CSS and JavaScript, is where all of the user interaction takes place. This is where customers register themselves, and also where they order items. Original designs and custom logos were created to build the website.
### Firebase
While not included as a primary component, Firebase was essential in the construction of DeliverAI. The real-time database, by Firebase, is used for the communication between the three components mentioned above.
## Challenges we ran into
Connecting Firebase to the Raspberry Pi proved more difficult than expected. A custom listener was eventually implemented that checks for changes in the database every 2 seconds.
Calibrating the motors was another challenge. The amount of power
Sending information from the web application to the Kairos API also proved to be a large learning curve.
## Accomplishments that we're proud of
We are extremely proud that we managed to get a fully functional delivery system in the allotted time.
The most exciting moment for us was when we managed to get our 'safe' to open for the first time when a valid face was exposed to the camera. That was the moment we realized that everything was starting to come together.
## What we learned
We learned a *ton*. None of us have much experience with hardware, so working with a Raspberry Pi and RC Car was both stressful and incredibly rewarding.
We also learned how difficult it can be to synchronize data across so many different components of a project, but were extremely happy with how Firebase managed this.
## What's next for DeliverAI
Originally, the concept for DeliverAI involved, well, some AI. Moving forward, we hope to create a more dynamic path finding algorithm when going to a certain address. The goal is that eventually a real world equivalent to this could be implemented that could learn the roads and find the best way to deliver packages to customers on land.
## Problems it could solve
Delivery Workers stealing packages or taking home packages and marking them as delivered.
Drones can only deliver in good weather conditions, while cars can function in all weather conditions.
Potentially more efficient in delivering goods than humans/other methods of delivery | ## Inspiration:
As a group of 4 people who met each other for the first time, we saw this event as an inspiring opportunity to learn new technology and face challenges that we were wholly unfamiliar with. Although intuitive when combined, every feature of this project was a distant puzzle piece of our minds that has been collaboratively brought together to create the puzzle you see today over the past three days. Our inspiration was not solely based upon relying on the minimum viable product; we strived to work on any creative idea sitting in the corner of our minds, anticipating its time to shine. As a result of this incredible yet elusive strategy, we were able to bring this idea to action and customize countless features in the most innovative and enabling ways possible.
## Purpose:
This project involves almost every technology we could possibly work with - and even not work with! Per the previous work experience of Laurance and Ian in the drone sector, both from a commercial and a developer standpoint, our project’s principal axis revolved around drones and their limitations. We improved and implemented features that previously seemed to be the limitations of drones. Gesture control and speech recognition were the main features created, designed to empower users with the ability to seamlessly control the drone. Due to the high threshold commonly found within controllers, many people struggle to control drones properly in tight areas. This can result in physical, mental, material, and environmental damages which are harmful to the development of humans. Laurence was handling all the events at the back end by using web sockets, implementing gesture controllers, and adding speech-to-text commands. As another aspect of the project, we tried to add value to the drone by designing 3D-printed payload mounts using SolidWorks and paying increased attention to detail. It was essential for our measurements to be as exact as possible to reduce errors when 3D printing. The servo motors mount onto the payload mount and deploy the payload by moving its shaft. This innovation allows the drone to drop packages, just as we initially calculated in our 11th-grade physics classes. As using drones for mailing purposes was not our first intention, our main idea continuously evolved around building something even more mind-blowing - innovation! We did not stop! :D
## How We Built it?
The prototype started in small but working pieces. Every person was working on something related to their interests and strengths to let their imaginations bloom. Kevin was working on programming with the DJI Tello SDK to integrate the decisions made by the API into actual drone movements. The vital software integration to make the drone work was tested and stabilized by Kevin. Additionally, he iteratively worked on designing the mount to perfectly fit onto the drone and helped out with hardware issues.
Ian was responsible for setting up the camera streaming. He set up the MONA Server and broadcast the drone through an RTSP protocol to obtain photos. We had to code an iterative python script that automatically takes a screenshot every few seconds. Moreover, he worked toward making the board static until it received a Bluetooth signal from the laptop. At the next step, it activated the Servo motor and pump.
But how does the drone know what it knows?
The drone is able to recognize fire with almost 97% accuracy through deep learning. Paniz was responsible for training the CNN model for image classification between non-fire and fire pictures. The model has been registered and ready for use to receive data from the drone to detect fire.
Challenges we ran into:
There were many challenges that we faced and had to find a way around them in order to make the features work together as a system. Our most significant challenge was the lack of cross-compatibility between software, libraries, modules, and networks. As an example, Kevin had to find an alternative path to connect the drone to the laptop since the UDP network protocol was unresponsive. Moreover, he had to investigate gesture integration with drones during this first prototype testing. On the other hand, Ian struggled to connect the different sensors to the drone due to their heavy weight. Moreover, the hardware compatibility called for deep analysis and research since the source of error was unresolved. Laurence was responsible for bringing all the pieces together and integrating them through each feature individually. He was successful not only through his technical proficiencies but also through continuous integration - another main challenge that he resolved. Moreover, the connection between gesture movement and drone movement due to responsiveness was another main challenge that he faced. Data collection was another main challenge our team faced due to an insufficient amount of proper datasets for fire. Inadequate library and software versions and the incompatibility of virtual and local environments led us to migrate the project from local completion to cloud servers.
## Things we have learned:
Almost every one of us had to work with at least one new technology such as the DJI SDK, New Senos Modulos, and Python packages. This project helped us to earn new skills in a short amount of time with a maximized focus on productivity :D As we ran into different challenges, we learned from our mistakes and tried to eliminate repetitive mistakes as much as possible, one after another.
## What is next for Fire Away?
Although we weren't able to fully develop all of our ideas here are some future adventures we have planned for Fire Away :
Scrubbing Twitter for user entries indicating a potential nearby fire.
Using Cohere APIs for fluent user speech recognition
Further develop and improve the deep learning algorithm to handle of variety of natural disasters | partial |
## Inspiration
An abundance of qualified applicants lose their chance to secure their dream job simply because they are unable to effectively present their knowledge and skills when it comes to the interview. The transformation of interviews into the virtual format due to the Covid-19 pandemic has created many challenges for the applicants, especially students as they have reduced access to in-person resources where they could develop their interview skills.
## What it does
Interviewy is an **Artificial Intelligence** based interface that allows users to practice their interview skills by providing them an analysis of their video recorded interview based on their selected interview question. Users can reflect on their confidence levels and covered topics by selecting a specific time-stamp in their report.
## How we built it
This Interface was built using the MERN stack
In the backend we used the AssemblyAI APIs for monitoring the confidence levels and covered topics. The frontend used react components.
## Challenges we ran into
* Learning to work with AssemblyAI
* Storing files and sending them over an API
* Managing large amounts of data given from an API
* Organizing the API code structure in a proper way
## Accomplishments that we're proud of
• Creating a streamlined Artificial Intelligence process
• Team perseverance
## What we learned
• Learning to work with AssemblyAI, Express.js
• The hardest solution is not always the best solution
## What's next for Interviewy
• Currently the confidence levels are measured through analyzing the words used during the interview. The next milestone of this project would be to analyze the alterations in tone of the interviewees in order to provide a more accurate feedback.
• Creating an API for analyzing the video and the gestures of the the interviewees | ## Inspiration 🌟
Nostalgia comes through small items that trigger our sweet old memories. It reminds us of simpler times when a sticky note was all we needed to remember something important and have fun. Those satisfying moments of peeling the last layer of the memo pad, the embarrassing history of putting online passwords around the computer, and the hilarious actions of putting a thousand window XP sticky notes on the home screen are tiny but significant memories. In today's digital age, we wanted to bring back that simplicity and tangible feeling of jotting down notes on a memo sticker, but with a twist. ScribbleSync is our homage to the past, reimagined with today's technology to enhance and organize our daily lives.
## What it does 📝
ScribbleSync takes the traditional office memo sticker into the digital era. It's an online interface where you can effortlessly scribble notes, ideas, or events. These digital sticky notes are then intelligently synced with your Google Calendar with Large Language Models and Computer Vision, turning your random notes into scheduled commitments and reminders, ensuring that the essence of the physical memo lives on in a more productive and organized digital format.
## How we built it 🛠️
We built ScribbleSync using a combination of modern web technologies and APIs. The front-end interface was designed with HTML/CSS/JS for simplistic beauty. For the backend, we used Flask mainly and integrated Google Calendar API to sync the notes with the user’s calendar. We use state-of-the-art models from Google Vision, Transformer models from Hugginface for image analysis and fine-tuned Cohere models with our custom dataset in a semi-supervised manner to achieve textual classification of tasks and time relation.
## Challenges we ran into 😓
One of our biggest challenges was mainly incorporating multiple ML models and making them communicate with the front end and back end. Meanwhile, all of us are very new to hackathons so we are still adapting to the high-intensity coding and eventful environment.
## Accomplishments that we're proud of 🏆
We're incredibly proud of developing a fully functional prototype within the limited timeframe. We managed to create an intuitive, cute UI and the real-time sync feature works and communicates flawlessly. Overcoming the technical challenges and seeing our idea come to life has been immensely rewarding.
## What we learned 📚
Throughout this journey, we've learned a great deal about API integration, real-time data handling, and creating user-centric designs. We also gained valuable insights into teamwork and problem-solving under pressure. Individually, we tried tech stacks that were unfamiliar to most of us such as Cohere and Google APIs, it is a long but fruitful process and we are now confident to explore other APIs provided by different companies.
## What's next for ScribbleSync 🚀
Our next step is to add practical and convenient functions such as allowing the sticky notes to set up drafts for email, schedules in Microsoft Teams and create Zoom links. We could also add features such as sticking to the home screen to enjoy those fun features from sticky notes in the good old days. | ## Inspiration
We recognized the overwhelming amount of disparate information scattered across the web. This made it hard for individuals to find concise, reliable instructions for a wide array of tasks. "How To ANYTHING" was born out of the desire to streamline this process, turning any query into a detailed guide.
## What it does
"How To ANYTHING" transforms user queries into step-by-step guides akin to WikiHow articles. By leveraging advanced text and image generation techniques, our platform creates uniquely tailored instructions with associated images, ensuring that even the most intricate tasks become straightforward.
## How we built it
The foundation of our platform lies in the integration of state-of-the-art text generation and image synthesis technologies. We utilize LLMs for generating detailed, accurate textual content, while sophisticated algorithms create corresponding images that enhance comprehension. We used Together AI, Aleo, zillisDB, and HUME AI
The system is built on the Next.js / FastAPI framework, ensuring a seamless and responsive user experience.
## Challenges we ran into
Token Limitations: When we initially built our "How to Anything" tool, we faced significant challenges due to token limitations in our text-to-image model. Many complex tasks and detailed instructions required a large number of tokens to convey accurately. This limitation hampered our ability to generate comprehensive and detailed step-by-step instructions, which was a roadblock for providing valuable user experiences.
Transition to Together AI: To address the token limitations and other challenges, we made the strategic decision to transition to Together AI. While this transition was beneficial in many ways, it also presented its own set of challenges, such as adapting our existing infrastructure and workflows to a new platform.
## Accomplishments that we're proud of
Achieving a seamless blend of generated text and images that genuinely aid users in understanding complex tasks is our hallmark. The intuitive interface and the reliability of our guides have garnered positive early feedback, reinforcing our belief in the platform's potential.
## What we learned
The project was a deep dive into the intricacies of text and image generation technologies. We learned about different LLMs and Stable Diffusion models. We learned how to deploy a backend and a frontend. We learned about prompt engineering, tokenization and token count, and much more!
## What's next for How To ANYTHING
Our vision is to continually refine and expand the range of topics available. We're exploring the integration of videos and interactive elements, and we're also looking into fostering a community where users can contribute, modify, and rate guides, making "How To ANYTHING" a collaborative knowledge hub powered by Generative AI. | partial |
## Inspiration
My recent job application experience with a small company opened my eyes to the hiring challenges faced by recruiters. After taking time to thoughtfully evaluate each candidate, they explained how even a single bad hire wastes significant resources for small teams. This made me realize the need for a better system that saves time and reduces stress for both applicants and hiring teams. That sparked the idea for CareerChain.
## What it does
CareerChain allows job seekers and recruiters to create verified profiles on our blockchain-based platform.
For applicants, we use a microtransaction system similar to rental deposits or airport carts. A small fee is required to submit each application, refunded when checking status later. This adds friction against mass spam applications, ensuring only serious, passionate candidates apply.
For recruiters, our AI prescreens applicants, filtering out unqualified candidates. This reduces time wasted on low-quality applications, allowing teams to focus on best fits. Verified profiles also prevent fraud.
By addressing inefficiencies for both sides, CareerChain streamlines hiring through emerging technologies.
## How I built it
I built CareerChain using:
* XRP Ledger for blockchain transactions and smart contracts
* Node.js and Express for the backend REST API
* Next.js framework for the frontend
## Challenges we ran into
Implementing blockchain was challenging as it was my first time building on the technology. Learning the XRP Ledger and wiring up the components took significant learning and troubleshooting.
## Accomplishments that I'm proud of
I'm proud to have gained hands-on blockchain experience and built a working prototype leveraging these cutting-edge technologies.
## What I learned
I learned so much about blockchain capabilities and got exposure to innovative tools from sponsors. The hacking experience really expanded my skills.
## What's next for CareerChain
Enhancing fraud detection, improving the microtransaction UX, and exploring integrations like background checks to further optimize hiring efficiency. | ## Inspiration
In 2020, Canada received more than 200,000 refugees and immigrants. The more immigrants and BIPOC individuals I spoke to, the more I realized, they were only aiming for employment opportunities as cab drivers, cleaners, dock workers, etc. This can be attributed to a discriminatory algorithm that scraps their resumes, and a lack of a formal network to engage and collaborate in. Corporate Mentors connects immigrants and BIPOC individuals with industry professionals who overcame similar barriers as mentors and mentees.
This promotion of inclusive and sustainable economic growth has the potential of creating decent jobs and significantly improving living standards can also aid in their seamless transition into Canadian society. Thereby, ensuring that no one gets left behind.
## What it does
To tackle the global rise of unemployment and increasing barriers to mobility for marginalized BIPOC communities and immigrants due to racist and discriminatory machine learning algorithms and lack of networking opportunities by creating an innovative web platform that enables people to receive professional mentorship and access to job opportunities that are available through networking.
## How we built it
The software architecture model being used is the three-tiered architecture, where we are specifically using the MERN Stack. MERN stands for MongoDB, Express, React, Node, after the four key technologies that make up the stack: React(.js) make up the top ( client-side /frontend), Express and Node make up the middle (application/server) tier, and MongoDB makes up the bottom(Database) tier. System Decomposition explains the relationship better below. The software architecture diagram below details the interaction of varying components in the system.
## Challenges we ran into
The mere fact that we didn't have a UX/UI designer on the team made us realize how difficult it was to create an easy-to-navigate user interface.
## Accomplishments that we're proud of
We are proud of the matching algorithm we created to match mentors with mentees based on their educational qualifications, corporate experience, and desired industry. Additionally, we would also be able to monetize the website utilizing the Freemium subscription model we developed if we stream webinar videos using Accedo.
## What's next for Corporate Mentors
1) The creation of a real mentor pool with experienced corporate professionals is the definite next step.
2) Furthermore, the development of the freemium model (4 hrs of mentoring every month) @ $60 per 6 months or $100 per 12 months.
3) Paid Webinars (price determined by the mentor with 80% going to them) and 20% taken as platform maintenance fee.
4) Create a chat functionality between mentor and mentee using Socket.io and add authorization in the website to limit access to the chats from external parties
5) Create an area for the mentor and mentee to store and share files | ## Inspiration
Blockchain has created new opportunities for financial empowerment and decentralized finance (DeFi), but it also introduces several new considerations. Despite its potential for equitability, malicious actors can currently take advantage of it to launder money and fund criminal activities. There has been a recent wave of effort to introduce regulations for crypto, but the ease of money laundering proves to be a serious challenge for regulatory bodies like the Canadian Revenue Agency. Recognizing these dangers, we aimed to tackle this issue through BlockXism!
## What it does
BlockXism is an attempt at placing more transparency in the blockchain ecosystem, through a simple verification system. It consists of (1) a self-authenticating service, (2) a ledger of verified users, and (3) rules for how verified and unverified users interact. Users can "verify" themselves by giving proof of identity to our self-authenticating service, which stores their encrypted identity on-chain. A ledger of verified users keeps track of which addresses have been verified, without giving away personal information. Finally, users will lose verification status if they make transactions with an unverified address, preventing suspicious funds from ever entering the verified economy. Importantly, verified users will remain anonymous as long as they are in good standing. Otherwise, such as if they transact with an unverified user, a regulatory body (like the CRA) will gain permission to view their identity (as determined by a smart contract).
Through this system, we create a verified market, where suspicious funds cannot enter the verified economy while flagging suspicious activity. With the addition of a legislation piece (e.g. requiring banks and stores to be verified and only transact with verified users), BlockXism creates a safer and more regulated crypto ecosystem, while maintaining benefits like blockchain’s decentralization, absence of a middleman, and anonymity.
## How we built it
BlockXism is built on a smart contract written in Solidity, which manages the ledger. For our self-authenticating service, we incorporated Circle wallets, which we plan to integrate into a self-sovereign identification system. We simulated the chain locally using Ganache and Metamask. On the application side, we used a combination of React, Tailwind, and ethers.js for the frontend and Express and MongoDB for our backend.
## Challenges we ran into
A challenge we faced was overcoming the constraints when connecting the different tools with one another, meaning we often ran into issues with our fetch requests. For instance, we realized you can only call MetaMask from the frontend, so we had to find an alternative for the backend. Additionally, there were multiple issues with versioning in our local test chain, leading to inconsistent behaviour and some very strange bugs.
## Accomplishments that we're proud of
Since most of our team had limited exposure to blockchain prior to this hackathon, we are proud to have quickly learned about the technologies used in a crypto ecosystem. We are also proud to have built a fully working full-stack web3 MVP with many of the features we originally planned to incorporate.
## What we learned
Firstly, from researching cryptocurrency transactions and fraud prevention on the blockchain, we learned about the advantages and challenges at the intersection of blockchain and finance. We also learned how to simulate how users interact with one another blockchain, such as through peer-to-peer verification and making secure transactions using Circle wallets. Furthermore, we learned how to write smart contracts and implement them with a web application.
## What's next for BlockXism
We plan to use IPFS instead of using MongoDB to better maintain decentralization. For our self-sovereign identity service, we want to incorporate an API to recognize valid proof of ID, and potentially move the logic into another smart contract. Finally, we plan on having a chain scraper to automatically recognize unverified transactions and edit the ledger accordingly. | partial |
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)
Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.
![image](https://user-images.githubusercontent.com/65676392/215330789-e38f2b41-1d7b-45b9-bb4f-09be3ffb9bf8.png)
## 💫 Inspiration
>
> "Art is the signature of civilizations." - Beverly Sills
>
>
>
Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***
We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in art, interact with the real world digitally, and share your creations with your friends!
>
> "Art is the reflection of life, and life is the reflection of art." - Unknow
>
>
>
Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.
![image](https://user-images.githubusercontent.com/65676392/215330736-0e670fe9-4b35-47f5-a948-a8cc107e78e1.png)
![4](https://user-images.githubusercontent.com/65676392/215330565-568a319a-6175-434e-b2de-5017ea4853c5.png)
![5](https://user-images.githubusercontent.com/65676392/215330572-36799049-dc33-430d-b59b-59ad50eb9e7a.png)
Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.
![](https://user-images.githubusercontent.com/65676392/215331038-055999cb-85ad-4383-8373-f47d3534457d.png)
>
> "Art is not what you see, but what you make others see." - Claude Monet
>
>
>
The features I listed above are great! But what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.
![image](https://user-images.githubusercontent.com/65676392/215331079-f676ea67-5e5c-4164-9c92-969919ef285b.png)
![image](https://user-images.githubusercontent.com/65676392/215331103-10c5a04c-f4f8-48a1-b40c-a1ff06202ffa.png)
I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, Make good art.
# ❤️ Use Cases
### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece, using the computer vision features such as the color picker and eraser.
### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.
### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.
### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real-time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.
# 👋 Gestures Tutorial
![image](https://user-images.githubusercontent.com/65676392/215335093-d911eaa1-0cc6-4e78-adc7-b63b323b2f74.png)
![image](https://user-images.githubusercontent.com/65676392/215335107-09c394a4-4811-4199-b692-74ef7377b23c.png)
![image](https://user-images.githubusercontent.com/65676392/215335122-8a517c4a-1374-42f0-ac71-6372a63a7075.png)
![image](https://user-images.githubusercontent.com/65676392/215335137-61a1bd8a-a95c-4e0d-806c-53c443dcdd9d.png)
![image](https://user-images.githubusercontent.com/65676392/215335143-93bc8edb-c2b2-4a8f-b562-d67b8524ac66.png)
# ⚒️ Engineering
Ah, this is where even more fun begins!
## Stack
### Frontend
We designed the frontend with Figma and after a few iterations, we had an initial design to begin working with. The frontend was made with React and Typescript and styled with Sass.
### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails we simply use a filesystem database.
## Computer Vision AI
We use MediaPipe to grab the coordinates of the joints and upload images. WIth the coordinates, we plot with CanvasRenderingContext2D on the canvas, where we use algorithms and vector calculations to determinate the gesture. Then, for image generation, we use the DeepAI open source library.
# Experimentation
We were using generative AI to generate images, however we ran out of time.
![image](https://user-images.githubusercontent.com/65676392/215340713-9b4064a0-37ac-4760-bd35-e6a30c2f4613.png)
![image](https://user-images.githubusercontent.com/65676392/215340723-ee993e2b-70bb-4aa3-a009-ac4459f23f72.png)
# 👨💻 Team (”The Sprint Team”)
@Sheheryar Pavaz
@Anton Otaner
@Jingxiang Mo
@Tommy He | ## Inspiration
Neuro-Matter is an integrated social platform designed to combat not one, but 3 major issues facing our world today: Inequality, Neurological Disorders, and lack of information/news.
We started Neuro-Matter with the aim of helping people facing the issue of Inequality at different levels of society. Though it was assumed that inequality only leads to physical violence, its impacts on neurological/mental levels are left neglected.
Upon seeing the disastrous effects, we have realized the need of this hour and have come up with Neuro-Matter to effectively combat these issues in addition to the most pressing issue our world faces today: mental health!
## What it does
1. "Promotes Equality" and provides people the opportunity to get out of mental trauma.
2. Provides a hate-free social environment.
3. Helps People cure the neurological disorder
4. Provide individual guidance to support people with the help of our volunteers.
5. Provides reliable news/information.
6. Have an AI smart chatbot to assist you 24\*7.
## How we built it
Overall, we used HTML, CSS, React.js, google cloud, dialogue flow, google maps Twilio's APIs. We used Google Firebase's Realtime Database to store, organize, and secure our user data. This data is used to login and sign up for the service. The service's backend is made with node.js, which is used to serve the webpages and enable many useful functions. We have multiple different pages as well like the home page, profile page, signup/login pages, and news/information/thought sharing page.
## Challenges we ran into
We had a couple of issues with databasing as the password authentication would work sometimes. Moreover, since we used Visual Studio multiplayer for the first time it was difficult as we faced many VSCode issues (not code related). Since we were working in the same time zones, it was not so difficult for all of us to work together, but It was hard to get everything done on time and have a rigid working module.
## Accomplishments that we're proud of
Overall, we are proud to create a working social platform like this and are hopeful to take it to the next steps in the future as well. Specifically, each of our members is proud of their amazing contributions.
We believe in the module we have developed and are determined to take this forward even beyond the hackathon to help people in real life.
## What we learned
We learned a lot, to say the least!! Overall, we learned a lot about databasing and were able to strengthen our React.js, Machine Learning, HTML, and CSS skills as well. We successfully incorporated Twilio's APIs and were able to pivot and send messages. We have developed a smart bot that is capable of both text and voice-based interaction. Overall, this was an extremely new experience for all of us and we greatly enjoyed learning new things. This was a great project to learn more about platform development.
## What's next for Neuro-Matter
This was an exciting new experience for all of us and we're all super passionate about this platform and can't wait to hopefully unveil it to the general public to help people everywhere by solving the issue of Inequality. | ## Inspiration
After a bit of research, we found that immediately following a natural disaster, the lack of aid is the cause of most of the casualties. In many third world countries, it takes as long as \_ a few weeks \_ for first responders to rescue many survivors. Many people in need of food, water, medicine, and other aid supplies can unfortunately not survive for longer than 48 hours. For this, we have created a POC drone that can work with a thousand others to search, identify and deliver aid to survivors.
## What it does
The drone is fully autonomous, given a set of GPS coordinates, it will be able to deliver a package without human intervention. We took this concept a step further: our drone will search a predefined area, immediately following a natural disaster. While it is doing so, it is looking for survivors using some type of machine vision implementation. For the purpose of this project, we are using a color sensor camera and a color tag as our delivery point. Once the target has been found, a small medicinal package is released by means of an electromagnet. The drone then sends over the cellular network, the coordinates of the drop, notifying ground crews of the presence of a survivor. Furthermore, this also prevents multiple drones from delivering a package to the same location. The server coordinates the delivery on the scale of hundreds or thousands of drones, simultaneously searching the area.
## How I built it
The flight control is provided by the Ardupilot module. The collision avoidance and color detection is handled by an Arduino. Internet access and pipelining is provided by a Raspberry Pi, and the server is an online database running Python on Google App Engine. The ground station control is using the Mission Planner, and provides realtime updates about the location of the drone, cargo condition, temperature, etc. Everything is built on a 250mm racer quad frame. Essentially, the flight computer handles all of the low-level flight controls, as well as the GPS navigation. The main workhorse of this drone is the Arduino, which integrates information from sonar sensors, an object sensing camera, as well as a GPS to guide the flight computer around obstacles, and to locate the color tag, which is representative of a victim of a natural disaster. The Arduino also takes in information from the accelerometer to be able to compute its position with the help of a downward facing camera in GPS-compromised areas.
## Challenges I ran into
**weight** and **power management**. Considering the fact that we had to use a small racer quad that only had a 2 pound maximum payload, we had to use as few and as light components as possible. Additionally, the power regulators on the motor controllers were only capable of powering a few small electronics. We had to combine multiple regulators to be able to power all of the equipment.
## Accomplishments that I'm proud of
Perhaps coming up with an application of drones that could have a far-reaching impact; the ability to save hundred or thousands of lives, and possibly assist in economic development of third-world countries.
## What I learned
Something as small as a drone can really make a difference in the world. Even saving a single life is substantial for a pseudo-self-aware machine, and is truly a step forward for technology.
## What's next for Second-Order Autonomy
Robustness testing and implementation of the machine vision system. Of course scaling up the platform so that it may be a viable solution in third-world countries. Considering a clear need for the technology, this project could be taken further for practical use. | winning |
## Inspiration 💥
Our inspiration is to alter the way humans have acquired knowledge and skills over the last hundred years. Instead of reading or writing, we devised of a method fo individuals to teach others through communication and mentoring. A way that not only benefits those who learn but also helps them achieve their goals.
## What it does 🌟
Intellex is a diverse skill swapping platform for those eager to learn more. In this era, information is gold. Your knowledge is valuable, and people want it. For the price of a tutoring session, you can receive back a complete and in depth tutorial on whatever you want. Join one on one video calls with safe and rated teachers, and be rewarded for learning more.
We constantly move away from agencies and the government and thus Intellex strives to decentralize education. Slowly, the age old classroom is changing. Intellex presents a potential step towards education decentralization by incentivizing education with NFT rewards which include special badges and a leaderboard.
## How we built it 🛠️
We began with planning out our core features and determining what technologies we would use. Later, we created a Figma design to understand what pages we would need for our project, planning our backend integration to store and fetch data from a database.
We used Next.js to structure the project which uses React internally. We used TypeScript for type safety across my project which was major help when it came to debugging. Tailwind CSS was leveraged for its easy to use classes. We also utilized Framer Motion for the landing page animations
## Challenges we ran into 🌀
The obstacles we faced were coming up with a captivating idea, which caused us to lose productivity. We've also faced difficult obstacles in languages we're unfamiliar with, and some of us are also beginners which created much confusion during the event. Time management was really difficult to cope with because of the many changes in plans, but overall we have improved our knowledge and experience.
## Accomplishments that we're proud of 🎊
We are proud of building a very clean, functional, and modern-looking user interface for Intellex, allowing users to experience an intuitive and interactive educational environment. This aligns seamlessly with our future use of Whisper AI to enhance user interactions.
To ensure optimized site performance, we're implementing Next.js with Server-Side Rendering (SSR), providing an extremely fast and responsive feel when using the app. This approach not only boosts efficiency but also improves the overall user experience, crucial for educational applications.
In line with the best practices of React, we're focusing on using client-side rendering at the most intricate points of the application, integrating it with mock data initially. This setup is in preparation for later fetching real-time data from the backend, including interactive whiteboard sessions and peer ratings. Our aim is to create a dynamic, adaptive learning platform that is both powerful and easy to use, reflecting our commitment to pioneering in the educational technology space.
## What we learned 🧠
Besides the technologies that were listed above, we as a group learned an exceptional amount of information in regards to full stack web applications. This experience marked the beginning of our full stack journey and we took it approached it with a cautious approach, making sure we understood all aspects of a website, which is something that a lot of people tend to overlook. We learned about the planning process, backend integration, REST API's, etc. Most importantly, we learned about the importance of having cooperative and helpful team that will have your back in building out these complex apps on time.
## What's next for Intellex ➡️
We fully plan to build out the backend of Intellex to allow for proper functionality using Whisper AI. This innovative technology will enhance user interactions and streamline the learning process. Regarding the product itself, there are countless educational features that we want to implement, such as an interactive whiteboard for real-time collaboration and a comprehensive rating system to allow peers to see and evaluate each other's contributions. These features aim to foster a more engaging and interactive learning environment. Additionally, we're exploring the integration of adaptive learning algorithms to personalize the educational experience for each user. This is a product we've always wanted to pursue in some form, and we look forward to bringing it to life and seeing its positive impact on the educational community. | ## Inspiration
We’ve all had the experience of needing assistance with a task but not having friends available to help. As a last resort, one has to resort to large, class-wide GroupMe’s to see if anyone can help. But most students have those because they’re filled with a lot of spam. As a result, the most desperate calls for help often go unanswered.
We realized that we needed to streamline the process for getting help. So, we decided to build an app to do just that. For every Yalie who needs help, there are a hundred who are willing to offer it—but they just usually aren’t connected. So, we decided to build YHelpUs, with a mission to help every Yalie get better help.
## What it does
YHelpUs provides a space for students that need something to create postings rather than those that have something to sell creating them. This reverses the roles of a traditional marketplace and allows for more personalized assistance. University students can sign up with their school email accounts and then be able to view other students’ posts for help as well as create their own posts. Users can access a chat for each posting discussing details about the author’s needs. In the future, more features relating to task assignment will be implemented.
## How we built it
Hoping to improve our skills as developers, we decided to carry out the app’s development with the MERNN stack; although we had some familiarity with standard MERN, developing for mobile with React Native was a unique challenge for us all. Throughout the entire development phase, we had to balance what we wanted to provide the user and how these relationships could present themselves in our code. In the end, we managed to deliver on all the basic functionalities required to answer our initial problem.
## Challenges we ran into
The most notable challenge we faced was the migration towards React Native. Although plenty of documentation exists for the framework, many of the errors we faced were specific enough to force development to stop for a prolonged period of time. From handling multi-layered navigation to user authentication across all our views, we encountered problems we couldn’t have expected when we began the project, but every solution we created simply made us more prepared for the next.
## Accomplishments that we're proud of
Enhancing our product with automated content moderation using Google Cloud Natural Language API. Also, our sidequest developing a simple matching algorithm for LightBox.
## What we learned
Learned new frameworks (MERNN) and how to use Google Cloud API.
## What's next for YHelpUs
Better filtering options and a more streamlined UI. We also want to complete the accepted posts feature, and enhance security for users of YHelpUs. | Hello and thank you for judging my project. I am listing below two different links and an explanation of what the two different videos are. Due to the time constraints of some hackathons, I have a shorter video for those who require a lower time. As default I will be placing the lower time video up above, but if you have time or your hackathon allows so please go ahead and watch the full video at the link below. Thanks!
[3 Minute Video Demo](https://youtu.be/8tns9b9Fl7o)
[5 Minute Demo & Presentation](https://youtu.be/Rpx7LNqh7nw)
For any questions or concerns, please email me at [joshiom28@gmail.com](mailto:joshiom28@gmail.com)
## Inspiration
Resource extraction has tripled since 1970. That leaves us on track to run out of non-renewable resources by 2060. To fight this extremely dangerous issue, I used my app development skills to help everyone support the environment.
As a human and a person residing in this environment, I felt that I needed to used my technological development skills in order to help us take care of the environment better, especially in industrial countries such as the United States. In order to do my part in the movement to help sustain the environment. I used the symbolism of the LORAX to name LORAX app; inspired to help the environment.
\_ side note: when referencing firebase I mean firebase as a whole since two different databases were used; one to upload images and the other to upload data (ex. form data) in realtime. Firestore is the specific realtime database for user data versus firebase storage for image uploading \_
## Main Features of the App
To start out we are prompted with the **authentication panel** where we are able to either sign in with an existing email or sign up with a new account. Since we are new, we will go ahead and create a new account. After registering we are signed in and now are at the home page of the app. Here I will type in my name, email and password and log in. Now if we go back to firebase authentication, we see a new user pop up over here and a new user is added to firestore with their user associated data such as their **points, their user ID, Name and email.** Now lets go back to the main app. Here at the home page we can see the various things we can do. Lets start out with the Rewards tab where we can choose rewards depending on the amount of points we have.
If we press redeem rewards, it takes us to the rewards tab, where we can choose various coupons from companies and redeem them with the points we have. Since we start out with zero points, we can‘t redeem any rewards right now.
Let's go back to the home page.
The first three pages I will introduce are apart of the point incentive system for purchasing items that help the environment
If we press the view requests button, we are then navigated to a page where we are able to view our requests we have made in the past. These requests are used in order to redeem points from items you have purchased that help support the environment. Here we would we able to **view some details and the status of the requests**, but since we haven’t submitted any yet, we see there are none upon refreshing. Let’s come back to this page after submitting a request.
If we go back, we can now press the request rewards button. By pressing it, we are navigated to a form where we are able to **submit details regarding our purchase and an image of proof to ensure the user truly did indeed purchase the item**. After pressing submit, **this data and image is pushed to firebase’s realtime storage (for picture) and Firestore (other data)** which I will show in a moment.
Here if we go to firebase, we see a document with the details of our request we submitted and if we go to storage we are able to **view the image that we submitted**. And here we see the details. Here we can review the details, approve the status and assign points to the user based on their requests. Now let’s go back to the app itself.
Now let’s go to the view requests tab again now that we have submitted our request. If we go there, we see our request, the status of the request and other details such as how many points you received if the request was approved, the time, the date and other such details.
Now to the Footprint Calculator tab, where you are able to input some details and see the global footprint you have on the environment and its resources based on your house, food and overall lifestyle. Here I will type in some data and see the results. **Here its says I would take up 8 earths, if everyone used the same amount of resources as me.** The goal of this is to be able to reach only one earth since then Earth and its resources would be able to sustain for a much longer time. We can also share it with our friends to encourage them to do the same.
Now to the last tab, is the savings tab. Here we are able to find daily tasks we can simply do to no only save thousands and thousands of dollars but also heavily help sustain and help the environment. \**Here we have some things we can do to save in terms of transportation and by clicking on the saving, we are navigated to a website where we are able to view what we can do to achieve these savings and do it ourselves. \**
This has been the demonstration of the LORAX app and thank you for listening.
## How I built it
For the navigation, I used react native navigation in order to create the authentication navigator and the tab and stack navigators in each of the respective tabs.
## For the incentive system
I used Google Firebase’s Firestore in order to view, add and upload details and images to the cloud for reviewal and data transfer. For authentication, I also used **Google Firebase’s Authentication** which allowed me to create custom user data such as their user, the points associated with it and the complaints associated with their **user ID**. Overall, **Firebase made it EXTREMELY easy** to create a high level application. For this entire application, I used Google Firebase for the backend.
## For the UI
for the tabs such as Request Submitter, Request Viewer I used React-native-base library to create modern looking components which allowed me to create a modern looking application.
## For the Prize Redemption section and Savings Sections
I created the UI from scratch trialing and erroring with different designs and shadow effects to make it look cool. The user react-native-deeplinking to navigate to the specific websites for the savings tab.
## For the Footprint Calculator
I embedded the **Global Footprint Network’s Footprint Calculator** with my application in this tab to be able to use it for the reference of the user of this app. The website is shown in the **tab app and is functional on that UI**, similar to the website.
I used expo for wifi-application testing, allowing me to develop the app without any wires over the wifi network.
For the Request submission tab, I used react-native-base components to create the form UI elements and firebase to upload the data.
For the Request Viewer, I used firebase to retrieve and view the data as seen.
## Challenges I ran into
Some last second challenges I ran to was the manipulation of the database on Google Firebase. While creating the video in fact, I realize that some of the parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated both in the state and in firebase got mixed up. Another issue I encountered was being able to retrieve the image from firebase. I was able to log the url, however, due to some issues with the state, I wasnt able to get the uri to the image component, and due to lack of time I left that off. Firebase made it very very easy to push, read and upload files after installing their dependencies.
Thanks to all the great documentation and other tutorials I was able to effectively implement the rest.
## What I learned
I learned a lot. Prior to this, I had not had experience with **data modelling, and creating custom user data points. \*\*However, due to my previous experience with \*\*firebase, and some documentation referencing** I was able to see firebase’s built in commands allowing me to query and add specific User ID’s to the the database, allowing me to search for data base on their UIDs. Overall, it was a great experience learning how to model data, using authentication and create custom user data and modify that using google firebase.
## Theme and How This Helps The Environment
Overall, this application used **incentives and educates** the user about their impact on the environment to better help the environment.
## Design
I created a comprehensive and simple UI to make it easy for users to navigate and understand the purposes of the Application. Additionally, I used previously mentioned utilities in order to create a modern look.
## What's next for LORAX (Luring Others to Retain our Abode Extensively)
I hope to create my **own backend in the future**, using **ML** and an **AI** to classify these images and details to automate the submission process and **create my own footprint calculator** rather than using the one provided by the global footprint network. | partial |
Live Demo Link: <https://www.youtube.com/live/I5dP9mbnx4M?si=ESRjp7SjMIVj9ACF&t=5959>
## Inspiration
We all fall victim to impulse buying and online shopping sprees... especially in the first few weeks of university. A simple budgeting tool, or promising ourselves to spend less, just doesn't work anymore. Sometimes we need someone, or a pair of someones, to physically stop us from clicking the BUY NOW button and talk us through our purchase based on our budget and previous spending. By drawing on the courtroom drama of legal battles, we infuse an element of fun and accountability into doing just this.
## What it does
Dime Defender is a Chrome extension built to help you control your online spending to your needs. Whenever the extension detects that you are on a Shopify or Amazon checkout page, it will lock the BUY NOW button and take you to court! You'll be interrupted by two lawyers, the defence attorney explaining why you should steer away from the purchase 😒 and a prosecutor explains why there still are some benefits 😏. By giving you a detailed analysis of whether you should actually buy based on your budget and previous spendings in the month, Dime Defender allows you to make informed decisions by making you consider both sides before a purchase.
The lawyers are powered by VoiceFlow using their dialog manager API as well as Chat-GPT. They have live information regarding the descriptions and prices of the items in your cart, as well as your monthly budget, which can be easily set in the extension. Instead of just saying no, we believe the detailed discussion will allow users to reflect and make genuine changes to their spending patterns while reducing impulse buys.
## How we built it
We created the Dime Defender Chrome extension and frontend using Svelte, Plasma, and Node.js for an interactive and attractive user interface. The Chrome extension then makes calls using AWS API gateways, connecting the extension to AWS lambda serverless functions that process queries out, create outputs, and make secure and protected API calls to both VoiceFlow to source the conversational data and ElevenLabs to get our custom text-to-speech voice recordings. By using a low latency pipeline, with also AWS RDS/EC2 for storage, all our data is quickly captured back to our frontend and displayed to the user through a wonderful interface whenever they attempt to check out on any Shopify or Amazon page.
## Challenges we ran into
Using chrome extensions poses the challenge of making calls to serverless functions effectively and making secure API calls using secret api\_keys. We had to plan a system of lambda functions, API gateways, and code built into VoiceFlow to create a smooth and low latency system to allow the Chrome extension to make the correct API calls without compromising our api\_keys. Additionally, making our VoiceFlow AIs arguing with each other with proper tone was very difficult. Through extensive prompt engineering and thinking, we finally reached a point with an effective and enjoyable user experience. We also faced lots of issues with debugging animation sprites and text-to-speech voiceovers, with audio overlapping and high latency API calls. However, we were able to fix all these problems and present a well-polished final product.
## Accomplishments that we're proud of
Something that we are very proud of is our natural conversation flow within the extension as well as the different lawyers having unique personalities which are quite evident after using our extension. Having your cart cross-examined by 2 AI lawyers is something we believe to be extremely unique, and we hope that users will appreciate it.
## What we learned
We had to create an architecture for our distributed system and learned about connection of various technologies to reap the benefits of each one while using them to cover weaknesses caused by other technologies.
Also.....
Don't eat the 6.8 million Scoville hot sauce if you want to code.
## What's next for Dime Defender
The next thing we want to add to Dime Defender is the ability to work on even more e-commerce and retail sites and go beyond just Shopify and Amazon. We believe that Dime Defender can make a genuine impact helping people curb excessive online shopping tendencies and help people budget better overall. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | ![](https://img.shields.io/github/license/MoroccanGemsbok/ReviewRecap)
![](https://img.shields.io/github/contributors/MoroccanGemsbok/ReviewRecap)
![](https://img.shields.io/github/last-commit/MoroccanGemsbok/ReviewRecap)
## Inspiration
Any seasoned online shopper knows that one of the best ways to get information about a product is to check the reviews. However, reading through hundreds of reviews can be time-consuming and overwhelming, leading many shoppers to give up on their search for the perfect product. On top of that, many reviews can be emotionally driven, unhelpful, or downright nonsensical, with no truly effective way to filter them out from the aggregated star rating displayed on the product.
Wouldn't it be great if a shopper could figure out why the people who liked the product liked it, and why the people who hated the product hated it, without wading through endless irrelevant information?
## What it does
Review Recap goes through the reviews of the Amazon product to extract keywords using NLP. The frequency of the keywords and the average rating of the reviews with the keywords are presented to the user in a bar graph in the extension. With Review Recap, shoppers can now make informed buying decisions, with confidence, in just a matter of seconds.
## How we built it
When a user is on a valid Amazon product page, the Chrome extension allows a GET request to be sent to our RESTful backend. The backend checks if the product page already exists in a cache. If not, the program scrapes through hundreds of reviews, compiling the data into review bodies and star ratings. This data is then fed into CoHere's Text Summarization natural language processing API, which we trained using a variety of prompts to find keywords in Amazon reviews. We also used CoHere to generate a list of meaningless keywords (such as "good", "great", "disappointing" etc) to filter out unhelpful information. The data is returned and processed in a bar graph using D3.
## Challenges we ran into
Django features many ways to build similar RESTful APIs. It was a struggle to find a guide online that had the syntax and logic that suited our purpose best. Furthermore, being stuck with the free tier of many APIs meant that these APIs were the bottleneck of our program. The content security policies for the Chrome extension also made it difficult for us to implement D3 into our program.
## Accomplishments that we're proud of
We were able to effectively work as a team, with each of us committing to our own tasks as well as coming together at the end to bring all our work together. We had an ambitious vision, and we were able to see it through.
## What we learned
All members of our team learned a new tech stack during this project. Our frontend members learned how to create a web extension using the Chrome API, while our backend members learned how to use Django and Cohere. In addition, we also learned how to connect the frontend and backend together using a RESTful API.
## What's next for Review Recap
We have several next goals for Review Recap:
* Optimize the data-gathering algorithm
* Add more configuration in the Chrome extension
* Implement a loading animation while the data is being fetched | winning |
## Inspiration
We wanted to create a new way to interact with the thousands of amazing shops that use Shopify.
![demo](https://res.cloudinary.com/devpost/image/fetch/s--AOJzynCD--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0G1Pdea.jpg)
## What it does
Our technology can be implemented inside existing physical stores to help customers get more information about products they are looking at. What is even more interesting is that our concept can be implemented to ad spaces where customers can literally window shop! Just walk in front of an enhanced Shopify ad and voila, you have the product on the sellers store, ready to be ordered right there from wherever you are.
## How we built it
WalkThru is Android app built with the Altbeacon library. Our localisation algorithm allows the application to pull the Shopify page of a specific product when the consumer is in front of it.
![Shopify](https://res.cloudinary.com/devpost/image/fetch/s--Yj3u-mUq--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/biArh6r.jpg)
![Estimote](https://res.cloudinary.com/devpost/image/fetch/s--B-mjoWyJ--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0M85Syt.jpg)
![Altbeacon](https://avatars2.githubusercontent.com/u/8183428?v=3&s=200)
## Challenges we ran into
Using the Estimote beacons in a crowded environment has it caveats because of interference problems.
## Accomplishments that we're proud of
The localisation of the user is really quick so we can show a product page as soon as you get in front of it.
![WOW](https://res.cloudinary.com/devpost/image/fetch/s--HVZODc7O--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.giphy.com/xT77XWum9yH7zNkFW0.gif)
## What we learned
We learned how to use beacons in Android for localisation.
## What's next for WalkThru
WalkThru can be installed in current brick and mortar shops as well as ad panels all over town. Our next step would be to create a whole app for Shopify customers which lets them see what shops/ads are near them. We would also want to improve our localisation algorithm to 3D space so we can track exactly where a person is in a store. Some analytics could also be integrated in a Shopify app directly in the store admin page where a shop owner would be able to see how much time people spend in what parts of their stores. Our technology could help store owners increase their sells and optimise their stores. | ## Inspiration
We wanted to explore what GCP has to offer more in a practical sense, while trying to save money as poor students
## What it does
The app tracks you, and using Google Map's API, calculates a geofence that notifies the restaurants you are within vicinity to, and lets you load coupons that are valid.
## How we built it
React-native, Google Maps for pulling the location, python for the webscraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, MongoDB to store authentication, location and coupons
## Challenges we ran into
React-Native was fairly new, linking a python script to a Node backend, connecting Node.js to react-native
## What we learned
New exposure APIs and gained experience on linking tools together
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants. | ## Inspiration
We **love** bubble tea a lot... but sometimes it gets expensive. Like any popular product on-trend, coffee, crispy chicken sandwiches and other fast foods, there's competition for the best value — and /tap.io.ca is here to get you to the best deals, based on your habits.
## What it does
Tap into your spending habits to find out where you can find the best deal for your next purchases with /tap.io.ca. /tap.io.ca tracks your spending habits in the background, analyzes your most frequent spending automatically and creates accurate, personalized recommendations for your daily use. With this information, /tap.io.ca is connected with local retailers and businesses, recommending better deals and locations for products you might buy.
## How we built it
Our team began by first looking through the challenges, deciding which were of interest to us and brainstorming various ideas. We focused on simple products that would fill a gap in a specific market and help individuals personalize their life. Once our idea was decided upon, we split the team into design and back-end teams. The design worked on the UI design and general branding while the back-end team researched specific libraries and python modules that would help us build the product.
## Challenges we ran into
Our team was not very equipped with the tools for app development. We had to do extensive research and set up to find the correct libraries to use. The set up and learning process took up a majority of our time and it left us with little time to build our product. However we were able to learn many new skills, tools and employ modules we previously didn’t use, which in itself was a great takeaway from this hackathon.
## Accomplishments that we're proud of
/tap.io.ca’s strength is with its branding. We feel like it captures the team’s energy and the attention of our target marketing very well. Our team is also very proud of the brainstorming process, as it was a time for the team to come together and share their most creative ideas without any restrictions. Finally, our ability to employ new learnings at such a quick pace allowed us to realize the potential of all the technologies out there that we have yet to learn.
## What we learned
We learned different skills and tools including flask, kivy and also improved upon our python knowledge and Figma skills. An especially important skill we gained was how to integrate API's to our project. This is a something that will be especially beneficial to us in the future. We also learned how to work effectively as a team, as there was not enough time to implement all of our ideas and delegating all the roles improved our efficiency greatly.
## What's next for /tap.io.ca
In the future, we hope to expand /tap.io.ca and grow its features. Some additional feature ideas include gamification, where users would be able to collect points for using our services and win rewards provided by local businesses we partner with. We also hope to expand our services from food and drinks into a multitude of products to provide easier financial tracking and analysis for everyone. | winning |
## Inspiration
Homes are becoming more and more intelligent with Smart Home products such as the Amazon Echo or Google Home. However, users have limited information about the infrastructure's status.
## What it does
Our smart chat bot helps users to monitor their house's state from anywhere using low cost sensors. Our product is easy to install, user friendly and fully expandable.
**Easy to install**
By using compact sensors, HomeScan is able to monitor information from your house. Afraid of gas leaks or leaving the heating on? HomeScan has you covered. Our product requires minimum setup and is energy efficient. In addition, since we use a small cellular IoT board to gather the data, HomeScan sensors are wifi-independant. This way, HomeScan can be placed anywhere in the house.
**User Friendly**
HomeScan uses Cisco Spark bots to communicate data to the users. Run diagnostics, ask for specific sensor data, our bots can do it all. Best of all, there is no need to learn command lines as our smart bots use text analysis technologies to find the perfect answer to your question. Since we are using Cisco Spark, the bots can be accessed on the go on both the Spark mobile app or on our website.Therefore, you'll have no problem accessing your data while away from your home.
**Fully expandable**
HomeScan was built with the future in mind. Our product will fully benefit from future technological advancements. For instance, 5G will enable HomeScan to expand and reach places that currently have a poor cellular signal. In addition, the anticipated release of Cisco Spark's "guestID" will grant access to our smart bots to an even wider audience. Newer bot customization tools will also allow us to implement additional functionalities. Lastly, HomeScan can be expanded into an infrastructure ranking system. This could have a tremendous impact on the real-estate industry as houses could be rated based on their infrastructure performances. This way, data could be used for services such as AirBnB, insurance companies and even home-owners.
We are confident that HomeScan is the solution for monitoring a healthy house and improve your real-estate decisions.
future proof
## How I built it
The infrastructure's information are being gathered through a Particle Electron board running of cellular network. The data are then sent to an Amazon's Web Services server. Finally, a Cisco Spark chat bot retrieves the data and outputs relevant queries according to the user's inputs. The intelligent bot is also capable of warning the user in case of an emergency.
## Challenges I ran into
Early on, we ran into numerous hardware issues with the Particle Electron board. After consulting with industry professionals and hours of debugging, we managed to successfully get the board working the way we wanted. Additionally, with no experience with back-end programming, we struggled a lot understanding the tools and the interactions between platforms but ended with successful results.
## Accomplishments that we are proud of
We are proud to showcase a fully-stacked solution using various tools with very little to no experience with it.
## What we learned
With perservance and mutual moral support, anything is possible. And never be shy to ask for help. | ## Problem Statement
As the number of the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025.
The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs.
## Solution
The proposed app aims to address this problem by providing real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger, and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data.
We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions.
## Developing Process
Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs.
For the cloud-based machine learning algorithms, we used Computer Vision, Open CV, Numpy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time.
Because of limited resources, we decided to use our phones as an analogue to cameras to do the live streams for the real-time monitoring.
## Impact
* **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury.
* **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response.
* **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allow the app to analyze the user's movements and detect any signs of danger without constant human supervision.
* **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times.
* **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency.
## Challenges
One of the biggest challenges have been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly.
## Successes
The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals.
## Things Learnt
* **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results.
* **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution.
* **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model.
## Future Plans for SafeSpot
* First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals.
* Apart from the web, the platform could also be implemented as a mobile app. In this case scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it.
* The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured. | ## Inspiration
## What it does
Addition, subtraction, multiplication, division of numbers. It supports brackets as well.
## How we built it
Using a LCD display as output and a keypad as input. First, the input is converted to postfix notation (reverse Polish). The data is then processed and the result is outputted.
## Challenges we ran into
Wiring, debugging, hardware sadness.
## Accomplishments that we're proud of
It works!
## What we learned
Don't use IR.
## What's next for arduino-calculator
Support for scientific notation, sin/cos, better interface. | winning |
## Inspiration
As a developer, I understand the importance of an efficient and automated build process in software development. Manually triggering builds can become a time-consuming and error-prone task, especially as projects grow in size and complexity. That's why I was inspired to build a GitHub Build Trigger Application, to streamline the build process and ensure builds are always up-to-date. With just a few clicks, the application can be configured to trigger a build every time there's a change in the repository, saving time and freeing up developers to focus on other important tasks. Whether it's for a small team or a large enterprise, the GitHub Build Trigger Application is designed to work with GitHub, making it easy to trigger builds and monitor their progress, helping teams deliver high-quality results faster and more efficiently.
## What it does
The GitHub Build Trigger Application automates the build process for projects hosted on GitHub. It allows you to configure the application to trigger a build every time there's a change in the repository with just a few clicks. This helps to save time and ensures that your builds are always up-to-date. The application streamlines your workflow and helps you to focus on other important aspects of your project. Whether you're a small team or a large enterprise, the GitHub Build Trigger Application can help you deliver high-quality results faster and more efficiently.
## How I built it
The purpose of this application is to trigger builds on a GitHub repository.
To achieve this, I used the GitHub API and made a POST request to the "repos/{username}/{repo\_name}/dispatches" endpoint to trigger a build with the specified parameters. I also made use of the "client\_payload" object to pass custom data along with the build trigger.
I also made sure to validate the input parameters and provided error messages if any of the required environment variables were missing.
To log the events in the application, I set up a logger using the logging module.
Additionally, I added the option to send email notifications when a build is triggered. To do this, I used the smtplib module to send an email through an SMTP server.
Finally, I put all of these pieces together in the main() function which is executed when the script is run. The script takes in input arguments as custom data which can be passed along with the build trigger.
## Challenges I ran into
I faced a few challenges while developing the GitHub Build Trigger Application, some of which included:
Hardcoded values: I had hardcoded some values such as the GitHub access token, SMTP server address, SMTP username, SMTP password, and from email address. These values should have been moved to environment variables for ease of changing them without modifying the code.
Error handling: The send\_email\_notification function only logged the error message if sending an email failed. I needed to implement a more appropriate error-handling mechanism to handle this scenario.
Logging: The logging level was set to logging.INFO by default, which may have caused a lot of information to be logged, making it difficult to locate relevant information when an issue arose. I should have adjusted the logging level based on the requirements of the application.
Validation of required inputs: I validated the required inputs with the validate\_input\_params function, but only raised an exception when input was missing. I should have added more validation checks to ensure the input values were of the correct type and format.
## Accomplishments that I am proud of
Successfully integrated with GitHub API: I was able to connect my application with the GitHub API and make use of the available functionality to achieve the desired results.
Automated the build trigger process: I was able to automate the build trigger process, which previously required manual intervention. This has made the overall process much more efficient and has saved a lot of time.
Improved the notification process: I was able to implement an email notification system that informs users about the status of the build process. This has improved the overall communication and has made the process more transparent.
Built a scalable application: I made sure to design the application in such a way that it can easily be scaled up or down as per the requirements, ensuring that it can handle large amounts of data and processing.
Developed a user-friendly interface: I put in a lot of effort to create an intuitive and user-friendly interface for the application, making it easier for users to interact with the application and get the information they need.
## What I learned
While building the GitHub Build Trigger Application, I learned a number of important things:
Working with APIs: I got hands-on experience with consuming APIs, in this case, the GitHub API. I learned how to make HTTP requests and process the response data to retrieve information about repositories, commits, and builds.
Email notifications: I learned how to send emails using the smtplib library. I also learned how to format emails and attach data to emails.
Logging: I learned the importance of logging in a software application and how to use the logging module in Python to implement logging in the application.
Exception handling: I learned how to use try-except blocks to handle exceptions in the application. This is important for making the application more robust and reliable.
Input validation: I learned how to validate user input to ensure that the required data is present and in the correct format.
Environment variables: I learned how to use environment variables to store sensitive information such as access tokens and passwords.
Git version control: I learned how to use Git to manage the source code of the application, including how to create branches, make commits, and merge changes.
Overall, building the GitHub Build Trigger Application was a great learning experience that taught me many valuable skills that I can use in future projects.
## What's next for GitHub Build Trigger Application
There are several directions that the GitHub Build Trigger Application could go in next, here are a few ideas:
Integration with other CI/CD tools: The application could be integrated with other CI/CD tools such as Jenkins or Travis CI to provide a seamless workflow for triggering builds and deployments.
Adding more customization options: The application could be expanded to include more customization options, such as the ability to trigger builds based on specific branches, tags, or commit messages.
Improving error handling: The error handling mechanism could be improved to provide more detailed information about why a build failed, making it easier to troubleshoot issues.
Adding security features: Security is an important concern for any application that interacts with sensitive data. The application could be improved by adding features such as encryption, secure authentication, and access control.
Improving performance: The performance of the application could be improved by optimizing the code, implementing caching, and adding support for parallel processing.
These are just a few ideas, and the possibilities are endless. The future direction of the GitHub Build Trigger Application will depend on the requirements and priorities of the users. | ## Inspiration
This project came about because I have very limited data on my phone plan and when I'm outdoors I still need to get access to transit information and do quick searches.
## What it does
Guardian SMS allows you to access the powerful Google Assistant through SMS. You can send your searches to a phone number and receive Google Assistant's response. It also allows you to get routing information for public transit. Just send it your starting point and destination and it'll get you easy to follow instructions through Here's navigation api
## How we built it
We built this tool through STDLIB's server-less API back-end service. The SMS messages are sent using MessageBird's API. Public transit routing information is obtained through Here's routing API. We access Google Assistant through the google-assistant npm package based within an express js application.
## Challenges we ran into
We had difficulties setting up oauth correctly within our node application. This was necessary to access Google Assistant. Through persistent debugging and help from a very kind top hat mentor we were able to figure out how to set it up correctly.
Another issue we had was in trying to deploy our node app to heroku. To execute our app we needed a package that had to be installed through apt-get and we found out that heroku doesn't support apt-get commands without third party helpers and even after we got the helper, the libasound2-dev package was unavailable. With an hour left before submission we decided to use ngrock for tunneling between our STDLIB api and our node app which was executing locally.
## Accomplishments that we're proud of
We're very proud of how far we got. Every step of the way was a thrill. Getting the navigation service going through SMS was amazing and then getting google assistant to be accessible offline through text was mind blowing.
## What we learned
We learned that we can accomplish a lot on 4 hours of sleep. We learned how useful STDLIB is and how powerful node js can be.
## What's next for Guardian SMS
Next we want to create a supporting web app through which we want to allow other people to use this service by linking their Google accounts. We also want better navigation support so that it allows users to select between: public transit, biking, walking or driving, when they are requesting directions. | ## Inspiration
Our inspiration for Pi-Casso came from our tedious experiences with freehand drawing in design class at school. We realized that sketching and drawing can be very challenging, even for fully able people. Therefore, we looked at HTN 2022 as an opportunity to design a system that would allow disabled people to create “freehand” sketches.
## What it does
The Pi-Casso uses the “Adhawk Mindlink” vision tracking system and a 2 axis actuating bed to convert the user’s eye movements into a drawing. The vision tracking system is calibrated to detect where the user is focusing their vision with respect to a predetermined grid of coordinates. As the user changes their point of focus, the system detects the new direction. Pi-Casso will use this change in vision to create a drawing. If the user looks towards the right, the Pi-Casso will begin to draw a line towards the right. When running, the Pi-Casso will continuously track the user’s vision and create a physical image from it. To pause the system, the user takes a long blink. If another long blink is taken, the system will resume.
## How we built it
First we set goals and expectations for the controls and mechanical aspects of the project’s design. We separated our team into groups of two, one for the controls and the other for the mechanical, then prioritized and delegated responsibilities accordingly. For the mechanical aspects, we began with preliminary sketches to rapidly develop several possible designs. After discussion, we settled on a specific design and began CAD. We went through multiple design iterations as new problems arose. As construction finished, we made numerous changes to our original plan because of unforeseen issues, such as failures in 3D printing.
## Challenges we ran into
Our original design required a 30” x 21.5” base, which we quickly discovered was unfeasible with the resources available. We then revised the design to use a smaller base by reworking the placement of our motors.
Another challenge we faced was designing the mount for the pen. Due to severe sleep deprivation, one of our designers did not realize the measurements used were way too small, and the other team members were too tired to notice. As a result, it took 3 iterations to create a working pen mount.
Another issue arose with the availability of 3D printing. The printing of a critical design component failed several times. Since we were under tight time limitations, we elected to use stock components instead. This proved to be an inferior mechanism, but it was a necessary compromise given the time and material constraints.
## Accomplishments that we're proud of
Our team is very proud to have had excellent communications between the controls and the mechanical team. Throughout the project, all team members kept each other updated on the major events and goals from the respective subsections. We are especially proud that we’ve been able to work together effectively in an intense competitive environment and through several setbacks and mistakes.
## What we learned
One of the biggest takeaways from HTN 2022 is that we need to test as early and as often as possible. We lost at least 5+ hours of productive work because we based many of our designs on assumptions that proved to be faulty when actually implemented. Another key takeaway is the necessity of having strategized alternate ways of solving problems. During this build process, we often ran into issues where we kept trying to implement the original plan instead of switching to a new method. As a collective, we also gained a huge amount of technical skills. Specifically, we learned to work with eye tracking & laser cutting software and improved our abilities to CAD, rapid prototype, build and test software and hardware systems.
Our team also learned critical project management skills. Namely, we actively worked to improve our ability to analyze and break down a problem from several perspectives, allowing the problem to be systematically overcome.
## What's next for Pi-Casso
We’ve identified several key areas in which Pi-Casso could be improved. Most notably, stability of the system leaves much to be desired. The instability leads to rough and often low resolution prints. Another great feature to add would be the ability to detract the pen from the paper. The necessity of having the pen in continual contact with the paper is a key limitation in making high quality prints. The Pi-Casso would also benefit from increasing the amount of control. For example, adding sensors could help create more feedback, allowing the system to create more detailed images. Increased control could also help the system avoid self damage. | partial |
## Inspiration
Do you think twice about security before installing an `apt` package? What about downloading an OS ISO image from an official site? The robust and trustworthy community of modern package managers, fueled by active open-source contributions, seems unshakably strong.
Except it isn't. "I suggest you find a different community to do experiments on," [responded](https://www.theverge.com/2021/4/30/22410164/linux-kernel-university-of-minnesota-banned-open-source) Linux Foundation Fellow Greg Kroah-Hartman. In 15 days, white-hat hackers from the University of Minnesota have caused the institution's permanent ban from the Linux kernel, despite purportedly well-intended experiments to identify current system weaknesses.
Central authorities, benign or malignant, are susceptible to too many external influences that weaken, if not undermine, the public trust. This is the central question that [Web3](https://en.wikipedia.org/wiki/Web3) addresses: how do we interact and live in an increasingly centralized digital world when trust is repeatedly eroded?
## What it does
Introducing [`nex`](https://www.nexinaction.com/), a unique, \*nex\*t-generational decentralized package manager that reduces unconditional trust to only the package author. Built with [Iroh](https://www.iroh.computer/), a Rust-powered, performant platform that iterates from [IPFS](https://ipfs.tech/), `nex` removes any centrality in the process, making it ever more resilient to both censorship, hijackings, and network partitions.
## How we built it
At its core, Iroh provides primitives — practically immutable ones in particular — for "[r]ealtime, multiwriter key-value sync" in the form of [*documents*](https://www.iroh.computer/docs/components/docs). A *read ticket* uniquely identifies and provides read access to all key-value pairs. A package author, the sole user with write access, adds metadata, versions, and hash-addressed file blobs that anyone anywhere can see and download. The design of Iroh inherently makes `nex`:
* A minimal-trust framework: Only the package author controls the document/package. If not necessary, why delegate trust to anyone else?
* A robust network: Hash-addressed, immutable blobs can be stored and served by anyone. Hijackings, censorships, and other attacks looming over centralized package managers are easily identifiable and circumventable.
![image](https://www.iroh.computer/diagrams/documents.svg)
*Illustration from Iroh documentation.*
On top of the core CLI tool, we built a lightweight UI wrapper with [Tauri](https://v2.tauri.app/) (Next.js + Rust) that provides an intuitive and beginner-friendly means of getting into `nex`.
## Challenges we ran into
**Deliberating the Protocol.** For Nex to be useful, it must prove resilience beyond current solutions. The active, heated debate about potential vulnerabilities, along with trade-offs between speed and reliability, was challenging to navigate in the time-sensitive setting of a hackathon. Sleep deprivation didn't help; at 3 a.m., we seriously considered reinventing TCP/IP before realizing coffee isn't a substitute for good judgment.
**Unavoidable (De)serialization.** Building the Tauri frontend, which hinges on the communication between the Next.js frontend and the Rust backend, mundane, trivial problems like (de)serialization became huge time sinks that we had not expected. It's amazing how, after 24 hours without sleep, even parsing JSON starts to feel like deciphering ancient hieroglyphs.
**Trading Perfection for Completeness.** Within the narrow timeframe of PennApps, it was difficult but necessary to find a delicate balance between attention to detail and complete functionalities. We focused on the latter, delivering the three core functionalities of package publication, package search by name, and installation. After all, perfection is hard to achieve when you're powered by energy drinks and three hours of sleep!
## Accomplishments that we're proud of
* Elegant protocol designs leveraging mature open-source frameworks.
* Logarithmic time complexity in package distribution.
* Intuitive, beautiful frontend with native Rust code.
We had quite some fun thinking ideally and theoretically, revisiting high school math. Suppose a user publishes a package that $n$ people simultaneously want to download. While Iroh limits each client to at most $p=5$ concurrent peer connections, it turns out the time complexity of distributing the package to all $n$ peers is logarithmic.
Initially, 1 person distributes the package to $k$ peer. After one such turn, $1+p$ clients can simultaneously serve the package, each to $p$ new peers. For a general $i\in\mathbb N\_+$, let $a\_i\in\mathbb N\_+$ denote the number of clients who receives the package at turn $i$, with $a\_1=1$ initially. Denote $S\_k=a\_1+\cdots+a\_k$, so $a\_k=S\_{k-1}\cdot p$. Then, $S\_k=S\_{k-1}\cdot p+S\_{k-1}$, and we see that $a\_1=S\_1=1$ implies $S\_k=(1+p)^{k-1}$. Then, time complexity is evidently $O(\log\_{1+p}n)=O(\log n)$.
## What we learned
Software development does not exist in a socially void milieu; its meaning and value is precisely reflected in the ways people interact with such software. We set out with a lofty goal, to revolutionize how we leverage everyday programming and computer tools. Purely technical discussions regarding bandwidth, availability, and efficiency are far from the whole picture, and we're grateful for this valuable lesson learned from this intense yet meaningful hacking experience.
## What's next for nex
The frontend admittedly lags behind the CLI in terms of functionality, but it offers drastically more user-facing advantages in and potential for visualization and interactivity. Our primary goal will be to catch up to the completeness of features offered by the CLI, while maintaining its unique benefits.
Meanwhile, we want to facilitate community engagement and contribution, a hallmark of open-source software that made `nex` possible and that `nex` proudly stands for. We hope to talk with various stakeholders, work with industry experts and grassroots enthusiasts, and explore the future of `nex` in greater depth to benefit the landscape of software distribution tomorrow. | ## Inspiration
**Being lazy doesn't quite means we cannot buy NFTs.** Even NFTs are now lazily mintable. Web3 storage facility provided by IPFS changes how we store our data. With every other protocol emerging in web3, we still lag a massive user adoption and frictionless services embedded within our everyday engagements on the internet. People tend to practice their usual choices over picking new, advanced, decentralized web3 services. Another gap quite buzzing around in the web3 space is community lack a developer behavior alongside an entrepreneurial spirit, which stops them from letting the world watch their artwork live over virtual galleries and auctions. Community, if provided with embedded solutions within our reach can potentially emerge as a game-changer to attract mass user adoption to these technologies.
## What it does
"Hmm...I wonder if you are a new artist and recently opened a new account on Twitter, followers being limited to just your friends and families, looking for the options to get your work to wider audience overtime, provide your customers a payment solution to buy NFTs just two clicks away, and of course get yourself some credits for being recognized in space?" worry not! Quartile, in itself, is an embedded solution to those new emerging artists who are finding their way into web3 space and calibrating a revenue stream out of the NFT marketplaces.
Being built on top of **Rarible Protocol**, it allows users to lazy mint NFTs on multiple marketplaces, with **cross-chain** support for multiple blockchains in near future. With NFT payment solutions as innovative as buying them through Twitter DMs is fascinating to customers as they don't need laptops anymore. "You can go for a walk in the evening scrolling through tweets wondering which cool one to buy and just DM the bot to buy one, it will do everything for you, of course except **Private Key**."
It also provides artists who **lazy mint NFTs** through our project, an option to get tweeted by the account to gain customer attraction. It allows users to leverage the concept of **Proof of Existence** on the grounds of **nft.storage** and **web3.storage** powered by **IPFS**. It provides users with **Chainlink** powered Price-Feed updates to get instant options for payment.
Does "Insufficient Funds..." ever bother you from buying NFTs? worry not. **Circle** powered Payment solutions ranging from transferring assets across wallets to funding wallets with Business Accounts for on-chain transactions open up a whole new horizon to how we interact with payment gateways. It allows users full control over handling their assets meanwhile providing a wide range of accessibility to transfer them into on-chain assets for buying and selling NFTs.
## How we built it
Using the process of lazy minting through messaging with image links, users can decide what additions/details would they like to add to this NFT.
Options available:
* Custom Royalty splits
* Lazy Minting NFTs with Trade offer
* Minting with Attributes/Data
* Chainlink powered Price-feed updates
Adding upon this, using the tech stack of **Rarepress** and **Nebulus** backed by Rarible Protocol SDK, it allows my project to leverage web3 powered NFT storage facility as well as writing scripts to customize NFT according to user' needs. At the moment, the project only supports buying NFTs worth static prices, but in near future, I am looking forward to expanding it to a live auction on Twitter. Using Python to engage users with the bot and using Javascript to rely on the process of minting is what provides a sense of decentralized workload across programming languages and keeps it modularized while executing. Additional features include using web3.storage to provide a personal space for users to keep their precious memories safe on a decentralized network.
## Challenges we ran into
* Joining bot framework with minting and trading scripts of Rarible Protocol SDK.
* Surfacing the bot framework to be eligible to safely and precisely create the NFT according to the user's customized options.
* Adding the web app to automatically create the message and prompt users directly to sanction and purchase the NFT.
* Adding support to basically mint any image file publicly accessible over the internet.
* How to fetch attributes from users in a minimal way to help them customize their NFT.
## Accomplishments that we're proud of
* Able to successfully read data from human-readable chat to data modeling for minting NFT.
* Able to create a buy order through chats for static-price indexed NFTs.
* Able to render and process payments checkout limited to two clicks, by automating all the hashes and order permission encoded within backend scripts for a seamless UX.
* Able to provide full-scale flexibility for royalty splits.
* Integrating my previous project of Chainlink VRF-powered giveaway **[[Code](https://github.com/RankJay/Cleopatra), [Demo Video](https://www.youtube.com/watch?v=QAhCSeVSUXc), Used as a separate product by: [EthIndia](https://twitter.com/ETHIndiaco/status/1409783696970899457), [Chainlink Community](https://twitter.com/BillyBukak/status/1419609024023220224)]**
* Able to successfully integrate Circle-powered Payment solutions from transferring assets across wallets to funding wallets with Business Accounts for on-chain transactions.
## What we learned
* Building an MVP
* Interaction between different tech stacks
* Lazy Minting an NFT
* Custom Royalties providing users governance rights through RARI tokens
* Understanding concepts of Typescript in a web app
## What's next for Quartile
* Adding Oracle-based payment solution for minting NFTs on Chainlink network.
* Advanced version where we can host live auction on Twitter for a limited time to race for buying yet expensive and valuable NFTs recognized by their renowned artists.
* Adding Only1 and Metaplex into mainstream, helping users adopt these Solana-powered marketplaces with ease and in the most innovative way.
* Adding Cross Chain Support to blockchains including Binance, Polkadot, Flow, Polygon, and Tezos.
* NFTs mintable only by specific address in case of ownership.
* To provide support end for various multi-sig wallets in order to broaden the range of usability on the customer end.
* Create custom-curated airdrops where people having their wallet address in the description will receive tokens, with voluntary participation in events through retweets and other engagements.
* To provide support end for various Circle wallets solutions in order to broaden the range of payment accessibility on the customer end. | ## Inspiration
We were tired of the same boring jokes that Alexa tells. In an effort to spice up her creative side, we decided to implement a machine learning model that allows her to rap instead.
## What it does
Lil 'lexa uses an LSTM machine learning model to create her own rap lyrics based on the input of the user. Users first tell Alexa their rap name, along with what rapper they would like Lil 'lexa's vocabulary to be inspired by. Models have been created for Eminem, Cardi B, Nicki Minaj, Travis Scott, and Wu-Tang Clan. After the user drops a bar themselves, Lil 'lexa will spit back her own continuation along with a beat to go with it.
## How I built it
The models were trained using TensorFlow along with the Keras API. Lyrics for each rapper were scrapped from metrolyrics.com using Selenium python package, which served as the basis for the rapper's vocabulary. Fifty-word sequences were used as training data, where the model then guesses the next best word. The web application that takes in the seed text and outputs the generated lyrics is built with Flask and is deployed using Heroku. We also use Voiceflow to create the program to be loaded onto an Alexa, which uses an API call to retrieve the generated lyrics.
## Challenges I ran into
* Formatting the user input so that it would always work with the model
* Creating a consistent vocab list for each rapper
* Voiceflow inputs being merged together or stuck
## Accomplishments that I'm proud of
* My Alexa can finally gain some street cred
## What I learned
* Using Flask and Heroku to deploy an application
* Using Voiceflow to create programs that work with Amazon Alexa and Google Assistant
* Using Tensorflow to train an LSTM model
## What's next for Lil 'lexa
* Implementing more complex models that consider sentences and rhyming
* Call and response format for a rap battle
* Wider range of background beats | losing |
## Inspiration
Genes are the code of life, a sequence that determines who you are, what you look like, what you do, and how you behave. Sequencing is the process of determining the order of bases in an organism's genome. Knowing one's genetic sequence can give insight into inherited genetic disorders, one's ancestry, and even one's approximate lifespan.
Next-generation sequencing (NGS) is a term for the massive advancements made in genetic sequencing technologies over the past 20 years. Since the first fully sequenced genome was released in 2000, the price of sequencing has dropped drastically, resulting in a wealth of biotech start-ups looking to commercialize this newfound scientific power.
Given that the human genome is very large (about 3 GB for an individual), the combination of computational tools and biology represents a powerful duo for medical and scientific applications. The field of bioinformatics, as it is known, represents a growth area for life sciences that will only expand in years to come.
## What it does
Reactive Genetics is a web portal. Individuals who have either paid to have their genes sequenced or sequenced them themselves (an increasingly realistic option in the coming years) can paste their sequence into the portal's home page. It then returns a page telling them whether they hold a "good" or "bad" gene for one of six common markers of genetic disease.
## How I built it
Reactive Genetics uses a Flask server that queries the National Center for Biotechnology Information's Basic Local Alignment Search Tool (BLAST) API. "BLASTing" is commonly used in modern biological research to identify unknown genes in model organisms. The results are then returned to a React app that tells the user whether they are positive or negative for a certain genetic marker.
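As a rough sketch of that server-side flow, the endpoint below accepts a pasted sequence, BLASTs it against NCBI via Biopython, and returns a boolean for the requested marker. The use of Biopython, the route name, the marker table, and the e-value cutoff are illustrative assumptions rather than the project's exact code.

```python
# Minimal sketch of the Flask endpoint, not the project's exact code.
# Assumptions: Biopython reaches NCBI BLAST; the route, the EXPECTED_GENES
# table, and the e-value cutoff are placeholders for illustration.
from flask import Flask, request, jsonify
from Bio.Blast import NCBIWWW, NCBIXML

app = Flask(__name__)

# Hypothetical mapping of marker name -> substring expected in the top hit title.
EXPECTED_GENES = {"BRCA1": "BRCA1", "CFTR": "CFTR"}

@app.route("/check", methods=["POST"])
def check_marker():
    data = request.get_json()
    sequence = data["sequence"]
    marker = data["marker"]

    # Submit the pasted sequence to NCBI BLAST (blastn against the nt database).
    result_handle = NCBIWWW.qblast("blastn", "nt", sequence)
    record = NCBIXML.read(result_handle)

    # Boolean result: does any strong hit match the expected gene for this marker?
    expected = EXPECTED_GENES[marker]
    is_normal = any(
        expected in alignment.title and hsp.expect < 1e-10
        for alignment in record.alignments
        for hsp in alignment.hsps
    )
    return jsonify({"marker": marker, "normal": is_normal})
```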
## Challenges I ran into
The human genome was too large to return reliably or host within the app, so the trivial solution of querying the sequence against the reference genome wasn't possible. We resorted to BLASTing the input sequence and returning a boolean indicating whether the gene is what it "should" be.
## Accomplishments that I'm proud of
One team member hopes to enter serious bioinformatics research one day and this is a major first step. Another team member gave a serious shot at learning React, a challenging endeavour given the limited time frame.
## What I learned
One team member learned use of the BLAST API. Another team member became familiar with Bootstrap.
## What's next for Reactive Genetics
The app is currently running both a React development server and a Flask server. Eventually, porting everything over to one language and application would be ideal. More bioinformatics tools are released on a regular basis, so there is potential to use other technologies in the future and/or migrate completely to React. | ## Inspiration
Our inspiration came from our desire of bubble tea. We wanted to share an app that would help others from being in our predicament.
## What it does
It locates bubble tea shops and displays featured locations which are within close proximity to the user. Featured drinks from each location are recommended to the user.
## How we built it
We used the android studio sdk to write the necessary java and XML files for our program. The google maps API was also used for the purpose of displaying physical locations to the user, as well as their own GPS location. GitHub was used to store our repository and to ensure that all team members were able to collaborate on the project together.
## Challenges we ran into
This was the first time that most of us have programmed in java, and the first time for all of us to develop an android application. We were unable to generate an API key for the google maps place API due to a bug on the google cloud website. As a temporary workaround, we hard-coded the coordinates of the bubble tea shops in Ottawa for a demo.
## Accomplishments that we're proud of
We successfully created a functional app for the first time and ventured into a field of software development which we had not been exposed to in our prior coursework. This was a rewarding learning experience. We were able to successfully integrate the google maps sdk for android after several attempts.
## What we learned
We learned how to use android studio sdk to create a functional app which also implemented an API. Furthermore, we learned how to share the workload of a software development project through the use of GitHub.
## What's next for FindMyBubbleTea
We plan to implement a functioning API that's actually able to search for locations rather than use hard-coded coordinates. We will also try to improve the user interface and design of the application. | ## Inspiration
When we first read Vitech's challenge for processing and visualizing their data, we were collectively inspired to explore a paradigm of programming that very few of us had any experience with, machine learning. With that in mind, the sentiment of the challenge themed around health care established relevant and impactful implications for the outcome of our project. We believe that using machine learning and data science to improve the customer experience of people in the market for insurance plans, would not only result in a more profitable model for insurance companies but improve the lives of the countless people who struggle to choose the best insurance plans for themselves at the right costs.
## What it does
Our scripts are built to parse, process, and format the data provided by Vitech's live V3 API database. The data is initially filtered using Solr queries and then formatted into a more adaptable comma-separated variable (CSV) file. This data is then processed by a different script through several machine learning algorithms in order to extract meaningful data about the relationship between an individual's personal details and the plan that they are most likely to choose. Additionally, we have provided visualizations created in R that helped us interpret the many data points more effectively.
## How we built it
We initially explored all of the ideas that we had regarding how exactly we planned to process the data and proceeded to pick Python as a suitable language and interface in which we believed that we could accomplish all of our goals. The first step was parsing and formatting data after which we began observing it through the visualization tools provided by R. Once we had a rough idea about how our data is distributed, we continued by making models using the h2o Python library in order to model our data.
## Challenges we ran into
Since none of us had much experience with machine learning prior to this project, we dived into many software tools we had never even seen before. Furthermore, the data provided by Vitech had many variables to track, so our deficiency in understanding of the insurance market truly slowed down our progress in making better models for our data.
## Accomplishments that we're proud of
We are very proud that we got as far as we did even though out product is not finalized. Going into this initially, we did not know how much we could learn and accomplish and yet we managed to implement fairly complex tools for analyzing and processing data. We have learned greatly from the entire experience as a team and are now inspired to continue exploring data science and the power of data science tools.
## What we learned
We have learned a lot about the nuances of processing and working with big data and about what software tools are available to us for future use.
## What's next for Vitech Insurance Data Processing and Analysis
We hope to further improve our modeling to get more meaningful and applicable results. The next barrier to overcome is definitely related to our lack of field expertise in the realm of the insurance market which would further allow us to make more accurate and representative models of the data. | losing |
## Inspiration
Although environmental problems are well-known all over the world, many people are still reluctant to reduce the carbon footprint by changing their daily habits. The main reason is that people cannot feel how much their decision can influence the environment. Our aim is to raise people’s awareness of their impact on the environment by calculating carbon footprints from traveling and food.
* Transportation: Cars were first invented in 1886. Since then, life has been greatly improved. People can more easily visit their friends. Goods can be more efficiently transported. However, this also brings problems when more cars are used. According to the EPA, “a typical passenger vehicle emits about 4.6 metric tons of carbon dioxide per year.” Furthermore, because online shopping is becoming more popular, carbon emission of transportation of these packages is exponentially increasing.
* Electricity: Since electricity is a clean and renewable source, some people believe it is not very important to save electricity. However, numerous power stations and electric grids can also leave carbon footprints. Indeed, people tend to notice the carbon footprint produced by vehicles, coal burned, or other traditional carbon emission sources. We compared the footprint produced by electricity consumption with that produced by vehicles, gasoline, coal, and more. By transferring the data of electricity consumption to the data of vehicles, gasoline, etc, we are able to give users a sense of how important it is to save electricity.
* Food: Although we usually think about transportation when it comes to carbon emissions, food actually has a major impact on carbon.
From this second image, we can see that our food choices have a major impact on the environment. So, the next time you eat out or go to the grocery store, consider stocking up on more vegetables, and reducing your beef and lamb intake!
## What it does
* The transportation & electricity consumption sections allow users to input the data of their daily routine, and the website will estimate the amount of carbon emission based on the users’ data by Carbon Interface API.
* The food section accepts an image URL, and upon pressing the submission button, shows you the picture that you submitted. It analyzes what kind of food the picture shows using the Clarifai API, then calculates the approximate CO2 of those foods.
* For all the pages, based on the amount of CO2 the user was estimated to use, the equivalence of that amount of emissions in terms of things like number of smartphones that could be charged, miles that could be traveled, and the number of trees it would take to sequester that amount of carbon is displayed.
## Challenges we ran into
Initially, we weren’t sure how to use the Clarifai API to recognize foods from images. We were able to use a Python script, but then had to learn how to use Flask to integrate the recognition with the button submission (the onClick event). Afterwards, since we were also building additional pages with React, we struggled to integrate the Flask backend with the React frontend.
## Accomplishments that we're proud of
We’re proud of lots of things. From using Github for the first time to learning to use Flask and connecting the Flask backend with the React frontend (thanks to mentor help!), we’ve learned so much!
## What we learned
We learned how to pass data from flask backend to react frontend and how to combine backend and frontend into a single project.
## What's next for Carbonology
We hope to design a more interactive user interface, connecting the pages more than in just a navigation bar (for example, a weekly summary of your carbon emissions if you fill out information for the week). We also aim to recognize carbon emission numbers for a wider variety of foods, and potentially add image recognition to items besides foods. | ## Inspiration
We were inspired to create Eco Kitty by the pressing need to educate children about environmental issues in an engaging and accessible way. We realized that many kids are naturally curious about nature and the world around them, but often find traditional environmental education boring or difficult to understand. We wanted to harness the power of modern technology to make sustainability fun and interactive, hoping to nurture a generation of eco-conscious individuals from an early age.
## What it does
Eco Kitty is an interactive educational app that makes learning about sustainability and environmental consciousness fun for children. It features an AI-powered chat assistant called the Eco-Buddy, which provides eco-tips and answers questions about the environment. The app's innovative recycling scanner allows kids to use their device's camera to identify recyclable items and learn proper disposal methods. To help children understand their environmental impact, Eco Kitty includes a leaderboard system that fosters some healthy competition between family members to adapt to eco-friendly habits. The app also incorporates gamification through daily eco-friendly quests, challenging users to develop sustainable habits. By combining these engaging features, Eco Kitty aims to nurture environmentally responsible behaviors in young learners, making complex environmental concepts accessible and exciting.
## How we built it
We built Eco Kitty using a modern tech stack centered around Next.js 14 with TypeScript. The frontend was developed using React components styled with Tailwind CSS for a clean, responsive design. We leveraged Next.js API routes for our backend logic, which helped streamline our development process. Supabase served as our database and authentication solution, providing a robust and scalable foundation. The AI chat feature was powered by the OpenAI API, while we used Zxing and Cheerio for UTC code recognition as well web scraping public data to find products
## Challenges we ran into
Making Eco Kitty was fun but tough. We wanted to teach kids about the environment in a way that was both correct and easy to understand. It was hard to explain big ideas in simple words without losing important details.
The AI chat part was tricky too. We had to make sure it talked to kids in a friendly way and taught them the right things. We spent a lot of time making the AI say just the right things and adding ways to keep it safe for kids.
Another big problem was using lots of different computer tools to build our app. Many of these tools had old or confusing instructions. This made it really frustrating to figure out how to use them. But we didn't give up! We looked everywhere for help - in the official guides, on forums where other coders talk, and even in other people's projects that used similar tools. We had to put together bits of information from all these places, which was like solving a big puzzle. This made us think in new ways and come up with clever solutions to make Eco Kitty work better. We learned that being able to adapt and find new ways to solve problems is super important when making apps. In the end, the hard parts helped us make Eco Kitty even cooler than we first imagined.
## Accomplishments that we're proud of
We're incredibly proud of what we've accomplished with Eco Kitty. Our biggest achievement was creating an app that makes complex environmental concepts accessible and engaging for children. Along the way, we significantly expanded our technical skills. We dove deep into Next.js 14, mastering its new App Router and strengthening our TypeScript expertise. Integrating various APIs, including OpenAI for our AI chat feature and image recognition services for our recycling scanner, pushed us to learn rapidly and adapt to new technologies. We overcame challenges with outdated documentation by collaborating and finding creative solutions, which not only solved our immediate problems but also deepened our understanding of API integration. The project also allowed us to refine our skills in React, Tailwind CSS, and Supabase. Perhaps most rewarding was seeing our vision come to life - we worked so hard as a team and couldn't have gone anywhere with the support we provided for each other. Though moments were intense and hard we stuck together and fought to the very end.
## What we learned
Developing Eco Kitty was a fantastic learning experience for our team. We significantly improved our skills in Next.js 14 and TypeScript, while also gaining valuable experience with Supabase and the OpenAI API. The project taught us the importance of clear communication and teamwork, especially when solving complex problems like creating child-friendly AI interactions or designing engaging educational content. We learned to adapt quickly when faced with challenges like outdated API documentation, which pushed us to find creative solutions. Working on an environmentally focused app also showed us how technology can be used to educate and inspire positive change. Overall, this project not only enhanced our technical abilities but also reinforced the value of creating purposeful technology that can make a real difference in the world.
## What's next for EcoKitty
• Conduct extensive user testing with children and parents
• Expand educational content with more environmental topics and quests
• Localize the app for different regions (recycling guidelines, languages)
• Create a parental dashboard for progress monitoring
• Enhance gamification with rewards, levels, and competitions
• Develop resources for school curriculum integration
• Continuously improve AI chat responses and accuracy
• Add community features for users to share eco-friendly achievements
• Partner with environmental organizations to enhance content and reach | ## Inspiration
30 percent of the total US carbon CO2 emissions were produced by the transportation sector. That's nearly 2 billion metric tons of CO2 per year that we send into the atmosphere.
We need to do something and do it quickly since the Earth has already been feeling the effects of humankind on its well being: sea levels have risen, ocean temperatures increased, and global warming is a real threat. Luckily, non-profits like Cool Earth have already jumped on the problem, with the mission of reinvesting the donations they get into cleaner and greener energy sources. They calculated that for every tonne of CO2 that is being sent into the atmosphere, they can offset its negative effects with a donation of just **$1.34** dollars.
The initial idea for Zeppelin came through when we thought about how much it would cost to offset one flight-worth of CO2. After reading papers from different non-profits, we realized that a flight from Los Angeles to New York would cost only $4.22 per person. Who wouldn't want to help the Earth, greatly offset the carbon footprint, and do it for just a fraction of a cost of a normal ticket? We decided to make it easy for people to make this step forward, so we created Zeppelin.
## What it does
Zeppelin has a plethora of different features that make it as easy as pressing a button to make a difference in today fight against Global Warming. So let's go through a scenario of someone who might use it after flying or driving in a car:
1. Jake just landed in Atlanta with his flight from San Francisco, and by opening Zeppelin, facing the camera at his boarding pass, and pressing a button, he was quick to donate to the non-profits he knows can help his cause. Luckily, his credit card is stored securely on the app, so there is no need to do any extra typing.
2. Reni is about to go on a road trip with his friends from New York City to Denver. By opening Zeppelin, typing in where they are going, and pressing the "Donate" button, Reni kept his mind at ease during the trip about his carbon footprint.
We can scan boarding passes, calculate your CO2 emission by car and by plane, store your credit card for when you want to donate, and we can wire the money to companies like Cool Earth for you.
## How we built it
Zeppelin is built in React Native, which means that this app can soon be available on iOS, Android, and Windows phone, and after a little bit of tweaking, can even come up as a website. We first sat down and discussed what we wanted in the minimum viable product, and later brainstormed potential features we can add. We were happy to have finished out MVP pretty early, and after dividing up the additional features, we worked on accomplishing those.
## Challenges we ran into
It was the first real opportunity most of us had to experience React Native, so there were many learning challenges we experienced. Setting up the local SQLite database and connecting with the Stripe API was the hardest of the challenges.
## Accomplishments that we're proud of
We took a lot of time in ensuring that our app looks polished. Even though this is a hackathon project, the app looks professional.
## What we learned
## What's next for Zeppelin
Technologically, we would love to connect the app through social media, to add a social aspect to lowering carbon emissions. Additionally, we would like to offer a larger range of potential non-profits people could donate to, so our users can see themselves who they like best.
On the business side, we see very big potential with airline companies: with large companies today helping fight climate change, airline companies can add the potential donations on future tickets, so the flyers can become better educated on what they can do to help. | losing |
## Inspiration
40 million people in the world are blind, including 20% of all people aged 85 or older. Half a million people suffer paralyzing spinal cord injuries every year. 8.5 million people are affected by Parkinson’s disease, with the vast majority of these being senior citizens. The pervasive difficulty for these individuals to interact with objects in their environment, including identifying or physically taking the medications vital to their health, is unacceptable given the capabilities of today’s technology.
First, we asked ourselves the question, what if there was a vision-powered robotic appliance that could serve as a helping hand to the physically impaired? Then we began brainstorming: Could a language AI model make the interface between these individual’s desired actions and their robot helper’s operations even more seamless? We ended up creating Baymax—a robot arm that understands everyday speech to generate its own instructions for meeting exactly what its loved one wants. Much more than its brilliant design, Baymax is intelligent, accurate, and eternally diligent.
We know that if Baymax was implemented first in high-priority nursing homes, then later in household bedsides and on wheelchairs, it would create a lasting improvement in the quality of life for millions. Baymax currently helps its patients take their medicine, but it is easily extensible to do much more—assisting these same groups of people with tasks like eating, dressing, or doing their household chores.
## What it does
Baymax listens to a user’s requests on which medicine to pick up, then picks up the appropriate pill and feeds it to the user. Note that this could be generalized to any object, ranging from food, to clothes, to common household trinkets, to more. Baymax responds accurately to conversational, even meandering, natural language requests for which medicine to take—making it perfect for older members of society who may not want to memorize specific commands. It interprets these requests to generate its own pseudocode, later translated to robot arm instructions, for following the tasks outlined by its loved one. Subsequently, Baymax delivers the medicine to the user by employing a powerful computer vision model to identify and locate a user’s mouth and make real-time adjustments.
## How we built it
The robot arm by Reazon Labs, a 3D-printed arm with 8 servos as pivot points, is the heart of our project. We wrote custom inverse kinematics software from scratch to control these 8 degrees of freedom and navigate the end-effector to a point in three dimensional space, along with building our own animation methods for the arm to follow a given path. Our animation methods interpolate the arm’s movements through keyframes, or defined positions, similar to how film editors dictate animations. This allowed us to facilitate smooth, yet precise, motion which is safe for the end user.
We built a pipeline to take in speech input from the user and process their request. We wanted users to speak with the robot in natural language, so we used OpenAI’s Whisper system to convert the user commands to text, then used OpenAI’s GPT-4 API to figure out which medicine(s) they were requesting assistance with.
We focused on computer vision to recognize the user’s face and mouth. We used OpenCV to get the webcam live stream and used 3 different Convolutional Neural Networks for facial detection, masking, and feature recognition. We extracted coordinates from the model output to extrapolate facial landmarks and identify the location of the center of the mouth, simultaneously detecting if the user’s mouth is open or closed.
When we put everything together, our result was a functional system where a user can request medicines or pills, and the arm will pick up the appropriate medicines one by one, feeding them to the user while making real time adjustments as it approaches the user’s mouth.
## Challenges we ran into
We quickly learned that working with hardware introduced a lot of room for complications. The robot arm we used was a prototype, entirely 3D-printed yet equipped with high-torque motors, and parts were subject to wear and tear very quickly, which sacrificed the accuracy of its movements. To solve this, we implemented torque and current limiting software and wrote Python code to smoothen movements and preserve the integrity of instruction.
Controlling the arm was another challenge because it has 8 motors that need to be manipulated finely enough in tandem to reach a specific point in 3D space. We had to not only learn how to work with the robot arm SDK and libraries but also comprehend the math and intuition behind its movement. We did this by utilizing forward kinematics and restricted the servo motors’ degrees of freedom to simplify the math. Realizing it would be tricky to write all the movement code from scratch, we created an animation library for the arm in which we captured certain arm positions as keyframes and then interpolated between them to create fluid motion.
Another critical issue was the high latency between the video stream and robot arm’s movement, and we spent much time optimizing our computer vision pipeline to create a near instantaneous experience for our users.
## Accomplishments that we're proud of
As first-time Hackathon participants, we are incredibly proud of the incredible progress we were able to make in a very short amount of time, proving to ourselves that with hard work, passion, and a clear vision, anything is possible. Our team did a fantastic job embracing the challenge of using technology unfamiliar to us, and stepped out of our comfort zones to bring our idea to life. Whether it was building the computer vision model, or learning how to interface the robot arm’s movements with voice controls, we ended up building a robust prototype which far surpassed our initial expectations. One of our greatest successes was coordinating our work so that each function could be pieced together and emerge as a functional robot. Let’s not overlook the success of not eating our hi-chews we were using for testing!
## What we learned
We developed our skills in frameworks we were initially unfamiliar with such as how to apply Machine Learning algorithms in a real-time context. We also learned how to successfully interface software with hardware - crafting complex functions which we could see work in 3-dimensional space. Through developing this project, we also realized just how much social impact a robot arm can have for disabled or elderly populations.
## What's next for Baymax
Envision a world where Baymax, a vigilant companion, eases medication management for those with mobility challenges. First, Baymax can be implemented in nursing homes, then can become a part of households and mobility aids. Baymax is a helping hand, restoring independence to a large disadvantaged group.
This innovation marks an improvement in increasing quality of life for millions of older people, and is truly a human-centric solution in robotic form. | ## Inspiration
More than **2 million** people in the United States are affected by diseases such as ALS, brain or spinal cord injuries, cerebral palsy, muscular dystrophy, multiple sclerosis, and numerous other diseases that impair muscle control. Many of these people are confined to their wheelchairs, some may be lucky enough to be able to control their movement using a joystick. However, there are still many who cannot use a joystick, eye tracking systems, or head movement-based systems.
Therefore, a brain-controlled wheelchair can solve this issue and provide freedom of movement for individuals with physical disabilities.
## What it does
BrainChair is a neurally controlled headpiece that can control the movement of a motorized wheelchair. There is no using the attached joystick, just simply think of the wheelchair movement and the wheelchair does the rest!
## How we built it
The brain-controlled wheelchair allows the user to control a wheelchair solely using an OpenBCI headset. The headset is an Electroencephalography (EEG) device that allows us to read brain signal data that comes from neurons firing in our brain. When we think of specific movements we would like to do, those specific neurons in our brain will fire. We can collect this EEG data through the Brainflow API in Python, which easily allows us to stream, filter, preprocess the data, and then finally pass it into a classifier.
The control signal from the classifier is sent through WiFi to a Raspberry Pi which controls the movement of the wheelchair. In our case, since we didn’t have a motorized wheelchair on hand, we used an RC car as a replacement. We simply hacked together some transistors onto the remote which connects to the Raspberry Pi.
## Challenges we ran into
* Obtaining clean data for training the neural net took some time. We needed to apply signal processing methods to obtain the data
* Finding the RC car was difficult since most stores didn’t have it and were closed. Since the RC car was cheap, its components had to be adapted in order to place hardware pieces.
* Working remotely made designing and working together challenging. Each group member worked on independent sections.
## Accomplishments that we're proud of
The most rewarding aspect of the software is that all the components front the OpenBCI headset to the raspberry-pi were effectively communicating with each other
## What we learned
One of the most important lessons we learned is effectively communicating technical information to each other regarding our respective disciplines (computer science, mechatronics engineering, mechanical engineering, and electrical engineering).
## What's next for Brainchair
To improve BrainChair in future iterations we would like to:
Optimize the circuitry to use low power so that the battery lasts months instead of hours. We aim to make the OpenBCI headset not visible by camouflaging it under hair or clothing. | ## Inspiration
There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life.
## What it does
We utilized a smartphone camera to analyze the surrounding and warn visually impaired people about obstacles that were in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with and we stored their name and facial attributes that can be recalled later. An Alzheimer's patient can utilize the face recognition software to remind the patient of who the person is, and when they last saw him.
## How we built it
We built our app around **Azure's APIs**, we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it.
## Challenges we ran into
Through the process of coding and developing our idea we ran into several technical difficulties. Our first challenge was to design a simple UI, so that the visually impaired people could effectively use it without getting confused. The next challenge we ran into was attempting to grab visual feed from the camera, and running them fast enough through the Azure services to get a quick response. Another challenging task that we had to accomplish was to create and train our own neural network with relevant data.
## Accomplishments that we're proud of
We are proud of several accomplishments throughout our app. First, we are especially proud of setting up a clean UI with two gestures, and voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network, that was capable of identifying faces and objects.
## What we learned
We learned how to implement **Azure Custom Vision and Azure Face APIs** into **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for the rest of us to learn about neural networking.
## What's next for BlindSpot
In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device. | winning |
## Inspiration
GetSense was developed in an attempt to create low latency live streaming for countries with slow internet so that they could still have security regardless of their internet speed.
## What it does
GetSense is an AI powered flexible security solution that uses low-level IoT devices (laptop camera systems or Raspberry Pi) to detect, classify, and identify strangers and friends in your circle. A GetSense owner uploads images of authorized faces through an user-facing mobile application. Through the application, the user has access to a live-stream of all connected camera devices, and authorized friend list.
Under the hood, when an user uploads authorized faces, these are sent as data to Firebase storage through a REST API which generates dynamic image URLs. These are then sent to serverless functions (FAAS) which connects to the computer vision microservices setup in Clarifai. The IoT devices communicate via RTSP and streams the video-feed using low-latency.
## How we built it
We used stdlib to generate serverless functions for obtaining the probability score through Clarifai facial recognition, push notifications via Slack alerts to notify the user of an unrecognizable face, and managing the image model training route to Clarifai. For the facial detection process, we used OpenCV with multithreading to detect faces (through Clarifai) for optimization purposes - this was done in Python.
An iOS application was exposed to the user for live-streaming all camera sources, adding authorized faces, and visualizing current friend list. All the data involving images and streaming was handled through Firebase storage and database, which the iOS application heavily interfaced with.
## Challenges we ran into
Our initial goal was to use AWS kinesis to process everything originating from a Raspberry Pi camera module. We had lots of issues with the binaries and overall support of AWS kinesis, so we had to pivot and explore camera modules on local machines. We had to explore using Clarifai for facial detection, running serverless functions with stdlib, and push notifications through an external service.
## Accomplishments that we're proud of
It works.
## What we learned
We learned how to use StdLib, Clarifai for image processing, OpenCV, and building an iOS application.
## What's next for GetSense
We want to improve it to make it more user friendly. | ## Inspiration
Are you out in public but scared about people standing too close? Do you want to catch up on the social interactions at your cozy place but do not want to endanger your guests? Or you just want to be notified as soon as you have come in close contact to an infected individual? With this app, we hope to provide the tools to users to navigate social distancing more easily amidst this worldwide pandemic.
## What it does
The Covid Resource App aims to bring a one-size-fits-all solution to the multifaceted issues that COVID-19 has spread in our everyday lives.
Our app has 4 features, namely:
- A social distancing feature which allows you to track where the infamous "6ft" distance lies
- A visual planner feature which allows you to verify how many people you can safely fit in an enclosed area
- A contact tracing feature that allows the app to keep a log of your close contacts for the past 14 days
- A self-reporting feature which enables you to notify your close contacts by email in case of a positive test result
## How we built it
We made use primarily of Android Studio, Java, Firebase technologies and XML. Each collaborator focused on a task and bounced ideas off of each other when needed.
The social distancing feature functions based on a simple trigonometry concept and uses the height from ground and tilt angle of the device to calculate how far exactly is 6ft.
The visual planner adopts a tactile and object-oriented approach, whereby a room can be created with desired dimensions and the touch input drops 6ft radii into the room.
The contact tracing functions using Bluetooth connection and consists of phones broadcasting unique ids, in this case, email addresses, to each other. Each user has their own sign-in and stores their keys on a Firebase database.
Finally, the self-reporting feature retrieves the close contacts from the past 14 days and launches a mass email to them consisting of quarantining and testing recommendations.
## Challenges we ran into
Only two of us had experience in Java, and only one of us had used Android Studio previously. It was a steep learning curve but it was worth every frantic google search.
## What we learned
* Android programming and front-end app development
* Java programming
* Firebase technologies
## Challenges we faced
* No unlimited food | ## Inspiration
Today’s technological advances have resulted in many consumers having to increasingly rely on Web-based banking and financial systems. Consumers often overlooked as we make this transition include the visually impaired and/or the less tech-savvy populations (e.g. elderly). It is critical that we guarantee that systems give equal and complete access to everyone, and none are disregarded. Some may point out that paper billing is an option, but on top of not being eco-friendly, paper billing is not an ideal way of getting the latest information about your records on the go.
We combine the convenience and efficiency of mobile/web applications with the ease of paper billing. **One button is all you need in a clean, user-friendly interface.**
## What *Bank Yeller* Does
Our application allows the user to voice chat with an AI (created using Dasha AI) to get their latest bank account information and be able to pay their bills vocally.
Users can ask about their latest transactions to find out how much they spent at each location, and they can ask about their latest bank statements to find out when and how much is due, as well as have the option to pay them off!
## How We Built *Bank Yeller*
* We used existing Dasha AI repositories as inspiration and guides to create our own personalized AI that was geared to our particular use case.
* We created and deployed an SQL Database to Google Cloud Platform to mimic a bank’s database and demonstrate how this would work in everyday life.
* We used Javascript to set up our Dasha.AI interactions as well as connect to our SQL Database and run queries.
* We used Figma to construct prototypes of the web and mobile based apps.
## Challenges We Ran Into
Using Dasha AI had its own learning curve, but the Dasha AI team was amazing and provided us with numerous resources and support throughout the creation of our project! (Special shoutout to Arthur Grishkevich and Andrey Turtsev at Dasha AI for putting up with Joud’s endless questions.)
Having not had much experience with async functions, we also struggled with dealing with them and figuring out how to use them when performing queries. We ended up watching several YouTube videos to understand the concepts and help us approach our problem. We were exposed to several new platforms this weekend, each with its own challenges, which only pushed us to work harder and learn more.
## Accomplishments We're Proud of
One of the accomplishments that we are proud of is the implementation of our Dasha AI accessing and verifying information from our SQL Database stored in Google Cloud. We are also proud that we were able to divide our time well this weekend to give us time to create a project we really care about and enjoyed creating as well as time to meet new people and learn about some very cool concepts! In our past hackathon we created an application to support local businesses in London, and we are proud to continue in this path of improving lives through the targeting of (often) overlooked populations.
## What We Learned
Each of us had skills that the other had not used before, so we were able to teach each other new concepts and new ways of approaching ideas! The Hack Western workshops also proved to be excellent learning resources.
## What's Next for *Bank Yeller*
Expanding! Having it available as a phone application where users can press anywhere on the screen (shown in Figma mockups) to launch Dasha. There, they will be able to perform all the aforementioned actions, and possibly some additional features! Taking visual impairments into note, we would add clear, large captions that indicate what is being said.
* for instructions on how to run Bank Yeller, please check out our README file :)) | winning |
## Inspiration
The inspiration for our project came from three of our members being involved with Smash in their community. From one of us being an avid competitor, one being an avid watcher and one of us who works in an office where Smash is played quite frequently, we agreed that the way Smash Bro games were matched and organized needed to be leveled up. We hope that this becomes a frequently used bot for big and small organizations alike.
## How it Works
We broke the project up into three components, the front end made using React, the back end made using Golang and a middle part connecting the back end to Slack by using StdLib.
## Challenges We Ran Into
A big challenge we ran into was understanding how exactly to create a bot using StdLib. There were many nuances that had to be accounted for. However, we were helped by amazing mentors from StdLib's booth. Our first specific challenge was getting messages to be ephemeral for the user that called the function. Another adversity was getting DM's to work using our custom bot. Finally, we struggled to get the input from the buttons and commands in Slack to the back-end server. However, it was fairly simple to connect the front end to the back end.
## The Future for 'For Glory'
Due to the time constraints and difficulty, we did not get to implement a tournament function. This is a future goal because this would allow workspaces and other organizations that would use a Slack channel to implement a casual tournament that would keep the environment light-hearted, competitive and fun. Our tournament function could also extend to help hold local competitive tournaments within universities. We also want to extend the range of rankings to have different types of rankings in the future. One thing we want to integrate into the future for the front end is to have a more interactive display for matches and tournaments with live updates and useful statistics. | ## What it does
KokoRawr at its core is a Slack App that facilitates new types of interactions via chaotic cooperative gaming through text. Every user is placed on a team based on their Slack username and tries to increase their team's score by playing games such as Tic Tac Toe, Connect 4, Battleship, and Rock Paper Scissors. Teams must work together to play. However, a "Twitch Plays Pokemon" sort of environment can easily be created where multiple people are trying to execute commands at the same time and step on each others' toes. Additionally, people can visualize the games via a web app.
## How we built it
We jumped off the deep into the land of microservices. We made liberal use of StdLib with node.js to deploy a service for every feature in the app, amounting to 10 different services. The StdLib services all talk to each other and to Slack. We also have a visualization of the game boards that is hosted as a Flask server on Heroku that talks to the microservices to get information.
## Challenges we ran into
* not getting our Slack App banned by HackPrinceton
* having tokens show up correctly on the canvas
* dealing with all of the madness of callbacks
* global variables causing bad things to happen
## Accomplishments that we're proud of
* actually chaotically play games with each other on Slack
* having actions automatically showing up on the web app
* The fact that we have **10 microservices**
## What we learned
* StdLib way of microservices
* Slack integration
* HTML5 canvas
* how to have more fun with each other
## Possible Use Cases
* Friendly competitive way for teams at companies to get to know each other better and learn to work together
* New form of concurrent game playing for friend groups with "unlimited scalability"
## What's next for KokoRawr
We want to add more games to play and expand the variety of visualizations that are shown to include more games. Some service restructuring would be need to be done to reduce the Slack latency. Also, game state would need to be more persistent for the services. | ## Inspiration
“**Social media sucks these days.**” — These were the first few words we heard from one of the speakers at the opening ceremony, and they struck a chord with us.
I’ve never genuinely felt good while being on my phone, and like many others I started viewing social media as nothing more than a source of distraction from my real life and the things I really cared about.
In December 2019, I deleted my accounts on Facebook, Instagram, Snapchat, and WhatsApp.
For the first few months — I honestly felt great. I got work done, focused on my small but valuable social circle, and didn’t spend hours on my phone.
But one year into my social media detox, I realized that **something substantial was still missing.** I had personal goals, routines, and daily checklists of what I did and what I needed to do — but I wasn’t talking about them. By not having social media I bypassed superficial and addictive content, but I was also entirely disconnected from my network of friends and acquaintances. Almost no one knew what I was up to, and I didn’t know what anyone was up to either. A part of me longed for a level of social interaction more sophisticated than Gmail, but I didn’t want to go back to the forms of social media I had escaped from.
One of the key aspects of being human is **personal growth and development** — having a set of values and living them out consistently. Especially in the age of excess content and the disorder of its partly-consumed debris, more people are craving a sense of **routine, orientation, and purpose** in their lives. But it’s undeniable that **humans are social animals** — we also crave **social interaction, entertainment, and being up-to-date with new trends.**
Our team’s problem with current social media is its attention-based reward system. Most platforms reward users based on numeric values of attention, through measures such as likes, comments and followers. Because of this reward system, people are inclined to create more appealing, artificial, and addictive content. This has led to some of the things we hate about social media today — **addictive and superficial content, and the scarcity of genuine interactions with people in the network.**
This leads to a **backward-looking user-experience** in social media. The person in the 1080x1080 square post is an ephemeral and limited representation of who the person really is. Once the ‘post’ button has been pressed, the post immediately becomes an invitation for users to trap themselves in the past — to feel dopamine boosts from likes and comments that have been designed to make them addicted to the platform and waste more time, ultimately **distorting users’ perception of themselves, and discouraging their personal growth outside of social media.**
In essence — We define the question of reinventing social media as the following:
*“How can social media align personal growth and development with meaningful content and genuine interaction among users?”*
**Our answer is High Resolution — a social media platform that orients people’s lives toward an overarching purpose and connects them with liked-minded, goal-oriented people.**
The platform seeks to do the following:
**1. Motivate users to visualize and consistently achieve healthy resolutions for personal growth**
**2. Promote genuine social interaction through the pursuit of shared interests and values**
**3. Allow users to see themselves and others for who they really are and want to be, through natural, progress-inspired content**
## What it does
The following are the functionalities of High Resolution (so far!):
After Log in or Sign Up:
**1. Create Resolution**
* Name your resolution, whether it be Learning Advanced Korean, or Spending More Time with Family.
* Set an end date to the resolution — i.e. December 31, 2022
* Set intervals that you want to commit to this goal for (Daily / Weekly / Monthly)
**2. Profile Page**
* Ongoing Resolutions
+ Ongoing resolutions and level of progress
+ Clicking on a resolution opens up the timeline of that resolution, containing all relevant posts and intervals
+ Option to create a new resolution, or ‘Discover’ resolutions
* ‘Discover’ Page
+ Explore other users’ resolutions, that you may be interested in
+ Clicking on a resolution opens up the timeline of that resolution, allowing you to view the user’s past posts and progress for that particular resolution and be inspired and motivated!
+ Clicking on a user takes you to that person’s profile
* Past Resolutions
+ Past resolutions and level of completion
+ Resolutions can either be fully completed or partly completed
+ Clicking on a past resolution opens up the timeline of that resolution, containing all relevant posts and intervals
**3. Search Bar**
* Search for and navigate to other users’ profiles!
**4. Sentiment Analysis based on IBM Watson to warn against highly negative or destructive content**
* Two functions for sentiment analysis textual data on platform:
* One function to analyze the overall positivity/negativity of the text
* Another function to analyze the user of the amount of joy, sadness, anger and disgust
* When the user tries to create a resolution that seems to be triggered by negativity, sadness, fear or anger, we show them a gentle alert that this may not be best for them, and ask if they would like to receive some support.
* In the future, we can further implement this feature to do the same for comments on posts.
* This particular functionality has been demo'ed in the video, during the new resolution creation.
* **There are two purposes for this functionality**:
* a) We want all our members to feel that they are in a safe space, and while they are free to express themselves freely, we also want to make sure that their verbal actions do not pose a threat to themselves or to others.
* b) Current social media has shown to be a propagator of hate speech leading to violent attacks in real life. One prime example are the Easter Attacks that took place in Sri Lanka exactly a year ago: <https://www.bbc.com/news/technology-48022530>
* If social media had a mechanism to prevent such speech from being rampant, the possibility of such incidents occurring could have been reduced.
* Our aim is not to police speech, but rather to make people more aware of the impact of their words, and in doing so also try to provide resources or guidance to help people with emotional stress that they might be feeling on a day-to-day basis.
* We believe that education at the grassroots level through social media will have an impact on elevating the overall wellbeing of society.
## How we built it
Our tech stack primarily consisted of React (with Material UI), Firebase and IBM Watson APIs. For the purpose of this project, we opted to use the full functionality of Firebase to handle the vast majority of functionality that would typically be done on a classic backend service built with NodeJS, etc. We also used Figma to prototype the platform, while IBM Watson was used for its Natural Language toolkits, in order to evaluate sentiment and emotion.
## Challenges we ran into
A bulk of the challenges we encountered had to do with React Hooks. A lot of us were only familiar with an older version of React that opted for class components instead of functional components, so getting used to Hooks took a bit of time.
Another issue that arose was pulling data from our Firebase datastore. Again, this was a result of lack of experience with serverless architecture, but we were able to pull through in the end.
## Accomplishments that we're proud of
We’re really happy that we were able to implement most of the functionality that we set out to when we first envisioned this idea. We admit that we might have bit a lot more than we could chew as we set out to recreate an entire social platform in a short amount of time, but we believe that the proof of concept is demonstrated through our demo
## What we learned
Through research and long contemplation on social media, we learned a lot about the shortcomings of modern social media platforms, for instance how they facilitate unhealthy addictive mechanisms that limit personal growth and genuine social connection, as well as how they have failed in various cases of social tragedies and hate speech. With that in mind, we set out to build a platform that could be on the forefront of a new form of social media.
From a technical standpoint, we learned a ton about how Firebase works, and we were quite amazed at how well we were able to work with it without a traditional backend.
## What's next for High Resolution
One of the first things that we’d like to implement next, is the ‘Group Resolution’ functionality. As of now, users browse through the platform, find and connect with liked-minded people pursuing similarly-themed interests. We think it would be interesting to allow users to create and pursue group resolutions with other users, to form more closely-knitted and supportive communities with people who are actively communicating and working towards achieving the same resolution.
We would also like to develop a sophisticated algorithm to tailor the users’ ‘Discover’ page, so that the shown content is relevant to their past resolutions. For instance, if the user has completed goals such as ‘Wake Up at 5:00AM’, and ‘Eat breakfast everyday’, we would recommend resolutions like ‘Morning jog’ on the discover page. By recommending content and resolutions based on past successful resolutions, we would motivate users to move onto the next step. In the case that a certain resolution was recommended because a user failed to complete a past resolution, we would be able to motivate them to pursue similar resolutions based on what we think is the direction the user wants to head towards.
We also think that High Resolution could be potentially become a platform for recruiters to spot dedicated and hardworking talent, through the visualization of users’ motivation, consistency, and progress. Recruiters may also be able to user the platform to communicate with users and host online workshops or events .
WIth more classes and educational content transitioning online, we think the platform could serve as a host for online lessons and bootcamps for users interested in various topics such as coding, music, gaming, art, and languages, as we envision our platform being highly compatible with existing online educational platforms such as Udemy, Leetcode, KhanAcademy, Duolingo, etc.
The overarching theme of High Resolution is **motivation, consistency, and growth.** We believe that having a user base that adheres passionately to these themes will open to new opportunities and both individual and collective growth. | winning |
# better.me, AI Journaling
## Project Description
better.me is an AI journaling tool that helps you analyze your emotions and provides you with smart recommendations for your well-being. We use NLP emotion analytics to process text data and incorporate a suicide prevention algorithm that helps you make better-informed decisions about your mental health.
## Motivation
Poor mental health is a growing pandemic that is still stigmatized. Even after $5 billion in federal mental health investments, 1.3 million adults attempted suicide and another 1.1 million made plans to commit suicide.
> Our mission is to provide a private environment to help people analyze their emotions and receive mental health support.
## MVP Product Features Overview
| Features | Description |
| --- | --- |
| Personal Journal | Better Me is a personal AI-powered journal where users can write daily notes reflecting on their life's progress. |
| NLP Emotion Analytics | With the help of natural language processing, Better Me classifies the user's emotional state and keeps track of the data. |
| Smart Recommendations | It uses this monitored data to suggest appropriate mental health resources to the users and also provides them with suitable data analytics. |
| Suicide Prevention | To take a step toward suicide prevention, it also incorporates a suicidal-text detection algorithm that triggers preventive measures. |
## How we built it
We used Google's T5 NLP model to recognize and categorize emotions, and trained a dataset with deep learning to develop a fine-tuned BERT model for suicide-risk detection. We also implemented our own algorithm to make resource recommendations to users based on changes in their emotions, along with some data analytics. Due to time constraints and a member's absence, we had to switch from a React.js plus Firebase stack to Streamlit, a Python framework for building data apps.
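As a rough illustration, here is a minimal sketch of how the Streamlit journal page could wire an entry into the two models. The checkpoint names (a public T5 emotion classifier and a hypothetical fine-tuned BERT suicide-risk model) and the 0.8 risk threshold are illustrative assumptions, not the exact models or settings we shipped.

```python
import streamlit as st
from transformers import pipeline

# Illustrative checkpoints -- the exact fine-tuned models we used may differ.
EMOTION_MODEL = "mrm8488/t5-base-finetuned-emotion"  # public T5 emotion classifier
RISK_MODEL = "our-org/bert-suicide-risk"             # hypothetical fine-tuned BERT checkpoint

@st.cache_resource
def load_models():
    emotion = pipeline("text2text-generation", model=EMOTION_MODEL)
    risk = pipeline("text-classification", model=RISK_MODEL)
    return emotion, risk

emotion_clf, risk_clf = load_models()

st.title("better.me, AI Journal")
entry = st.text_area("How was your day?")

if st.button("Save entry") and entry.strip():
    # T5 maps the entry to an emotion label such as "joy", "sadness", or "fear".
    emotion = emotion_clf(entry)[0]["generated_text"].strip()
    st.write(f"Detected emotion: **{emotion}**")

    # The fine-tuned BERT flags entries that look like suicidal ideation.
    risk = risk_clf(entry)[0]  # assumed labels: "suicidal" / "non-suicidal"
    if risk["label"] == "suicidal" and risk["score"] > 0.8:
        st.error("You don't seem okay. Please consider reaching out to the "
                 "988 Suicide & Crisis Lifeline (call or text 988).")
```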
## Challenges
Initially, we tried building a dashboard with full-stack web development; however, it proved quite challenging in the little time we had. We decided to shift our focus to rapid prototyping with a lightweight tool, and Streamlit was the ideal choice for our needs. While deploying our suicide prevention algorithm on Google Cloud Functions, we ran into trouble due to memory-limit constraints.
## Accomplishments
We are proud that we came up with a novel idea that could help countless people suffering from mental health issues, as well as those who prefer to keep to themselves or feel uncertain about their mental well-being, simply by writing about their daily lives. We are also proud of incorporating a suicide prevention algorithm, which could be life-saving for many.
## Roadmap
| Future Implementations | Description |
| --- | --- |
| Firebase Back End Architecture | We hope to design a scalable backend which accommodates users' needs. |
| AI Mental Health Chat bot | Provide on the spot, mental health support using Dialogflow AI chat bot. |
| Connect with Therapists | Elevate data analytical features to connect and report to personal therapists. |
| Scaling Up | Fund our project and develop this project with scalable front and back end. |
| Languages Support | Support multiple languages, including French, Mandarin, and Spanish. | | View the SlideDeck for this project at: [slides](https://docs.google.com/presentation/d/1G1M9v0Vk2-tAhulnirHIsoivKq3WK7E2tx3RZW12Zas/edit?usp=sharing)
## Inspiration / Why
It is no surprise that mental health has been a prevailing issue in modern society. 16.2 million adults in the US and 300 million people in the world have depression according to the World Health Organization. Nearly 50 percent of all people diagnosed with depression are also diagnosed with anxiety. Furthermore, anxiety and depression rates are a rising issue among the teenage and adolescent population. About 20 percent of all teens experience depression before they reach adulthood, and only 30 percent of depressed teens are being treated for it.
To help in the battle for mental well-being within this space, we created DearAI. Since many teenagers do not actively seek out support for potential mental health issues (either due to financial or personal reasons), we want to find a way to inform teens about their emotions using machine learning and NLP and recommend activities to them designed to improve their well-being.
## Our Product:
To help us achieve this goal, we wanted to create an app that integrated journaling, a great way for users to input and track their emotions over time. Journaling has been shown to reduce stress, improve immune function, boost mood, and strengthen emotional functions. Journaling apps already exist, however, our app performs sentiment analysis on the user entries to help users be aware of and keep track of their emotions over time.
Furthermore, every time a user inputs an entry, we want to recommend the user something that will lighten up their day if they are having a bad day, or something that will keep their day strong if they are having a good day. As a result, if the natural language processing results return a negative sentiment like fear or sadness, we will recommend a variety of prescriptions, from meditation, which has been shown to decrease anxiety and depression, to cat videos on YouTube. We currently also recommend dining options and can expand these recommendations to other activities such as outdoor activities (e.g. hiking, climbing) or movies.
**We want to improve the mental well-being and lifestyle of our users through machine learning and journaling. This is why we created DearAI.**
## Implementation / How
Research has found that ML/AI can detect the emotions of a user better than the user themself can. As a result, we leveraged the power of IBM Watson’s NLP algorithms to extract the sentiments within a user’s textual journal entries. With the user’s emotions now quantified, DearAI then makes recommendations to either improve or strengthen the user’s current state of mind. The program makes a series of requests to various API endpoints, and we explored many APIs including Yelp, Spotify, OMDb, and Youtube. Their databases have been integrated and this has allowed us to curate the content of the recommendation based on the user’s specific emotion, because not all forms of entertainment are relevant to all emotions.
For example, the detection of sadness could result in recommendations ranging from guided meditation to comedy. Each journal entry is also saved so that users can monitor the development of their emotions over time.
## Future
There are a considerable amount of features that we did not have the opportunity to implement that we believe would have improved the app experience. In the future, we would like to include video and audio recording so that the user can feel more natural speaking their thoughts and also so that we can use computer vision analysis on the video to help us more accurately determine users’ emotions. Also, we would like to integrate a recommendation system via reinforcement learning by having the user input whether our recommendations improved their mood or not, so that we can more accurately prescribe recommendations as well. Lastly, we can also expand the APIs we use to allow for more recommendations. | ## Inspiration
Have you ever found yourself stuck in a rut, struggling to find the words to reflect on your day? The simple question **"How are you"** can often feel like a daunting task, leaving you wondering what to write about next. We've all been there!
## What it does
Our innovative app is here to help. By harnessing the power of empathetic AI, we provide personalized guidance to help you navigate your emotions with clarity and compassion. Whether you're seeking to improve your mental well-being, set meaningful goals, or simply reflect on life's precious moments, our intuitive platform offers actionable advice and insightful reflections.
## How we built it
We leveraged Swift's capabilities to craft a seamless frontend UI/UX design, ensuring a smooth user experience right away. Meanwhile, we built a solid API with fastAPI, allowing our frontend and backend to effortlessly communicate with each other.
Not only that, but we also used humeAI's powers to gain a deeper understanding of the sentiment behind each journal entry. This allowed us to better understand the emotional nuances of our users' experiences and offer a more empathetic and personalized response.
To further enhance the experience, we incorporated Groq low-latency inference to produce real-time guiding questions that are tailored to the unique context of each user's input. This means that the questions our users receive are skillfully constructed and intended to assist them in navigating their thoughts and emotions with greater clarity and insight.
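A pared-down sketch of what such an endpoint can look like is shown below, using FastAPI plus the Groq Python SDK. The model name, the `detect_emotion` helper standing in for the Hume AI call, and the prompt wording are all illustrative assumptions rather than the app's actual code.

```
from fastapi import FastAPI
from pydantic import BaseModel
from groq import Groq        # assumes the official Groq Python SDK is installed

app = FastAPI()
llm = Groq()                 # reads GROQ_API_KEY from the environment

class Entry(BaseModel):
    text: str

def detect_emotion(text: str) -> str:
    return "reflective"      # placeholder for the Hume AI sentiment call

@app.post("/guide")
def guide(entry: Entry):
    chat = llm.chat.completions.create(
        model="llama3-8b-8192",   # example model name
        messages=[{
            "role": "user",
            "content": f"The writer seems {detect_emotion(entry.text)}. "
                       f"Ask one gentle follow-up journaling question about: {entry.text}",
        }],
    )
    return {"question": chat.choices[0].message.content}
```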
## Challenges we ran into
Starting this project, we found ourselves venturing into unknown territory with Swift and fastAPI. With no prior experience, we had to quickly learn the ins and outs of these languages and frameworks. Additionally, our vision for the project was constantly evolving, which meant we had to be agile and adapt our approach on the fly. We regularly assessed our steps to ensure we were on track to meet our goals while navigating the challenges that arise with learning new technologies and modifying our vision.
## Accomplishments that we're proud of
We were excited to see that, even with a steep learning curve, we could still produce a completely usable application without needing to fully understand all of the frameworks and languages involved. This was a testament to our ability to adapt on the fly and solve problems that arise, allowing us to bring our vision to life.
## What we learned
We expanded our skillset by learning how to integrate humeAI for sentiment understanding of the user. We also learned how to create scalable and effective applications by utilizing Swift and fastAPI development. Also, through adding Groq, we were able to build an intuitive, yet unique, user experience that connects with our audience, taking our product to the next level.
## What's next for JournAI
We are looking at the possibility of integrating a function to set and track personal goals, allowing users to define their objectives and monitor their progress while journaling. Our system could determine whether or not users are accomplishing their objectives, offering a unique opportunity for further reflection and possible adjustment. Additionally, the system might recommend new objectives to users based on their interests and information curated through their journals, helping them discover new areas for growth and development. This function would empower users to approach their personal growth in a more intentional and thoughtful manner. | winning |
## Inspiration
To do our part to spread awareness and inspire the general public to make adjustments that will improve everyone's air quality. We also wanted to demonstrate that these adjustments are not as challenging as they seem; our simulator shows that frequent small top-ups go a long way.
## What it does
Our website includes information about EVs and a simulation game where you have to drive past EV charging stations for quick top-ups; otherwise the vehicle will slow down to a crawl. EV stations come up fairly frequently, whether it be a regular wall socket or a supercharger station.
## How we built it
Our website was built on repl.it, where one of us worked on the game while the other used HTML/CSS to create the website. After a domain was chosen from domain.com, we started to learn how to create a website using HTML. For some parts, code was taken from free HTML templates and was later modified in an HTML editor. Afterwards, Google Cloud was used to host the website, forcing us to learn how to use servers.
## Challenges we ran into
For starters, almost everything was new for all of us, from learning HTML to learning how to host off of a server. As new coders, we had to spend many hours learning how to code before we could do anything. Once that was done, we had to spend many hours testing code to see if it produced the desired result. After all that was over, we had to learn how to use Google Cloud, our first experience with servers.
## Accomplishments that we're proud of
Actually having a working website, and having the website be hosted.
## What we learned
HTML, CSS, JS, Server hosting.
## What's next for EVolving Tech
We want to add destinations to give our simulation more complexity and context. This will allow the users to navigate between points of interest in their home city to get a feel of how range measures up to level of charge. | ## Inspiration
We've all heard horror stories of people with EVs running out of battery during a trip and not being able to find a charging station. Then, even if they do find one, they have to wait so long for their car to charge that it throws off their whole trip. We wanted to make that process better for EV owners.
## What it does
RouteEV makes the user experience of owning and routing with an electric vehicle easy. It takes in a trip and, based on the user's current battery, weather conditions, and route, recommends whether the trip is feasible or not. RouteEV then displays and recommends EV charging stations that have free spots near the route and readjusts the route to show whether charging at that station can help the user reach the destination.
## How we built it
We built RouteEV as a Javascript web app with React. It acts as a user interface for an electric Ford car that a user would interact with. Under the hood, we use various APIs such as the Google Maps API to display the map, markers, routing, and finding EV charging stations nearby. We also use APIs to collect weather information and provide Spotify integration.
## Challenges we ran into
Many members of our team hadn't used React before and we were all relatively inexperienced with front-end work. Trying to style and lay out our application was a big challenge. The Google Maps API was also difficult to use at first and required lots of debugging to get it functional.
## Accomplishments that we're proud of
The main thing that we're proud of is that we were able to complete all the features we set out to complete at the beginning with time to spare. With our extra time we were able to have some fun and add fun integrations like Spotify.
## What we learned
We learned a lot about using React as well as using the Google Maps API and more about APIs in general. We also all learned a lot about front-end web development and working with CSS and JSX in React. | As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool which provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible and anyone can learn information about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction.
We deployed our web app using Microsoft Azure, and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system, where we send users a text whenever the air quality warrants action such as staying indoors or wearing a P2 mask.
We have taken many approaches to improving our platform’s scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batch. We used Azure Functions in order to achieve this.
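As an illustration of that pattern (one shared connection pool, asynchronous requests, batched calls), a sketch with Python's `aiohttp` is below. The library choice and the endpoint URLs are assumptions on top of the write-up, which only names Azure Functions.

```
import asyncio
import aiohttp

URLS = ["https://example.com/fires", "https://example.com/air-quality"]  # placeholder endpoints

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.json()

async def main():
    # one session = one reused connection pool for the whole batch of requests
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, u) for u in URLS))

results = asyncio.run(main())
```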
Azure Notebook and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia.
Our web-app was designed using React, Python, and d3js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible, sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a color-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include this feature, however, we would hope to implement this in further stages of Phoenix. | partial |
## Inspiration
The idea for Iris was born out of a mutual agreement on the need for interconnectivity between different languages. While technology has continued to allow more and more connectivity across far distances, one of the most significant barriers to this connectivity is the language barrier. We resolve this issue with Iris.
## What it does
Iris is a full-stack, web-based application that creates translated transcriptions of live audio. A class instructor, keynote speaker, or any other user can make a room, have other users join the room, and immediately send live translated transcriptions to all of the users in the room.
## How we built it
The frontend of Iris was constructed using React, React-Router, and Javascript. The backend was made in Python and interacted with AssemblyAI API and the Google Translate API. The frontend and backend communicate with each other using WebSockets.
## Challenges we ran into
Due to the frontend and backend being written in different languages, converting the recorded audio to an acceptable format for the AssemblyAI API was challenging and took many hours of testing and debugging. While we made ample use of the documentation provided by AssemblyAI, there were no snippets that showcased inter-language interactions. Trial and error eventually prevailed, but not without a challenge.
Setting up Google Translate API was also a challenge. There was a tutorial on setting up Google Translate API, but we executed the code via the Google Cloud console. The commands given in the tutorial only worked with the Google Cloud SDK, and there wasn't a guide on converting those commands into Python. This was frustrating as we needed to debug the translation code outside of the Google Cloud console.
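For reference, a minimal sketch of a Google Translate call from Python using the `google-cloud-translate` client library is below. Credentials are assumed to be configured through the standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable, and this is illustrative rather than our exact translation code.

```
from google.cloud import translate_v2 as translate

client = translate.Client()   # picks up credentials from the environment

def translate_text(text, target="es"):
    # returns a dict containing 'translatedText' and the detected source language
    result = client.translate(text, target_language=target)
    return result["translatedText"]

print(translate_text("Welcome to today's lecture", target="fr"))
```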
## Accomplishments that we're proud of
Iris accomplishes what we set out to do. It supports many languages and even allows for multiple simultaneous rooms displaying translated text. Overall, we are very proud of what we have accomplished with Iris especially considering the 36 hour timeframe.
## What we learned
We learned how to employ WebSockets for a continuous connection between the front and backend throughout this hackathon. Additionally, everyone also learned how to use the AssemblyAI and Google Cloud APIs and their documentation to help us build Iris.
## What's next for Iris
One potential avenue that we could take is implementing live video feeds in the future. Additionally, allowing hosts to upload recordings of their files for translation through an upload menu can further increase accessibility for Iris users. | ## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using react-native for the front-end and firebase for the backend. The web app is built with react for the front end and firebase for the backend. We also implemented a few custom python modules to facilitate the client-server interaction to ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue with a Python script that bridges the gap between opening an audio stream and doing operations on it, while still serving students a live version of the rest of the site.
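One way to wire that up with the Google Cloud Speech-to-Text streaming client looks roughly like the sketch below. The helpers `mic_chunks()` and `broadcast()` are placeholders for our audio capture and the code that pushes text to students, so this is a pattern sketch rather than our exact script.

```
from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=True)

def requests(chunks):
    for chunk in chunks:                     # raw PCM audio from the lecturer's microphone
        yield speech.StreamingRecognizeRequest(audio_content=chunk)

for response in client.streaming_recognize(streaming_config, requests(mic_chunks())):
    for result in response.results:
        broadcast(result.alternatives[0].transcript)   # push the live transcript to students
```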
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone is on the same page about what is going on and all that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and we were able to work around the constraints of the APIs to create a working product. We also learned more about other technologies that we used, such as Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens. | ## Inspiration 💡
With the introduction of online-based learning, a lot of video tutorials are being created for students and learners to gain knowledge. As much as the idea is excellent, there was also a constraint whereby the tutorials may contain long hours of content and in some cases, it is inaccessible to users with disability. This is seen as a problem in the world today, that's why we built an innovative and creative solution **Vid2Text** to provide the solution to this. It is a web application that provides users with easy access to audio and video text transcription for all types of users. So either if the file is in an audio or video format it can always be converted to readable text.
## 🍁About
Vid2Text is a web app that allows users to upload audio and video files with ease, which then generates Modified and Customized Audio and Video Transcriptions.
Some of the features it provides are:
### Features
* Automatically transcribe audio and video files with high accuracy.
* Modified and Customized Audio and Video Transcriptions.
* Easy Keyword Search and Highlighting through Text.
## How we built it
We built our project using Django, a Python web framework that uses the MVC architecture to develop full-stack web applications. When the user uploads the video they want to transcribe, a script saves the video in the Django model database and then uploads it to the AssemblyAI server; the response from that step is the *upload\_url*. Finally, we send a POST request with the video's transcript ID and get the video transcript text as the response. We utilized the AssemblyAI transcript to match and search the text for keywords. We also created an accessible and good user experience on the client side.
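A stripped-down sketch of that upload-then-transcribe sequence against the AssemblyAI REST API is below, shown with plain `requests` for clarity. It illustrates the flow described above rather than the exact code inside our Django views; the API key and filename are placeholders.

```
import time
import requests

API = "https://api.assemblyai.com/v2"
headers = {"authorization": "YOUR_ASSEMBLYAI_KEY"}

# 1. upload the media file; AssemblyAI responds with an upload_url
with open("lecture.mp4", "rb") as f:
    upload_url = requests.post(f"{API}/upload", headers=headers, data=f).json()["upload_url"]

# 2. request a transcript for that URL and remember its ID
transcript_id = requests.post(f"{API}/transcript", headers=headers,
                              json={"audio_url": upload_url}).json()["id"]

# 3. poll until the transcript is ready, then read the text
while True:
    job = requests.get(f"{API}/transcript/{transcript_id}", headers=headers).json()
    if job["status"] in ("completed", "error"):
        break
    time.sleep(3)
print(job.get("text"))
```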
## Challenges we ran into
In the course of the hackathon, we faced some issues with limited time and with integrating AssemblyAI to dynamically determine the duration of uploaded videos. Initially, we were confused about how to do that, but we finally figured it out.
## Accomplishments that we're proud of
Finally, after long hours of work, we were able to build and deploy the full web application. The team was able to put in extra effort towards making it work.
## What we learned
This hackathon gave us the opportunity to learn how to build a Django project while utilizing the AssemblyAI API, and we were able to work together as a team despite being from different timezones.
## What's next for Vid2Text ⏭
For our next steps:
We plan to include more features like multi-language transcription and exporting text files as PDFs.
Also, improve the user experience and make it more accessible. | partial |
We present a blockchain agnostic system for benchmarking smart contract execution times. To do this we designed a simple programming language capable of running small performance benchmarks. We then implemented an interpreter for that language on the Ethereum, Solana, and Polkadot blockchains in the form of a smart contract. To perform a measurement we then submit the same program to each chain and time its execution.
Deploying new smart contracts is expensive and learning the tooling and programming languages required for their deployment is time consuming. This makes a single blockchain agnostic language appealing for developers as it cuts down on cost and time. It also means that new blockchains can be added later and all of the existing tests easily run after the deployment of a single smart contract.
You can think of this as "a JVM for performance measurements." To demonstrate how this can be used to measure non-blockchain runtimes we also implemented an interpreter on Cloudflare Workers and present some benchmarks of that. Cloudflare Workers was an order of magnitude faster than the fastest blockchain we tested.
Our results show that network and mining time dominate smart contract execution time. Despite considerable effort we were unable to find a program that notably impacted the execution time of a smart contract while remaining within smart contract execution limits. These observations suggest three things:
1. Once a smart contract developer has written a functional smart contract there is little payoff to optimizing the code for performance as network and mining latency will dominate.
2. Smart contract developers concerned about performance should look primarily at transaction throughput and latency when choosing a platform to deploy their contracts.
3. Even blockchains like Solana which bill themselves as being high performance are much, much slower than their centralized counterparts.
### Results
We measured the performance of three programs:
1. An inefficient, recursive fibonacci number generator computing the 12th fibonacci number.
2. A program designed to "thrash the cache" by repeatedly making modifications to disparate memory locations.
3. A simple program consisting of two instructions to measure cold start times
In addition to running these programs on our smart contracts we also wrote a runtime on top of Cloudflare Workers as a point of comparison. Like these smart contracts Cloudflare Workers run in geographically distributed locations and feature reasonably strict limitations on runtime resource consumption.
To compute execution time we measured the time between when the transaction to run the start contract was sent and when it was confirmed by the blockchain. Due to budgetary constraints our testing was done on test networks.
We understand that this is an imperfect proxy for actual code execution time. Due to determinism requirements on all of the smart contract platforms that we used, access to the system time is prohibited to smart contracts. This makes measuring actual code execution time difficult. Additionally as smart contracts are executed and validated on multiple miners it is not clear what a measurement of actual code execution time would mean. This is an area that we would like to explore further given the time.
In the meantime we imagine that most users of a smart contract benchmarking system care primarily about total transaction time. This is the time delay that users of their smart contracts will experience and also the time that we measure.
![](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/714/760/datas/original.png)
Our results showed that Solana and Polkadot significantly outperformed Ethereum with Solana being the fastest blockchain we measured.
### Additional observations
While Solana was faster than Polkadot and Ethereum in our benchmarks it also had the most restrictive computational limits. The plot below shows the largest fibonacci number computable on each blockchain before computational limits were exceeded. Once again we include Cloudflare Workers as a non-blockchain baseline.
![](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/714/761/datas/original.png)
### The benchmarking language
To provide a unified interface for performance measurements we have designed and implemented a 17 instruction programming language called Arcesco. For each platform we then implement a runtime for Arcesco and time the execution of a standard suite of programs.
Each runtime takes assembled Arcesco bytecode through stdin and prints the execution result to stdout. An example invocation might look like this:
```
cat program.bc | assembler | runtime
```
This unified runtime interface means that very different runtimes can be plugged in and run the same way. As testament to the simplicity of runtime implementations we were able to implement five different runtimes over the course of the weekend.
Arcesco is designed as a simple stack machine which is as easy as possible to implement an interpreter for. An example Arcesco program that computes the 10th fibonacci number looks like this:
```
pi 10
call fib
exit
fib:
copy
pi 3
jlt done
copy
pi 1
sub
call fib
rot 1
pi 2
sub
call fib
add
done:
ret
```
To simplify the job of Arcesco interpreters we have written a very simple bytecode compiler for Arcesco which replaces labels with relative jumps and encodes instructions into 40 bit instructions. That entire pipeline for the above program looks like this:
```
text | assembled | bytecode
----------------|---------------|--------------------
| |
pi 10 | pi 10 | 0x010a000000
call fib | call 2 | 0x0e02000000
exit | exit | 0x1100000000
fib: | |
copy | copy | 0x0200000000
pi 3 | pi 3 | 0x0103000000
jlt done | jlt 10 | 0x0b0a000000
copy | copy | 0x0200000000
pi 1 | pi 1 | 0x0101000000
sub | sub | 0x0400000000
call fib | call -6 | 0x0efaffffff
rot 1 | rot 1 | 0x0d01000000
pi 2 | pi 2 | 0x0102000000
sub | sub | 0x0400000000
call fib | call -10 | 0x0ef6ffffff
add | add | 0x0300000000
done: | |
ret | ret | 0x0f00000000
| |
```
Each bytecode instruction is five bytes. The first byte is the instructions opcode and the next four are its immediate. Even instructions without immediates are encoded this way to simplify instruction decoding in interpreters. We understand this to be a small performance tradeoff but as much as possible we were optimizing for ease of interpretation.
```
0 8 40
+--------+-------------------------------+
| opcode | immediate |
+--------+-------------------------------+
```
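In Python, packing and unpacking this format is a one-liner each. The sketch below assumes the immediate is a signed 32-bit little-endian integer, which is consistent with the `call -6` → `0x0efaffffff` row in the table above.

```
import struct

def encode(opcode, immediate=0):
    # 1-byte opcode followed by a signed 32-bit little-endian immediate
    return struct.pack("<Bi", opcode, immediate)

def decode(instruction):
    # returns (opcode, immediate) for one 5-byte instruction
    return struct.unpack("<Bi", instruction)

assert encode(0x0e, -6).hex() == "0efaffffff"              # call -6, as in the table above
assert decode(bytes.fromhex("010a000000")) == (0x01, 10)   # pi 10
```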
The result of this is that an interpreter for Arcesco bytecode is just a simple while loop and switch statement. Each bytecode instruction being the same size and format makes decoding instructions very simple.
```
pc, stack = 0, []
while True:
    opcode, immediate = decode(program[pc])   # one 5-byte instruction (see decode above)
    match opcode:
        case 1:                               # pi: push immediate onto the stack
            stack.append(immediate)
        case 17:                              # exit: top of stack is the result
            break
        # etc.. (remaining opcodes follow the same pattern)
    pc += 1
```
This makes it very simple to implement an interpreter for Arcesco bytecode which is essential for smart contracts where larger programs are more expensive and less auditable.
A complete reference for the Arcesco instruction set is below.
```
opcode | instruction | explanation
-----------------------------------
1 | pi <value> | push immediate - pushes VALUE to the stack
2 | copy | duplicates the value on top of the stack
3 | add | pops two values off the stack and adds them pushing
the result back onto the stack.
4 | sub | like add but subtracts.
5 | mul | like add but multiplies.
6 | div | like add but divides.
7 | mod | like add but modulus.
8 | jump <label> | moves program execution to LABEL
9 | jeq <label> | moves program execution to LABEL if the top two
stack values are equal. Pops those values from the
stack.
10 | jneq <label> | like jeq but not equal.
11 | jlt <label> | like jeq but less than.
12 | jgt <label> | like jeq but greater than.
13 | rot <value> | swaps stack item VALUE items from the top with the
stack item VALUE-1 items from the top. VALUE must
be >= 1.
14 | call <label> | moves program execution to LABEL and places the
current PC on the runtime's call stack
15 | ret | sets PC to the value on top of the call stack and
pops that value.
16 | pop | pops the value on top of the stack.
17 | exit | terminates program execution. The value at the top
of the stack is the program's return value.
```
### Reflections on smart contract development
Despite a lot of hype about smart contracts we found that writing them was quite painful.
Solana was far and away the most pleasant to work with as its `solana-test-validator` program made local development easy. Solana's documentation was also approachable and centralized. The process of actually executing a Solana smart contract after it was deployed was very low level and required a pretty good understanding of the entire stack before it could be done.
Ethereum comes in at a nice second. The documentation was reasonably approachable and the sheer size of the Ethereum community meant that there was almost too much information. Unlike Solana though, we were unable to set up a functional local development environment which meant that the code -> compile -> test feedback loop was slow. Working on Ethereum felt like working on a large C++ project where you spend much of your time waiting for things to compile.
Polkadot was an abject nightmare to work with. The documentation was massively confusing and what tutorials did exist failed to explain how one might interface with a smart contract outside of some silly web UI. This was surprising given that Polkadot has a $43 billion market cap and was regularly featured in "best smart contract" articles that we read at the beginning of this hackathon.
We had a ton of fun working on this project. Externally, it can often be very hard to tell the truth from marketing fiction when looking in the blockchain space. It was fun to dig into the technical details of it for a weekend.
### Future work
On our quest to find the worst performing smart contract possible, we would like to implement a fuzzer that integrates with Clockchain to generate adversarial bytecode. We would also like to explore the use of oracles in blockchains for more accurate performance measurements. Finally, we would like to flesh out our front-end to be dynamically usable for a wide audience. | ## Inspiration
* Web3 is in its static phase. Contracts are not dynamic. You call a contract, it performs an action, and it stops. This is similar to the read-only version of web2. Web2 became dynamic with the advent of APIs and webhooks. Apps could call each other, share events, and trigger actions. This is missing in web3.
* Since this is missing in web3, the potential usecases of smart contracts are restricted. If you want to perform an on-chain action based on a trigger/event, it's extremely hard and expensive right now.
* This is a larger infrastructure problem and efforts have been made in the past to solve but all the solutions fall short on the width they cover and are constrained for the particular applications they are built for.
* Therefore, we decided build Relic which uses hybrid combinations of on-chain/off-chain technologies to solve this problem.
## What it does
* Our core product is a trigger-based automation framework. dApps/developers or individual users can come to Relic, define a trigger (smart contract method invocations, emitting of events, etc.), define an off-chain (custom API calls & webhooks) or on-chain (contract calls, funds transfer, etc.) action and link them together.
* For instance, you can trigger liquidation of a particular asset on Uniswap if it drops below a certain price. Another example would be to transfer your deployed investments from Compound to Aave according to the APYs or the passing of a governance proposal in their DAOs. Personally, we have used Relic to track transfers in ownership of our favourite ENS domains to snipe them whenever available :)
## How we built it
* Relic listens to every block being deployed on the chain (Ethereum mainnet, Polygon mainnet, Sonr, Avalanche) and uses our custom-built query language to parse them (a minimal sketch of this listening loop is shown after this list).
* We match the transactions with parameters mentioned in "triggers" setup by users and perform the required off-chain or on-chain actions.
* For on-chain actions, we provide each user with a semi-custodial wallet. Each of the contract invocations mentioned is signed by that wallet, and users just have to fund it to cover the gas fees.
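A bare-bones version of that listening loop, written with `web3.py` against a single Ethereum RPC endpoint, is sketched below. The `trigger` dict, the matching predicate, and the RPC URL are placeholders; the real service runs our query language over every supported chain and then fires the configured off-chain or on-chain action.

```
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))  # placeholder RPC URL
trigger = {"address": "0x0000000000000000000000000000000000000000"}        # placeholder trigger

def matches(tx, trig):
    # simplest possible predicate: the transaction calls the watched contract
    return tx["to"] is not None and tx["to"].lower() == trig["address"].lower()

block_filter = w3.eth.filter("latest")
while True:
    for block_hash in block_filter.get_new_entries():
        block = w3.eth.get_block(block_hash, full_transactions=True)
        for tx in block.transactions:
            if matches(tx, trigger):
                print("trigger hit:", tx["hash"].hex())  # the real service fires the configured action
    time.sleep(2)
```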
## Challenges we ran into
* Making sure we don't miss on any blocks being mined and transactions executed - we implemented fail/safe across our architecture.
* Implementing a safe semi-custodial wallet for high stakes automations was a challenge, we had to deep dive into encryption models.
## Accomplishments that we're proud of
Relic is intrinsically cross-chain by nature. Your triggers can be on any chain, independent of where you want to perform the action.
## What we learned
We learnt more about blockchain nodes and about the nature of blocks and headers. The nuances, and thus the benefits, of each chain we worked with were fascinating. While designing our architecture, we implemented a robust mechanism to run tasks and jobs continuously.
## What's next for Relic
We envision Relic solving major difficulties across web3 applications and use-cases in the near future. We plan on transforming it into a fully featured framework which works on any chain and is highly composable. | ## Inspiration
With a prior interest in crypto and DeFi, we were attracted to Uniswap V3's simple yet brilliant automated market maker. The white papers were tantalizing and we had several eureka moments when poring over them. However, we realized that the concepts were beyond the reach of most casual users who would be interested in using Uniswap. Consequently, we decided to build an algorithm that allowed Uniswap users to take a more hands-on and less theoretical approach, while mitigating risk, to understanding the nuances of the marketplace so they would be better suited to make decisions that aligned with their financial goals.
## What it does
This project is intended to help new Uniswap users understand the novel processes that the financial protocol (Uniswap) operates upon, specifically with regards to its automated market maker. Taking an input of a hypothetical liquidity mining position in a liquidity pool of the user's choice, our predictive model uses past transactions within that liquidity pool to project the performance of the specified liquidity mining position over time - thus allowing Uniswap users to make better informed decisions regarding which liquidity pools and what currencies and what quantities to invest in.
## How we built it
We divided the complete task into four main subproblems: the simulation model and rest of the backend, an intuitive UI with a frontend that emulated Uniswap's, the graphic design, and - most importantly - successfully integrating these three elements together. Each of these tasks took the entirety of the contest window to complete to a degree we were satisfied with given the time constraints.
## Challenges we ran into and accomplishments we're proud of
Connecting all the different libraries, frameworks, and languages we used was by far the biggest and most frequent challenge we faced. This included running Python and NumPy on AWS, calling AWS from React and Node.js, and making GraphQL queries to Uniswap V3's API, among many other tasks. Of course, re-implementing many of the key features Uniswap runs on to better our simulation was another major hurdle and took several hours of debugging. We had to return to the drawing board countless times to ensure we were correctly emulating the automated market maker as closely as possible. Another difficult task was making our UI as easy to use as possible. Notably, this meant correcting user inputs, since there are many constraints on what position a user may actually take in a liquidity pool. Ultimately, in spite of the many technical hurdles, we are proud of what we have accomplished and believe our product is ready to be released pending a few final touches.
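For anyone curious what those GraphQL calls look like, the shape is roughly the sketch below, using `requests` against the public Uniswap V3 subgraph. The endpoint URL reflects the hosted subgraph at the time and may since have moved, and the selected fields are just an example from the public schema, not necessarily the exact query our simulator runs.

```
import requests

SUBGRAPH = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v3"   # public endpoint (may have moved)

QUERY = """
{
  swaps(first: 100, orderBy: timestamp, orderDirection: desc) {
    timestamp
    amount0
    amount1
    amountUSD
  }
}
"""

resp = requests.post(SUBGRAPH, json={"query": QUERY}, timeout=30)
swaps = resp.json()["data"]["swaps"]
```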
## What we learned
Every aspect of this project introduced us to new concepts, or new implementations of concepts we had picked up previously. While we had dealt with similar subtasks in the past, this was our first time building something of this scope from the ground-up. | partial |
## Inspiration
Ethiscan was inspired by a fellow member of our Computer Science club here at Chapman who was looking for a way to drive social change and promote ethical consumerism.
## What it does
Ethiscan reads a barcode from a product and looks up the manufacturer and information about the company to provide consumers with information about the product they are buying and how the company impacts the environment and society as a whole. The information includes the parent company of the product, general information about the parent company, articles related to the company, and an Ethics Score between 0 and 100 giving a general idea of the nature of the company. This Ethics Score is created by using Sentiment Analysis on Web Scraped news articles, social media posts, and general information relating to the ethical nature of the company.
## How we built it
Our program has two parts. We built an Android application using Android Studio which takes images of a barcode on a product and sends them to our server. Our server processes the UPC (Universal Product Code) unique to each barcode and uses a sentiment analysis neural network and web scraping to populate the Android client with relevant information related to the product's parent company and ethical information.
## Challenges we ran into
Android apps are significantly harder to develop than expected, especially when nobody on your team has any experience. Alongside this we ran into significant issues finding databases of product codes, parent/subsidiary relations, and relevant sentiment data.
The Android app development process was significantly more challenging than we anticipated. It took a lot of time and effort to create functioning parts of our application. Along with that, web scraping and sentiment analysis are tasks that require precision and diligence. Given the time constraint, the Ethics Score is not as accurate as it could be. Finally, not all barcodes will return accurate results, simply due to the lack of relevant information online about the ethical actions of companies related to products.
## Accomplishments that we're proud of
We managed to load the computer vision into our original Android app to read barcodes on a Pixel 6, proving we had a successful proof-of-concept app. While our scope was ambitious, we were able to show that server-side sentiment analysis and web scraping were a legitimate approach to solving our problem, as we completed a REST API which receives a barcode UPC and returns relevant information about the product's company. We're also proud of how we were able to quickly turn around and change our full development stack in a few hours.
## What we learned
We have learned a great deal about the fullstack development process. There is a lot of work that needs to go into making a working Android application as well as a full REST API to deliver information from the server side. These are extremely valuable skills that can surely be put to use in the future.
## What's next for Ethiscan
We hope to transition from the web service to a full android app and possibly iOS app as well. We also hope to vastly improve the way we lookup companies and gather consumer scores alongside how we present the information. | ## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
## Challenges we ran into
With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app. | ## Inspiration
Considering our generation is more affected by the consequences of climate change than any other, our team was extremely motivated to follow the sustainability track. We firmly believe that things cannot carry on as they are now because if they do, we will leave nothing for our future generations. Since methane is emitted during the breakdown of waste, waste creation has a significant impact on climate change. We saw this as a chance to assist not just today but also tomorrow's community. Today's technology has a lot of possibilities. As it affects everyone, whether directly or indirectly, we decided to use technology to help tackle this issue.
## What it does
The user can use RecycleBot to either write in a material or take a picture of an object to find out whether it is recyclable or not. A Google Maps API connection is another feature that shows recycling facilities close to the user's current location.
## How we built it
Initially, we began by watching a ton of YouTube videos that supported our main concept. Afterward, we made the choice to adopt Appery.io as a tool for mobile app development. We were able to create an application that works with both Android and iOS thanks to the use of this platform, which enabled us to expand the audience for our software and increase usability. To complete our program, we additionally set up databases, the Google Vision API, and RESTful services.
## Challenges we ran into
The Google Vision API integration was one of our challenges. The intention was to use it to identify objects in a photo taken by the user and to establish whether or not the object was recyclable from the labels that the API would return. It was challenging to receive the right answer from the API request. After extensive investigation and reviewing the instructions, we eventually succeeded in getting the call to work, but we still received a blank answer. Perhaps if we had more time, we could resolve this problem.
## Accomplishments that we're proud of
We take pride in our ability to collaborate and enhance our knowledge of mobile development. For several of our team members, this was their first hackathon, and we were pleased of ourselves for making it to Stanford and completing our project. In order to make the most of our limited time here, we also used the chance to attend a ton of workshops.
## What we learned
We discovered new recycling techniques that we were ignorant of during the research and development phase. It revealed to us how unaware we are of the various ways an object may be recycled and transformed into something new of greater worth to its present or future users. We also learned how technology can best serve humanity when used wisely, enabling us to have better lives.
## What's next for RecycleBot
We want to improve the Interface to give it a more polished and coherent appearance. In order to accomplish more functionality, we'd also like to fix a few problems with the API integration. In order to have a more complete list, it would be nice to store recyclable materials in a database. We want to add a payment interface to the app. | winning |
## Inspiration
We wanted to use Livepeer's features to build a unique streaming experience for gaming content for both streamers and viewers.
Inspired by Twitch, we wanted to create a platform that increases exposure for small and upcoming creators and establish a more unified social ecosystem for viewers, allowing them both to connect and interact on a deeper level.
## What is does
kizuna has aspirations to implement the following features:
* Livestream and upload videos
* View videos (both on a big screen and in a small mini-player for multitasking)
* Interact with friends (on stream, in a private chat, or in public chat)
* View activities of friends
* Highlights smaller, local, and upcoming streamers
## How we built it
Our web application was built using React, utilizing React Router to navigate through webpages and Livepeer's API to allow users to upload content and host livestreams. For background context, Livepeer describes themselves as a decentralized video infrastructure network.
The UI design was made entirely in Figma and was inspired by Twitch. However, as a result of a user research survey, changes to the chat and sidebar were made in order to facilitate a healthier user experience. New design features include a "Friends" page, introducing a social aspect that allows for users of the platform, both streamers and viewers, to interact with each other and build a more meaningful connection.
## Challenges we ran into
We had barriers with the API key provided by Livepeer.studio. This put a halt to the development side of our project. However, we still managed to get our livestreams working and our videos uploading! Implementing the design portion from Figma to the application acted as a barrier as well. We hope to tweak the application in the future to be as accurate to the UX/UI as possible. Otherwise, working with Livepeer's API was a blast, and we cannot wait to continue to develop this project!
You can discover more about Livepeer's API [here](https://livepeer.org/).
## Accomplishments that we're proud of
Our group is proud of our persistence through all the challenges that confronted us throughout the hackathon. From learning a whole new programming language to staying awake no matter how tired, we are all proud of each other's dedication to creating a great project.
## What we learned
Although we knew of each other before the hackathon, we all agreed that having teammates that you can collaborate with is a fundamental part to developing a project.
The developers (Josh and Kennedy) learned lots about implementing APIs and working with designers for the first time. For Josh, this was his first time implementing his practiced work from small projects in React. This was Kennedy's first hackathon, where she learned how to implement CSS.
The UX/UI designers (Dorothy and Brian) learned more about designing web applications as opposed to the mobile applications they are used to. Through this challenge, they were also able to learn more about Figma's design tools and functions.
## What's next for kizuna
Our team maintains our intention to continue to fully develop this application to its full potential. Although all of us our still learning, we would like to accomplish the next steps in our application:
* Completing the full UX/UI design on the development side, utilizing a CSS framework like Tailwind
* Implementing Lens Protocol to create a unified social community in our application
* Redesign some small aspects of each page
* Implementing filters to categorize streamers, see who is streaming, and categorize genres of stream. | ## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend along with OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API to perform all of the background analyses. On the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
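A simplified sketch of the kind of OpenCV capture loop the video analysis builds on is shown below, using the stock Haar cascade face detector as a stand-in for the project's actual face and hand tracking; the real backend layers the TensorFlow expression model and the dashboard updates on top of a loop like this.

```
import cv2

# stock face detector that ships with OpenCV; stands in for the full tracking pipeline
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # each (x, y, w, h) box would be scored for expression and streamed to the dashboard
cap.release()
```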
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.
![Image](https://i.imgur.com/aehDk3L.gif)
Tracks movement of hands and face to provide real-time analysis on expressions and body-language.
![Image](https://i.imgur.com/tZAM0sI.gif) | ## Inspiration
After Ordering printed circuit boards I found they did not work sometimes due to EMI and trace width being too small for the current running through the traces.
## What it does
Provides PCB rendering and runs a calculation on current, electric field and other useful values. | winning |
## Inspiration
It is difficult for Product Managers to keep track of the needs and the sentiment of consumers. Also, in a company like JetBlue, which operates at such a high scale, it is difficult to measure the KPIs which are important for the business. Customer redressal is one of the main needs of an airline company. Moreover, airlines that have better customer satisfaction are more profitable in the future.
## What it does
With jetLytics we plan to simplify the entire pipeline of customer redressal. With real-time social media analytics employing Natural Language Processing and an automated customer redressal system using Artificial Intelligence, it helps users be heard quickly and improves the redressal mechanism by more than 60%.
It keeps track of the KPIs which are essential to the business and provides alerts to the requisite team when faced with a threatening issue. Overall, it helps the firm make decisions quickly based on the needs of the users.
## How we built it
This mobile application is built using Flutter so that it can be deployed on both iOS and Android phones. The analytics engine is built using Flask, is trained with an NLP model on over 10K tweets on GCP, and is hosted on Google App Engine.
## Challenges we ran into
Natural Language Processing was a totally different field for our team, and collecting data of more than 10K tweets for training was even tougher. Moreover, building an entire mobile application using Flutter was difficult since we didn't have any experience with it.
## Accomplishments that we're proud of
Building a sophisticated cross-platform mobile application solving real business needs.
## What we learned
Natural Language Processing, Flutter and deploying on Google Cloud
## What's next for jetLytics
Improving analytics by using more social media platforms and integrating more business verticals | ## Inspiration
We were inspired by Wattpad's creative outlet for letting authours tell stories and realized that people deserve to read stories tailored to their own needs too. "To Bee Continued" helps people not only get the ball rolling when it comes to producing scripts, visual graphics, and general flow of plot, but also provides ready to access entertainment for all users. It is a very exciting website and is all working within our cute Bee themed website!
## What it does
Our project takes the user's input (key words or a statement which best represents the theme of the story they want) and outputs 5 different prompts. The user can select any of these prompts or regenerate a new set of prompts, which then helps generates the story's title, plot, and pictures. The user can regenerate any of these elements if they are unsatisfied.
## How we built it
We built the website using with the help of HTML, CSS and Javascript. This was responsible for the entire aesthetic side of theUI. The code itself which incorporated the user's input was thanks to the help of Javascript and the usage of OpenAI's Dall-E and GPT3 API's.
## Challenges we ran into
We ran into a multitude of challenges. Considering that 3 of us are first year students and 2 of us are first time hackers, we don't possess as much skill as we could've had. We learned as the progress progressed in ways such as designing the website in a more visually appealing way as well as properly utilizing API's for the first time. We also ran into issues optimizing OpenAI to help generate appropriate results based on given prompts as well as reducing the number of API calls to make our website faster.
## Accomplishments that we're proud of
We're proud of creating a seamless final product which gets the job at hand done. At times, we worried that it wouldn't be possible but we pushed through and it was worth it in the end. It's very fun to play around with our project.
## What we learned
We learned how to incorporate API's into a seamless final product as well as strengtening our web development skills as mostly backend programmers.
## What's next for ToBeContinued
Our next step for ToBeContinued would be to host it onto a website where it can work properly instead of being a stagnant Github Pages site. This was because we were unable to encrypt the API keys and didn't believe it was wise to put it onto the web. This could be successfully done through better Django skills however our lack of coding experience as first year students and first time hackers held us back. | ## Inspiration
We were inspired by JetBlue's challenge to utilize their data in a new way and we realized that, while there are plenty of websites and phone applications that allow you to find the best flight deal, there is none that provide a way to easily plan the trip and items you will need with your friends and family.
## What it does
GrouPlane allows users to create "Rooms" tied to their user account with each room representing an unique event, such as a flight from Toronto to Boston for a week. Within the room, users can select flight times, see the best flight deal, and plan out what they'll need to bring with them. Users can also share the room's unique ID with their friends who can then utilize this ID to join the created room, being able to see the flight plan and modify the needed items.
## How we built it
GrouPlane was built utilizing Android Studio with Firebase, the Google Cloud Platform Authentication API, and JetBlue flight information. Within Android Studio, Java code and XML was utilized.
## Challenges we ran into
The challenges we ran into was learning how to use Android Studio/GCP/Firebase, and having to overcome the slow Internet speed present at the event. In terms of Android Studio/GCP/Firebase, we were all either entirely new or very new to the environment and so had to learn how to access and utilize all the features available. The slow Internet speed was a challenge due to not only making it difficult to learn for the former tools, but, due to the online nature of the database, having long periods of time where we could not test our code due to having no way to connect to the database.
## Accomplishments that we're proud of
We are proud of being able to finish the application despite the challenges. Not only were we able to overcome these challenges but we were able to build an application there functions to the full extent we intended while having an easy to use interface.
## What we learned
We learned a lot about how to program Android applications and how to utilize the Google Cloud Platform, specifically Firebase and Google Authentication.
## What's next for GrouPlane
GrouPlane has many possible avenues for expansion, in particular we would like to integrate GrouPlane with Airbnb, Hotel chains, and Amazon Alexa. In terms of Airbnb and hotel chains, we would utilize their APIs in order to pull information about hotel deals for the flight locations picked for users can plan out their entire trip within GrouPlane. With this integration, we would also expand GrouPlane to be able to inform everyone within the "event room" about how much the event will cost each person. We would also integrate Amazon Alexa with GrouPlane in order to provide users the ability to plane out their vacation entirely through the speech interface provided by Alexa rather than having to type on their phone. | losing |
## 💡 Inspiration
>
> #hackathon-help-channel
> `<hacker>` Can a mentor help us with flask and Python? We're stuck on how to host our project.
>
>
>
How many times have you created an epic web app for a hackathon but couldn't deploy it to show publicly? At my first hackathon, my team worked hard on a Django + React app that only lived at `localhost:5000`.
Many new developers don't have the infrastructure experience and knowledge required to deploy many of the amazing web apps they create for hackathons and side projects to the cloud.
We wanted to make a tool that enables developers to share their projects through deployments without any cloud infrastructure/DevOps knowledge
(Also, as 2 interns currently working in DevOps positions, we've been learning about lots of Infrastructure as Code (IaC), Configuration as Code (CaC), and automation tools, and we wanted to create a project to apply our learning.)
## 💭 What it does
InfraBundle aims to:
1. ask a user for information about their project
2. generate appropriate IaC and CaC code configurations
3. bundle configurations with GitHub Actions workflow to simplify deployment
Then, developers commit the bundle to their project repository where deployments become as easy as pushing to your branch (literally, that's the trigger).
## 🚧 How we built it
As DevOps interns, we work with Ansible, Terraform, and CI/CD pipelines in an enterprise environment. We thought that these could help simplify the deployment process for hobbyists as well
InfraBundle uses:
* Ansible (CaC)
* Terraform (IaC)
* GitHub Actions (CI/CD)
* Python and jinja (generating CaC, IaC from templates)
* flask! (website)
## 😭 Challenges we ran into
We're relatitvely new to Terraform and Ansible and stumbled into some trouble with all the nitty-gritty aspects of setting up scripts from scratch.
In particular, we had trouble connecting an SSH key to the GitHub Action workflow for Ansible to use in each run. This led to the creation of temporary credentials that are generated in each run.
With Ansible, we had trouble creating and activating a virtual environment (see: not carefully reading [ansible.builtin.pip](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html) documentation on which parameters are mutually exclusive and confusing the multiple ways to pip install)
In general, hackathons are very time constrained. Unfortunately, slow pipelines do not care about your time constraints.
* hard to test locally
* cluttering commit history when debugging pipelines
## 🏆 Accomplishments that we're proud of
InfraBundle is capable of deploying itself!
In other news, we're proud of the project being something we're genuinely interested in as a way to apply our learning. Although there's more functionality we wished to implement, we learned a lot about the tools used. We also used a GitHub project board to keep track of tasks for each step of the automation.
## 📘 What we learned
Although we've deployed many times before, we learned a lot about automating the full deployment process. This involved handling data between tools and environments. We also learned to use GitHub Actions.
## ❓ What's next for InfraBundle
InfraBundle currently only works for a subset of Python web apps and the only provider is Google Cloud Platform.
With more time, we hope to:
* Add more cloud providers (AWS, Linode)
* Support more frameworks and languages (ReactJS, Express, Next.js, Gin)
* Improve support for database servers
* Improve documentation
* Modularize deploy playbook to use roles
* Integrate with GitHub and Google Cloud Platform
* Support multiple web servers | ## Inspiration 🌈
Our team has all experienced the struggle of jumping into a pre-existing codebase and having to process how everything works before starting to add our own changes. This can be a daunting task, especially when commit messages lack detail or context. We also know that when it comes time to push our changes, we often gloss over the commit message to get the change out as soon as possible, not helping any future collaborators or even our future selves. We wanted to create a web app that allows users to better understand the journey of the product, allowing users to comprehend previous design decisions and see how a codebase has evolved over time. GitInsights aims to bridge the gap between hastily written commit messages and clear, comprehensive documentation, making collaboration and onboarding smoother and more efficient.
## What it does 💻
* Summarizes commits and tracks individual files in each commit, and suggests more accurate commit messages.
* The app automatically suggests tags for commits, with the option for users to add their own custom tags for further sorting of data.
* Provides a visual timeline of user activity through commits, across all branches of a repository
Allows filtering commit data by user, highlighting the contributions of individuals
## How we built it ⚒️
The frontend is developed with Next.js, using TypeScript and various libraries for UI/UX enhancement. The backend uses Express.js , which handles our API calls to GitHub and OpenAI. We used Prisma as our ORM to connect to a PostgreSQL database for CRUD operations. For authentication, we utilized GitHub OAuth to generate JWT access tokens, securely accessing and managing users' GitHub information. The JWT is stored in cookie storage and sent to the backend API for authentication. We created a github application that users must all add onto their accounts when signing up. This allowed us to not only authenticate as our application on the backend, but also as the end user who provides access to this app.
## Challenges we ran into ☣️☢️⚠️
Originally, we wanted to use an open source LLM, like LLaMa, since we were parsing through a lot of data but we quickly realized it was too inefficient, taking over 10 seconds to analyze each commit message. We also learned to use new technologies like d3.js, the github api, prisma, yeah honestly everything for me
## Accomplishments that we're proud of 😁
The user interface is so slay, especially the timeline page. The features work!
## What we learned 🧠
Running LLMs locally saves you money, but LLMs require lots of computation (wow) and are thus very slow when running locally
## What's next for GitInsights
* Filter by tags, more advanced filtering and visualizations
* Adding webhooks to the github repository to enable automatic analysis and real time changes
* Implementing CRON background jobs, especially with the analysis the application needs to do when it first signs on an user, possibly done with RabbitMQ
* Creating native .gitignore files to refine the summarization process by ignoring files unrelated to development (i.e., package.json, package-lock.json, **pycache**). | ## What it does
It can tackle the ground of any surface and any shape using its specially managed wheels!
## How we built it
I had built this using arduino and Servos
## Challenges we ran into
The main challenge I ran through are as follows:-
1. To make the right angle of the bot with the servo
2.To power the whole system !
## Accomplishments that we're proud of
I am proud that I had made the whole project just during the hackathon though everything was not completed still achieved at least what I wanted! | winning |
# MediConnect ER
## Problem Statement
In emergency healthcare situations, accessing timely and accurate patient information is critical for healthcare providers to deliver effective and urgent care. However, the current process of sharing essential information between individuals in need of emergency care and healthcare professionals is often inefficient and prone to delays. Moreover, locating and scheduling an appointment in the nearest Emergency Room (ER) can be challenging, leading to unnecessary treatment delays and potential health risks. There is a pressing need for a solution that streamlines the transmission of vital patient information to healthcare providers and facilitates the seamless booking of ER appointments, ultimately improving the efficiency and effectiveness of emergency healthcare delivery.
## Introduction
We have developed an app designed to streamline the process for individuals involved in accidents or requiring emergency care to provide essential information to doctors beforehand in a summarized format for Electronic Health Record (EHR) integration. Leveraging Language Model (LLM) technology, the app facilitates the efficient transmission of pertinent details, ensuring healthcare providers have access to critical information promptly and also allows doctors to deal with more volume of paitents.
Moreover, the app includes functionality for users to locate and schedule an appointment in the nearest Emergency Room (ER), enhancing accessibility and ensuring timely access to care in urgent situations. By combining pre-emptive data sharing with convenient ER booking features, the app aims to improve the efficiency and effectiveness of emergency healthcare delivery, potentially leading to better patient outcomes.
## About MediConnect ER
1. Content Summarization with LLM -
"MediConnect ER" revolutionizes emergency healthcare by seamlessly integrating with Electronic Health Records (EHRs) and utilizing advanced Language Model (LLM) technology. Users EHR medical data, which the LLM summarizer condenses into EHR-compatible summaries. This automation grants healthcare providers immediate access to crucial patient information upon ER arrival, enabling swift and informed decision-making. By circumventing manual EHR searches, the app reduces wait times, allowing doctors to prioritize and expedite care effectively. This streamlined process enhances emergency healthcare efficiency, leading to improved patient outcomes and satisfaction.
1. Geolocation and ER Booking -
MediConnect ER includes functionality for users to quickly locate and book the nearest Emergency Room (ER). By leveraging geolocation technology, the app identifies nearby healthcare facilities, providing users with real-time information on wait times, available services, and directions to the chosen ER. This feature eliminates the uncertainty and
1. Medi-Chatbot -
The chatbot feature in "MediConnect ER" offers users a user-friendly interface to engage with and access essential information about their treatment plans. Patients can interact with the chatbot to inquire about various aspects of their treatment, including medication instructions, follow-up appointments, and potential side effects. By providing immediate responses to user queries, the chatbot improves accessibility to crucial treatment information, empowering patients to take a more active role in their healthcare journey.
## Building Process
Our application leverages a sophisticated tech stack to deliver a seamless user experience. At the forefront, we utilize JavaScript, HTML, and CSS to craft an intuitive and visually appealing frontend interface. This combination of technologies ensures a smooth and engaging user interaction, facilitating effortless navigation and information access.
Backing our frontend, we employ Flask, a powerful Python web framework, to orchestrate our backend operations. Flask provides a robust foundation for handling data processing, storage, and communication between our frontend and other components of our system. It enables efficient data management and seamless integration of various functionalities, enhancing the overall performance and reliability of our application.
Central to our data summarization capabilities is Mistral 7B, a state-of-the-art language model meticulously fine-tuned to summarize clinical health records. Through extensive tuning on the Medalpaca dataset, we have optimized Misteral 7B to distill complex medical information into concise and actionable summaries. This tailored approach ensures that healthcare professionals receive relevant insights promptly, facilitating informed decision-making and personalized patient care.
Additionally, our chatbot functionality is powered by GPT-3.5, one of the most advanced language models available. GPT-3.5 enables natural and contextually relevant conversations, allowing users to interact seamlessly and obtain pertinent information about their treatment plans. By leveraging cutting-edge AI technology, our chatbot enhances user engagement and accessibility, providing users with immediate support and guidance throughout their healthcare journey.
# EHR Datageneration
To validate the effectiveness of our data summarization capabilities, we utilize Mistral 7B, a sophisticated language model specifically tailored for summarizing clinical health records. By running Mistral 7B through our synthetic EHR data records generated by Synteha, we validate the accuracy and relevance of the summarized information. This validation process ensures that our summarization process effectively captures essential medical insights and presents them in a concise and actionable format.
# Challenges we Ran into
1. One of the major challenges that we ran into are , bascially finding what EHR data looks like , after much research we found Syntheta that can generate the data we are looking into.
2. We found that fine tunning dataset were not avaliable to fine tune datasets that were even remotely sdimilar to EHR data.
# Future Directions
In the future, our aim is to seamlessly integrate Amanuensis into established Electronic Health Record (EHR) systems like Epic and Cerner, offering physicians an AI-powered assistant to enhance their clinical decision-making processes. Additionally, we intend to augment our Natural Language Processing (NLP) pipeline by incorporating actual patient data rather than relying solely on synthetic EHR records. We will complement this with meticulously curated annotations provided by physicians, ensuring the accuracy and relevance of the information processed by our system. | ## Inspiration
Recent job search experience during which is was sometimes difficult to keep track of the status of the many applications that had been thrown out into the void
## What it does
Provides a simple, informative platform to help you keep track of whether your application is still floating around out there, if you've been contacted for an interview or have a job offer pending (yay!)
## How we built it
Using Reactjs with the flux concept in mind for the views and expressjs for the server we had a wild time learning these new tools.
## Challenges we ran into
We ran in to issues getting the initial state to be properly processed by the server and sent to the client properly. Something also seems to have gone wrong with our npm setup and two group members can no longer build the project (oops)
## Accomplishments that we're proud of
We are extremely proud that we made it this far. None of us had very much experience with javascript before this event and picked areas to work on that were well beyond our comfort levels.
## What we learned
We learned how much fun Hackathons can be and that we can still pull all nighters when we want to (and bring appropriate amounts of caffeine). We split the work specifically so that each group member could work on something they were interested in trying but had little experience with.
## What's next for Huntr
We plan to keep working on Huntr after the Hackathon ends. We have some ideas for integrating APIs from Google and Glassdoor to help provide more information about the companies you apply to as well as automatically adding cards to your job search boards as you get emails that match certain key words or phrases | ## Inspiration
While we were brainstorming for ideas, we realized that two of our teammates are international students from India also from the University of Waterloo. Their inspiration to create this project was based on what they had seen in real life. Noticing how impactful access to proper healthcare can be, and how much this depends on your socioeconomic status, we decided on creating a healthcare kiosk that can be used by people in developing nations. By designing interface that focuses heavily on images; it can be understood by those that are illiterate as is the case in many developing nations, or can bypass language barriers. This application is the perfect combination of all of our interests; and allows us to use tech for social good by improving accessibility in the healthcare industry.
## What it does
Our service, Medi-Stand is targeted towards residents of regions who will have the opportunity to monitor their health through regular self administered check-ups. By creating healthy citizens, Medi-Stand has the potential to curb the spread of infectious diseases before they become a bane to the society and build a more productive society. Health care reforms are becoming more and more necessary for third world nations to progress economically and move towards developed economies that place greater emphasis on human capital. We have also included supply-side policies and government injections through the integration of systems that have the ability to streamline this process through the creation of a database and eliminating all paper-work thus making the entire process more streamlined for both- patients and the doctors. This service will be available to patients through kiosks present near local communities to save their time and keep their health in check. By creating a profile the first time you use this system in a government run healthcare facility, users can create a profile and upload health data that is currently on paper or in emails all over the interwebs. By inputting this information for the first time manually into the database, we can access it later using the system we’ve developed. Overtime, the data can be inputted automatically using sensors on the Kiosk and by the doctor during consultations; but this depends on 100% compliance.
## How I built it
In terms of the UX/UI, this was designed using Sketch. By beginning with the creation of mock ups on various sheets of paper, 2 members of the team brainstormed customer requirements for a healthcare system of this magnitude and what features we would be able to implement in a short period of time. After hours of deliberation and finding ways to present this, we decided to create a simple interface with 6 different screens that a user would be faced with. After choosing basic icons, a font that could be understood by those with dyslexia, and accessible colours (ie those that can be understood even by the colour blind); we had successfully created a user interface that could be easily understood by a large population.
In terms of developing the backend, we wanted to create the doctor’s side of the app so that they could access patient information. IT was written in XCode and connects to Firebase database which connects to the patient’s information and simply displays this visually on an IPhone emulator. The database entries were fetched in Json notation, using requests.
In terms of using the Arduino hardware, we used Grove temperature sensor V1.2 along with a Grove base shield to read the values from the sensor and display it on the screen. The device has a detectable range of -40 to 150 C and has an accuracy of ±1.5 C.
## Challenges I ran into
When designing the product, one of the challenges we chose to tackle was an accessibility challenge. We had trouble understanding how we can turn a healthcare product more accessible. Oftentimes, healthcare products exist from the doctor side and the patients simply take home their prescription and hold the doctors to an unnecessarily high level of expectations. We wanted to allow both sides of this interaction to understand what was happening, which is where the app came in. After speaking to our teammates, they made it clear that many of the people from lower income households in a developing nation such as India are not able to access hospitals due to the high costs; and cannot use other sources to obtain this information due to accessibility issues. I spent lots of time researching how to make this a user friendly app and what principles other designers had incorporated into their apps to make it accessible. By doing so, we lost lots of time focusing more on accessibility than overall design. Though we adhered to the challenge requirements, this may have come at the loss of a more positive user experience.
## Accomplishments that I'm proud of
For half the team, this was their first Hackathon. Having never experienced the thrill of designing a product from start to finish, being able to turn an idea into more than just a set of wireframes was an amazing accomplishment that the entire team is proud of.
We are extremely happy with the UX/UI that we were able to create given that this is only our second time using Sketch; especially the fact that we learned how to link and use transactions to create a live demo. In terms of the backend, this was our first time developing an iOS app, and the fact that we were able to create a fully functioning app that could demo on our phones was a pretty great feat!
## What I learned
We learned the basics of front-end and back-end development as well as how to make designs more accessible.
## What's next for MediStand
Integrate the various features of this prototype.
How can we make this a global hack?
MediStand is a private company that can begin to sell its software to the governments (as these are the people who focus on providing healthcare)
Finding more ways to make this product more accessible | losing |
## What it does
You sit in front of the webcam and ask Alexa to give you a rating. In addition to your rating, it will give you a quality meme.
## How we built it
Our Alexa interaction is connected to an AWS lambda function. The lambda function then interacts with a python flask server that connects to a HTML/JavaScript front end. This front end accesses the webcam to take a picture which is sent back to the flask server and go through the faceplusplus API to get rated. This rating is then sent back to the lambda server where it matches appropriate memes for Alexa to tell you.
## Challenges we ran into
Meme girls sounds too much like Mean Girls and Alexa sometimes misunderstands us when we ask for a rate. | ## Reimagining Patient Education and Treatment Delivery through Gamification
Imagine walking into a doctors office to find out you’ve been diagnosed with a chronic illness. All of a sudden, you have a slew of diverse healthcare appointments, ongoing medication or lifestyle adjustments, lots of education about the condition and more. While in the clinic/hospital, you can at least ask the doctor questions and try to make sense of your condition & management plan. But once you leave to go home, **you’re left largely on your own**.
We found that there is a significant disconnect between physicians and patients after patients are discharged and diagnosed with a particular condition. Physicians will hand patients a piece of paper with suggested items to follow as part of a "treatment plan". But after this diagnosis meeting, it is hard for the physicians to keep up-to-date with their patients on the progress of the plan. The result? Not surprisingly, patients **quickly fall off and don’t adhere** to their treatment plans, costing the healthcare system **upwards of $300 billion** as they get readmitted due to worsening conditions that may have been prevented.
But it doesn’t have to be that way…
We're building an engaging end-to-end experience for patients managing chronic conditions, starting with one of the most prevalent ones - diabetes. **More than 100 million U.S. adults are now living with diabetes or prediabetes**
## How does Glucose Guardian Work?
Glucose Guardian is a scalable way to gamify education for chronic conditions using an existing clinical technique called “teachback” (see here- [link](https://patientengagementhit.com/features/developing-patient-teach-back-to-improve-patient-education)). We plan to partner with clinics and organizations, scrape their existing websites/documents where they house all their information about the chronic condition, and instantly convert that into short (up to 2 min) voice modules.
Glucose Guardian users can complete these short, guided, voice-based modules that teach and validate their understanding of their medical condition. Participation and correctness earn points which go towards real-life rewards for which we plan to partner with rewards organizations/corporate programs.
Glucose Guardian users can also go to the app to enter their progress on various aspects of their personalized treatment plan. Their activity on this part of the app is also incentive-driven.
This is inspired by current non-health solutions our team has had experience with using very low barrier audio-driven games that have been proven to drive user engagement through the roof.
## How we built it
We've simplified how we can use gamification to transform patient education & treatment adherence by making it more digestible and fun. We ran through some design thinking sessions to work out how we could create a solution that wouldn’t simply look great but could be implemented clinically and be HIPAA compliant.
We then built Glucose Guardian as a native iOS application using Swift. Behind the scenes, we use Python toolkits to perform some of our text matching for patient education modules, and we utilize AWS for infrastructure needs.
## Challenges we ran into
It was difficult to navigate the pre-existing market of patient adherence apps and create a solution that was unique and adaptable to clinical workflow. To tackle this, we dedicated ample time to step through user journeys - patients, physicians and allied health professionals. Through this strategy, we identified education as our focus because it is critical to treatment adherence and a patient-centric solution.
## We're proud of this
We've built something that has the potential to fulfill a large unmet need in the healthcare space, and we're excited to see how the app is received by beta testers, healthcare partners, and corporate wellness organizations.
## Learning Points
Glucose Guardian has given our cross-disciplined team the chance to learn more about the intersection of software + healthcare. Through developing speech-to-text features, designing UIs, scraping data, and walking through patient journeys, we've maximized our time to learn as much as possible in order to deliver the biggest impact.
## Looking Ahead
As per the namesake, so far we've implemented one use case (diabetes) but are planning to expand to many other diseases. We'd also like to continue building other flows beyond patient education. This includes components such as the gamified digital treatment plan which can utilize existing data from wearables and wellness apps to provide a consolidated view on the patient's post-discharge health.
Beyond that, we also see potential for our platform to serve as a treasure trove of data for clinical research and medical training. We're excited to keep building and keep creating more impact. | ## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend and used OpenCV, TensorFlow, and Google Cloud speech to text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.
![Image](https://i.imgur.com/aehDk3L.gif)
Tracks movement of hands and face to provide real-time analysis on expressions and body-language.
![Image](https://i.imgur.com/tZAM0sI.gif) | partial |
## LiveBoard is an easy-to-use tool to live stream classroom blackboard view.
It's half past 1 and you rush into the lecture hall, having to sit in the back of class for being late. Your professor has a microphone but the board is stuck at the same size, looming farther away the higher you climb the steps to an open seat. The professor is explaining the latest lesson, but he's blocking the ever-important diagram. As soon as you seem to understand, your eyes wander away, and in less than a second you're lost in your dreams. If only you could have seen the board up close!
LiveBoard is here to help! With a two-camera setup and a web-facing server, the board is rendered online, ready to be studied, understood and even saved for a later date. Functioning as either a broadcast or a recording, along with a Matlab backend and Django server, LiveBoard is here to facilitate studying for everyone.
Setup for use is easy. As phones can easily operate as IP cameras, any board can be captured and displayed with full convenience. | # Blackboard
This project utilizes the technology of Leap Motion to create amazing, unique drawings that capture the spontaneous creativity within everyone.
## Tackling the Creative Community
Our team looked forward to designing a product for the arts community, since we thought there was a lack of community driven motion tracking in this field. The idea originally was focused on helping teachers upload their notes and achieve a "smart board". However, after using the hardware and developing the tech to work with the leap motion, we quickly realized that this platform had more opportunities in an artistic format due to the freedom of movement and expression offered by being able to simply move your hand around to draw.
## Technical Aspects
The application runs on the web and is written in Node.JS which is augmented by Socket.IO and Express. It draws arcs on a JavaScript canvas using coordinates from a leap motion. The application is hosted on an azure instance in the cloud.
## Whats to come
As this was completed in a short span of time, it is far from complete. A more polished version would include the ability for only certain people to draw/write while everyone else can only view the writing. Having the ability to make channels private with password protection is also one future feature. As we continue to develop the blackboard platform, our plan is to have roll out features to give the ability to allow more than two user to simultaneously produce content on the same board. This will allow creators to express them selves when ever desired as creativity is always a spur of the moment. Moreover, the Blackboard team plans on making the platform a full suite of creative features such as brush types, opacity and more. | ## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
-The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
-The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
-The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
-The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a Mesh-Networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this we ended up switching to using a centralized server that hosts the apps data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold a fake pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps such as: a visual-programming tool that supports simple block-programming, a text editor, a messaging system, and a more-indepth UI for everything. | losing |
## Inspiration
Picture this, I was all ready to go to Yale and just hack away. I wanted to hack without any worry. I wanted to come home after hacking and feel that I had accomplished a functional app that I could see people using. I was not sure if I wanted to team up with anyone, but I signed up to be placed on a UMD team. Unfortunately none of the team members wanted to group up, so I developed this application alone.
I volunteered at Technica last week and saw that chaos that is team formation and I saw the team formation at YHack. I think there is a lot of room for team formation to be more fleshed out, so that is what I set out on trying to fix. I wanted to build an app that could make team building at hackathons efficiently.
## What it does
Easily set up "Event rooms" for hackathons, allowing users to join the room, specify their interests, and message other participants that are LFT (looking for team). Once they have formed a team, they can easily get out of the chatroom simply by holding their name down and POOF, no longer LFT!
## How I built it
I built a Firebase server that stores a database of events. Events hold information obviously regarding the event as well as a collection of all members that are LFT. After getting acquainted with Firebase, I just took the application piece by piece. First order of business was adding an event, and then displaying all events, and then getting an individual event, adding a user, deleting a user, viewing all users and then beautifying with lots of animations and proper Material design.
## Challenges I ran into
Android Animation still seems to be much more complex than it needs to be, so that was a challenge and a half. I want my applications to flow, but the meticulousness of Android and its fragmentation problems can cause a brain ache.
## Accomplishments that I'm proud of
In general, I am very proud with the state of the app. I think it serves a very nifty core purpose that can be utilized in the future. Additionally, I am proud of the application's simplicity. I think I set out on a really solid and feasible goal for this weekend that I was able to accomplish.
I really enjoyed the fact that I was able to think about my project, plan ahead, implement piece by piece, go back and rewrite, etc until I had a fully functional app. This project helped me realize that I am a strong developer.
## What I learned
Do not be afraid to erase code that I already wrote. If it's broken and I have lots of spaghetti code, it's much better to try and take a step back, rethink the problem, and then attempt to fix the issues and move forward.
## What's next for Squad Up
I hope to continue to update the project as well as provide more functionalities. I'm currently trying to get a published .apk on my Namecheap domain (squadupwith.me), but I'm not sure how long this DNS propagation will take. Also, I currently have the .apk to only work for Android 7.1.1, so I will have to go back and add backwards compatibility for the android users who are not on the glorious nexus experience. | ## Inspiration
The idea started with a group chat we had with around 60 people in it. The idea was to add many people to the group chat in order to get a general consensus on an issue or topic. Groupme allows certain messages to be 'liked' and we would 'like' the answer we agreed with the most, essentially creating a poll. We had a lot of fun with the group chat and were inspired to make a version accessible to the entire MIT community.
## What it does
At the moment we have a sign up page set up where a new user can enter some preliminary information including their kerberos. A verification email will then be sent to [kerberos@mit.edu](mailto:kerberos@mit.edu), verifying that the user is affiliated with MIT, and has an MIT email address. The verification email will contain a code should be entered on the site and allow the user to proceed to the site.
The homepage allows the user to choose to create a poll, which will be posted on the site. In order for a user to view the results of any poll, they must have answered the poll themselves.
## How we built it
We built the website using bootstrap. We also created a locally hosted database with mySQL that contains all of the account information, question data and answer data. The site is also locally hosted.
## Challenges we ran into
At first we tried hosting the site and the database on amazon web services, but didn't want to pay any money. We then eventually decided to move everything to a local host. It was a challenge to figure out the best host for the site and the database, and the overall structure.
## Accomplishments that we're proud of
We're very proud of our signup feature and the email verification system we set up.
We also are proud of basically everything we've done since we came in with little to no knowledge of how to make a website from scratch.
## What we learned
We learned about what a database was and how to make one. We learned how to send and retrieve information from that database. A lot of us were using PHP for the first time. We also learned how to set up a local server, and the pros and cons of different servers. We also learned how to use Bootstrap and integrate all of our different components together.
## What's next for MIT Poles
We want to add new features such as a 'join group' feature that allows user to make subgroups and vote on issues with sub communities, choosing whether to remain anonymous or not.
We have code to show, but didn't have time to upload it all to a Bitbucket repo! | ## Inspiration
Inspired by personal experience of commonly getting separated in groups and knowing how inconvenient and sometimes dangerous it can be, we aimed to create an application that kept people together. We were inspired by how interlinked and connected we are today by our devices and sought to address social issues while using the advancements in decentralized compute and communication. We also wanted to build a user experience that is unique and can be built upon with further iterations and implementations.
## What it does
Huddle employs mesh networking capability to maintain a decentralized network among a small group of people, but can be scaled to many users. By having a mesh network of mobile devices, Huddle manages the proximity of its users. When a user is disconnected, Huddle notifies all of the devices on its network, thereby raising awareness, should someone lose their way.
The best use-case for Huddle is in remote areas where cell-phone signals are unreliable and managing a group can be cumbersome. In a hiking scenario, should a unlucky hiker choose the wrong path or be left behind, Huddle will reduce risks and keep the team together.
## How we built it
Huddle is an Android app built with the RightMesh API. With many cups of coffee, teamwork, brainstorming, help from mentors, team-building exercises, and hours in front of a screen, we produced our first Android app.
## Challenges we ran into
Like most hackathons, our first challenge was deciding on an idea to proceed with. We employed the use of various collaborative and brainstorming techniques, approached various mentors for their input, and eventually we decided on this scalable idea.
As mentioned, none of us developed an Android environment before, so we had a large learning curve to get our environment set-up, developing small applications, and eventually building the app you see today.
## Accomplishments that we're proud of
One of our goals was to be able to develop a completed product at the end. Nothing feels better than writing this paragraph after nearly 24 hours of non-stop hacking.
Once again, developing a rather complete Android app without any developer experience was a monumental achievement for us. Learning and stumbling as we go in a hackathon was a unique experience and we are really happy we attended this event, no matter how sleepy this post may seem.
## What we learned
One of the ideas that we gained through this process was organizing and running a rather tightly-knit developing cycle. We gained many skills in both user experience, learning how the Android environment works, and how we make ourselves and our product adaptable to change. Many design changes occured, and it was great to see that changes were still what we wanted and what we wanted to develop.
Aside from the desk experience, we also saw many ideas from other people, different ways of tackling similar problems, and we hope to build upon these ideas in the future.
## What's next for Huddle
We would like to build upon Huddle and explore different ways of using the mesh networking technology to bring people together in meaningful ways, such as social games, getting to know new people close by, and facilitating unique ways of tackling old problems without centralized internet and compute.
Also V2. | losing |
## Inspiration
I was compelled to undertake a project on my own for this first time in my hackathoning career. One that I covers my interests in web applications and image processing and would be something "do-able" within the competition.
## What it does
Umoji is a web-app that take's an image input and using facial recognition maps emoji symbols onto the faces in the image matching their emotion/facial expressions.
## How I built it
Using Google Cloud Vision API as the backbone for all the ML and visual recognition, flask to serve up the simple bootstrap based html front-end.
## Challenges I ran into
Creating an extensive list of Emoji to map to the different levels of emotion predicted by the ML Model. Web deployment / networking problems.
## Accomplishments that I'm proud of
That fact that I was able to hit all the check boxes for what I set out to do. Not overshooting with stretch features or getting to caught up with extending the main features beyond the original scope.
## What I learned
How to work with Google's cloud API / image processing and rapid live deployment.
## What's next for Umoji
More emojis, better UI/UX and social media integration for sharing. | ## Story
Mental health is a major issue especially on college campuses. The two main challenges are diagnosis and treatment.
### Diagnosis
Existing mental health apps require the use to proactively input their mood, their thoughts, and concerns. With these apps, it's easy to hide their true feelings.
We wanted to find a better solution using machine learning. Mira uses visual emotion detection and sentiment analysis to determine how they're really feeling.
At the same time, we wanted to use an everyday household object to make it accessible to everyone.
### Treatment
Mira focuses on being engaging and keeping track of their emotional state. She allows them to see their emotional state and history, and then analyze why they're feeling that way using the journal.
## Technical Details
### Alexa
The user's speech is being heard by the Amazon Alexa, which parses the speech and passes it to a backend server. Alexa listens to the user's descriptions of their day, or if they have anything on their mind, and responds with encouraging responses matching the user's speech.
### IBM Watson/Bluemix
The speech from Alexa is being read to IBM Watson which performs sentiment analysis on the speech to see how the user is actively feeling from their text.
### Google App Engine
The backend server is being hosted entirely on Google App Engine. This facilitates the connections with the Google Cloud Vision API and makes deployment easier. We also used Google Datastore to store all of the user's journal messages so they can see their past thoughts.
### Google Vision Machine Learning
We take photos using a camera built into the mirror. The photos are then sent to the Vision ML API, which finds the user's face and gets the user's emotions from each photo. They're then stored directly into Google Datastore which integrates well with Google App Engine
### Data Visualization
Each user can visualize their mental history through a series of graphs. The graphs are each color-coded to certain emotional states (Ex. Red - Anger, Yellow - Joy). They can then follow their emotional states through those time periods and reflect on their actions, or thoughts in the mood journal. | ## Inspiration
*"A correct diagnosis is three-fourths the remedy." - Mahatma Gandhi*
In this fast-paced world where everything seems to be conveniently accessed in a matter of seconds at our fingertips with our smartphones and laptops, some parts of our lives can not be replaced or compromised. Let's not kid ourselves, we are all *guilty* of getting a scare when we see something suspicious on our skin or if we feel funny, we fall into the black hole of googling the symptoms, believing everything we read and scaring ourselves to an unnecessary extent.
Even forty-four percent of Americans prefer to self-diagnose their illness online rather than see a medical professional, according to a survey conducted by *The Tinker Law Firm*. That is an alarmingly large amount of people for just a country.
While it is cheaper to go to Google to self-diagnose rather than to visit a doctor, this often leads to inaccurate diagnosis and can be extremely dangerous as they might follow a wrong treatment plan or may not realize the severity of their condition.
Through our personal experiences in Asian countries, it was common to get an X-Ray scan at one place, and then another appointment with a doctor had to be booked the next day to receive an opinion. We also wanted to create a way to avoid inconvenience for some people and make it socially sustainable this way. Especially with the exponentially rising cases of damaging effects on the environment, we wanted to create a means of a sustainable health care system while reducing the negative impacts.
## What it does
**Doctorize.AI** is an easy-to-use web application that uses Machine Learning to scan the images or audio clip uploaded, and with a simple click of a button, it is processed, and the results inform if there are any concerning medical issues recognized or if the x-ray is clear. It also lets you know if you must seek immediate medical attention and connects you to a matching specialist to help you out. Worried about something in general? Use the “Request A Doctor” to connect and talk all your worries out.
An added **bonus**: Patients and doctors can use Doctorize.AI as an extra tool to get an instantaneous second opinion and avoid any false negative/positive results, further reducing the load of the healthcare system, making this web application socially sustainable. It is also a safe and low-carbon health system, protecting the environment.
Our models are able to **recognize and respond** to cases by classifying:
**-** skin cancer (Malignant or Benign)
**-** brain tumor (Glioma\_Tumor, Meningioma\_Tumor, Pituitary\_Tumor, No\_Tumor)
**-** X-ray (Tuberculosis, Pneumonia, COVID-19 induced Pneumonia, or Normal)
## How we built it
The **frontend** was built using:
**-** Next.js
**-** HTML
**-** CSS
**-** JavaScript
The **backend** was built using:
**-** Flask
**-** Python
**-** TensorFlow/Keras for the Deep learning models to classify images/audio
**-** AWS S3 for storage of large data set
## Challenges we ran into
As four individuals came together, we were bursting with uncountable ideas, so it took a long discussion or two to settle and choose what we could realistically achieve in a span of 36 hours.
Here are a few challenges we ran into:
**-** Lack of dataset availability
**-** Different time-zones
**-** Mix between first time hacker, new hacker(s), and experienced hacker in the team
**-** AWS S3 - Simple Storage Service
**-** Storage of large data
**-** AWS Sagemaker
**-** Computational power - deep learning takes time
## Accomplishments that we're proud of
**-** Being able to tackle and develop the **Machine Learning Models** with the supportive team we had.
**-** Creating a successful clean and polished look to the design
**-** Models with over 80% accuracy across the board
**-** Figuring out how to implement Flask
**-** Experimenting with AWS (S3, and Sagemaker (not as successful))
## What we learned
**-** Together as a team, we learnt how to use and apply CSS in an efficient way and how different CSS tools helped to achieve certain looks we were aiming for.
**-** We also learned how to use Flask to connect ML models to our web application.
**-** Further, we learned how to use AWS (S3, and Sagemaker (not as successful)).
## What's next for Doctorize.AI
**-** Allow patients and doctors to interact smoothly on the platform
**-** Expand our collection of medical cases that can be scanned and recognized such as more types of bacteria/viruses and rashes
**-** Bring in new helpful features such as advanced search of specialists and general doctors in the area of your own choice
**-** Record the patient’s history and information for future references
**-** QR codes on patient’s profile for smoother connectivity
**-** Voice Memo AI to summarize what the patient is talking about into targeted key topics | partial |
## Inspiration
We were inspired by Plaid's challenge to "help people make more sense of their financial lives." We wanted to create a way for people to easily view where they are spending their money so that they can better understand how to conserve it. Plaid's API allows us to see financial transactions, and Google Maps API serves as a great medium to display the flow of money.
## What it does
GeoCash starts by prompting the user to log in through the Plaid API. Once the user is authorized, we are able to send requests for transactions, given the user's public\_token. We then display the locations of these transactions using the Google Maps API.
## How I built it
We built this using JavaScript, including Meteor, React, and Express frameworks. We also utilized the Plaid API for the transaction data and the Google Maps API to display the data.
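For reference, the sketch below shows the kind of Plaid sandbox transactions request the app relies on, written in Python for brevity (the project itself uses the JavaScript client); the credentials and token are placeholders.

```python
# Hypothetical sketch of the Plaid sandbox call behind GeoCash; credentials are placeholders.
import requests

PLAID_BASE = "https://sandbox.plaid.com"
CLIENT_ID = "your-client-id"          # placeholder
SECRET = "your-sandbox-secret"        # placeholder
ACCESS_TOKEN = "access-sandbox-xxxx"  # obtained after exchanging the public_token

def get_transactions(start_date: str, end_date: str):
    """Fetch sandbox transactions between two ISO dates (YYYY-MM-DD)."""
    payload = {
        "client_id": CLIENT_ID,
        "secret": SECRET,
        "access_token": ACCESS_TOKEN,
        "start_date": start_date,
        "end_date": end_date,
    }
    resp = requests.post(f"{PLAID_BASE}/transactions/get", json=payload, timeout=10)
    resp.raise_for_status()
    # Each transaction carries a location dict that can be dropped onto the map.
    return [(t["name"], t["amount"], t.get("location") or {})
            for t in resp.json().get("transactions", [])]

if __name__ == "__main__":
    for name, amount, loc in get_transactions("2023-01-01", "2023-02-01"):
        print(name, amount, loc.get("lat"), loc.get("lon"))
```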
## Challenges I ran into
Data extraction/responses from Plaid API, InfoWindow displays in Google Maps
## Accomplishments that I'm proud of
Successfully implemented meteor webapp, integrated two different APIs into our product
## What I learned
Meteor (Node.js and React.js), Plaid API, Google Maps API, Express framework
## What's next for GeoCash
We plan on integrating real user information into our webapp; we are currently only using the sandbox user, which has a very limited scope of transactions. We would like to implement differently sized displays on the Maps API to represent the amount of money spent at each location. We would also like to display different colors based on the time of day, which was not included in the sandbox user, and to support multiple user displays at the same time, so that we can better describe the market based on the different categories of transactions. | ## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
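As an illustration of the goal-generation step, here is a minimal Python sketch of the kind of Cohere call involved; the model behaviour depends on SDK version, and the prompt wording, habit data and API key are assumptions.

```python
# Illustrative sketch of turning a detected spending habit into a weekly goal with Cohere.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def weekly_goal(habit: str, weekly_spend: float) -> str:
    prompt = (
        f"A user spends about ${weekly_spend:.0f} per week on {habit}. "
        "Write one short, encouraging weekly savings goal for them."
    )
    response = co.generate(prompt=prompt, max_tokens=60, temperature=0.7)
    return response.generations[0].text.strip()

print(weekly_goal("food delivery", 85))
```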
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we are trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weakness (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :) | ## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at company logos, identifies the company and grabs the financial information, recent company news and company financial statements of the company and displays the data in an Augmented Reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to better save and invest.
## How we built it
Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build more charts, financials and better speech/chatbot abilities into our application. There is also a direction to be more interactive by using hands to play around with our dashboard once we figure that part out. | partial |
## Inspiration
There are two types of pets wandering unsupervised in the streets - ones that are lost and ones that don't have a home to go to. Pet's Palace portable mini-shelters services these animals and connects them to necessary services while leveraging the power of IoT.
## What it does
The units are placed in streets and alleyways. As an animal approaches the unit, an ultrasonic sensor triggers the door to open and dispenses a pellet of food. Once inside, a live stream of the interior of the unit is sent to local animal shelters which they can then analyze and dispatch representatives accordingly. Backend analysis of the footage provides information on breed identification, custom trained data, and uploads an image to a lost and found database. A chatbot is implemented between the unit and a responder at the animal shelter.
## How we built it
Several Arduino microcontrollers distribute the hardware tasks within the unit with the aid of a wifi chip coded in Python. IBM Watson powers machine learning analysis of the video content generated in the interior of the unit. The adoption agency views the live stream and related data from a web interface coded with Javascript.
## Challenges we ran into
Integrating the various technologies/endpoints with one Firebase backend.
## Accomplishments that we're proud of
A fully functional prototype! | ## Inspiration
* Smart homes are taking over the industry
* Current solutions are WAY too expensive(almost $30) for one simple lightbulb
* Can fail from time to time
* Complicated to connect
## What it does
* It simplifies the whole idea of a smart home
* Three part system
+ App(to control the hub device)
+ Hub(used to listen to the Firebase database and control all of the devices)
+ Individual Devices(used to do individual tasks such as turn on lights, locks, etc.)
* It allows as many devices as you want to be controlled through one app
* Can be controlled from anywhere in the world
* Cheap in cost
* Based on usage data, provides feedback on how to be more efficient with trained algorithm
## How I built it
* App built with XCode and Swift
* Individual devices made with Arduinos and NodeMCUs
* Arduinos intercommunicate via RF24 radio modules
* Main Hub device connects to Firebase with wifi
## Challenges I ran into
* Using RF24 radios to talk between Arduinos
* Communicating Firebase with the Hub device
* Getting live updates from Firebase(constant listening)
## Accomplishments that I'm proud of
* Getting a low latency period, almost instant from anywhere in the world
* Dual way communication(Input and Output Devices)
* Communicating multiple non-native devices with Firebase
## What I learned
* How RF24 radios work at the core
* How to connect Firebase to many devices
* How to keep listening for changes from Firebase
* How to inter-communicate between Arduinos and Wifi modules
## What's next for The Smarter Home
* Create more types of devices
* Decrease latency
* Create more appropriate and suitable covers | ## Inspiration
Honestly, Jini Kim's keynote speech inspired me to try and solve an important problem. Just this school year I "rescued" a dog (Trooper), and there are a lot of things people don't tell you when you take in a stray. I wanted to provide a sort of 'digest' of info that would've helped me back then and hopefully will help others in the future.
## What it does
As of right now, it's an online guide providing information, useful links and a clear, step-by-step narrative of how to go from picking up a dog from inside a box while it's raining to taking them home.
## How I built it
Super basic web dev. I wanted to do a lot more, but got caught up with complications and then scaled back to the original reason why I pursued the idea.
## Challenges I ran into
Unfortunately I'm the only member of my usual team that was accepted, so without them I definitely felt like I was restricted in my implementation. I struggled with PHP primarily and had some other minor problems during the process.
## Accomplishments that I'm proud of
I think the narrative is strong and honest, and the links are actually useful. I hope that with some solid SEO, the info would be relevant enough to help people prepare easily, instead of having to reactively Google problems as they come up.
## What I learned
This weekend I learned that implementation isn't as important as the problem. Our focus as Silicon Valley programmers is too commonly skewed toward monetization problems instead of ones that actually matter.
## What's next for newDoggo
Aside from extra features, I want to start an online community, initially with a Facebook group and newsletter, to connect and support people taking in strays.
Once I get the chance to take this back and work with my team I want to add the following functionality.
A stray pet reporting system (GPS tagging with photo upload, "Have you seen this doggo?")
An adopt-a-pet profile matching system (dog, cat; male, female; color, age, etc.) | winning |
## Inspiration
During these trying times, the pandemic impacted many people by isolating them in their homes. People are not able to socialize like they used to and find people they can relate with.
For example, students who are transitioning to college or a new school where they don’t know anyone. Matcher aims to improve students' mental health by matching them with people who share similar interests and allows them to communicate. Overall, its goal is to connect people across the world.
## What it does
The user first logs in and answers a series of comprehensive, research-backed (AI-determined) questions to determine his/her personality type. Then, we use machine learning to match people and connect them. Users can email each other after they are matched!
Our custom Machine Learning pipeline uses the K-Means algorithm and Random Forest to study people's personalities.
## How we built it
We used React on the front end, Firebase for authentication and storage, and Python for the server and machine learning.
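To make the K-Means matching step above concrete, here is a minimal Python sketch; the feature encoding of the questionnaire answers and the cluster count are assumptions.

```python
# Minimal sketch: encode questionnaire answers as vectors, cluster users with K-Means,
# and draw match candidates from the same cluster.
import numpy as np
from sklearn.cluster import KMeans

# Each row: one user's numeric answers to the personality questions (toy data).
answers = np.array([
    [5, 1, 4, 2, 3],
    [4, 2, 5, 1, 3],
    [1, 5, 2, 4, 4],
    [2, 4, 1, 5, 5],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(answers)

def candidate_matches(user_index: int):
    """Return indices of other users in the same personality cluster."""
    label = kmeans.labels_[user_index]
    return [i for i, l in enumerate(kmeans.labels_) if l == label and i != user_index]

print(candidate_matches(0))  # e.g. [1]
```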
## Challenges we ran into
We all faced unique challenges, but losing one member midway really dampened our spirits and limited our potential.
* Gordon: I was new to firebase and I didn’t follow the right program flow in the first half of the hackathon.
* Lucia: The challenge I ran into was trying to figure out how to properly route the web pages together on React. Also, how to integrate Firebase database on the Front End since I never used it before.
* Anindya: Time management.
## Accomplishments that we're proud of
We are proud that we were able to persevere after losing a member and still manage to achieve a lot. We are also proud that we showed resiliency when we realized we had messed up our program flow midway and had to start over from the beginning. We are happy that we learned and implemented new technologies that we had never used before. Our hard work and perseverance resulted in an app that is useful and will make an impact on people's lives!
## What we learned
We believe that what doesn't kill you, makes you stronger.
* Gordon: After chatting with mentors, I learnt about SWE practises, Firebase flow, and Flask. I also handled setback and failure from wasting 10 hours.
* Lucia: I learned about Firebase and how to integrate it into React Front End. I also learned more about how to use React Hooks!
* Anindya: I learned how to study unique properties of data using unsupervised learning methods. Also I learned how to integrate Firebase with Python.
## What's next for Matcher
We would like to finish our web app by completing our integration of the Firebase Realtime Database. We plan to add social networking features such as a messaging and video chat feature which allows users to communicate with each other on the web app. This will allow them to discuss their interests with one another right at our site! We would like to make this project accessible to multiple platforms such as mobile as well. | ## Inspiration
Our inspiration came from very bad and delayed customer service we received from cloud companies.
## What it does
It takes customers' social media data, such as comments, posts and tweets, from all the leading competitors in an industry, performs sentiment analysis and prioritizes the results.
## How I built it
We built it on the HASURA platform. We used Facebook's Graph API for data and the IBM Watson Tone Analyzer for sentiment analysis.
## Challenges I ran into
Integrating Back end to front end.
## Accomplishments that I'm proud of
## What I learned
Docker, mongodb
## What's next for competitone
Extend it to other social media platforms | ## Inspiration
My college friends and brother inspired me to do such a good project. This is mainly an addictive game, the same one we used to play on keypad phones.
## What it does
This is a 2-D game which includes tunes, graphics and much more. We can command the snake to move up, down, right and left.
## How we built it
I built it using the pygame module in Python.
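A minimal pygame sketch of the core idea is shown below: arrow keys set the snake's direction and the head advances one cell per tick. The grid size and speed are arbitrary choices, not the game's actual settings.

```python
# Minimal pygame loop: arrow keys change direction, the head moves one cell per tick.
import pygame

pygame.init()
CELL, W, H = 20, 400, 300
screen = pygame.display.set_mode((W, H))
clock = pygame.time.Clock()

x, y, dx, dy = 100, 100, CELL, 0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_UP:
                dx, dy = 0, -CELL
            elif event.key == pygame.K_DOWN:
                dx, dy = 0, CELL
            elif event.key == pygame.K_LEFT:
                dx, dy = -CELL, 0
            elif event.key == pygame.K_RIGHT:
                dx, dy = CELL, 0
    x, y = (x + dx) % W, (y + dy) % H  # wrap around the screen edges
    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (0, 200, 0), (x, y, CELL, CELL))
    pygame.display.flip()
    clock.tick(10)
pygame.quit()
```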
## Challenges we ran into
Many bugs arose, such as runtime errors, but finally I managed to fix all of these problems.
## Accomplishments that we're proud of
I am proud of my own project; I built a user-interactive program.
## What we learned
I learned to use pygame in Python, and this project also attracted me towards Python programming.
## What's next for Snake Game using pygame
Next I am doing various Python projects, such as an alarm, a virtual assistant program, a Flappy Bird program, a health management system and a library management system. | losing |
# **Cough It**
#### COVID-19 Diagnosis at Ease
## Inspiration
As the pandemic has nearly crippled all nations, and people in many countries are still in lockdown, the past two years have produced many innovations aimed at finding an effective way of tackling COVID-19. Out of all the problems, detecting the COVID-19 strain has been the hardest so far, as the virus keeps mutating due to rapid infections.
Just like many others, we started to work on an idea to detect COVID-19 with the help of cough samples provided by patients. What makes this app useful is its simplicity and scalability: users can record a cough sample, wait for the results to load, and get an accurate indication of whether they are likely to have COVID-19 or not.
## Objective
The current COVID-19 diagnostic procedures are resource-intensive, expensive and slow. Therefore they are lacking scalability and retarding the efficiency of mass-testing during the pandemic. In many cases even the physical distancing protocol has to be violated in order to collect subject's samples. Disposing off biohazardous samples after diagnosis is also not eco-friendly.
To tackle this, we aim to develop a mobile-based application COVID-19 diagnostic system that:
* provides a fast, safe and user-friendly way to detect COVID-19 infection just from a cough audio sample
* is accurate enough to be scaled up to cater to a large population, thus eliminating dependency on resource-heavy labs
* makes frequent testing and result tracking efficient, inexpensive and free of human error, thus eliminating economic and logistical barriers and reducing the workload of medical professionals
Our [proposed CNN](https://dicova2021.github.io/docs/reports/team_Brogrammers_DiCOVA_2021_Challenge_System_Report.pdf) architecture also secured Rank 1 at [DiCOVA](https://dicova2021.github.io/) Challenge 2021, held by IISc Bangalore researchers, amongst 85 teams spread across the globe. With only being trained on small dataset of 1,040 cough samples our model reported:
Accuracy: 94.61%
Sensitivity: 80% (20% false negative rate)
AUC of ROC curve: 87.07% (on blind test set)
## What it does
The working of **Cough It** is simple. The user can install the app and tap to open it. The app will then ask for permission to use external storage and the microphone. The user can tap the record button, which takes them to a countdown-timer-like interface. Pressing the play button starts recording a 7-second clip of the user's cough, and upon completion the app navigates to the result screen to predict the chances of the user having COVID-19.
## How we built it
Our project is divided into three different modules -->
#### **ML Model**
Our machine learning model (a CNN architecture) is trained and deployed using the SageMaker API, which is part of AWS, to predict positive or negative infection from the pre-processed audio samples. The training data also contains noisy and bad-quality audio samples, so that the model is robust for practical applications.
#### **Android App**
At first, we prepared the wireframe for the app and decided the architecture of the app which we will be using for our case. Then, we worked from the backend part first, so that we can structure our app in proper android MVVM architecture. We constructed all the models, Retrofit Instances and other necessary modules for code separation.
The android app is built in Kotlin and is following MVVM architecture for scalability. The app uses Media Recorder class to record the cough samples of the patient and store them locally. The saved file is then accessed by the android app and converted to byte array and Base64 encoded which is then sent to the web backend through Retrofit.
#### **Web Backend**
The web backend is actually a Node.js application which is deployed on EC2 instance in AWS. We choose this type of architecture for our backend service because we wanted a more reliable connection between our ML model and our Node.js application.
At first, we created a backend server using Node.js and Express.js and deployed it on an AWS EC2 instance. The server receives the audio file in Base64-encoded form from the Android client through a POST request. The file is then converted to a .wav file by invoking a command-line tool. After successfully generating the .wav file, we pass it as an argument to the pre-processor, which is a Python script. Then we call the AWS SageMaker API to get the predictions, and the Node.js application sends the predictions back to the Android endpoint.
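To show the shape of that server-side pipeline, here is a compressed Python sketch of the decode-convert-preprocess-predict steps; in the real project the glue code is Node.js, and the endpoint name, upload container format, MFCC settings and payload layout below are assumptions.

```python
# Compressed sketch of the backend pipeline: Base64 -> .wav (ffmpeg) -> MFCC -> SageMaker.
import base64, json, subprocess
import boto3
import librosa

def predict_from_base64(b64_audio: str, endpoint_name: str = "cough-it-cnn") -> dict:
    # 1. Write the raw upload to disk and convert it to 16 kHz mono .wav via ffmpeg.
    with open("upload.3gp", "wb") as f:          # container format is an assumption
        f.write(base64.b64decode(b64_audio))
    subprocess.run(
        ["ffmpeg", "-y", "-i", "upload.3gp", "-ar", "16000", "-ac", "1", "upload.wav"],
        check=True,
    )

    # 2. Pre-process: load the clip and compute MFCC features for the CNN.
    signal, sr = librosa.load("upload.wav", sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40).tolist()

    # 3. Invoke the deployed SageMaker endpoint and return its JSON verdict.
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps({"mfcc": mfcc}),
    )
    return json.loads(response["Body"].read())
```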
## Challenges we ran into
#### **Android**
Initially, in android, we were facing a lot of issues in recording a cough sample as there are two APIs for recording from the android developers, i.e., MediaRecorder, AudioRecord. As the ML model required a .wav file of the cough sample to pre-process, we had to generate it on-device. It is possible with AudioRecord class but requires heavy customization to work and also, saving a file and writing to that file, is a really tedious and buggy process. So, for android counterpart, we used the MediaRecorder class and saving the file and all that boilerplate code is handled by that MediaRecorder class and then we just access that file and send it to our API endpoint which then converts it into a .wav file for the pre-processor to pre-process.
#### **Web Backend**
In the web backend side, we faced a lot of issues in deploying the ML model and to further communicate with the model with node.js application.
Initially, we deployed the Node.js application on AWS Lambda, but for processing the audio file we also needed a Python environment, so we could not continue with Lambda as it only gave us a Node.js environment. To actually get the Python environment, we had to use an AWS EC2 instance for deploying the backend server.
Also, to process the audio file we had to use the ffmpeg module, for which we had to downgrade the numpy library in Python from the latest version to an older one.
#### **ML Model**
The most difficult challenge for our ml-model was to get it deployed so that it can be directly accessed from the Node.js server to feed the model with the MFCC values for the prediction. But due to lot of complexity of the Sagemaker API and with its integration with Node.js application this was really a challenge for us. But, at last through a lot of documentation and guidance we are able to deploy the model in Sagemaker and we tested some sample data through Postman also.
## Accomplishments that we're proud of
Through this project, we are proud that we are able to get a real and accurate prediction of a real sample data. We are able to send a successful query to the ML Model that is hosted on Sagemaker and the prediction was accurate.
Also, this made us really happy that in a very small amount we are able to overcome with so much of difficulties and also, we are able to solve them and get the app and web backend running and we are able to set the whole system that we planned for maintaining a proper architecture.
## What we learned
Cough It is really an interesting project to work on. It has so much of potential to be one of the best diagnostic tools for COVID-19 which always keeps us motivated to work on it make it better.
In android, working with APIs like MediaRecorder has always been a difficult position for us, but after doing this project and that too in Kotlin, we feel more confident in making a production quality android app. Also, developing an ML powered app is difficult and we are happy that finally we made it.
In web, we learnt the various scenarios in which an EC2 instance can be more reliable than AWS Lambda; also, running various script files on a Node.js server was a good lesson to learn.
In machine learning, we learnt to deploy the ML model in Sagemaker and after that, how to handle the pre-processing file in various types of environments.
## What's next for Cough It
As of now, our project is more focused on our core idea, i.e., to predict by analysing the sample data of the user. So, our app is limited to only one user, but in future, we have already planned to make a database for user management and to show them report of their daily tests and possibility of COVID-19 on a weekly basis as per diagnosis.
## Final Words
There is a lot of scope for this project, and we don't want to stop innovating. We would like to take our idea to more platforms, and we might also launch the app on the Play Store soon, once everything is stable enough for the general public.
Our hopes for this project are high, and we won't leave it until it reaches perfection. | ## Inspiration
Our solution is named in remembrance of Mother Teresa.
## What it does
Robotic technology to assist nurses and doctors in medicine delivery and patient handling across the hospital, including ICUs. We are planning to build a low-code/no-code app that will help COVID patients scan themselves; the mobile app is integrated with the CT scanner so doctors save time and human error is prevented. We trained a CNN model on CT scans of COVID cases and integrated it into our application to help the patients. The datasets were collected from Kaggle and tested with an efficient algorithm reaching an accuracy of around 80%, and doctors can maintain the patient's record. The beneficiaries of the app are PATIENTS.
## How we built it
Bots are potentially the most promising and advanced form of human-machine interaction. The designed bot can be controlled manually through an app, using cloud technology and a predefined database of actions, and further moves are manually controlled through the mobile application. At the same time, to reduce the workload of doctors, a customized feature is included to process X-ray images through the app, based on a convolutional neural network image processing system. CNNs are deep learning algorithms that are very powerful for the analysis of images and give a quick and accurate classification of disease based on the information in digital X-ray images, which is why these features are included. To get better detection efficiency, we used an open-source Kaggle dataset.
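For illustration, a minimal sketch of the kind of Keras CNN used for the scan classifier is shown below; the layer sizes, input resolution and two-class setup are assumptions rather than the exact architecture.

```python
# Minimal sketch of a CNN image classifier for the CT/X-ray scans (toy architecture).
from tensorflow.keras import layers, models

def build_scan_cnn(input_shape=(128, 128, 1), num_classes=2):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_scan_cnn()
model.summary()
# Training would then be: model.fit(train_images, train_labels, epochs=..., validation_data=...)
```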
## Challenges we ran into
The data sources for the initial stage could be collected from Kaggle, but during the real-time implementation the working model and the Flutter mobile application need datasets that have to be collected from nearby hospitals, which was the challenge.
## Accomplishments that we're proud of
* Counselling and Entertainment
* Diagnosing therapy using pose detection
* Regular checkup of vital parameters
* SOS to doctors with live telecast
* Supply of medicines and food
## What we learned
* CNN
* Machine Learning
* Mobile Application
* Cloud Technology
* Computer Vision
* Pi Cam Interaction
* Flutter for Mobile application
## Implementation
* The bot is designed to have the supply carrier at the top and the motor driver connected with 4 wheels at the bottom.
* The battery will be placed in the middle and a display is placed in the front, which will be used for selecting the options and displaying the therapy exercise.
* The image aside is a miniature prototype with some features
* The bot will be integrated with path planning and this is done with the help mission planner, where we will be configuring the controller and selecting the location as a node
* If an obstacle is present in the path, it will be detected with lidar placed at the top.
* In some scenarios, if medicines need to be bought, the bot is fitted with an audio receiver and speaker, so that once it reaches a certain spot along the planned mission it announces the medicines, which can then be placed in the carrier.
* The bot will have a carrier at the top, where the items will be placed.
* This carrier will also have a sub-section.
* So If the bot is carrying food for the patients in the ward, Once it reaches a certain patient, the LED in the section containing the food for particular will be blinked. | ## Inspiration
We realized how difficult it is to be politically informed on both ends of the spectrum, so we wanted to increase the ease of accessibility of that for other people.
## What it does
Given a user input on any topic, Politeracy scrapes three news websites -- CNN, Associated Press, and Forbes -- and, using Cohere's API, summarizes the most relevant articles pertaining to the topic, giving a clear idea on how the left, center, and right view the issue.
## How we built it
* Problem: People are often too busy, or have too short of attention spans to pay attention to the news, and are quick to accept bias.
* Solution: Highlight bias, summarize news, and allow people to form opinions off such summaries.
* Design: We decided to make a website and split up responsibilities, with Cyrus in charge of making the website and fine-tuning Cohere, and Alyssa aggregating news sources and building a web scraper (a sketch of the scrape-and-summarize flow follows below).
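Here is a rough Python sketch of that scrape-and-summarize flow; the article URL, paragraph extraction and prompt wording are placeholders, and the exact Cohere call will vary with SDK version.

```python
# Rough sketch: pull an article's text and ask Cohere for a short, neutral summary.
import cohere
import requests
from bs4 import BeautifulSoup

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def summarize_article(url: str) -> str:
    html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Naive extraction: join the page's paragraph tags (real selectors differ per site).
    text = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))[:4000]
    response = co.generate(
        prompt=f"Summarize this news article in three neutral sentences:\n\n{text}\n\nSummary:",
        max_tokens=120,
        temperature=0.3,
    )
    return response.generations[0].text.strip()

print(summarize_article("https://apnews.com/some-article"))  # placeholder URL
```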
## Challenges we ran into
**Alyssa**: This was my first time building a project outside of my courses, so my setup was not prepared in the slightest. I ran into trouble even installing packages and eventually had to reinstall Python itself! Aside from that, this was my first time working with anything pertaining to the backend -- APIs and web scraping, while sounding cool, were completely new to me! I kept having to go through trial and error with web scraping, scraping myself from a myriad of websites only to finally stumble upon a way I could understand.
**Cyrus**: I didn't have experience with NLP, so I wasn't sure how to tune the hyperparameters or how it all works. That took some time to learn through both tutorials and trial and error. I also ran into a lot of problems with getting the backend to integrate well with the frontend, as I have very little experience with the requests library.
## Accomplishments that we're proud of
**Alyssa**: As this is one of the first apps I've made, I'm especially proud of the way it turned out! Going through the design process of building an app from scratch, and learning all the tools necessary to bring it together was definitely a rewarding experience.
**Cyrus**: I'm very proud that I've made my first forays into NLP, a field I've always been interested in but was too intimidated to start. It was immensely satisfying to learn about it, try it out, and use it to make a product that's more than just the sum of its parts.
## What we learned
**Alyssa**: I definitely learned a lot about working with APIs and web scraping. I also learned a lot about how to integrate all the different parts of an app together!
**Cyrus**: Learned how to tune NLP models, and more experience on developing a good frontend interface and how to request data from our backend.
## What's next for Politeracy
We hope to expand to more news sources, offer more complex stances, and use CoHere's classification endpoints to classify left-right bias in news sources so that users can understand the different political standpoints better. We'd also expand the UI, and allow users to include/exclude specific news sources, choose how many articles they'd like to see, and adjust the length of the summaries. | winning |
## Inspiration
In online documentaries, we saw visually impaired individuals and their vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue.
## What it does
When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to.
## How we built it
We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API.
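A hedged sketch of that button-triggered flow is shown below; the Azure region, API version and key are placeholders, and pyttsx3 stands in here for the Azure Speech step used in the actual project.

```python
# Sketch: send a captured image to Azure Computer Vision OCR, then speak the text.
import requests
import pyttsx3

ENDPOINT = "https://westus2.api.cognitive.microsoft.com/vision/v3.2/ocr"  # assumed region/version
KEY = "YOUR_AZURE_KEY"  # placeholder

def read_image_aloud(image_path: str) -> None:
    with open(image_path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
            timeout=15,
        )
    resp.raise_for_status()
    # Flatten the OCR response (regions -> lines -> words) into a single string.
    words = [w["text"]
             for region in resp.json().get("regions", [])
             for line in region["lines"]
             for w in line["words"]]
    text = " ".join(words) or "No text detected."
    engine = pyttsx3.init()   # local TTS stand-in for the Azure Speech API
    engine.say(text)
    engine.runAndWait()

read_image_aloud("capture.jpg")  # image saved after the GPIO button press
```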
## Challenges we ran into
Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity in favour of both the user and us, the developers.
## Accomplishments that we're proud of
This is our very first Makeathon, and we are proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project.
## What we learned
We learned how to operate and program a DragonBoard, as well as connect various APIs together.
## What's next for Aperture
We want to implement hot-key detection instead of the push button to eliminate the need of tactile input altogether. | ## Inspiration
The main problem we are focusing on is the inability of differently abled people to communicate with other people, whether differently abled or not. After analysing the problem, we are here with a combined hardware and software solution called **“EME”**, our attempt at a next-generation support device for differently abled people (deaf, mute and blind).
## What it does
The hardware consists of a hand gesture recognition glove and a smart blind stick. The smart blind stick is for blind people: when an object comes within 1 m, a buzzer starts beeping, and the beep frequency increases as the object comes closer, which is how the user gets an alert.
The hand gesture recognition glove is for mute people. They have their own sign language, but most other people do not know the sign language used for communication between mute people. This system helps solve that problem and lets them interact with people easily. They can communicate hassle-free using this device, and with the help of our app EME, we convert the gesture pattern into text and then to speech.
In our app, there are several features:
1. Speech to Text (for Deaf people, to understand other people communication)
2. Text to Speech (for Dumb people with bluetooth connectivity)
3. Text to Speech (for Normally use)
## How we built it
For the hardware, the smart blind stick for blind people uses an Arduino, an HC-SR04 ultrasonic sensor, an IR sensor and a buzzer to give alerts, while the hand gesture recognition glove for mute people uses an Arduino, an HC-05 Bluetooth module, LDR sensors and our app EME, through which we convert the gesture pattern into text and speech.
We built our app on Flutter, a hybrid platform using the Dart language, because Flutter apps can run on both iOS and Android devices. Text to Speech over the Bluetooth connection and Speech to Text were then implemented on top of that.
## Challenges we ran into
Due to the lockdown, we didn't have many hardware components, so we had to figure this out with an Arduino Uno and a Bluetooth module within the limited time. Also, making an app with this kind of functionality was new to us.
## Accomplishments that we're proud of
The hand gesture recognition works efficiently and accurately. We also made the device very cheap, at around 13 USD. (There are sensors on the market that cost around 30 USD each, so four fingers alone would be 120 USD, plus the cost of the other components.)
We also converted Text to Speech and Speech to Text successfully.
## What we learned
We learned a lot about how to build an efficient hardware project in a specific amount of time without having many components, and we also learned about new sensors.
Moreover, we built a more functional mobile app.
## What's next for EME
In terms of enhancing the project, we can introduce an obstacle-alert helmet for blind people using a rotating sensor, like a radar, and with the help of a gyroscope in the hand gesture glove we can extend the range of signs from common words to whole sentences.
In terms of marketing the product, we would like to initially target centers for differently abled people in our region. | ## Inspiration
My partner and I can empathize with those that have disabilities. Because of this, we are passionate about doing our part in making the world a better place for them. This app can help blind people in navigating the world around them, making life easier and less dangerous.
## What it Does
Droid Eyes can help people with a loss of sight to go through their life in a safer way. By implementing google vision and voice technology of accessible smartphones, this app will narrate the path of a person, either out loud or through headphones by preference. For example, if a blind person is approaching a red light, the app will notify them to stop until it is green.
## How We Built it
**Hardware:** We first created a CAD design for a case that would hold the phone implementing the program, creating holes for the straps, speaker, and camera. This sketch was laser printed and put together via hot glue gun. As for the straps, we removed those from a reusable shopping bag to hold the case. The initial goal was to utilize a Raspi and create an entirely new product. However, we decided that a singular application will have a greater outreach.
**Software:** We utilized the Android development environment in order to prototype a working application. The image recognition is done on Google’s side with Google Cloud Vision API. To communicate with the API, we used a variety of software dependencies on the Android end, such as Apache Commons and Volley. The application is capable of utilizing both WIFI and cellular data in order to be practical in most scenarios.
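For reference, here is the same Cloud Vision request expressed as a small Python sketch (the app itself calls the API from Android via Volley); the label-detection call and confidence threshold are illustrative.

```python
# Python sketch of the Cloud Vision label request the Android app performs.
from google.cloud import vision

def describe_scene(image_path: str, min_score: float = 0.6):
    client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Keep only confident labels; these would be narrated to the user.
    return [(label.description, round(label.score, 2))
            for label in response.label_annotations
            if label.score >= min_score]

print(describe_scene("street_view.jpg"))  # e.g. [('Traffic light', 0.92), ('Road', 0.88)]
```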
## Challenges We Ran Into
**Hardware:** We first intended to 3D print our case, as designed on CAD. However, when exporting the file to the Makerbot Software, no details were shown of the case. After several attempts to fix this issue, we simply decided to use the same design but laser printed instead.
**Software:** Uploading the pictures and identifying the objects in them was not occurring in an efficient speed. This was because the API provided for Android would only allow batch photo uploads. This feature takes more time to transfer the picture as well as forcing the server to examine sixteen photos instead of one. Also, some of the dependencies were outdated, and Android did not build the application. Getting the camera to work autonomously was another struggle we faced as well.
## Accomplishments That We’re Proud of
When we entered this hackathon, this app was barely an idea. Through many hours of intense work, we created something that could hopefully change people’s lives for the better. We are very proud of this as well as what we learned personally throughout this project.
## What We Learned
In terms of hardware, we learned how to laser print objects. This can be very helpful in the future when creating material that can easily be put together, to save us the time of 3D printing. For our software, we used in part google vision for the first time. This API was what identifies the elements of each picture in our application.
## What’s Next for Droid Eyes?
We hope to expand upon this idea in the future, making it more widely available to other Android phones and Apple as well. By spreading the product to different devices, we hope to keep it open source so that many people can contribute by constantly improving it. We would also like to be able to 3D print a case instead of laser printing and then gluing together. | winning |
## Inspiration
As a team, we've all witnessed the devastation that muscular-degenerative diseases, such as Parkinson's, inflict on the family members of the afflicted. Because we didn't have enough money, resources or time to research and develop a new drug or other treatment for the disease, we wanted to make the medicine already available as effective as possible. So, we decided to focus on detection: the earlier a victim can recognize the disease and report it to his/her physician, the more effective the treatments we already have become.
## What it does
HandyTrack uses three tests: a Flex Test, which tests the ability of the user to bend their fingers into a fist, a Release Test, which tests the user's speed in releasing the fist, and a Tremor Test, which measures the user's hand stability. All three of these tests are stored and used to, over time, look for trends that may indicate symptoms of Parkinson's: a decrease in muscle strength and endurance (ability to make a fist), an increase in time spent releasing the fist (muscle stiffness), and an increase in hand tremors.
## How we built it
For the software, we built the entirety of the application in the Arduino IDE using C++. As for the hardware, we used 4 continuous rotation servo motors, an Arduino Uno, an accelerometer, a microSD card, a flex sensor, and an absolute abundance of wires. We also used a 3D printer to make some rings for the users to put their individual fingers in.
The 4 continuous rotation servos were used to provide resistance against the user's hands. The flex sensor, which is attached to the user's palm, is used to control the servos; the more bent the sensor is, the faster the servo rotation. The flex sensor is also used to measure the time it takes for the user to release the fist, a.k.a the time it takes for the sensor to return to the original position. The accelerometer is used to detect the changes in the user's hand's position, and changes in that position represent the user's hand tremors. All of this data is sent to the SD cards, which in turn allow us to review trends over time.
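As a sketch of how the logged results could be reviewed for trends over time, here is a small Python example; the CSV layout (date, grip_time, release_time, tremor_rms) is a hypothetical format for the SD-card log, not the device's actual output.

```python
# Hypothetical trend analysis over the SD-card log of test results.
import pandas as pd

def weekly_trends(csv_path: str) -> pd.DataFrame:
    log = pd.read_csv(csv_path, parse_dates=["date"])
    weekly = log.set_index("date").resample("W").mean()
    # Week-over-week change: rising release_time or tremor_rms may indicate symptoms.
    return weekly.diff().rename(columns=lambda c: f"delta_{c}")

trends = weekly_trends("handytrack_log.csv")
flags = trends[(trends["delta_release_time"] > 0) & (trends["delta_tremor_rms"] > 0)]
print(flags)  # weeks where both stiffness and tremor worsened
```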
## Challenges we ran into
Calibration was a real pain in the butt. Every time we changed the circuit, the flex sensor values would change. Also, developing accurate algorithms for the functions we wanted to write was kind of difficult. Time was a challenge as well; we had to stay up all night to put out a finished product. Also, because the hack is so hardware intensive, we only had one person working on the code for most of the time, which really limited our options for front-end development. If we had an extra team member, we probably could have made a much more user-friendly application that looks quite a bit cleaner.
## Accomplishments that we're proud of
Honestly, we're happy that we got all of our functions running. It's kind of difficult only having one person code for most of the time. Also, we think our hardware is on-point. We mostly used cheap products and Arduino parts, yet we were able to make a device that can help users detect symptoms of muscular-degenerative diseases.
## What we learned
We learned that we should always have a person dedicated to front-end development, because no matter how functional a program is, it also needs to be easily navigable.
## What's next for HandyTrack
Well, we obviously need to make a much more user-friendly app. We would also want to create a database to store the values of multiple users, so that we can not only track individual users, but also to store data of our own and use the trends of different users to compare to the individuals, in order to create more accurate diagnostics. | ## Inspiration
One in every 250 people suffers from cerebral palsy, in which the affected person cannot move a limb properly and thus requires constant care throughout their lifetime. To ease their way of living, we have made this project, 'para-pal'.
The inspiration for this idea was blended with a number of research papers and a project called Pupil which used permutations to make communication possible with eye movements.
## What it does
![Main](https://media.discordapp.net/attachments/828211308305448983/828261879326572544/iris_seg.png?width=819&height=355)
**"What if Eyes can Speak? Yesss - you heard it right!"**
Para-pal is a novel idea that tracks patterns in the patient's eye movement and then converts them into actual speech. We use state-of-the-art iris recognition (dlib) to accurately track the eye movements and figure out the pattern. Our solution is sustainable and very cheap to build and set up. It uses QR codes to connect the caretaker's and the patient's apps.
We enable paralyzed patients to **navigate across the screen using their eye movements**. They can select an action by placing the cursor for more than 3 seconds or alternatively, they can **blink three times to select the particular action**. A help request is immediately sent to the mobile application of the care taker as a **push notification**
## How we built it
We've embraced Flutter in our frontend to make the UI simple and intuitive, with modularity and customisability. The image processing and live-feed detection are done in a separate child Python process. The iris recognition at its core uses dlib and pipes the output to OpenCV.
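A sketch of that dlib-plus-OpenCV core is shown below: detect the face, locate the 68 landmarks, and use the eye aspect ratio (EAR) to register blinks. The landmark model file and the EAR threshold are assumptions.

```python
# Sketch of dlib-based eye tracking: landmarks 36-41 form the left eye; a low EAR means a blink.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # downloaded model

def eye_aspect_ratio(points):
    a = distance.euclidean(points[1], points[5])
    b = distance.euclidean(points[2], points[4])
    c = distance.euclidean(points[0], points[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        left_eye = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
        if eye_aspect_ratio(left_eye) < 0.2:  # assumed blink threshold
            print("blink")
    cv2.imshow("para-pal", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```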
We've developed a desktop-app (which is cross-platform with a rpi3 as well)for the patient and a mobile app for the caretaker.
We also tried running our desktop application on Raspberry Pi using an old laptop screen. In the future, we wish to make a dedicated hardware which can be cost-efficient for patients with paralysis.
![hardware](https://media.discordapp.net/attachments/828211308305448983/828263070228676638/20210404_191100.jpg?width=542&height=406)
![hardware2](https://media.discordapp.net/attachments/828211308305448983/828263051420762182/20210404_191120.jpg?width=542&height=406)
## Challenges we ran into
Building dlib took a significant amount of time, because there were no binaries/wheels and we had to build from source. Integrating features to enable connectivity and sessions between the caretaker's mobile and the desktop app was hard. Fine-tuning some parameters of the ML model and preprocessing and cleaning the input was a real challenge.
Since we were from a different time zone, it was challenging to stay awake throughout the 36 hours and make this project!
## Accomplishments that we're proud of
* An actual working application in such a short time span.
* Integrating additional hardware of a tablet for better camera accuracy.
* Decoding the input feed with a very good accuracy.
* Making a successful submission for HackPrinceton.
* Team work :)
## What we learned
* It is always better to use a pre-trained model than making one yourself, because of the significant accuracy difference.
* QR scanning is complex and is harder to integrate in flutter than how it looks on the outside.
* Rather than over-engineering a flutter component, search if a library exists that does exactly what is needed.
## What's next for Para Pal - What if your eyes can speak?
* Easier, prefix-free code patterns for the patient using an algorithm like Huffman coding.
* More advanced controls using ML that tracks and learns the patient's regular inputs to the app.
* Better analytics to the care-taker.
* More UI colored themes. | ## Inspiration
I was inspired to make this device while sitting in physics class. I really felt compelled to make something that I learned inside the classroom and apply my education to something practical. Growing up I always remembered playing with magnetic kits and loved the feeling of repulsion between magnets.
## What it does
There is a base layer of small magnets all taped together so the North pole is facing up. There are hall effect devices to measure the variances in magnetic field that is created by the user's magnet attached to their finger. This allows the device to track the user's finger and determine how they are interacting with the magnetic field pointing up.
## How I built it
It is built using the Intel Edison. Each hall effect device is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate; this determines where the user's finger is. From there the analog data is sent via the serial port to the Processing program on the computer, which demonstrates that it works by taking the data and mapping the motion of the object.
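Written in Python with pyserial instead of Processing, the receiving side might look roughly like this; the serial port name and the comma-separated message format are assumptions.

```python
# Sketch of reading the hall-effect states streamed over the serial port.
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port
while True:
    line = ser.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue
    # e.g. "0,1,0,0" -> sensor 1 sees the finger magnet's field
    states = [int(v) for v in line.split(",") if v.isdigit()]
    if any(states):
        print("finger over sensor", states.index(1))
```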
## Challenges I ran into
There were many challenges I faced. Two of them dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which means they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would allow the device to be more accurate. The other one dealt with having a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together because they are in an unstable configuration when trying to create an almost uniform magnetic field on the base. Another problem I ran into was with the Edison: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up dealing with serial ports; I initially tried mapping it into a Unity game.
## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device as a way to prove the concept of creating a more interactive environment for the user with a sense of touch rather than things like the kinect and leap motion that track your motion but it is just in thin air without any real interaction. Some areas this concept can be useful in is in learning environment or helping people in physical therapy learning to do things again after a tragedy, since it is always better to learn with a sense of touch.
## What I learned
I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at a machine level.
## What's next for Tactile Leap Motion
Creating a better prototype with better hardware(stronger magnets and more accurate sensors) | winning |
## Inspiration
Around 40% of the lakes in America are too polluted for aquatic life, swimming or fishing. Although children make up 10% of the world’s population, over 40% of the global burden of disease falls on them. Environmental factors contribute to more than 3 million children under age five dying every year. Pollution kills over 1 million seabirds and 100 million mammals annually. Recycling and composting alone prevented 85 million tons of waste from being dumped in 2010. Currently there are over 500 million cars in the world; by 2030 the number will rise to 1 billion, doubling pollution levels. High-traffic roads have more concentrated levels of air pollution, so people living close to these areas have an increased risk of heart disease, cancer, asthma and bronchitis. Inhaling air pollution takes away at least 1-2 years of a typical human life. 25% of deaths in India and 65% of deaths in Asia are a result of air pollution. Over 80 billion aluminium cans are used every year around the world; if you throw away aluminium cans, they can stay in that form for up to 500 years or more. People aren’t recycling as much as they should; as a result, the rainforests are being cut down at approximately 100 acres per minute.
On top of this, with me being near the Great Lakes and Neeral being in the Bay Area, we have both seen tremendous amounts of air pollution as well as marine pollution and pollution in the great freshwater lakes around us. As a result, this inspired us to create this project.
## What it does
For the react native app, it connects with the Website Neeral made in order to create a comprehensive solution to this problem.
There are five main sections in the react native app:
The first section is an area where users can collaborate by creating posts in order to reach out to others to meet up and organize events in order to reduce pollution. One example of this could be a passionate environmentalist who is organizing a beach trash pick up and wishes to bring along more people. With the help of this feature, more people would be able to learn about this and participate.
The second section is a petitions section where users have the ability to support local groups or sign a petition in order to enforce change. These petitions include placing pressure on large corporations to reduce carbon emissions and so forth. This allows users to take action effectively.
The third section is the forecasts tab where the users are able to retrieve data regarding various data points in pollution. This includes the ability for the user to obtain heat maps regarding the amount of air quality, pollution and pollen in the air and retrieve recommended procedures for not only the general public but for special case scenarios using apis.
The fourth section is a tips and procedures tab for users to be able to respond to certain situations. They are able to consult this guide and find the situation that matches them in order to find the appropriate action to take. This helps the end user stay calm during situations as such happening in California with dangerously high levels of carbon.
The fifth section is an area where users are able to use Machine Learning in order to figure out whether where they are is in a place of trouble. In many instances, not many know exactly where they are especially when travelling or going somewhere unknown. With the help of Machine Learning, the user is able to place certain information regarding their surroundings and the Algorithm is able to decide whether they are in trouble. The algorithm has 90% accuracy and is quite efficient.
## How I built it
For the react native part of the application, I will break it down section by section.
For the first section, I simply used Firebase as a backend which allowed a simple, easy and fast way of retrieving and pushing data to the cloud storage. This allowed me to spend time on other features, and due to my ever growing experience with firebase, this did not take too much time. I simply added a form which pushed data to firebase and when you go to the home page it refreshes and see that the cloud was updated in real time
For the second section, I used native base in order to create my UI and found an assortment of petitions which I then linked and added images from their website in order to create the petitions tab. I then used expo-web-browser, to deep link the website in opening safari to open the link within the app.
For the third section, I used breezometer.com’s pollution api, air quality api, pollen api and heat map apis in order to create an assortment of data points, health recommendations and visual graphics to represent pollution in several ways. The apis also provided me information such as the most common pollutant and protocols for different age groups and people with certain conditions should follow. With this extensive api, there were many endpoints I wanted to add in, but not all were added due to lack of time.
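As an illustration of the forecasts tab's data fetch, here is a small Python sketch of a BreezoMeter current-conditions request; the endpoint path and parameters follow their documented pattern but should be treated as assumptions, and the key and coordinates are placeholders.

```python
# Sketch of the air-quality request behind the forecasts tab.
import requests

def air_quality(lat: float, lon: float, api_key: str) -> dict:
    resp = requests.get(
        "https://api.breezometer.com/air-quality/v2/current-conditions",
        params={
            "lat": lat,
            "lon": lon,
            "key": api_key,
            "features": "breezometer_aqi,dominant_pollutant_concentrations,health_recommendations",
        },
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return {
        "aqi": data["indexes"]["baqi"]["aqi"],
        "category": data["indexes"]["baqi"]["category"],
        "recommendations": data.get("health_recommendations", {}),
    }

print(air_quality(42.33, -83.05, "YOUR_KEY"))  # placeholder key and coordinates
```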
For the fourth section, it is very much similar to the second section as it is an assortment of links, proofread and verified to be truthful sources, in order for the end user to have a procedure to go to for extreme emergencies. As we see horrible things happen, such as the wildfires in California, air quality becomes a serious concern for many and as a result these procedures help the user stay calm and knowledgeable.
For the fifth section, we integrated the Machine Learning model described above, so that the user can enter information about their surroundings and the algorithm (with roughly 90% accuracy) decides whether they are in a troubled area.
## Challenges I ran into
API query bugs were a big issue, both in formatting the query and in mapping the data back into the UI. It took some time and made us run until the end, but we were still able to complete our project and goals.
## What's next for PRE-LUTE
We hope to use this in areas where there is a lot of suffering due to extravagantly high levels of pollution, such as Delhi, where even seeing can be hard because of the pollution. We hope to create a finished product and release it to the App and Play stores respectively. | ## Inspiration
Between 1994 and 2013 there were 6,873 natural disasters worldwide, which claimed 1.35 million lives or almost 68,000 lives on average each year. In many of these disasters, people don't know where to go, and where is safe. For example, during Hurricane Harvey, many people learned about dangers such as alligators or flooding through other other people, and through social media.
## What it does
* Post a natural disaster hazard in your area
* Crowd-sourced hazards
* Pulls government severe weather data
* IoT sensor system to take atmospheric measurements and display on map
* Twitter social media feed of trending natural disasters in the area
* Machine learning image processing to analyze posted images of natural disaster hazards
Our app **Eye in the Sky** allows users to post about various natural disaster hazards at their location, upload a photo, and share a description of the hazard. These points are crowd-sourced and displayed on a map, which also pulls live data from the web on natural disasters. It also allows these users to view a Twitter feed of current trending information on the natural disaster in their location. The last feature is the IoT sensor system, which take measurements about the environment in real time and displays them on the map.
## How I built it
We built this app using Android Studio. Our user-created posts are hosted on a Parse database (run with mongodb). We pull data from the National Oceanic and Atmospheric Administration severe weather data inventory. We used Particle Electron to collect atmospheric sensor data, and used AWS to store this data in a JSON.
## Challenges I ran into
We had some issues setting up an AWS instance to run our image processing on the backend, but other than that, decently smooth sailing.
## Accomplishments that I'm proud of
We were able to work really well together, using everyone's strengths as well as combining the pieces into a cohesive app that we're really proud of. Everyone's piece was integrated well into the app, and I think we demonstrated good software collaboration.
## What I learned
We learned how to effectively combine our code across various languages and databases, which is a big challenge in software as a whole. We learned Android Development, how to set up AWS, and how to create databases to store custom data. Most importantly, we learned to have fun!
## What's next for Eye in the Sky
In the future, we would like to add verification of people's self-reported hazards (through machine learning and through up-voting and down-voting)
We would also like to improve the efficiency of our app and reduce reliance on network because there might not be network, or very poor network, in a natural disaster
We would like to add more datasets from online, and to display storms with bigger bubbles, rather than just points
We would also like to attach our sensor to drones or automobiles to gather more weather data points to display on our map | ## Check out our site -> [Saga](http://sagaverse.app)
## Inspiration
There are few better feelings in the world than reading together with a child that you care about. “Just one more story!” — “I promise I’ll go to bed after the next one” — or even simply “Zzzzzzz” — these moments forge lasting memories and provide important educational development during bedtime routines. We wanted to make sure that our loved ones never run out of good stories. Even more, we wanted to create a unique, dynamic reading experience for kids that makes reading even more fun.
After helping to build the components of the story, kids are able to help the character make decisions along the way. “Should Balthazar the bear search near the park for his lost friend? or should he look in the desert?” These decisions help children learn and develop key skills like decisiveness and action. The story updates in real time, ensuring an engaging experience for kids and parents.
Through copious amounts of delirious research, we learned that children can actually learn better and retain more when reading with parents on a tablet. After talking to 8 users (parents and kiddos) over the course of the weekend, we defined our problem space and set out to create a truly “Neverending Story.”
## What it does
Each day, *Saga* creates a new, illustrated bedtime story for children aged 0-7. Using OpenAI technology, the app generates and then illustrates an age and interest-appropriate story based on what they want to hear and what will help them learn. Along the way, our application keeps kids engaged by prompting decisions; like a real-time choose-your-own-adventure story.
We’re helping parents broaden the stories available for their children — imprinting values of diversity, inclusion, community, and a strong moral compass. With *Saga*, parents and children can create a universe of stories, with their specific interests at the center.
## How we built it
We took an intentional approach to developing a working MVP
* **Needs finding:** We began with a desire to uncover a need and build a solution based on user input. We interviewed 8 users over the weekend (parents and kids) and used their insights to develop our application.
* **Defined MVP:** A deployable application that generates a unique story and illustrations while allowing for dynamic reader inputs using OpenAI. We indexed on story, picture, and educational quality over reproducibility.
* **Tech Stack:** We used the latest LLM models (GPT-3 and DALLE-2), Flutter for the client, a Node/Express backend, and MongoDB for data management
* **Prompt Engineering:** Finding the limitations of the underlying LLM technology and instead using Guess and check until we narrowed down the prompt to produce to more consistent results. We explored borderline use cases to learn where the model breaks.
* **Final Touches:** Quality control and lots of tweaking of the image prompting functionality
## Challenges we ran into
Our biggest challenges revolved around fully understanding the power of, and the difficulties stemming from prompt generation for OpenAI. This struggle hit us on several different fronts:
1. **Text generation** - Early on, we asked for specific stories and prompts resembling “write me a 500-word story.” Unsurprisingly, the API completely disregarded the constraints, and the outputs were similar regardless of how we bounded by word count. We eventually became more familiar with the structure of quality prompts, but we hit our heads against this particular problem for a long time.
2. **Illustration generation** - We weren’t able to predictably write OpenAI illustration prompts that provided consistently quality images. This was a particularly difficult problem for us since we had planned on having a consistent character illustration throughout the story. Eventually, we found style modifiers to help bound the problem.
3. **Child-safe content** - We wanted to be completely certain that we only presented safe and age-appropriate information back to the users. With this in mind, we built several layers of passive and active protection to ensure all content is family friendly.
## What we learned
So many things about OpenAI!
1. Creating consistent images using OpenAI generation is super hard, especially when focusing on one primary protagonist. We addressed this by specifically using art styles to decrease the variability between images.
2. GPT-3's input / output length limitations are much more stringent than ChatGPT's -- this meant we had to be pretty innovative with how we maintained the context over the course of 10+ page stories.
3. How to reduce overall response time while using OpenAI's API, which was really important when generating so many images and using GPT-3 to describe and summarize so many things.
4. Simply instructing GPT to not do something doesn’t seem to work as well as carefully crafting a prompt of behavior you would like it to model. You need to trick it into thinking it is someone or something -- from there, it will behave.
## Accomplishments that we're proud of
We’re super excited about what we were able to create given that this is the first hackathon for 3 of our team members! Specifically, we’re proud of:
* Developing a fun solution to help make learning engaging for future generations
* Solving a real need for people in our lives
* Delivering a well-scoped and functional MVP based on multiple user interviews
* Integrating varied team member skill sets from barely technical to full-stack
## What's next for Saga
### **Test and Iterate**
We’re excited to get our prototype project in the hands of users and see what real-world feedback looks like. Using this customer feedback, we’ll quickly iterate and make sure that our application is really solving a user need. We hope to get this on the App Store ASAP!!
### **Add functionality**
Based on the feedback that we’ll receive from our initial MVP, we will prioritize additional functionality:
**Reading level that grows with the child** — adding more complex vocabulary and situations for a story and character that the child knows and loves.
**Allow for ongoing universe creation** — saving favorite characters, settings, and situations to create a rich, ongoing world.
**Unbounded story attributes** — rather than prompting parents with fixed attributes, give an open-ended prompt for more control of the story, increasing child engagement
**Real-time user feedback on a story to refine the prompts** — at the end of each story, capture user feedback to help personalize future prompts and stories.
### **Monetize**
Evaluate unit economics and determine the best path to market. Current possible ideas:
* SaaS subscription based on one book per day or unlimited access
* Audible tokens model to access a fixed amount of stories per month
* Identify and partner with mid-market publishers to license IP and leverage existing fan bases
* Whitelabel the solution on a services level to publishers who don’t have a robust engineering team
## References
<https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00677/full> | winning |
## Inspiration
We were inspired by chrome extensions that let you highlight web-page text, improving ease of reading. We thought we could push the idea one step further by making highlighting synonymous with searching, streamlining the process of understanding complex concepts.
## What it does
Every time you highlight text in a web-page, this extension searches Wikipedia for related articles and provides a summary in a compact UI. The extension also provides the possibility of searching in the extension toolbar, for accessibility.
## How we built it
We used HTML, CSS, and JS to make the front end; the back end is built with Node.JS and relays API data from Wikipedia onto the webpage to display.
## Challenges we ran into
Chrome's system for communicating within extension programs proved highly difficult to work with, especially in conjunction with complex server tasks. The front-end was also difficult to adjust to, since the chrome extension formula forbids certain expected HTML/CSS controls.
## Accomplishments that we're proud of
We're proud of how much we've learned about both backend and frontend development, Kai is impressed by how much the final frontend resembles their initial sketches.
## What we learned
As said above, we learned a lot about server architecture, managing inter-layer communication, dynamically formulating webpages with JS, and creating good-looking graphic design.
## What's next for Wiki-Fly
We hope to continue improving and polishing this product until it is ready to be used on a regular basis by average internet users. We think it could truly help people comprehend difficult topics and make the internet as a whole more accessible. | ## Inspiration
We wanted to learn more about Node.js and lots of other interesting technologies and combine them into a usable product.
## What it does
PickIT is a system that lets you save and tag images that you see on the Internet into a small archive of your very own. The Chrome extension makes it possible to save these pictures as a context menu action, the web app allows you to view and tag them, and the API is the interface that lets them communicate.
## How I built it
We used Node.js and Express to write the back-end API of the server, which handled GET and POST requests. The POST requests posting images to a user's account used the Amazon S3 API. The front-end was built with Bootstrap and the Pug.js engine to handle data from the server and present it in a nice manner. Finally, the Chrome extension was built with regular JavaScript.
## Challenges we ran into
Building any project in a small time span is tough. A big challenge we faced during our development was splitting up the parts of the project so that everyone had something to do. The sections included building the Chrome extension, developing the API, and creating a front-end.
Coordination between teams is difficult, and this was our first time trying to write working code while ensuring that all the parts would work together. We used a combination of Google Docs and Github's issue tracking for communication.
Most of the technologies we used were new to us, and we had to read through lots of documentation to see how they worked. Lots of these technologies also involved difficult new concepts, such as image parsing and asynchronous function calls.
Also, apparently certain Chrome-extension JavaScript files don't log to the console. We learned how to debug with alerts instead.
## Accomplishments that I'm proud of
We're happy that the whole system works well, since it took a lot of tinkering to get to that stage. In addition, we're also happy about how we got the chance to learn about new technologies and implement them in an interesting product.
## What's next for blank
* Save text, videos, and other media
* Chrome extension has more features
* "Create a big picture with all the small pictures" -Sambhav
* Update front-end to have a more aesthetic interface
* Create permalinks to share with friends and set permissions | ## Inspiration
### Many people struggle with research. We wanted to fix that. So we built something to provide a starting point for research on any topic, with the goal of being concise and informative.
## What it does
### It uses uipath to find the definition of almost any topic on Wikipedia to give the user a bit of a general overview of the topic. Afterward, it scrapes the titles of the most relevant news articles to give the user a head-start in their research.
## How we built it
1. As mentioned before we have a uipath component that gets the definition of the topic from Wikipedia as a general overview. For that, we have built an algorithm that goes step by step into this process just like a human would. This gets tied into the python code using the start\_wikifetcher.py file which uses paperclip and other functions from the sentence\_tools.py.
2. sentence\_tools.py has numerous algorithms that are responsible for the computation to scrape the most relevant information. Such as the extract\_first\_sentence() and get\_key\_points\_google(), which extract the first sentence and get the headlines from the first page of google news on the topic respectively
3. For the front end, there is an index.html file that contains the HTML content of the page. It is fairly basic and easy to understand for the reader.
4. The graphics were made using a PowerPoint presentation.
### Note: Please refer to the README file from the git repository for a more thorough explanation of the installation process.
## Challenges we ran into
1. Google's limit for requests before it labels your IP as spam.
2. Working remotely
3. Managing teammates
4. Having unrealistic goals of making an AI that writes essays like humans.
## Accomplishments that we're proud of
### For the accomplishments, I will address how we overcame the challenges in their respective order as that is an accomplishment in itself.
1. We found a way to just get the first page from google search and we settled on only doing the search once other than doing a search for all selective combinations of the topic
2. We used discord calls to connect better
3. We optimized the team by selectively picking teammates.
4. We have made that a goal for the future now.
## What we learned
### This project taught us both technical and nontechnical skills, we had to familiarize ourselves with numerous APIs and new technologies such as UiPath. On the non-technical side of things, the reason this project was a success was because of the equitable amount of work everyone was given. Everyone worked on their part because they truly wanted to.
## What's next for Essay Generator
### Next step is to use all the data found by web scraping to form an essay just like a human would and that requires AI technologies like machine learning and neural networks. This was too hard to figure out in the 36 hrs. period but in the future years, we all will try to come up with a solution for this idea. | losing |
## Inspiration
My inspiration emerged after using chatgpt website. It was then i realised how it can be easy for people to learn new things through the power of Ai, i went further to analyse the limitations of the website and challenged myself to build an advanced application that would run on mobile phones, with more advanced features such as voice notes and video chat aiming at making the conversation more realistic and aiding users to attain information easily.
## What it does
The application uses the power of AI to respond to user questions, it answers its users both through text message and voice notes. initially a user is required to sign up, after registration is complete, signed up credidentials can be used to login in and enjoy the application features
## How we built it
The application involves the use of 3D model humans for video chat interactions that are from the botlibre website. Training of the model was done using the brainshop website. Authentication was handled with firebase, and the technology used to build the application was Android Java.
## Challenges we ran into
The main challenge i ran to was the lack of having no playstore account to appload the project, this issue makes the project not discoverable to the world. Other major implementations of the applications are still pending due to lack of no enough capital to facilitate these ideas to happen.
## Accomplishments that we're proud of
I am very proud to finally deliver what was in my mind to reality. Implementing all these features and finally producing a product that can be used and enjoyed by users is something that i am very grateful for.My passion is to solve challenges facing my community with technology, thus with this project made possible, i believe that a challenge is being solved.
## What we learned
I have learned that nothing is possible and every thing initially starts with an idea. I have learnt that encountering something hard should not make you backdown rather stand up and fight for what you planned for. I can say , that the development of this project wasnt easy, it involved sleepless nights, but im happy that today i showcase it to the world.
## What's next for Uliza Ai(Ask Ai)
New more features are going to be added soon, with chatting section completed, next step would be analysing and selecting another cartegory to be added to the app, such as weather prediction. | ## Inspiration
What is our first thought when we hear "health-care"? Is it an illness? Cancer? Disease? That is where we lose our focus from an exponential increasing crisis, especially in this post-COVID era. It is MENTAL HEALTH!
Studying at university, I have seen my friends suffer from depression and anxiety looking for someone to hear them out for once.
Statistically, an estimated 792 million individuals worldwide suffer from mental health diseases and concerns.
That's roughly one out of every ten persons on the planet.
In India, where I am from, the problem is even worse. Close to 14 per cent of India's population required active mental health interventions. Every year, about 2,00,000 Indians take their lives. The statistics are even higher if one starts to include the number of attempts to suicide.
The thought of being able to save even a fraction of this number is powerful enough to get me working this hard for it.
## What it does
Noor, TALKs because that's all it takes. She provides a comfortable environment to the user where they can share their thoughts very privately, and let that feelings out once and for all.
## How we built it
I built this app in certain steps:
1. Converting all the convolutional intents into a Machine Learning Model - Pytorch.
2. Building a framework where users can provide input and the model can output the best possible response that makes the most sense. Here, the threshold is set to 90% accuracy.
3. Building an elegant GUI.
## Challenges we ran into
Building Chatbots from scratch is extremely difficult. However, 36 hours were divided into sections where I could manage building a decent hack out of everything I got. Enhancing the bot's intelligence was challenging too. In the initial stages, I was experimenting with fewer intents, but with the addition of more intents, keeping track of the intents became difficult.
## Accomplishments that we're proud of
First, I built my own Chatbot for the first time!!! YAYYYYY! This is a really special project because of the fact that it is dealing with such a major issue in the world right now. Also, this was my first time making an entire hackathon project using Python and its frameworks only. Extremely new experience. I am proud of myself for pushing through the frustrating times when I felt like giving up.
## What we learned
Everything I made during this hackathon was something I had never done before. Legit, EVERYTHING! Let it be using NLP or Pytorch. Or even Tkinter for Graphic User Interface (GUI)! Honestly, may not be my best work ever but, definitely something that taught me the most!
## What's next for Noor
Switch from Tkinter to deploying my script into an application or web app. The only reason I went with Tkinter was to try learning something new. I'll be using flutter for app development and TFjs for a web-based application.
Discord: keivalya#8856 | ## Inspiration
We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders go undiagnosed or misdiagnosed often leading to these individuals constantly struggling to read and write which is an integral part of your education. With such learning disabilities, learning a new language would be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered toward individuals.
## What it does
ReadRight offers interactive language lessons but with a unique twist. It reads out the prompt to the user as opposed to it being displayed on the screen for the user to read themselves and process. Then once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way individuals with reading and writing disabilities can still hone their skills in a new language.
## How we built it
We built the frontend UI using React, Javascript, HTML and CSS.
For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM.
Finally, for user authentication, we made use of Firebase.
## Challenges we faced + What we learned
When you first open our web app, our homepage consists of a lot of information on our app and our target audience. From there the user needs to log in to their account. User authentication is where we faced our first major challenge. Third-party integration took us significant time to test and debug.
Secondly, we struggled with the generation of prompts for the user to repeat and using AI to implement that.
## Accomplishments that we're proud of
This was the first time for many of our members to be integrating AI into an application that we are developing so that was a very rewarding experience especially since AI is the new big thing in the world of technology and it is here to stay.
We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things.
## What's next for ReadRight
As of now, ReadRight has the basics of the English language for users to study and get prompts from but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should | losing |
## Inspiration
My goal at Cal Hacks 4.0 was to use machine learning to help the blind, and I have created the technology that does just that!
## What it does
I am using the Google cloud vision API to identify objects around a user in combination with AR to map distances to the those objects.
The blind can use this to detect walls and objects and know approx how far away they are from them!
## How I built it
My first step was to create an AR Video feed. From that I needed to figure out how to measure and map objects in a 3D plane. I then implemented Googles Vision API for object recognition in a 3D plane. Then finishing up with 3D AR labels and text to voice.
## Challenges I ran into
My biggest challenges were the following:
1. How to use machine learning to detect real world objects in a 3D plane.
2. Measuring distances to real world objects
3. Implementing Google Cloud Vision API
4. Drawing 3D text and lines to detected objects
## Accomplishments that I'm proud of
Currently, vision apis that assist the blind only educate them on what is around them on a 2D plane. I have just added a 3D understanding! My goal is to help millions around the world with this technology!
## What I learned
I learned how to use the Google Cloud ML API. I was surprised at how accurate it is for use in object recognition. I now want to use other AI Google APIs for future builds!
## What's next for Vision AI 3D
I want to update the distances as the user moves away or towards detects objects on a 3D plane. This will help blind people navigate a room. | ## Inspiration
The current situation of covid-19 has made our classes go online.Our current classes are taken through video conferencing platforms such as zoom,meet etc.So one time I saw one of our teachers helping one of my friend complete a question by sharing his camera facing the notebook.So we thought why not create a system that does exactly the same for blind people.This small device could have huge impacts on the life of Blind people.
## What it does
Lend Your Eyes is a raspberry pi powered smart glass which helps blind person by taking the help of your friends or family.Whenever a blind person is in a situation requiring help he/she can request help through our smart glass.The blind person can request help via a button or use voice commands.The raspberry pi creates a Jitsi meeting and shares the link via SMS to all your friends and family.Thus anyone can join the link and help the blind person.It is also capable of describing an event that is in front of the blind person by generating a caption from the image captured
## How We built it
We used python to automate the process of creating a jitsi meeting in the raspberry Pi.Used Twilio's API for sending the text messages to all the saved contacts.We have also used Microsoft Cognitive Services API for generating a caption from the image captured.The text generated can be converted to any language using google Translate API and can be given as voice output to the blind person.
## Challenges We ran into
Since each one of our team lives in different parts of our state.It was very difficult to collaborate on a hardware project due to state restrictions and lack of hardware components.But however we were able to complete the basic working of the raspberry pi code
## Accomplishments that We're proud of
This was the first hackathon for majority of our team.So we learned what a hackathon is and enjoyed brainstorming on ideas.We learned that building a product out of our ideas doesn't require you to be with your team.Thanks to the present health crisis
## What We learned
We became familiar with the raspberry pi environment and learned to work with python's speech recognition module.Also this was our first time working with a computer vision API
## What's next for Lend Your Eyes
* Add basic object detection capability using ultrasonic sensors or Tensorflow API
* Brainstorm on ideas on how we could solve more of the challenges faced by such people.
* Add the feature of recognizing family members using face recognition | ## Inspiration
One of our teammate’s grandfathers suffers from diabetic retinopathy, which causes severe vision loss.
Looking on a broader scale, over 2.2 billion people suffer from near or distant vision impairment worldwide. After examining the issue closer, it can be confirmed that the issue disproportionately affects people over the age of 50 years old. We wanted to create a solution that would help them navigate the complex world independently.
## What it does
### Object Identification:
Utilizes advanced computer vision to identify and describe objects in the user's surroundings, providing real-time audio feedback.
### Facial Recognition:
It employs machine learning for facial recognition, enabling users to recognize and remember familiar faces, and fostering a deeper connection with their environment.
### Interactive Question Answering:
Acts as an on-demand information resource, allowing users to ask questions and receive accurate answers, covering a wide range of topics.
### Voice Commands:
Features a user-friendly voice command system accessible to all, facilitating seamless interaction with the AI assistant: Sierra.
## How we built it
* Python
* OpenCV
* GCP & Firebase
* Google Maps API, Google Pyttsx3, Google’s VERTEX AI Toolkit (removed later due to inefficiency)
## Challenges we ran into
* Slow response times with Google Products, resulting in some replacements of services (e.g. Pyttsx3 was replaced by a faster, offline nlp model from Vosk)
* Due to the hardware capabilities of our low-end laptops, there is some amount of lag and slowness in the software with average response times of 7-8 seconds.
* Due to strict security measures and product design, we faced a lack of flexibility in working with the Maps API. After working together with each other and viewing some tutorials, we learned how to integrate Google Maps into the dashboard
## Accomplishments that we're proud of
We are proud that by the end of the hacking period, we had a working prototype and software. Both of these factors were able to integrate properly. The AI assistant, Sierra, can accurately recognize faces as well as detect settings in the real world. Although there were challenges along the way, the immense effort we put in paid off.
## What we learned
* How to work with a variety of Google Cloud-based tools and how to overcome potential challenges they pose to beginner users.
* How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
How to create docker containers to deploy google cloud-based flask applications to host our dashboard.
How to develop Firebase Cloud Functions to implement cron jobs. We tried to developed a cron job that would send alerts to the user.
## What's next for Saight
### Optimizing the Response Time
Currently, the hardware limitations of our computers create a large delay in the assistant's response times. By improving the efficiency of the models used, we can improve the user experience in fast-paced environments.
### Testing Various Materials for the Mount
The physical prototype of the mount was mainly a proof-of-concept for the idea. In the future, we can conduct research and testing on various materials to find out which ones are most preferred by users. Factors such as density, cost and durability will all play a role in this decision. | losing |
# For the Kidz
The three of us were once part of an organization that provided after-school coding classes and lab-assisting services to local under-resourced middle and high schools. While we all enjoyed our individual experiences, we agreed that the lack of adequate hardware (ie. broken Chromebooks) as well as software that assumed certain basic tech skills (ie. typing) were not the best for teaching the particular group of under-resourced students in our classrooms.
Thus, this hackathon, we strived to meet two goals:
(a) Move the learning platform to a mobile device, which is more accessible to students.
(b) Make the results of the code more visual and clear via gamification and graphics.
We designed our app on Figma, then prototyped it on Invision Studio. In parallel, we used Android Studio, Java, and ButterKnife to develop the app.
Our largest challenges were in the development of the app, initially trying to leverage Google's Blockly Library in order to provide functional code blocks without reinventing the wheel. However, due to the lack of documentation and resources of individuals who used Blockly for Android applications (support for Android Dev. was deprecated), we decided to switch towards building everything from the ground up to create a prototype with basic functionality. On our coding journey, we learned about MVP architecture, modularizing our code with fragments, and implementing drag-and-drop features.
Our Invision Studio Demo:
[link](https://www.youtube.com/watch?v=upxkcIj16j0) | ## Inspiration
When coming up with the idea for our hack, we realized that as engineering students, and specifically first year students, we all had one big common problem... time management. We all somehow manage to run out of time and procrastinate our work, because it's hard to find motivation to get tasks done. Our solution to this problem is an app that would let you make to-do lists, but with a twist.
## What it does
The app will allow users to make to-do lists, but each task is assigned a number of points you can receive on completion. Earning points allows you to climb leaderboards, unlock character accessories, and most importantly, unlock new levels of a built-in game. The levels of the built in game are not too long to complete, as to not take away too much studying, but it acts as a reward system for people who love gaming. It also has a feature where you can take pictures of your tasks as your completing them, that you can share with friends also on the app, or archive for yourself to see later. The app includes a pomodoro timer to promote studying, and a forum page where you are able to discuss various educational topics with other users to further enhance your learning experience on this app.
## How we built it
Our prototype was built on HTML using a very basic outline. Ideally, if we were to go further with this app, we would use a a framework such as Django or Flask to add a lot more features then this first prototype.
## Challenges we ran into
We are *beginners*!! This was a first hackathon for almost all of us, and we all had very limited coding knowledge previously, so we spent a lot of time learning new applications, and skills, and didn't get much time to actually build our app.
## Accomplishments that we're proud of
Learning new applications! We went through many different applications over the past 24 hours before landing on HTML to make our app with. We looked into Django, Flask, and Pygame, before deciding on HTML, so we gained some experience with these as well.
## What we learned
We learned a lot over the weekend from various workshops, and hands-on personal experience. A big thing we learned is the multiple components that go into web development and how complicated it can get. This was a great insight into the world of real coding, and the application of coding that is sure to stick with us, and keep us motivated to keep teaching ourselves new things!
## What's next for Your Future
Hopefully, in the future we're able to further develop Your Future to make it complete, and make it run the way we hope. This will involve a lot of time and dedication to learning new skills for us, but we hope to take that time and put in the effort to learn those skills! | ## Inspiration
Save the World is a mobile app meant to promote sustainable practices, one task at a time.
## What it does
Users begin with a colorless Earth prominently displayed on their screens, along with a list of possible tasks. After completing a sustainable task, such as saying no to a straw at a restaurant, users obtain points towards their goal of saving this empty world. As points are earned and users level up, they receive lively stickers to add to their world. Suggestions for activities are given based on the time of day. They can also connect with their friends to compete for the best scores and sustainability. Both the fun stickers and friendly competition encourage heightened sustainability practices from all users!
## How I built it
Our team created an iOS app with Swift. For the backend of tasks and users, we utilized a Firebase database. To connect these two, we utilized CocoaPods.
## Challenges I ran into
Half of our team had not used iOS before this Hackathon. We worked together to get past this learning curve and all contribute to the app. Additionally, we created a setup in Xcode for the wrong type of database at first. At that point, we made a decision to change the Xcode setup instead of creating a different database. Finally, we found that it is difficult to use CocoaPods in conjunction with Github, because every computer needs to do the pod init anyway. We carefully worked through this issue along with several other merge conflicts.
## Accomplishments that I'm proud of
We are proud of our ability to work as a team even with the majority of our members having limited Xcode experience. We are also excited that we delivered a functional app with almost all of the features we had hoped to complete. We had some other project ideas at the beginning but decided they did not have a high enough challenge factor; the ambition worked out and we are excited about what we produced.
## What I learned
We learned that it is important to triage which tasks should be attempted first. We attempted to prioritize the most important app functions and leave some of the fun features for the end. It was often tempting to try to work on exciting UI or other finishing touches, but having a strong project foundation was important. We also learned to continue to work hard even when the due date seemed far away. The first several hours were just as important as the final minutes of development.
## What's next for Save the World
Save the World has some wonderful features that could be implemented after this hackathon. For instance, the social aspect could be extended to give users more points if they meet up to do a task together. There could also be be forums for sustainability blog posts from users and chat areas. Additionally, the app could recommend personal tasks for users and start to “learn” their schedule and most-completed tasks. | losing |
## Inspiration
We were inspired by Michael Reeves, a YouTuber who is known for his "useless inventions." We all enjoy his videos and creations, so we thought it would be cool if we tried it out too.
## What it does
Abyss. is a table that caves in on itself and drops any item that is placed onto it into the abyss.
## How we built it
Abyss. was built with an Arduino UNO, breadboard, a table and Arduino parts to control the table. It uses a RADAR sensor to detect when something is placed on the table and then controls the servo motors to drop the table open.
## Challenges we ran into
Due to the scale of our available parts and materials, we had to downsize from a full-sized table/nightstand to a smaller one to accommodate the weak, smaller servo motors. Larger ones need more than the 5V that the UNO provides to work. We also ran into trouble with the Arduino code since we were all new to Arduinos. We also had a supply chain issue with Amazon simply not shipping the motors we ordered. Also, the "servo motors", or as they were labelled, turned out to be regular step-down motors.
## Accomplishments that we're proud of
We're happy that the project is working and the results are quite fun to watch.
## What we learned
We learned a lot about Arduinos and construction. We did a lot of manual work with hand-powered tools to create our table and had to learn Arduinos from scratch to get the electronics portion working.
## What's next for Abyss.
We hope to expand this project to a full-sized table and integrate everything on a larger scale. This could include a more sensitive sensor, larger motors and power tools to make the process easier. | ## Inspiration
It is nearly a year since the start of the pandemic and going back to normal still feels like a distant dream.
As students, most of our time is spent attending online lectures, reading e-books, listening to music, and playing online games. This forces us to spend immense amounts of time in front of a large monitor, clicking the same monotonous buttons.
Many surveys suggest that this has increased the anxiety levels in the youth.
Basically, we are losing the physical stimulus of reading an actual book in a library, going to an arcade to enjoy games, or play table tennis with our friends.
## What it does
It does three things:
1) Controls any game using hand/a steering wheel (non-digital) such as Asphalt9
2) Helps you zoom-in, zoom-out, scroll-up, scroll-down only using hand gestures.
3) Helps you browse any music of your choice using voice commands and gesture controls for volume, pause/play, skip, etc.
## How we built it
The three main technologies used in this project are:
1) Python 3
The software suite is built using Python 3 and was initially developed in the Jupyter Notebook IDE.
2) OpenCV
The software uses the OpenCV library in Python to implement most of its gesture recognition and motion analysis tasks.
3) Selenium
Selenium is a web driver that was extensively used to control the web interface interaction component of the software.
## Challenges we ran into
1) Selenium only works with google version 81 and is very hard to debug :(
2) Finding the perfect HSV ranges corresponding to different colours was a tedious task and required me to make a special script to make the task easier.
3) Pulling an all-nighter (A coffee does NOT help!)
## Accomplishments that we're proud of
1) Successfully amalgamated computer vision, speech recognition and web automation to make a suite of software and not just a single software!
## What we learned
1) How to debug selenium efficiently
2) How to use angle geometry for steering a car using computer vision
3) Stabilizing errors in object detection
## What's next for E-Motion
I plan to implement more components in E-Motion that will help to browse the entire computer and make the voice commands more precise by ignoring background noise. | ## Inspiration
We were inspired by our shared love of dance. We knew we wanted to do a hardware hack in the healthcare and accessibility spaces, but we weren't sure of the specifics. While we were talking, we mentioned how we enjoyed dance, and the campus DDR machine was brought up. We decided to incorporate that into our hardware hack with this handheld DDR mat!
## What it does
The device is oriented so that there are LEDs and buttons that are in specified directions (i.e. left, right, top, bottom) and the user plays a song they enjoy next to the sound sensor that activates the game. The LEDs are activated randomly to the beat of the song and the user must click the button next to the lit LED.
## How we built it
The team prototyped the device for the Arduino UNO with the initial intention of using a sound sensor as the focal point and slowly building around it, adding features where need be. The team was only able to add three features to the device due to the limited time span of the event. The first feature the team attempted to add was LEDs that reacted to the sound sensor, so it would activate LEDs to the beat of a song. The second feature the team attempted to add was a joystick, however, the team soon realized that the joystick was very sensitive and it was difficult to calibrate. It was then replaced by buttons that operated much better and provided accessible feedback for the device. The last feature was an algorithm that added a factor of randomness to LEDs to maximize the "game" aspect.
## Challenges we ran into
There was definitely no shortage of errors while working on this project. Working with the hardware on hand was difficult, the team was nonplussed whether the issue on hand stemmed from the hardware or an error within the code.
## Accomplishments that we're proud of
The success of the aforementioned algorithm along with the sound sensor provided a very educational experience for the team. Calibrating the sound sensor and developing the functional prototype gave the team the opportunity to utilize prior knowledge and exercise skills.
## What we learned
The team learned how to work within a fast-paced environment and experienced working with Arduino IDE for the first time. A lot of research was dedicated to building the circuit and writing the code to make the device fully functional. Time was also wasted on the joystick due to the fact the values outputted by the joystick did not align with the one given by the datasheet. The team learned the importance of looking at recorded values instead of blindly following the datasheet.
## What's next for Happy Fingers
The next steps for the team are to develop the device further. With the extra time, the joystick method could be developed and used as a viable component. Working on delay on the LED is another aspect, doing client research to determine optimal timing for the game. To refine the game, the team is also thinking of adding a scoring system that allows the player to track their progress through the device recording how many times they clicked the LED at the correct time as well as a buzzer to notify the player they had clicked the incorrect button. Finally, in a true arcade fashion, a display that showed the high score and the player's current score could be added. | winning |
## Inspiration
We were inspired by the difficulty of filtering out the clothes we were interested in from all the available options. As a group of motivated undergraduate software developers, we were determined to find a solution.
## What it does
MatchMyStyle intelligently filters brand catalogs using past items from your personal wardrobe that you love. It aims to enhance the shopping experience for both brands and their consumers.
## How we built it
We split our tech stack into three core ecosystems. Our backend ecosystem hosted its own API to communicate with the machine learning model, Cloud Firestore, Firebase Storage and our frontend. Cloud Firestore was used to store our user dataset for training purposes with the ability to add additional images and host them on Firebase Storage.
The ML ecosystem was built using Google Cloud's Vision API and fetched images using Firebase Storage buckets. It learns from images of past items you love from your personal wardrobe to deliver intelligent filters.
Finally, the frontend ecosystem demonstrates the potential that could be achieved by a fashion brand's catalog being coupled with our backend and ML technology to filter the items that matter to an individual user.
## Challenges we ran into
We knew going into this project that we wanted to accomplish something ambitious that could have a tangible impact on people to increase productivity. One of the biggest hurdles we encountered was finding the appropriate tools to facilitate our machine learning routine in a span of 24 hours. Eventually we decided on Google Cloud's Vision API which proved successful.
Our backend was the glue that held the entire project together. Ensuring efficient and robust communication across frontend and the machine learning routine involved many endpoints and moving parts.
Our frontend was our hook that we hoped would show brands and consumers the true potential of our technology. Featuring a custom design made in Figma and developed using JavaScript, React, and CSS, it attempts to demonstrate how our backend and ML ecosystems could be integrated into any brand's pre-existing frontend catalog.
## Accomplishments that we're proud of
We're proud of finishing all of the core ecosystems and showing the potential of MatchMyStyle.
## What we learned
We learned tons in the 24 hours of development. We became more familiar with Google Cloud's Vision API and the services it offers. We worked on our Python / Flask skills and familiarized ourselves with Cloud Firestore, and Firebase Storage. Finally, we improved our design skills and got better at developing more complex frontend code to bring it to life.
## What's next for MatchMyStyle
We would love to see if fashion brands would be interested in our technology and any other ideas they may have on how we could offer value to both them and their consumers. | ## Inspiration
The fashion industry produces around 10% of the world’s carbon emissions and is using copious amounts of potable water and precious energy. By keeping clothes for more than 9 months, the clothes start reducing their environmental impact by more than 30% (because you aren't buying new clothes). Our team wanted to find a way to reduce our everyday impact on the environment and help people become more environmentally mindful. People spend months of their lives deciding on what to wear, with many possible patterns and colours in their closets, each for different temperatures and seasons. Magazines and fashion websites always suggest new clothes. We want people to “shop their own closets”, trying different combinations of clothes they already own. We decided to create StyleEyes order to encourage sustainable fashion choices, reduce the amount of time needed to get ready, and improve their fashion style.
## What it does
The user takes a picture of what they’re currently wearing or plan to wear, and StyleEyes uses a custom machine learning algorithm to provide recommendations on accessories, patterns, and colour combinations based on the user’s existing closet. We used colour theory, trends and general fashion styles to determine the best outfit. StyleEyes also informs the user about their clothes’ environmental impact, encouraging reusability through many unique options for each item.
## How I built it
First, we trained Microsoft Azure to detect different textures, colours, and patterns on pictures of people wearing clothes by selecting a variety of pictures and manually tagging colours. We then incorporated our best iteration into Android Studio using asynchronous HTTP calls to create a mobile application. We developed its user interface and functionality, which includes a custom method to give advice based on tags found, as well as recommending an accessory from the user’s current “closet”. The “closet” also has a calculator for the approximate environmental footprint, approximating costs of accessories based on existing data. The app was uploaded and tested on our Android mobile phones.
## Challenges I ran into
We had difficulty building a custom machine learning model with our own tags, since it required a lot of iteration and training. We originally drew our bounding boxes incorrectly, which affected our early test results. We erroneously used machine learning to detect colour, and in hindsight it would have been better to detect colours through more simple image scanning.
There were challenges connecting the API to the Android Studio project. The Microsoft Azure SDK was not working properly with Android Studio, so we had to manually do an HTTP call using an asynchronous base class. We were also inexperienced with Android Studio so switching contexts and activities was difficult, especially triggering the context switch from the conclusion of the HTTP call, as that was an asynchronous static context, whereas contexts must be switched from a non-static context.
## Accomplishments that I'm proud of
The UI looks quite sleek, and it works perfectly on a physical device.
After many iterations, our machine learning model works very well with pattern recognition.
We included carbon footprint calculations to show the environmental impact of the user’s closet.
## What I learned
Conda is the superior method of installing Python libraries.
We learned how to both properly and improperly train machine learning models, and discovered that bounding boxes need to be larger and more varied in shape for accurate results.
Some of us learned how to properly use Git and how to commit, push and pull without disrupting the workflow.
## What's next for StyleEyes
Due to limited time, our machine learning solution was extremely primitive. We found an online database of clothing classification but it required emailing alumni from the University of Hong Kong, which we didn’t know if they would respond in time.
With self-built server-based classification, we could use the database to pull example clothing from and also get more accurate tagging.
Another possible feature would be uploading an inventory of your closet so that the app could recommend clothes that you already have instead of just matching accessories. In the future, StyleEyes will be able to recommend an entire outfit based on factors such as weather, temperature and style.
The clothing recommendation algorithm needs work as well. This could be helped by a redesign of the machine learning tagging system, using photo processing rather than machine learning to detect colours, and saving the machine learning for patterns and types of clothing instead.
## How to Use StyleEyes
1. Open the app. The home screen will show the accessories in your closet to keep track of what you have.
When you click on the button on the upper left-hand corner, it takes you to a page where you can see the environmental impact of your accessories.
2. From the home app, you can tap the “style me” button which pulls up a camera:
3. Take a clear photo of your outfit and make sure you like it. Our app analyzes your outfit and tells you our recommendations for accessories
4. Read recommendations based on your clothes’ colours, textures, and patterns.
5. Save time, money, and the environment! | ## Inspiration
Fashion has always been a world that seemed far away from tech. We want to bridge this gap with "StyleList", which understands your fashion within a few swipes and makes personalized suggestions for your daily outfits. When you and I visit the Nordstorm website, we see the exact same product page. But we could have completely different styles and preferences. With Machine Intelligence, StyleList makes it convenient for people to figure out what they want to wear (you simply swipe!) and it also allows people to discover a trend that they favor!
## What it does
With StyleList, you don’t have to scroll through hundreds of images and filters and search on so many different websites to compare the clothes. Rather, you can enjoy a personalized shopping experience with a simple movement from your fingertip (a swipe!). StyleList shows you a few clothing items at a time. Like it? Swipe left. No? Swipe right! StyleList will learn your style and show you similar clothes to the ones you favored so you won't need to waste your time filtering clothes. If you find something you love and want to own, just click “Buy” and you’ll have access to the purchase page.
## How I built it
We use a web scrapper to get the clothing items information from Nordstrom.ca and then feed these data into our backend. Our backend is a Machine Learning model trained on the bank of keywords and it provides next items after a swipe based on the cosine similarities between the next items and the liked items. The interaction with the clothing items and the swipes is on our React frontend.
## Accomplishments that I'm proud of
Good teamwork! Connecting the backend, frontend and database took us more time than we expected but now we have a full stack project completed. (starting from scratch 36 hours ago!)
## What's next for StyleList
In the next steps, we want to help people who wonders "what should I wear today" in the morning with a simple one click page, where they fill in the weather and plan for the day then StyleList will provide a suggested outfit from head to toe! | losing |
## What?
Our goal was to create a noninvasive location tracker through a confined space. Our solution provides store owners/managers with a tool to see where their consumers spend most of their time without invading the customer's right to privacy. Our goal is that the merchant incentivizes the consumer with a discounted purchase so that the store can collect data about where to optimally place products.
Usage of the product is entirely up to the consumer, and the beauty of the product is that it stops tracking a user's location beyond the confines of a store.
**DISCLAIMER:** This product does not violate a consumer's right to privacy, as it ends tracking by the time the consumer leaves the predefined confines of the system.
## How it works
Given a beacon at position (*Xa, Ya*), a user's distance from the router can be calculated through a transformed form of the Free Space Path Loss (FSPL) formula. Through communicating with the wi-fi emitted from the beacon, a computer can calculate the signal strength in dBm and the frequency in MHz. When plugged into the following equation, where f is the frequency and s is the signal strength, a distance *d* in meters is returned.
While this was a great start, the problem persisted as to finding the user's specific location. The equation returns a distance between the user and the beacon, which forms a circle around the beacon of radius *d*. In order to track the precise location, we used a technique known as *trilateration* (very similar to *triangularation* in GPS's). When we use *n*=3 beacons, we can plot three different circles and thereby get a much more precise area as to where the user could be through optimization with least squares approximation. As can be seen with a simple graph below, the area of overlap between all three beacons can be used to generalize a location within a given confined area.
## Computational Technicalities/Specifications
A computer is used in the middle (can be as portable as an Raspberry Pi if plugged into a portable charger; we were lacking in resources so we used a laptop) to communicate with all three beacons. The computer recursively disconnects with a Wi-Fi connection, communicates with each beacon, calculates the location, then communicates with a GCP back-end via a RESTful API, mimicking multilateral U-TDOA technology that's used to geolocate cell phones. Mathematical models were originally drafted in Python, but were ported to a Node.js back-end for powerful and asynchronous network tools.
## Challenges/Responsibilities
This was a first in multiple ways for all of us. Henry, Andy, and Saul came to PennApps hoping to build some kind of deep learning project as that's where their interests lay. Nathan was originally responsible for front-end interactivity. However, after arriving and spending all of Friday night brainstorming ideas and working out a lot of what ended up being useless complicated math, we decided to go with something unique and explore territories we've never dealt with in the past. None of us have ever worked with network engineering or hardware hacking, so this entire experience was brand new for all of us. Henry and Andy spent a lot of the original time writing bash scripts and mathematical models for network analyses. Nathan spent a good part of the later half of Saturday implementing the abstractions with his specialties in JavaScript (the other three mainly do Python development). Saul, being a Physics grad student right here at Penn, was able to help in both facets a myriad of times where/when needed. He was primarily responsible for the actual trilateration models in Python that then became much easier to port to JavaScript.
Also, due to the use of multiple phones and OSes as beacons, the signals weren't uniform and often resulted in diverging plots, which led to the necessity of more approximation functions. In production, all three beacons will be identical for more uniform signal analysis.
## Future Plans
We plan to implement this in small shops so that they know how to optimally organize their materials for consumers who want them at a given time and place. As brick and mortar shops are quickly succumbing to large online retail stores, this will help small businesses boost profits and hopefully attract more consumers away from online retail and toward their own business. We will also move into RF technology to eliminate the dependence on internet in lower-income areas of the country and world. For more precision, we could potentially add more nodes for linear scalability. | ## Inspiration
Our inspiration was Find My by Apple. It allows you to track your Apple devices and see them on a map giving you relevant information such as last time pinged, distance, etc.
## What it does
Picks up signals from beacons using the Eddystone protocol. Using this data, it will display the beacon's possible positions on Google Maps.
## How we built it
Node.js for the scanning of beacons, our routing and our API which is hosted on Heroku. We use React.js for the front end with Google Maps as the main component of the web app.
## Challenges we ran into
None of us had experience with mobile app development so we had to improvise with our skillset. NodeJs was our choice however we had to rely on old deprecated modules to make things work. It was tough but in the end it was worth it as we learned a lot.
Calculating the distance from the given data was also a challenge but we managed to get it quite accurately.
## Accomplishments that I'm proud of
Using hardware was an interesting as I (Olivier) have never done a hackathon project with them. I stick to web apps as it is my comfort zone but this time we have merged two together.
## What we learned
Some of us learned front-end web development and even got started with React. I've learned that hardware hacks doesn't need to be some low-level programming nightmare (which to me seemed it was).
## What's next for BeaconTracker
The Eddystone technology is deprecated and beacons are everywhere in every day life. I don't think there is a future for BeaconTracker but we have all learned much from this experience and it was definitely worth it. | ## TLDR
Duolingo is one of our favorite apps of all time for learning. For DeerHacks, we wanted to bring the amazing learning experience from Duolingo even more interactive by bringing it to life in VR, making it more accessible by offering it for free for all, and making it more personalized by offering courses beyond languages so everyone can find a topic they enjoy.
Welcome to the future of learning with Boolingo, let's make learning a thrill again!
## Inspiration 🌟
We were inspired by the monotonous grind of traditional learning methods that often leave students disengaged and uninterested. We wanted to transform learning into an exhilarating adventure, making it as thrilling as gaming. Imagine diving into the depths of mathematics, exploring the vast universe of science, or embarking on quests through historical times—all while having the time of your life. That's the spark that ignited BooLingo! 🚀
## What it does 🎮
BooLingo redefines the learning experience by merging education with the immersive world of virtual reality (VR). It’s not just a game; it’s a journey through knowledge. Players can explore different subjects like Math, Science, Programming, and even Deer Facts, all while facing challenges, solving puzzles, and unlocking levels in a VR landscape. BooLingo makes learning not just interactive, but utterly captivating! 🌈
## How we built it 🛠️
We leveraged the power of Unity and C# to craft an enchanting VR world, filled with rich, interactive elements that engage learners like never before. By integrating the XR Plug-in Management for Oculus support, we ensured that BooLingo delivers a seamless and accessible experience on the Meta Quest 2, making educational adventures available to everyone, everywhere. The journey from concept to reality has been nothing short of a magical hackathon ride! ✨
## Challenges we ran into 🚧
Embarking on this adventure wasn’t without its trials. From debugging intricate VR mechanics to ensuring educational content was both accurate and engaging, every step presented a new learning curve. Balancing educational value with entertainment, especially in a VR environment, pushed us to our creative limits. Yet, each challenge only fueled our passion further, driving us to innovate and iterate relentlessly. 💪
## Accomplishments that we're proud of 🏆
Seeing BooLingo come to life has been our greatest achievement. We're incredibly proud of creating an educational platform that’s not only effective but also enormously fun. Watching players genuinely excited to learn, laughing, and learning simultaneously, has been profoundly rewarding. We've turned the daunting into the delightful, and that’s a victory we’ll cherish forever. 🌟
## What we learned 📚
This journey taught us the incredible power of merging education with technology. We learned that when you make learning fun, the potential for engagement and retention skyrockets. The challenges of VR development also taught us a great deal about patience, perseverance, and the importance of a user-centric design approach. BooLingo has been a profound learning experience in itself, teaching us that the sky's the limit when passion meets innovation. 🛸
## What's next for BooLingo 🚀
The adventure is just beginning! We envision BooLingo expanding its universe to include more subjects, languages, and historical epochs, creating a limitless educational playground. We’re also exploring social features, allowing learners to team up or compete in knowledge quests. Our dream is to see BooLingo in classrooms and homes worldwide, making learning an adventure that everyone looks forward to. Join us on this exhilarating journey to make education thrillingly unforgettable! Let's change the world, one quest at a time. 🌍💫 | losing |
## Inspiration
After years of teaching methods remaining constant, technology has not yet infiltrated the classroom to its full potential. One day in class, it occurred to us that there must be a correlation between students behaviour in classrooms and their level of comprehension.
## What it does
We leveraged Apple's existing API's around facial detection and combined it with the newly added Core ML features to track students emotions based on their facial queues. The app can follow and analyze up to ~ ten students and provide information in real time using our dashboard.
## How we built it
The iOS app integrated Apple's Core ML framework to run a [CNN](https://www.openu.ac.il/home/hassner/projects/cnn_emotions/) to detect people's emotions from facial queues. The model was then used in combination with Apple's Vision API to identify and extract student's face's. This data was then propagated to Firebase for it to be analyzed and displayed on a dashboard in real time.
## Challenges we ran into
Throughout this project, there were several issues regarding how to improve the accuracy of the facial results. Furthermore, there were issues regarding how to properly extract and track users throughout the length of the session. As for the dashboard, we ran into problems around how to display data in real time.
## Accomplishments that we're proud of
We are proud of the fact that we were able to build such a real-time solution. However, we are happy to have met such a great group of people to have worked with.
## What we learned
Ozzie learnt more regarding CoreML and Vision frameworks.
Haider gained more experience with front-end development as well as working on a team.
Nakul gained experience with real-time graphing as well as helped developed the dashboard.
## What's next for Flatline
In the future, Flatline could grow it's dashboard features to provide more insight for the teachers. Also, the accuracy of the results could be improved by training a model to detect emotions that are more closely related to learning and student's behaviours. | ## Inspiration
There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life.
## What it does
We utilized a smartphone camera to analyze the surrounding and warn visually impaired people about obstacles that were in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people that the user interacted with and we stored their name and facial attributes that can be recalled later. An Alzheimer's patient can utilize the face recognition software to remind the patient of who the person is, and when they last saw him.
## How we built it
We built our app around **Azure's APIs**, we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it.
## Challenges we ran into
Through the process of coding and developing our idea we ran into several technical difficulties. Our first challenge was to design a simple UI, so that the visually impaired people could effectively use it without getting confused. The next challenge we ran into was attempting to grab visual feed from the camera, and running them fast enough through the Azure services to get a quick response. Another challenging task that we had to accomplish was to create and train our own neural network with relevant data.
## Accomplishments that we're proud of
We are proud of several accomplishments throughout our app. First, we are especially proud of setting up a clean UI with two gestures, and voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network, that was capable of identifying faces and objects.
## What we learned
We learned how to implement **Azure Custom Vision and Azure Face APIs** into **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for the rest of us to learn about neural networking.
## What's next for BlindSpot
In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device. | ## Inspiration
In our daily lives we noticed that we were spending a lot of time shopping online and weighing many options. When I was looking to buy a mechanical keyboard last month, I spent over two days visiting countless sites to find a keyboard to buy. During this process, it was frustrating to keep track of everything I had looked at and compare them efficiently.
That’s why we built snip. It’s an ultra easy to use tool that extracts important features from your screenshots using AI, and automatically tabulates your data for easy visualization and comparison!
## What it does
Whenever you decide you’re about to take a series of related screenshots that you would like to have saved and organized automatically (in one centralized place), you can create a “snip” session. Then, whenever you take a screenshot of anything, it will automatically get added to the session, relevant features will be extracted of whatever item you took a screenshot of, and will be automatically added to a table (much easier to view for comparison).
## How we built it
We used Tauri to create a desktop app with a Rust backend (to interact natively with the device to monitor its clipboard) and a React frontend (to create the user interface). We also used the shadcn UI component library and Tailwind CSS to enhance our design.
## Challenges we ran into
Our team is comprised primarily of roboticists, infra, or backend people, and so trying to create a visually pleasing UI with React, shadcn, and Tailwind CSS (all technologies that we’re not used to using) was quite difficult - we often ran into CSS conflicts, UI library conflicts, and random things not working because of the smallest syntax errors.
## Accomplishments that we're proud of
We were able to finish creating our hack and make it look like an actual product.
## What we learned
We learned to prefer to use technologies that give us more speed, rather than perhaps better code quality - for example, we decided to use typescript on the frontend instead of javascript, but all the errors due to typing rules made it quite frustrating to go fast. Also, we learned that if you want to build something challenging and finish it, you should work with technologies that you are familiar with - for example, we were not as familiar with React, but still decided to use it for a core portion of our project and that cost us a lot of precious development time (due to the learning curve).
## What's next for snip.
Provide more powerful control over how you can manipulate data, better AI features, and enhanced ability to extract data from snapshots (and also extract the site they came from). | partial |
## Inspiration
Road safety has become an escalating concern in recent years. According to Transport Canada, the number of collisions and casualties has risen over the past three years. Meanwhile, as AI technology advances rapidly, we identified a niche where we could leverage powerful AI to offer valuable advice and feedback to drivers of all levels, effectively promoting road safety.
## What it does
Our system uses Computer Vision (CV) techniques and gyroscope data to collect meaningful information about driving performance, such as whether the driver does shoulder checks on turns or drives while drowsy. The data is then passed to the backend and analyzed by a Large Language Model (LLM). When users want to review their performance, they can ask the LLM, which retrieves the relevant driving records and offers helpful advice.
Examples of potential use cases include supplementing driving lessons or exams for learners, encouraging concerned drivers to reinforce good driving habits, and providing an objective evaluation of professional driving services.
## How we built it
Our system consists of:
**a data collection script (Python, React Native),**
* Runs a CV algorithm (using Roboflow models) and streams the output, together with the video and gyroscope data from the phone, to the frontend web app (see the streaming sketch after this list)
**a frontend web app (React),**
* Receives and displays driving performance data
* Provides an interface to review the driving records and interact with the LLM to get valuable insights
* Authenticates user logins with Auth0
**a backend (Flask),**
* Connects to Google Gemini for LLM interactions
* Transfers data and LLM outputs between the frontend and the database, using Vector Search to extract relevant trip records as context for the LLM to generate advice (see the retrieval sketch after this list)
**a database (MongoDB Atlas),**
* Saves and transfers metadata and analysis information of each driving trip
* Configured to support Vector Search
**a cloud storage service (Google Cloud Storage)**
* Hosts driving videos, which are larger media files
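To make the pipeline concrete, below is a minimal sketch of how the data collection script could stream detection results and gyroscope readings to the web app over a WebSocket. The `detect_events` helper, the socket URL, and the message format are hypothetical placeholders; the real script runs Roboflow models on the camera frames and merges in the gyroscope feed from the phone.

```python
import asyncio
import base64
import json
import time

import cv2          # pip install opencv-python
import websockets   # pip install websockets

WS_URL = "ws://localhost:8765"   # hypothetical web-app socket endpoint


def detect_events(frame):
    """Placeholder for the Roboflow-based CV step.

    The real script runs the hosted Roboflow models here and returns flags
    such as shoulder-check detected / eyes closed.
    """
    return {"shoulder_check": False, "eyes_closed": False}


async def stream(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    async with websockets.connect(WS_URL) as ws:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            events = detect_events(frame)
            # JPEG-encode the frame so it can travel as a JSON-safe string.
            _, jpeg = cv2.imencode(".jpg", frame)
            payload = {
                "timestamp": time.time(),
                "events": events,
                "frame": base64.b64encode(jpeg.tobytes()).decode("ascii"),
                # Gyroscope readings arrive from the phone; stubbed here.
                "gyro": {"x": 0.0, "y": 0.0, "z": 0.0},
            }
            await ws.send(json.dumps(payload))
            await asyncio.sleep(0.1)   # ~10 messages per second
    cap.release()


if __name__ == "__main__":
    asyncio.run(stream())
```

Sending JPEG-encoded frames as base64 keeps every message JSON-friendly, which simplifies handling on the React side.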
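Below is a similarly hedged sketch of the backend retrieval step: embed the driver's question with Gemini, pull the most relevant trip records with a MongoDB Atlas `$vectorSearch` aggregation, and pass them to Gemini as context for advice. The database, collection, index, and field names are assumptions for illustration, not our exact configuration.

```python
import os

import google.generativeai as genai   # pip install google-generativeai
from pymongo import MongoClient        # pip install pymongo

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
client = MongoClient(os.environ["MONGODB_URI"])
trips = client["driveinsight"]["trips"]   # hypothetical db/collection names


def advise(question: str) -> str:
    # 1. Embed the driver's question with Gemini's embedding model.
    query_vector = genai.embed_content(
        model="models/text-embedding-004", content=question
    )["embedding"]

    # 2. Retrieve the most similar trip records via Atlas Vector Search.
    #    "trip_index" and "embedding" are assumed index/field names.
    pipeline = [
        {
            "$vectorSearch": {
                "index": "trip_index",
                "path": "embedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": 5,
            }
        },
        {"$project": {"_id": 0, "summary": 1, "events": 1, "date": 1}},
    ]
    records = list(trips.aggregate(pipeline))

    # 3. Ask Gemini for advice grounded in the retrieved records.
    model = genai.GenerativeModel("gemini-1.5-flash")
    prompt = (
        "You are a driving coach. Using these trip records, give specific, "
        f"constructive advice.\n\nRecords: {records}\n\nQuestion: {question}"
    )
    return model.generate_content(prompt).text
```

Keeping retrieval inside the aggregation pipeline lets Atlas handle the similarity ranking, so the Flask route only has to assemble the prompt.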
## Challenges we ran into
* Setting up WebSockets to connect individual components for real-time data transfer
* Configuring Auth0 to perform authentication correctly within the React app
* Deciding how to store videos (saving them to the database as BLOBs vs. using a paid cloud storage service)
## Accomplishments that we're proud of
* Built components that achieve their corresponding functionalities (identifying shoulder checks and closed eyes, interacting with the LLM, querying the database by vectors, etc.)
* Overcame or worked around errors that arose from using libraries and SDKs
## What we learned
* Collaborating as a small team to deliver quickly
* Utilizing WebSockets for real-time data transfer
* Utilizing Vector Search in MongoDB Atlas to query documents
* Utilizing Auth0 for authentication
* Connecting various programs and technologies to construct an end-to-end pipeline
## What's next for DriveInsight
* Consulting domain experts (drivers, driving examiners, etc.) for more driving habits to monitor
* Fine-tuning specialized LLMs for even more robust and insightful responses | ## Inspiration
Urban areas are increasingly polluted by traffic, and although people make an effort to ride share, cars still produce a large portion of urban carbon dioxide emissions. App-based directions offer an option between distance-optimized or time-optimized routes, but never present the possibility of an eco-friendly route. In addition, many people want to be more green, but don't know what factors most impact the carbon footprint of their vehicle.
## What it does
Our interface provides an information-based solution. By synthesizing millions of very precise data points from Ford's OpenXC platform, we can isolate factors like idling, tailgating, aggressive driving, and analyze their impacts on fuel efficiency. We strip the noise in raw data to find the important trends and present them in clear visualizations.
## How we built it
The bulk of data analysis is handled by python which processes the raw json data files. Then, we use pandas to streamline data into tables, which are easier to handle and filter. Given the extreme precision of the data points (records in fractions of a second), the data was initially very difficult to interpret. With the help of numpy, we were able to efficiently calculate MPG figures and overlay additional trends on several visuals.
## Challenges we ran into
Data points for specific vehicle attributes are taken very irregularly and do not match up at the same timestamps. The user's interaction with their car's usage - negative fuel usage figures when tanks were filled. Column names in the data were inconsistent across sets (e.g. Odometer vs Fine Odometer Since Restart). Plenty of files had missing data for certain attributes, resulting in a scattering of NaNs across the dataset. Given this, we had to be clever with data filtering and condense the data so important metrics could be compared.
## Accomplishments that we're proud of
Beautiful visuals indicating clear trends in data. Clean filtering of extremely noisy raw data. A fun frontend that's visually appealing to the user.
## What we learned
Big data is not as easy as running a few functions on data that's simply downloaded from a database. Much of analytics is the filtration and data handling, and trends may often be surprising.
## What's next for MPGreen
We could integrate Maps and Directions APIs to find more eco friendly routes in order to directly provide the user with ways to reduce their carbon footprint. As it stands, our system is a strong tool to view and share information, but has potential to actually impact the environment. | ## Inspiration
Approximately 107.4 million Americans choose walking as a regular mode of travel for both social and work purposes. In 2015, about 70,000 pedestrians were injured in motor vehicle accidents while over 5,300 resulted in fatalities. Catastrophic accidents as such are usually caused by negligence or inattentiveness from the driver.
With the help of **Computer** **Vision** and **Machine** **Learning**, we created a tool that assists the driver when it comes to maintaining attention and being aware of his/her surroundings and any nearby pedestrians. Our goal is to create a product that provides social good and potentially save lives.
## What it does
We created **SurroundWatch** which assists with detecting nearby pedestrians and notifying the driver. The driver can choose to attach his/her phone to the dashboard, click start on the simple web application and **SurroundWatch** processes the live video feed sending notifications to the driver in the form of audio or visual cues when he/she is in danger of hitting a pedestrian. Since we designed it as an API, it can be incorporated into various ridesharing and navigation applications such as Uber and Google Maps.
## How we built it
Object detection and image processing was done using **OpenCV** and **YOLO-9000**. A web app that can run on both Android and iOS was built using **React**, **JavaScript**, and **Expo.io**. For the backend, **Flask** and **Heroku** was used. **Node.js** was used as the realtime environment.
## Challenges we ran into
We struggled with getting the backend and frontend to transmit information to one another along with converting the images to base64 to send as a POST request. We encountered a few hiccups in terms of node.js, ubuntu and react crashes, but we're successfully able to resolve them. Being able to stream live video feed was difficult given the limited bandwith, therefore, we resulted to sending images every 1000 ms.
## Accomplishments that we're proud of
We were able to process and detect images using YOLO-9000 and OpenCV, send image information using the React app and communicate between the front end and the Heroku/Flask backend components of our project. However, we are most excited to have built and shipped meaningful code that is meant to provide social good and potentially save lives.
## What we learned
We learned the basics of creating dynamic web apps using React and Expo along with passing information to a server where processing can take place. Our team work and hacking skills definitely improved and have made us more adept at building software products.
## What's next for SurroundWatch
Next step for SurroundWatch would be to offset the processing to AWS or Google Cloud Platform to improve speed of real-time image processing. We'd also like to create a demo site to allow users to see the power of SurroundWatch. Further improvements include improving our backend, setting up real-time image processing for live video streams over AWS or Google Cloud Platform. | partial |