## 💡Inspiration💡
Hate crimes and street violence have risen sharply in recent years, and the harm does not end there. Many oppressed groups face physical and emotional racial hostility, and these crimes hurt not only the victims but also people who share a similar identity. Beyond racial identity, people of all genders report feeling more anxious about venturing outside due to higher crime rates. After witnessing this upsurge in urban violence and fear of the outside world, we developed Walk2gether, an app that addresses the feeling of being unsafe when venturing out alone and fundamentally alters the way we travel.
## 🏗What it does🏗
It offers a remedy to the stress that comes with walking outside, especially alone. The app lets users travel with friends, which we found lessens anxiety, and surfaces information about local criminal activity to help people make informed travel decisions. Users can also adjust settings to be warned about specific situations, and heat-map technology displays red-alert zones in real time so they can chart their route comfortably. Its campaign for social change is closely tied to our desire to see more people, particularly women, go outside without being burdened by fear or the constant need to monitor their surroundings.
## 🔥How we built it🔥
How can we make women feel more secure while roaming about their city? How can we bring together student travellers for a safer journey? These questions helped us outline the issues we wanted to address as we moved into the design stage. We then created a website using HTML/CSS/JS and used Figma to prepare the prototype. Auth0 provides multi-factor authentication, and CircleCI lets us deploy the website through a smooth, easy-to-verify pipeline. AssemblyAI handles speech transcription and works together with Twilio to message and connect friends for the journey to a destination; Twilio SMS is also used for alerts and notification ratings. We also use Coil for memberships through Web Monetization and for donations that fund better safe-route facilities.
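As a rough illustration of the alert path, the sketch below sends a red-alert-zone warning over Twilio SMS from Python; the credentials, phone numbers, and zone name are placeholders, and this is not the team's actual code.

```python
# Minimal sketch: sending a red-alert-zone warning over Twilio SMS.
# The account SID, auth token, and phone numbers below are placeholders.
from twilio.rest import Client

def send_zone_alert(account_sid: str, auth_token: str,
                    from_number: str, to_number: str, zone_name: str) -> str:
    """Send an SMS warning that the user's route passes through a red-alert zone."""
    client = Client(account_sid, auth_token)
    message = client.messages.create(
        body=(f"Walk2gether alert: your route passes through {zone_name}, "
              "a current red-alert zone. Consider an alternate path."),
        from_=from_number,
        to=to_number,
    )
    return message.sid  # Twilio's identifier for the queued message

# Example usage (placeholders):
# send_zone_alert("ACxxxx", "token", "+15550100", "+15550101", "Main St & 4th Ave")
```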
## 🛑 Challenges we ran into🛑
The main problem we encountered was market viability - there are many safety and crime reporting apps on the app store. Many of them, however, were either paid, had poor user interfaces, or did not plan routes based on reported incidents. Another challenge was narrowing down the solution: there were many additional features we could have included, but we had to pick only the handful that were most critical to get started with the product.
Our team also began working on the hack only a day before the deadline, and we ran into some difficulties while tackling numerous problems. Learning how to work with various technologies came with a learning curve. We have ideas for other features that we'd like to include in the future, but we wanted to make sure that what we had was production-ready and had a pleasant user experience first.
## 🏆Accomplishments that we're proud of: 🏆
We arrived at a solution to this problem and created an app that is viable and could be widely used by women, college students, and any other frequent walkers!
We also completed the front end and back end within the tight deadlines we were given, and we are quite pleased with the final outcome. We are also proud that we learned so many technologies and completed the whole project with just two members on the team.
## What we learned
We discovered critical safety trends and pain points that our product may address. Over the last few years, urban centres have seen a significant increase in hate crimes and street violence, and the internet has made individuals feel even more isolated.
## 💭What's next for Walk2gether💭
In the coming days we plan to add detailed crime mapping and offer additional context to help users learn about the crimes being reported.
## Inspiration
I dreamed about the day we would use vaccine passports to travel long before the mRNA vaccines even reached clinical trials. I was just another individual, fortunate enough to experience stability during an unstable time, with a home to feel safe in. It was only when I started to think deeper about the effects of travel, or rather the lack thereof, that I remembered the children I encountered in Thailand and Myanmar who relied on tourists to earn $1 USD a day from selling handmade bracelets. 1 in 10 jobs are supported by the tourism industry, providing livelihoods for many millions of people in both developing and developed economies. COVID has cost global tourism $1.2 trillion USD, and this number will continue to rise the longer people are apprehensive about travelling due to safety concerns. Although this project is far from perfect, it attempts to tackle vaccine passports in a universal manner in hopes of buying us time to mitigate tragic repercussions caused by the pandemic.
## What it does
* Log in with your email and generate a personalised interface containing vaccine data for you and your family (or whoever you're travelling with)
* A universal QR code generated from the information you enter
* A pre-travel to-do list to increase comfort and organisation
* A travel itinerary and calendar synced to the app
* Country-specific COVID-related information (quarantine measures, mask mandates, etc.) consolidated in one place
* A tourism section with activities to do in each city
## How we built it
The project was built using Google QR-code APIs and Glideapps.
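The write-up only names the tools, so as a stand-in, here is a minimal Python sketch that packs vaccine-record fields into a QR code using the open-source `qrcode` library rather than the Google API the project used; the payload fields are assumptions.

```python
# Illustrative stand-in: encoding vaccine-record fields into a QR code with the
# open-source `qrcode` library (pip install "qrcode[pil]"); not the Google API
# the project actually used, and the payload fields are invented.
import json
import qrcode

record = {
    "name": "Traveller Name",
    "vaccine": "mRNA-1273",
    "doses": 2,
    "last_dose_date": "2021-06-01",
}

img = qrcode.make(json.dumps(record))  # returns a PIL image of the QR code
img.save("vaccine_pass.png")
```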
## Challenges we ran into
I first proposed this idea to my first team, and it was very well received. I was excited for the project; however, little did I know that many individuals would leave for a multitude of reasons. This was not the experience I envisioned when I signed up for my first hackathon, as I had minimal knowledge of coding and volunteered to work mostly on the pitching aspect of the project. However, flying solo was incredibly rewarding, and visualising the final project containing all the features I wanted gave me lots of satisfaction. The list of challenges is long, ranging from time-crunching to figuring out how QR code APIs work, but in the end I learned an incredible amount with the help of Google.
## Accomplishments that we're proud of
I am proud of the app I produced using Glideapps. Although I was unable to include more intricate features in the app as I had hoped, I believe that the execution was solid, and I’m proud of the purpose my application held and conveyed.
## What we learned
I learned that a trio of resilience, working hard and working smart will get you to places you never thought you could reach. Challenging yourself and continuing to put one foot in front of the other during the most adverse times will definitely show you what you’re made of and what you’re capable of achieving. This is definitely the first of many Hackathons I hope to attend and I’m thankful for all the technical as well as soft skills I have acquired from this experience.
## What's next for FlightBAE
Utilising GeoTab or other geographical software to create a logistical approach to solving the distribution of oxygen in India, as well as other pressing and unaddressed bottlenecks that exist within healthcare. I would also love to pursue a tech-related solution to vaccine inequity, as it is a current reality for too many.
💡
## Inspiration
49 percent of women reported feeling unsafe walking alone after nightfall according to the Office for National Statistics (ONS). In light of recent sexual assault and harassment incidents in the London, Ontario and Western community, women now feel unsafe travelling alone more than ever.
Light My Way helps women navigate their travel through the safest and most well-lit path. Women should feel safe walking home from school, going out to exercise, or going to new locations, and taking routes with well-lit areas is an important precaution to ensure safe travel. It is essential to always be aware of your surroundings and take safety precautions no matter where and when you walk alone.
🔎
## What it does
Light My Way visualizes London, Ontario’s street-lighting data and recent nearby crimes in order to calculate the safest path for the user to take. Upon opening the app, the user can access “Maps” and search for their destination or drop a pin on a location. The app displays the safest route available and prompts the user to “Send Location”, which sends the path the user is taking to three contacts via messages. The user can then tap the Google Maps button in the lower corner to switch over to the Google Maps app and navigate the given path. In the “Alarm” tab, the user has access to emergency alert sounds they can use when in danger; when tapped, the sounds play at a loud volume to alert nearby people that help is needed.
🔨
## How we built it
We used React, JavaScript, and Android Studio to make the app. React Native Maps and Directions, backed by Google Cloud APIs, handle user navigation. We imported GeoJSON street-lighting data from the City of London's open data website to visualize street lights on the map, and used Figma for UX/UI design.
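The app itself is built with React, but the core routing idea (favouring well-lit paths) can be sketched language-agnostically. The Python sketch below scores a candidate route by how many street lights from the City of London GeoJSON fall near its points; the GeoJSON field names, radius, and scoring rule are assumptions, not the team's implementation.

```python
# Illustration of the scoring idea only (not the team's React code): count how
# many street lights from the GeoJSON fall within a small radius of each point
# on a candidate route. Field names and the 30 m radius are assumptions.
import json
import math

def distance_m(a, b):
    """Rough planar distance in metres between two (lon, lat) pairs."""
    dx = (a[0] - b[0]) * 111_320 * math.cos(math.radians((a[1] + b[1]) / 2))
    dy = (a[1] - b[1]) * 110_540
    return math.hypot(dx, dy)

def lit_score(route_points, geojson_path, radius_m=30):
    with open(geojson_path) as f:
        lights = [feat["geometry"]["coordinates"]
                  for feat in json.load(f)["features"]]
    # A route scores higher when more of its points have a light nearby.
    return sum(
        any(distance_m(p, light) <= radius_m for light in lights)
        for p in route_points
    ) / max(len(route_points), 1)

# Pick the candidate route with the highest lit_score().
```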
🥇
## Challenges we ran into
We had a lot of trouble visualizing the large amount of GeoJSON street-light data we exported. We overcame this by learning about useful React mapping functions that made marking the locations easier.
⚠️
## Accomplishments that we're proud of
We are proud of making an app that could genuinely help women be safer when walking alone. It was our first time using and learning React, as well as using Google Maps, so we are proud of our unique implementation using real data from the City of London. It was also our first time doing UX/UI in Figma, and we are pleased with the results and visuals of our project.
🧠
## What we learned
We learned how to use React, how to implement Google Cloud APIs, and how to import GeoJSON files into our data visualization. Through our research, we also became more aware of the issue women face daily of feeling unsafe walking alone.
💭
## What's next for Light My Way
We hope to expand the app to include more data on crimes, as well as expand to cities surrounding London. We want to continue developing additional safety features in the app, as well as a chatting feature with the close contacts of the user.
## Inspiration
With the increase in Covid-19 cases, the healthcare sector has experienced a shortage of PPE supplies. Many hospitals have turned to the public for donations. However, people who are willing to donate may not know what items are needed, which hospitals need them urgently, or even how to donate.
## What it does
Corona Helping Hands is a real-time website that sources data directly from hospitals and ranks their needs based on bed capacity and urgency of necessary items. An interested donor can visit the website and see the hospitals in their area that are accepting donations, which specific items they need, and how to donate.
## How we built it
We built the donation web application using:
1) HTML/CSS/Bootstrap (front-end web development)
2) Flask (back-end web framework)
3) Python (back-end language)
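As a minimal illustration of the Flask piece, the sketch below exposes a ranked list of hospitals; the hospital records and ranking weights are invented for the example.

```python
# Minimal Flask sketch of a ranking endpoint; the hospital records and the
# ranking rule (urgency first, then bed capacity) are made up for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

HOSPITALS = [
    {"name": "General Hospital", "beds": 900, "urgency": 3, "needs": ["N95 masks", "gowns"]},
    {"name": "City Medical Center", "beds": 450, "urgency": 5, "needs": ["face shields"]},
]

@app.route("/hospitals")
def ranked_hospitals():
    # Rank by urgency first, then by bed capacity as a proxy for size of need.
    ranked = sorted(HOSPITALS, key=lambda h: (h["urgency"], h["beds"]), reverse=True)
    return jsonify(ranked)

if __name__ == "__main__":
    app.run(debug=True)
```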
## Challenges we ran into
We ran into issues integrating our map with the HTML page. Taking data and displaying it on the web application was not easy at first, but we were able to pull it off in the end.
## Accomplishments that we're proud of
None of us had a lot of experience in frontend web development, so that was challenging for all of us. However, we were able to complete a web application by the end of this hackathon which we are all proud of. We are also proud of creating a platform that can help users help hospitals in need and give them an easy way to figure out how to donate.
## What we learned
This was most of our first times working with web development, so we learned a lot on that aspect of the project. We also learned how to integrate an API with our project to show real-time data.
## What's next for Corona Helping Hands
We hope to further improve our web application by integrating data from across the nation. We would also like to improve the app's UI/UX to enhance the user experience.
## We wanted to help the invisible people of Toronto
Many homeless people do not have identification and often have a hard time keeping it because their belongings get stolen. This prevents many homeless people from getting the care they need and from accessing resources that an ordinary person never has to think about.
**How**
Our application would be set up as booths or kiosks within pharmacies or clinics so homeless people can be verified easily.
We wanted patient information to be secure and tamper-proof, so we used the Ethereum blockchain: we compare what is stored on the blockchain with the patient's information in our database to ensure they are the same; otherwise we know there has been an edit or a breach.
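One way to implement the comparison described above is to store a hash of each record on-chain and check it against a hash of the database record. The sketch below shows that check in Python; `fetch_onchain_hash` is a stub standing in for the project's actual Ether.js/smart-contract read.

```python
# Sketch of the tamper check: hash the database record and compare it with the
# hash previously written to the blockchain. `fetch_onchain_hash` is a stub.
import hashlib
import json

def record_hash(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def fetch_onchain_hash(patient_id: str) -> str:
    """Placeholder for reading the stored hash from the smart contract."""
    raise NotImplementedError

def is_untampered(patient_id: str, db_record: dict) -> bool:
    return record_hash(db_record) == fetch_onchain_hash(patient_id)
```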
**Impact**
This would solve problems such as homeless people getting the prescriptions they need at local clinics and pharmacies. Shelters would also benefit, as our application can track a person's age, medical visits, allergies, and past medical history.
**Technologies**
For our facial recognition, we used FaceNet and TensorFlow to train our models.
For our back end, we used Python/Flask to communicate with FaceNet and Node.js to handle our site's routes.
Ethers.js handled most of the back-end code that deals with the smart contract for our blockchain.
We used Vue.js for our front end to style our site.
## Inspiration ⚡️
Given the ongoing effects of COVID-19, we know lots of people don't want to spend more time than necessary in a hospital. We wanted to be able to skip a large portion of the waiting process and fill out the forms ahead of time from the comfort of our homes, so we came up with HopiBot.
## What it does 📜
HopiBot is an accessible, easy-to-use chatbot designed to make the process of admitting patients more efficient, transforming basic in-person processes into digital ones and saving not only your time but also that of doctors and nurses. A patient uses the bot to fill out their personal information, and once they submit, the bot uses the provided mobile phone number to send a text message with the current wait time until check-in at the hospital nearest to them. As pandemic measures ease, HopiBot lets hospitals socially distance non-emergency patients, significantly reducing exposure and time spent around others, since people can enter the hospital at or close to their check-in time. This also reduces the risk of exposing other hospital patients, who may be immunocompromised or more vulnerable, to COVID-19 and other transmissible airborne illnesses.
## How we built it 🛠
We built our project using HTML, CSS, JS, Flask, Bootstrap, Twilio API, Google Maps API (Geocoding and Google Places), and SQLAlchemy. HTML, CSS/Bootstrap, and JS were used to create the main interface. Flask was used to create the form functions and SQL database. The Twilio API was used to send messages to the patient after submitting the form. The Google Maps API was used to send a Google Maps link within the text message designating the nearest hospital.
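As a hedged sketch of the confirmation text, the snippet below sends an SMS with the estimated wait time and a Google Maps link using Twilio; the credentials, phone numbers, and coordinates are placeholders, and the Maps link uses the public `maps/search` URL format.

```python
# Sketch of the confirmation text: an SMS with the estimated wait and a Google
# Maps link to the nearest hospital. Credentials, numbers, and coordinates are
# placeholders, not the project's actual configuration.
from twilio.rest import Client

def send_checkin_text(sid, token, from_number, to_number,
                      wait_minutes, hospital_lat, hospital_lng):
    maps_link = (
        f"https://www.google.com/maps/search/?api=1&query={hospital_lat},{hospital_lng}"
    )
    body = (f"HopiBot: your estimated wait is {wait_minutes} minutes. "
            f"Nearest hospital: {maps_link}")
    Client(sid, token).messages.create(body=body, from_=from_number, to=to_number)
```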
## Challenges we ran into ⛈
* Trying to understand and use Flask for the first time
* How to submit a form and validate at each step without refreshing the page
* Using new APIs
* Understanding how to use an SQL database from Flask
* Breaking down a complex project and building it piece by piece
## Accomplishments that we're proud of 🏅
* Getting the form to work after much deliberation of its execution
* Being able to store and retrieve data from an SQL database for the first time
* Expanding our hackathon portfolio with a completely different project theme
* Finishing the project within a tight time frame
* Using Flask, the Twilio SMS API, and the Google Maps API for the first time
## What we learned 🧠
Through this project, we were able to learn how to break a larger-scale project down into manageable tasks that could be done in a shorter time frame. We also learned how to use Flask, the Twilio API, and the Google Maps API for the first time, considering that it was very new to all of us and this was the first time we used them at all. Finally, we learned a lot about SQL databases made in Flask and how we could store and retrieve data, and even try to present it so that it could be easily read and understood.
## What's next for HopiBot ⏰
* Since we have created the user side, we would like to create a hospital side to the program that can take information from the database and present all the patients to them visually.
* We would like to have a stronger validation system for the form to prevent crashes.
* We would like to implement an algorithm that can more accurately predict a person’s waiting time by accounting for the time it would take to get to the hospital and the time a patient would spend waiting before their turn.
* We would like to create an AI that is able to analyze a patient database and able to predict wait times based on patient volume and appointment type.
* Along with a hospital side, we would like to send update messages that warns patients when they are approaching the time of their check-in. | partial |
## Inspiration
After learning about NLP and Cohere, we were inspired to explore its capabilities and decided to apply it to a more medical-oriented field. We realized that people prefer the internet over tediously calling somebody and waiting on long hold times, so we designed an alternative to the 811 hotline. We believed this would not only help those with speech impediments but also aid the health industry by easing the load on its staff.
## What it does
We designed a web application where the user describes how they are feeling (as free text). The input is sent to our web server, which runs the Cohere-powered Python application; the server returns the most probable illness predicted by the NLP model, along with its confidence percentage, to be displayed in the web application.
## How we built it
We built the website itself using HTML, CSS, and JavaScript. We then imported 100 training examples about symptoms for the natural language processing model to learn from, exported the model as Python code, and deployed it as a Flask microframework app on DigitalOcean's cloud platform so we could connect it to our website. This successfully connected our front end and back end.
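For illustration, a symptom-classification call against Cohere's classify endpoint could look roughly like the Python below; the example symptoms and labels are invented, and the exact request and response shapes may differ between API versions.

```python
# Hedged sketch of a symptom classification call to Cohere's classify endpoint
# over plain HTTP. The symptoms/labels are invented, and request/response
# details may differ between API versions.
import requests

def classify_symptoms(api_key: str, text: str):
    resp = requests.post(
        "https://api.cohere.ai/v1/classify",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "inputs": [text],
            "examples": [
                {"text": "sore throat and runny nose", "label": "common cold"},
                {"text": "congestion and mild cough for two days", "label": "common cold"},
                {"text": "fever, dry cough, and loss of smell", "label": "covid-19"},
                {"text": "high temperature and trouble breathing", "label": "covid-19"},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    top = resp.json()["classifications"][0]
    return top["prediction"], top["confidence"]
```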
## Challenges we ran into
We ran into many challenges, as we were all very inexperienced with Flask, Cohere's NLP models, professional web development, and Wix (which we tried very hard to work with for the first half of the hackathon). Three of us are first- and second-years, and half of our team hadn't been to a hackathon before. It was a very stressful 24 hours in which we worked very hard. We were also limited by Cohere's free tier of 100 training examples, which kept our NLP model from being as accurate as we wanted it to be.
## Accomplishments that we're proud of
We're very proud of the immense progress we made after giving up on hosting our website on Wix. Despite losing more than a third of our time, we still managed to create a nice web app, successfully used Cohere's NLP model, and, most notably, connected our front end and back end using a Flask microframework and a cloud-based server. These were all things outside of our comfort zone and provided us with many learning opportunities.
## What we learned
We learned a tremendous amount during this hackathon. We became more skilled with flexbox to create a more professional website, and we learned how to use Flask to connect our Python application's data with our website.
## What's next for TXT811
We believe the next step is to improve our web development skills to create an even more professional website, and to train our NLP model to be more accurate in its diagnoses as well as expand what it can diagnose so it reaches a wider audience of patients. While we don't claim it can fully aid in professional diagnosis (implying so would be dangerous), it is definitely an efficient tool for pointing out warning signs and pushing the general public to reach out before their symptoms get worse.
## Inspiration
As we all know the world has come to a halt in the last couple of years. Our motive behind this project was to help people come out of their shells and express themselves. Connecting various people around the world and making them feel that they are not the only ones fighting this battle was our main objective.
## What it does
People share their thought processes by speaking, and our application identifies the problem the speaker is facing and connects them to a specialist in that particular domain so their problem can be resolved. There is also a **Group Chat** option available where people facing similar issues can discuss their problems among themselves.
For example, if our application identifies that the topics spoken by the speaker are related to mental health, then it connects them to a specialist in the mental health field and also the user has an option to get into a group discussion which contains people who are also discussing mental health.
## How we built it
The front end of the project was built using HTML, CSS, JavaScript, and Bootstrap. The back end was written exclusively in Python using the Django framework. We integrated the **Assembly AI** transcription code, built with Assembly AI's functions, into our back end and succeeded in creating a fully functional web application within 36 hours.
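A simplified sketch of the routing step is shown below: once the Assembly AI transcription and topic step has produced topic labels for what the speaker said, they are mapped to a specialist and a matching group chat. The label names and mapping are assumptions, not the project's actual configuration.

```python
# Simplified illustration of the routing step. The topic labels are assumed to
# come from the transcription/topic-detection stage; this mapping is invented.
TOPIC_TO_SPECIALIST = {
    "mental health": "Mental health counsellor",
    "addiction": "Addiction support specialist",
    "grief": "Grief counsellor",
}

def route_speaker(detected_topics: list[str]) -> tuple[str, str]:
    """Return (specialist, group chat) for the first recognized topic."""
    for topic in detected_topics:
        specialist = TOPIC_TO_SPECIALIST.get(topic.lower())
        if specialist:
            return specialist, f"{topic.title()} group chat"
    return "General counsellor", "General support group chat"

# route_speaker(["Mental Health"])
# -> ("Mental health counsellor", "Mental Health group chat")
```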
## Challenges we ran into
The first challenge was understanding how Assembly AI works. None of us had used it before, and it took us time to understand how it worked. Integrating the audio portion into our application was also a major challenge. Apart from Assembly AI, we faced issues connecting our front end to the back end. Thanks to the internet and the mentors of **HackHarvard**, especially the **Assembly AI mentors**, who were very supportive and helped us resolve our errors.
## Accomplishments that we're proud of
Firstly, we are proud of creating a fully functional application within 36 hours, considering all the setbacks we had. We are also proud of building an application that can benefit society. Finally, and mainly, we are proud of exploring and learning new things, which is the very reason for hackathons.
## What we learned
We learned how working as a team can do wonders. Working under a time constraint is a really challenging task; time management, working under pressure, a never-give-up attitude, and solving errors we had never come across before are a few of the very important things we learned.
## Inspiration
My inspiration for creating CityBlitz was getting lost in Ottawa TWO SEPARATE TIMES on Friday. Since it was my first time in the city, I honestly didn't know how to use the O-Train or even whether Ottawa had buses in operation or not. I realized that if there existed an engaging game that could map hotspots in Ottawa and ways to get to them, I probably wouldn't have had such a hard time navigating on Friday. Plus, I wanted to actively contribute to sustainability, hence the trophies for climate charities pledge.
## What it does
CityBlitz is a top-down pixelated roleplay game that leads players on a journey through Ottawa, Canada. It encourages players to use critical thinking skills to solve problems and to familiarize themselves with navigation in a big city, all while using in-game rewards to make a positive difference in sustainability.
## How I built it
* Entirely coded using Javax swing
* All 250+ graphics assets are hand-drawn using Adobe Photoshop
* All original artwork
* In-game map layouts copy real-life street layouts
* Buildings like the parliament and the O-Train station are mimicked from real-life
* Elements like taxis and street signs also mimic those of Ottawa
## Challenges I ran into
Finding the right balance between a puzzle RPG being too difficult/unintuitive for players vs. spoonfeeding the players every solution was the hardest part of this project. This was overcome through trial and error as well as peer testing and feedback.
## Accomplishments that we're proud of
Over 250 original graphics, a fully functioning RPG, a sustainability feature, and overall gameplay.
## What I learned
I learned how to implement real-world elements like street layouts and transit systems into a game for users to familiarize themselves with the city in question. I also learned how to use GitHub and DevPost, how to create a repository, update git files, create a demo video, participate in a hackathon challenge, submit a hackathon project, and pitch a hackathon project.
## What's next for CityBlitz
Though Ottawa was the original map for CityBlitz, the game aims to create versions/maps centering around other major metropolitan areas like Toronto, New York City, Barcelona, Shanghai, and Mexico City.
In the future, CityBlitz aims to partner with these municipal governments to be publicly implemented in schools for kids to engage with, around the city for users to discover, and to be displayed on tourism platforms to attract people to the city in question.
## Inspiration
Helping people who are visually and/or hearing impaired to have better and safer interactions.
## What it does
The sensor beeps when the user comes too close to an object or too close to a hot beverage/food.
The sign language recognition system translates sign language from a hearing-impaired individual into English for a caregiver.
The glasses capture pictures of the surroundings and convert them into speech for a visually impaired user.
## How we built it
We used Microsoft Azure's Vision API, OpenCV, scikit-learn, NumPy, and Django + REST Framework to build the technology.
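A hedged sketch of the glasses' image-to-speech path is shown below: describe a captured image with Azure Computer Vision's `describe` REST endpoint, then speak the caption with the offline pyttsx3 engine. The endpoint version, region, and key handling are placeholders rather than the team's exact setup.

```python
# Hedged sketch: caption a captured image with Azure Computer Vision's
# "describe" REST endpoint, then speak the caption with pyttsx3.
# The endpoint URL/version and subscription key are placeholders.
import requests
import pyttsx3

def describe_and_speak(image_path: str, endpoint: str, key: str) -> str:
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{endpoint}/vision/v3.2/describe",
            headers={"Ocp-Apim-Subscription-Key": key,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
            timeout=30,
        )
    resp.raise_for_status()
    captions = resp.json()["description"]["captions"]
    text = captions[0]["text"] if captions else "No description available."
    engine = pyttsx3.init()   # offline text-to-speech
    engine.say(text)
    engine.runAndWait()
    return text
```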
## Challenges we ran into
Making sure the computer recognizes the different signs.
## Accomplishments that we're proud of
Making a glove with a sensor that helps the user navigate their path, recognizing sign language, and converting images of the surroundings into speech.
## What we learned
Different technologies such as Azure and OpenCV.
## What's next for Spectrum Vision
Hoping to gain more funding to increase the scale of the project. | ## Inspiration
There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state of the art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading).
The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads up display. We thought it would be a great idea to build onto a platform like this through adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case, is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.
## What it does
The user presses a button on the side of the glasses, which begins recording, and upon pressing the button again, recording ends. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a POST to a web server along with the uploaded file name. The web server downloads the video from Google Cloud, runs facial detection with a Haar cascade classifier, and feeds the result into a transformer network that transcribes the video. When processing finishes, a front-end web application is notified through socket communication, and the front end streams the video from Google Cloud while displaying the transcription output from the back-end server.
## How we built it
The hardware platform is a Raspberry Pi Zero interfaced with a Pi Camera. A Python script runs on the Raspberry Pi to listen for GPIO input, record video, upload to Google Cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python, and runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front end is implemented using React in JavaScript.
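The face-detection preprocessing step could look roughly like the Python below: pull frames from the uploaded video with OpenCV and crop the first detected face with a Haar cascade before frames are fed to the lip-reading model. The crop size and cascade parameters are assumptions.

```python
# Sketch of the face-detection preprocessing step: read frames from the video
# and crop the first detected face with OpenCV's bundled Haar cascade.
import cv2

def extract_face_crops(video_path: str, size=(160, 160)):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    crops = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces):
            x, y, w, h = faces[0]
            crops.append(cv2.resize(frame[y:y + h, x:x + w], size))
    cap.release()
    return crops
```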
## Challenges we ran into
* TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB-tethering with a mobile device
## Accomplishments that we're proud of
* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype
## What we learned
* How to setup a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to setup a web server through local host tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end
## What's next for Synviz
* With a stronger on-board battery, a 5G network connection, and a more computationally powerful compute server, we believe it will be possible to achieve near real-time transcription from a video feed and implement it on an existing platform like North's Focals, giving it promising business appeal
Yeezy Vision is a Chrome extension that lets you view the web from the eyes of the 21st century's greatest artist - Kanye West. Billions of people have been deprived of the life-changing perspective that Mr. West has cultivated. This has gone on for far too long. We had to make a change.
Our extension helps the common man view the world as Yeezus does. Inferior text is replaced with Kanye-based nouns, less important faces are replaced by Kanye's face and links are replaced by the year of Ye - #2020.
## How we built it
We split the functionality into four categories: image replacement, text and link replacement, the website, and the Chrome extension. Image and text replacement use HTTP POSTs and GETs against a Microsoft Azure server, the website is built with Bootstrap, and the replacement script is written in JavaScript.
## What's next for Yeezy Vision
Imma let you finish, but Yeezy Vision will be THE GREATEST CHROME EXTENSION OF ALL TIME. Change your world today at yeezyvision.tech.
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for backend, React for frontend.
## Challenges we ran into
Three beginner hackers! It was the first hackathon for three of us, and for two of those three it was our first time formally coding in a product setting. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and pinning down the specifics of the project (how to implement it, which audience and products to target, etc.).
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches.
Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
## Inspiration
SustainaPal is a project that was born out of a shared concern for the environment and a strong desire to make a difference. We were inspired by the urgent need to combat climate change and promote sustainable living. Seeing the increasing impact of human activities on the planet's health, we felt compelled to take action and contribute to a greener future.
## What it does
At its core, SustainaPal is a mobile application designed to empower individuals to make sustainable lifestyle choices. It serves as a friendly and informative companion on the journey to a more eco-conscious and environmentally responsible way of life. The app helps users understand the environmental impact of their daily choices, from transportation to energy consumption and waste management. With real-time climate projections and gamification elements, SustainaPal makes it fun and engaging to adopt sustainable habits.
## How we built it
The development of SustainaPal involved a multi-faceted approach, combining technology, data analysis, and user engagement. We opted for a React Native framework, and later incorporated Expo, to ensure the app's cross-platform compatibility. The project was structured with a focus on user experience, making it intuitive and accessible for users of all backgrounds.
We leveraged React Navigation and React Redux for managing the app's navigation and state management, making it easier for users to navigate and interact with the app's features. Data privacy and security were paramount, so robust measures were implemented to safeguard user information.
## Challenges we ran into
Throughout the project, we encountered several challenges. Integrating complex AI algorithms for climate projections required a significant amount of development effort. We also had to fine-tune the gamification elements to strike the right balance between making the app fun and motivating users to make eco-friendly choices.
Another challenge was ensuring offline access to essential features, as the app's user base could span areas with unreliable internet connectivity. We also grappled with providing a wide range of educational insights in a user-friendly format.
## Accomplishments that we're proud of
Despite the challenges, we're incredibly proud of what we've achieved with SustainaPal. The app successfully combines technology, data analysis, and user engagement to empower individuals to make a positive impact on the environment. We've created a user-friendly platform that not only informs users but also motivates them to take action.
Our gamification elements have been well-received, and users are enthusiastic about earning rewards for their eco-conscious choices. Additionally, the app's offline access and comprehensive library of sustainability resources have made it a valuable tool for users, regardless of their internet connectivity.
## What we learned
Developing SustainaPal has been a tremendous learning experience. We've gained insights into the complexities of AI algorithms for climate projections and the importance of user-friendly design. Data privacy and security have been areas where we've deepened our knowledge to ensure user trust.
We've also learned that small actions can lead to significant changes. The collective impact of individual choices is a powerful force in addressing environmental challenges. SustainaPal has taught us that education and motivation are key drivers for change.
## What's next for SustainaPal
The journey doesn't end with the current version of SustainaPal. In the future, we plan to further enhance the app's features and expand its reach. We aim to strengthen data privacy and security, offer multi-language support, and implement user support for a seamless experience.
SustainaPal will also continue to evolve with more integrations, such as wearable devices, customized recommendations, and options for users to offset their carbon footprint. We look forward to fostering partnerships with eco-friendly businesses and expanding our analytics and reporting capabilities for research and policy development.
Our vision for SustainaPal is to be a global movement, and we're excited to be on this journey towards a healthier planet. Together, we can make a lasting impact on the world.
## Inspiration
At the TreeHacks hackathon, amidst the buzz of innovation and collaboration, our team found inspiration in an unlikely place: the trash bins. As we observed the overflowing landfill bins in stark contrast to the scarcely filled recycling and compost bins, a collective realization dawned on us. It highlighted a glaring, often overlooked issue in our daily practices — the gap between our intentions for sustainability and the reality of our actions.
This observation sparked a profound internal questioning: "Do we really understand what sustainability is? What impact do seemingly minor unsustainable practices have on our planet? Do we, as students and future leaders, genuinely prioritize our planet's well-being?"
Motivated by this insight, we saw an opportunity for meaningful change right at the heart of the hackathon. The situation with the trash bins was not just a practical challenge; it was a symbol of a larger issue that resonated deeply with us. This led us to ask ourselves - how could we bridge the gap between sustainable intentions and actions, starting from our immediate environment at the hackathon to broader, everyday life scenarios?
## What it does
At the heart of every action lies a mindset, and it's this mindset that our platform - **Sustaino** seeks to transform. Sustaino goes beyond imparting sustainable literacy; it actively reshapes behavioral patterns, fostering a culture of responsible consumption.
By engaging with Sustaino, users learn not just the 'what' and 'why' of sustainability, but also the 'how.' From making informed choices about daily consumption to understanding the ripple effects of their actions on the global ecosystem, Sustaino guides users in building behavior that leaves a positive imprint on the planet.
The key features of our platform are:
**Personalized Behavioral Nudges**
* Provide personalized information and answers to users’ questions about sustainability
* Tailor content based on user interests, behaviors and interactions, making it more relevant and engaging.
**Incentivization through Gamification**
* Users earn reward points and badges for engaging in sustainable behaviors, like recycling or reducing energy consumption.
* Implementing systems where users can track their progress in sustainable practices, similar to leveling up in a game.
**Eco-Partner Rewards Network**
* Accumulated reward points can be redeemed for discounts and special offers from our eco-friendly partner brands.
* The marketplace of deals and offers is regularly updated, keeping the options fresh, relevant, and appealing to our users.
## How we built it
In order to provide personalized and accurate sustainability recommendations, we needed to construct our own dataset. We did this by selecting multiple internet forums and filtering all threads in search of quality, well-argued answers to sustainability problems. We combined information from three major fields relevant to our everyday lives (however, we did not limit ourselves to just these) - food, consumerism, and travel. For each of these, the most relevant posts were selected to arrive at the final, concatenated database. This was then used to fine-tune a LongLlama 7B large language model so that it could reasonably answer any questions or remarks from users while providing sensible and creative responses. The model was both fine-tuned and deployed through MonsterAPI and is available as open source. We also trained additional models (also with MonsterAPI) for cross-compatibility with less powerful systems.
The models can then be accessed by the web app (built in Django) through the Python API and provide both a response for the user and a classification and ranking of the user's actions, which drives the gamification component. Furthermore, historical data about the user's actions is fed into the model to provide a reasonable (and not too strict) approach to sustainability, focusing on educating rather than penalizing the person. This progress can also be viewed in a visually appealing frontend (built in JS/HTML/CSS) through sustainability levels!
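As a hedged sketch of how the Django backend could query the deployed model over HTTP, see below; the endpoint URL, payload fields, and response keys are placeholders, not MonsterAPI's actual interface.

```python
# Hedged sketch of the backend-to-model call. The endpoint URL, payload fields,
# and response keys below are placeholders, not MonsterAPI's real interface.
import requests

MODEL_ENDPOINT = "https://example-deployment.example.com/generate"  # placeholder

def ask_sustaino(user_message: str, history: list[str], api_key: str) -> str:
    prompt = "\n".join(history + [f"User: {user_message}", "Sustaino:"])
    resp = requests.post(
        MODEL_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "max_tokens": 256, "temperature": 0.7},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")  # placeholder response field
```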
## Challenges we ran into
We faced a key challenge in meeting the diverse needs of Gen Z: we quickly realized that a 'one size fits all' approach would not suffice for this audience. Our solution: a platform that customizes responses based on individual user interactions. This personal touch is crucial for an audience that values individuality and personalization.
Motivating users to consistently engage in sustainable practices was another hurdle. We recognized that while many individuals aspire to lead more sustainable lives, various challenges can impact their motivation. To address this, we introduced incentivization through gamification. This makes the sustainability journey enjoyable and aligns well with Gen Z's preference for interactive digital experiences.
We also identified that a major pain point for our users was the overwhelming amount of information available on sustainability. Our strategy: simplify and clarify. We focused on providing straightforward, digestible content, making sustainability more accessible and less intimidating.
## Accomplishments that we're proud of
1. We are especially proud of the dataset creation described in detail in "How we built it". We really put a lot of effort into selecting and later filtering, as well as cleaning the data so that it only concerns positive and realistic examples of sustainability. Interestingly, we were able to collect even more data than the model could compute in the limited time we had, so there is a lot of potential in future developments!
2. We are also extremely proud of how we were able to use MonsterAPI - not only to fine-tune our model, but also to deploy it, repurpose the available LLMs, and use them robustly in our web app, getting the most out of this technology!
3. Lastly, we are extremely happy with the collaboration inside our team - all coming from very different places and with diverse experience, we managed to identify a common problem and work towards a solution while maintaining a positive attitude. Coming here, we thought it was simply not possible for us (having met on Friday evening) to compete with pre-established teams, yet now we're certain we can!
## What we learned
We realized that although most people know what sustainability is and have a general overview of how to practice it, they usually do not implement it in their lives (or at least do so poorly). This might be because they do not realize the implications of their actions, do not fully understand the arguments behind some sustainable options, or are simply confused. Although surprising, this realization helped us create a better product tailored to all of these people, one that helps them live better, both for themselves and the planet.
We also learned that a slightly different outlook on a problem can really make a difference and help solve many challenges. However, understanding this would not have been possible without such a multidisciplinary team!
## What's next for Sustaino
When a group of passionate individuals comes together to build a project, the sky’s the limit. We are grateful for this opportunity, and have outlined the following steps to bring our vision for this product to its full potential:
**User Testing and Feedback**
* Conduct beta testing with a diverse group of users, particularly from Gen Z, to gather feedback on usability, content, and features.
* Use this feedback to refine and improve the application, focusing on user experience and the relevance of the sustainability content.
**Accessibility and Inclusivity**
* Ensure the app is accessible to a diverse range of users, considering voice integration, different languages, cultural contexts, and abilities.
* Include content and features that address a variety of sustainability challenges faced by different communities.
**Scalability and Expansion**
* Plan for scalability, both in terms of technological infrastructure and content.
* Consider expanding the application's reach to new regions or demographics over time.
## Inspiration
Saw a **huge Problem** in the current K-Pop Production system:
1) Tremendous Resource Waste (5+ Years and 100s of MUSD to train & manufacture a single group)
2) Gamble Factor (99.9% fail utterly)
**Why K-Pop?**
K-Pop isn’t just a music genre but a unique business model that builds on the economics of fandom through an integrated content business.
## Solution
**Interactive test-bed platform** that
1) Match-makes aspiring K-Pop Performers and music producers, giving more autonomy and direct access to potential collaborators and resources
2) Lets them test their marketability through a real-time, fan-powered voting system on the content they collaborate on
## Key Features
**1. Profile Dashboard**
: Aspiring performers can find amateur music producers and vice versa; self-production is enabled through direct access to the talent pool
**2. Matchmaking**
: Many aspiring performers have passion but don't know what to expect or where to look to pursue it. The platform provides an excellent place to start by matchmaking and recommending individual collaborators based on a preferences questionnaire
**3. Voting & Real-time Ranking**
: Provides a real-time test bed for gauging the market's reaction to the content performers publish by showing their ranking relative to other aspiring performers
## Business Model & Future Vision:
Narrowly: monetizing the votes by offering incentives such as special videos & exclusive merch
Broadly: gaining access to a high-supply future talent pool and becoming the first mover to filter out those proven to be successful, turning them into intellectual property
## How we built it
**All features written:**
* Sign-Up/Login
* Create Profile
* Upload Media
* User Voting
* Matching
* Ranking
* Liking Media
**Technology Used:**
Front-End:
* REACT
* HTML
* CSS
Back-End:
* JavaScript
* Node.js
* Typescript
* Convex Tools for backend, SQL database, File Storage
Design:
* Figma
## Challenges we ran into
* 2 People team, not pre-matched
* First time for both of us working with a non-developer <-> developer
## Accomplishments that we're proud of
* Have 10+ interested potential users already
* Could integrate a lot in time crunch: Matchmaking & Real Time Voting Integration | ## ✨ Try out our Figma prototype [here](https://www.figma.com/proto/UwwjiYZ9OcKncY7WsVZ3Rv/TED-Talks-Dashboard?node-id=3-712&node-type=frame&t=AiEglOMvzB3etz1W-1&scaling=min-zoom&content-scaling=fixed&page-id=0%3A1&starting-point-node-id=3%3A712)
## Inspiration
We all procrastinate a lot—especially our team. We didn't do anything the first night, changed our idea five times, and finally decided on an idea at 5 pm on Saturday. We decided it would be a good idea to create a project that would help solve procrastination.
## What it does
It tracks your open tabs on your computer and starts yelling at you if you're being unproductive.
## How we built it
We created a mockup in Figma and a backend in Python, and our AI is completely powered by Voiceflow.
## Challenges we ran into
* Voiceflow has very limited documentation for their Voice agents, especially if you want to use a third-party deepfake.
* We tried making a progressive web app that none of us had ever used before.
* ~~Connecting the frontend to the backend~~
## Accomplishments that we're proud of
We are most proud of our rockstar UI/UX designer, Winston Zhao. He was able to go from wireframe to a full-fledged high-fidelity mockup for two separate ideas ALL ON HIS OWN.
We are also really proud of our idea. As first-year university students living away from home, we don't have our parents to berate us when we aren't studying hard enough. TED is a little taste of our parent's love that we didn't know we needed.
## What we learned
One of the mistakes we made early on was not spending enough time formalizing and choosing an idea. We ended up spending the first half of the hackathon working on a completely different idea, only to scrap it and start from scratch. Luckily, we settled on an idea we liked and made TED. In the end, we learned the importance of choosing the right product and pivoting ideas when the necessity arises.
## What's next for TED
We would like to switch technologies to a more convenient format: from a progressive web app to a desktop widget. We would also like to add voice and personality customization options.
## Inspiration
Google Docs is a game-changing product, but it was never applied to music production. This hackathon is about community, so we decided to build a tool that lets people jam together, make music, and inspire one another to create.
## What it does
Allows people to join up in a "room" to work on tracks to make beats for a song.
## How we built it
We used Firebase for the server, database/storage, and session syncing, with JavaScript and jQuery for front-end functionality.
## Challenges we ran into
Trying to sync up users across several client browsers. Implementing a system to play synths and beats without causing issues for data uploads.
## Accomplishments that we're proud of
Getting clients to actually share changes in real time.
## What we learned
Firebase is very powerful for server side technology. Javascript can be very complex yet powerful when used properly.
## What's next for Band Together
Hopefully better user integration and further features such as exporting songs and more variety of instruments.
## Location
Table 12H | losing |
# PotholePal
## Pothole Filling Robot - UofTHacks VI
This repo is meant to enable the Pothole Pal proof of concept (POC) to detect changes in elevation on the road using an ultrasonic sensor thereby detecting potholes. This POC is to demonstrate the ability for a car or autonomous vehicle to drive over a surface and detect potholes in the real world.
Table of Contents
1.Purpose
2.Goals
3.Implementation
4.Future Prospects
**1.Purpose**
By analyzing city data and determining which aspects of city infrastructure could be improved, potholes stood out. Ever since cities started to grow and expand, potholes have plagued everyone who uses the roads. In Canada, 15.4% of Quebec roads were rated very poor according to StatsCan in 2018. In Toronto, 244,425 potholes were filled in 2018 alone. Damage due to potholes averages $377 per car per year. This is a problem that can be better addressed. To do that, we decided that by utilizing Internet of Things (IoT) sensors like the ultrasonic sensor, we can detect potholes using modern cars already fitted with the equipment, or mount the equipment on our own vehicles.
**2.Goals**
The goal of the Pothole Pal is to help detect potholes and immediately notify those in command with the analytics. These analytics can help decision makers allocate funds and resources accordingly in order to quickly respond to infrastructure needs. We want to assist municipalities such as the City of Toronto and the City of Montreal as they both spend millions each year assessing and fixing potholes. The Pothole Pal helps reduce costs by detecting potholes immediately, and informing the city where the pothole is.
**3.Implementation**
We integrated an Arduino on a RedBot Inventors Kit car. By attaching an ultrasonic sensor module to the Arduino and mounting it to the front of the vehicle, we are able to detect changes in elevation, i.e., potholes. After detection, the geotag of the pothole and an image of it are sent to a Mosquitto broker, which then directs the data to an iOS app that a government worker can view. They can then use that information to go and fix the pothole.
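The reporting step could be sketched as below: when the ultrasonic reading jumps past a depth threshold, a geotagged report is published to the Mosquitto (MQTT) broker. The broker host, topic name, and threshold are placeholders, and the actual sensor code in this project runs on the Arduino rather than in Python.

```python
# Sketch of the reporting step: publish a geotagged pothole report over MQTT
# when the ultrasonic distance reading exceeds a depth threshold.
# Broker host, topic, and threshold are placeholders.
import json
import time
import paho.mqtt.publish as publish

BROKER_HOST = "broker.local"        # placeholder
TOPIC = "potholepal/detections"     # placeholder topic
DEPTH_THRESHOLD_CM = 4.0            # placeholder elevation-change threshold

def report_pothole(distance_cm: float, baseline_cm: float, lat: float, lng: float):
    depth = distance_cm - baseline_cm
    if depth < DEPTH_THRESHOLD_CM:
        return  # road surface looks normal
    payload = {"lat": lat, "lng": lng, "depth_cm": depth, "timestamp": time.time()}
    publish.single(TOPIC, json.dumps(payload), hostname=BROKER_HOST, port=1883)
```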
![](https://i.imgur.com/AtI0mDD.jpg)
![](https://i.imgur.com/Lv1A5xf.png)
![](https://i.imgur.com/4DD3Xuc.png)
**4.Future Prospects**
This system can be further improved in the future in a number of ways. It could be added to mass-produced cars that already come equipped with ultrasonic sensors and cameras, sending data to the cloud for cities to analyze and use. The technology could also be used not only to detect potholes but to continuously monitor road conditions, providing cities with analytics to improve road quality, reduce repair costs, and reduce damage to cars on the road.
## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend, with OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API performing all of the background analyses. On the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
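For the speech leg, a minimal Google Cloud Speech-to-Text call might look like the sketch below, assuming a short 16 kHz WAV clip and credentials supplied via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable; this is illustrative rather than the project's exact code.

```python
# Minimal sketch of a short-clip transcription with Google Cloud Speech-to-Text.
# Assumes 16 kHz LINEAR16 audio and credentials from GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import speech

def transcribe_clip(wav_bytes: bytes) -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)
```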
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.
![Image](https://i.imgur.com/aehDk3L.gif)
Tracks movement of hands and face to provide real-time analysis on expressions and body-language.
![Image](https://i.imgur.com/tZAM0sI.gif) | ![UpLync](https://s18.postimg.org/5syr0jrg9/ss_2016_10_15_at_06_36_48.png)
## Inspiration
Two weeks ago you attended an event and have met some wonderful people to help get through the event, each one of you exchanged contact information and hope to keep in touch with each other. Neither one of you contacted each other and eventually lost contact with each other. A potentially valuable friendship is lost due to neither party taking the initiative to talk to each other before. This is where *UpLync* comes to the rescue, a mobile app that is able to ease the connectivity with lost contacts.
## What it does?
It helps connect to people you have not been in touch for a while, the mobile application also reminds you have not been contacting a certain individual in some time. In addition, it has a word prediction function that allows users to send a simple greeting message using the gestures of a finger.
## Building process
We used mainly react-native to build the app, we use this javascript framework because it has cross platform functionality. Facebook has made a detailed documented tutorial at [link](https://facebook.github.io/react-native) and also [link](http://nativebase.io/) for easier cross-platform coding, we started with
* Designing a user interface that can be easily coded for both iOS and Android
* Functionality of the Lazy Typer
* Touch up with color scheme
* Coming up with a name for the application
* Designing a Logo
## Challenges we ran into
* non of the team members know each other before the event
* Coding in a new environment
* Was to come up with a simple UI that is easy on the eyes
* Keeping people connected through a mobile app
* Reduce the time taken to craft a message and send
## Accomplishments that we're proud of
* Manage to create a product with React-Native for the first time
* We are able to pick out a smooth font and colour scheme to polish up the UI
* Enabling push notifications to remind the user to reply
* The time taken to craft a message was reduced by 35% with the help of our lazy typing function
## What we learned.
We are able to learn the ins-and-outs of react-native framework, it saves us work to use android studio to create the application.
## What's next for UpLync
The next step for UpLync is to create an AI that learns the way how the user communicates with their peers and provide a suitable sentence structure. This application offers room to provide support for other languages and hopefully move into wearable technology. | winning |
**Recollect** is a $150 robot that scans, digitizes, and analyzes books with zero intervention. Once a book is placed on the stand, Recollect's robotic arm delicately flips the pages while a high-resolution camera captures each page. The images are sent to a website which merges the images into a PDF and creates AI summaries and insights of the document.
## Why build this?
Only 12% of all published books have been digitized. Historical records, ancient manuscripts, rare collections. Family photo albums, old journal entries, science field notes. Without digitization, centuries of accumulated wisdom, cultural treasures, personal narratives, and family histories threaten to be forever lost to time.
Large-scale digitization currently requires highly specialized equipment and physical personnel to manually flip, scan, and process each page. Oftentimes, this is simply not practical, resulting in many books remaining in undigitized form, which necessitates careful, expensive, and unsustainable transportation across various locations for analysis.
## How we built it
*Hardware:*
Recollect was made with easy-to-fabricate materials, including 3D-printed plastic parts, laser-cut acrylic and wood, and cheap, off-the-shelf electronics. A book rests at a 160-degree angle, optimal to hold the book naturally open while minimizing distortions. The page presser drops onto the book, flattening it to further minimize distortions. After the photo is taken, the page presser is raised, then a two-degree-of-freedom robotic arm flips the page. A lightly adhesive pad attaches to the page, and then one of the joints rotates the page. The second joint separates the page from the adhesive pad, and the arm returns to rest. The scanner was designed to be adaptable to a wide range of books, up to 400 mm tall and 250 mm in page width, with easy adjustments to the arm joints and range of motion to accommodate a variety of books.
*Software:*
Image processing:
On the backend, we leverage OpenCV to identify page corners, rescale images, and sharpen colors to produce clear images. These images are processed with pre-trained Google Cloud Vision API models to enable optical character recognition of handwriting and unstructured text. The data are saved into a Supabase database to allow users to access their digital library from anywhere.
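A minimal sketch of that processing step, assuming a local image file, a simple unsharp-mask cleanup, and placeholder Supabase credentials and table names (the exact tuning values and schema here are illustrative, not our production pipeline):

```python
import cv2
from google.cloud import vision
from supabase import create_client

def clean_page(path: str) -> bytes:
    """Grayscale, denoise, and sharpen the captured page before OCR."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (0, 0), 3)
    sharp = cv2.addWeighted(gray, 1.5, blur, -0.5, 0)  # simple unsharp mask
    ok, buf = cv2.imencode(".jpg", sharp)
    return buf.tobytes()

def ocr_page(content: bytes) -> str:
    """Run Google Cloud Vision's document OCR on the cleaned page."""
    client = vision.ImageAnnotatorClient()
    response = client.document_text_detection(image=vision.Image(content=content))
    return response.full_text_annotation.text

if __name__ == "__main__":
    supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY")  # placeholders
    text = ocr_page(clean_page("page_001.jpg"))
    # Hypothetical table name; the real schema lives in our Supabase project.
    supabase.table("pages").insert({"book_id": 1, "page": 1, "text": text}).execute()
```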
Webpage and cloud storage:
The front end is a Vercel-deployed web app built with Bun, TypeScript, Chakra, Next.js, and React.js.
## Challenges we ran into
We ran into challenges involving getting the perfect angle for the robotic arm to properly stick to the page. To fix this, we had to modify the pivot point of the arm’s base to be in line with the book’s spine and add a calibration step to make it perfectly set up for the book to be scanned. Our first version also used servo motors with linkages to raise the acrylic page presser up and down, but we realized these motors did not have enough torque. As a result, we replaced them with DC motors and a basic string and pulley system which turned out to work surprisingly well.
## Accomplishments that we're proud of
This project was a perfect blend of each team member’s unique skill sets: Lawton, a mechanical engineering major, Scott, an electrical and systems engineer, Kaien, an AI developer, and Jason, a full-stack developer. Being able to combine our skills in this project was amazing, and we were truly impressed by how much we were able to accomplish in just 24 hours. Seeing this idea turn into a physical reality was insane, and we were able to go beyond what we initially planned on building (such as adding summarization, quotation, and word cloud features as post-processing steps on your diary scans). We’re happy to say that we’ve already digitized over 100 pages of our diaries through testing.
## What we learned
We learned how to effectively divide up the project into several tasks and assign it based on area of expertise. We also learned to parallelize our work—while parts were being 3D-printed, we would focus on software, design, and electronics.
## What's next for Recollect
We plan to improve the reliability of our system to work with all types of diaries, books, and notebooks, no matter how stiff or large the pages are. We also want to focus on recreating PDFs from these books in a fully digital format (i.e. not just the images arranged in a PDF document but actual text boxes following the formatting of the original document). We also plan to release all of the specifications and software publicly so that anyone can build their own Recollect scanner at home to scan their own diaries and family books. We will design parts kits to make this process even easier. We will also explore collaborating with Stanford libraries and our close communities (friends and family). Thanks to Recollect, we hope no book is left behind. | ## Inspiration
McMaster's SRA presidential debate brought to light the issue of garbage sorting on campus. Many recycling bins were contaminated and were subsequently thrown into a landfill. During the project's development, we became aware of the many applications of this technology, including sorting raw materials and manufacturing parts.
## What it does
The program uses a customizable, pre-trained deep learning model that can categorize over 1000 different classes of objects. When an object is placed in the foreground of the camera, its material is determined and its corresponding indicator light flashes. This replicates a small-scale automated sorting machine.
## How we built it
To begin, we studied relevant modules of the OpenCV library and explored ways to implement them for our specific project. We also determined specific categories/materials for different classes of objects to build our own library for sorting.
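The core loop looks roughly like the sketch below, assuming an off-the-shelf ImageNet classifier (MobileNetV2 here, purely as a stand-in for the customizable model) and two indicator LEDs on placeholder GPIO pins; the label-to-material mapping is a small sample of the library we built:

```python
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)

# Example mapping from ImageNet labels to material categories (illustrative subset).
RECYCLABLE = {"water_bottle", "pop_bottle", "beer_can", "carton"}
PINS = {"recycle": 17, "garbage": 27}   # placeholder GPIO pins for the indicator lights

GPIO.setmode(GPIO.BCM)
for pin in PINS.values():
    GPIO.setup(pin, GPIO.OUT)

model = MobileNetV2(weights="imagenet")   # stand-in for the customizable trained model
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224))[:, :, ::-1].astype("float32")  # BGR -> RGB
    label = decode_predictions(model.predict(preprocess_input(x[None])), top=1)[0][0][1]
    category = "recycle" if label in RECYCLABLE else "garbage"
    for name, pin in PINS.items():        # light only the matching indicator
        GPIO.output(pin, GPIO.HIGH if name == category else GPIO.LOW)
```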
## Challenges we ran into
Due to time constraints, we were unable to train our own dataset for the specific objects we wanted. Many pre-trained models are designed to run on much stronger hardware than a Raspberry Pi. Being limited to pre-trained models added a level of difficulty for the software to detect our specific objects.
## Accomplishments that we're proud of
The project actually worked and was surprisingly better than we had anticipated. We are proud that we were able to find a compromise in the pre-trained model and still have a functioning application.
## What we learned
We learned how to use OpenCV for this application, and the many applications of this technology in the deep learning and IoT industry.
## What's next for Smart Materials Sort
We'd love to find a way to dynamically update the training model (supervised learning), and try the software with our own custom models. | ## Inspiration
As someone who is very interested in both computer science and game design, I like developing tools that expand what we are capable of with games. Today, play any RPG and enter a dialogue. You'll have 2, maybe 3 or 4 choices. You click on one, and the game continues. This experience can break immersion -- you're being pulled out of the game and back into a user interface. I wanted to create a simple tool that anyone can use, even non-programmers, so that game designers can keep their players engaged in an immersive environment by using players' voices to drive dialogue.
## What it does
This is a dialogue engine that enables designers to create branching, nonlinear game dialogues. The mechanism for this is voice control by the player. The player speaks to an NPC, and Rev.ai transcribes the spoken text into a form the NPC can understand. Designers use a visual editor and XML files to configure what the NPC can respond to, and it will respond accordingly. The NPC is driven by an algorithm that takes the transcribed player input and looks through all of its preconfigured dialogue lines to find the response that best addresses the player. Designers can also configure optional audio to play alongside the NPC, so you can integrate voice acting by just dragging and dropping.
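The matching idea, boiled down (illustrated in Python for brevity; the actual implementation lives in C# inside Unity, and the keyword-overlap scoring shown here is only a simplified stand-in for it):

```python
from dataclasses import dataclass

@dataclass
class DialogueLine:
    keywords: set        # words the designer tagged for this response
    response: str
    audio_clip: str = ""  # optional voice-acting file

def best_response(player_text: str, lines: list) -> DialogueLine:
    tokens = set(player_text.lower().split())
    # Score each preconfigured line by keyword overlap and return the best match.
    return max(lines, key=lambda line: len(line.keywords & tokens))

npc_lines = [
    DialogueLine({"quest", "job", "work"}, "There's trouble at the old mill..."),
    DialogueLine({"village", "where", "directions"}, "The village lies east of here."),
]
print(best_response("where is the village", npc_lines).response)
```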
Want an NPC to give your players quests? This can do that. What about an NPC to answer player questions and guide them around the village, or to the next quest? This can do that. NPCs can be configured to remember what players told them and what they said in the past, or, with scripting, NPCs can also respond to actions that occur outside of dialogues.
With the power of RevNPC, you have a fully-functional dialogue engine you can include in your game. All you need is a Rev.ai Access Token.
## How I built it
Unity and C# for game-related functionality. Rev.ai for player voice transcription
I built the Demo Project on top of the Unity Standard Assets Example First Person project that comes with the Engine. The focus of this project was AI and Dialogue, not gameplay, but I wanted a demo so I used the Unity starter project and assets.
## Challenges I ran into
I ran into a bug with the Unity Engine. It is supposed to support Ogg Vorbis audio clips, but when I try to import voice acting for demos, the Unity Engine displays an error that it is unable to import the file. This error also occurs for every other audio format. As a result, the voice acting planned for the demo could not be integrated.
## Accomplishments that I'm proud of
Getting Rev.ai to work with Unity. It's a really cool tool, but Unity isn't always easy to integrate with new tools; with help from Rev.ai we were able to get it to work.
## What I learned
Rev.ai, it was my first time working with it and it was very accurate
## What's next for RevNPC
As of right now, this is currently on GitHub. You can clone it and get started with it in your own project. Just get a Rev.ai access token and you will see the prompt inside Unity to enter it. In the future, I'm going to put this on the Asset Store to make it simpler to import into Unity.
Also, I think this would be a super fun way to play RPGs in a VR setting. I'd like to make a fully-featured VR game with this. I know RevNPC is capable of it, I just will need to set aside enough time to make a full game. | partial |
## Inspiration:
What inspired the team to create this project was the fact that many people around the world are misdiagnosed with different types of ocular diseases, which leads to patients getting improper treatments and ultimately to visual impairment (blindness). With the help of Theia, we can properly diagnose patients and find out whether or not they have any sort of ocular disease. This will also help reduce misdiagnoses around the world. Our eyes are an important asset to us human beings, and with the help of Theia, we can help many individuals around the world protect their eyes and have a clear vision of the world around them. Additionally, with the rise of COVID-19, leaving the house is very difficult due to government restrictions. With Theia, the diagnosis is performed at the optometrist's eye clinic, reducing the constant trips between optometrists and ophthalmologists and leading to fewer people in buildings and fewer gatherings.
## What it does:
Theia can analyze a fundus photograph of a patient's eye to see if they have an ocular disease with extremely high accuracy. By uploading a picture, Theia will be able to tell if the patient is Normal or has one of the following conditions: Diabetic Retinopathy, Glaucoma, Cataract, Age-related Macular Degeneration, Hypertension, Pathological Myopia, or Other diseases/abnormalities. Theia then returns a bar graph with the different values that the model has predicted for the image. You can hover over the graph to see all the prediction percentages that the model returned; the highest value corresponds to the condition the patient most likely has. Theia will allow medical practitioners to get a second opinion on a patient's condition and decide if the patient needs further evaluation, rather than sending the patient to the ophthalmologist for diagnosis whenever they have a concern. It also allows new optometrists to guide their patients and not miss the diseases shown in the fundus photos.
## How we built it:
Theia is a tool created for optometrists to identify ocular diseases directly through a web application. So how does it work? Theia's backend framework is designed using Flask, and the front end was created using plain HTML, CSS, and JavaScript. The computer vision solution was created using TensorFlow and was exported as a TensorFlow.js file to use in the browser. When an image is uploaded to Theia, the image is converted into a 224 by 224 tensor. When the predict button is clicked, the TensorFlow model is called with its weights, and a JavaScript prediction promise is returned, which is then fetched and returned to the user in a visual bar graph format.
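A condensed sketch of how a 224 by 224 fundus classifier like this can be built in Keras and exported for in-browser inference (the base network, layer sizes, and paths below are illustrative choices, not necessarily the exact architecture we trained):

```python
import tensorflow as tf
import tensorflowjs as tfjs

NUM_CLASSES = 8  # Normal + the seven conditions listed above

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would come from tf.keras.utils.image_dataset_from_directory(...)
# model.fit(train_ds, validation_data=val_ds, epochs=20)

# Export the trained model so the browser can load it with tf.loadLayersModel().
tfjs.converters.save_keras_model(model, "theia_tfjs_model")
```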
## Challenges we ran into:
We tried to create a REST API for our model by deploying the exported TensorFlow model on Google Cloud. But Google has a recent user interface issue when it comes to deploying models on the cloud, so we instead had to export our TensorFlow model as a TensorFlow.js file. But why would this be a problem? Because it affects our client-side performance by predicting on the client side. If the prediction were made on the server and returned to the client, it would've improved the performance of the web application. We also ran into other challenges when working with promises in JavaScript, since our team had two people who were beginners and we weren't very experienced with JavaScript.
## Accomplishments that we're proud of:
We are proud of making such an accurate model in TensorFlow. We are not very experienced with deep learning and TensorFlow, so getting a machine learning model that is accurate is a big accomplishment for us. We are also proud that we created an end-to-end ML solution that can help people see the world in front of them clearly. With two completely new hackers on our team, we were able to expand on our skills while still teaching the beginners something new. Using Flask as our backend was new to us, and learning how to integrate it into the web app and ultimately make it work was a major accomplishment, but the most important thing we learned was collaboration and how to split work among group members who may be new to this world of programming while making them feel welcome.
## What we learned:
It’s surprising how much can be learned and taught in only 48 hours. We learned how to use Flask as our backend framework which was an amazing experience since we didn’t run into too many problems. We also learned how to work with javascript and how to make a complex computer vision model with TensorFlow. Our team also learned how to use TensorFlow JS as well which means that in the future we can use TensorFlow JS to make more web-based machine learning solutions.
## What's next for Theia:
We envision Theia becoming more scalable and reliable. We aim to deploy the model on a cloud service like Google Cloud or AWS and access it from the cloud, which would ultimately increase the client-side performance. We also plan on making a database of all the images the users upload, passing those images through a preprocessing data pipeline, and then saving them into a dataset for the model to train on weekly. This keeps the model up to date, constantly improves its accuracy, and reduces bias thanks to the large variety of unique fundus photos of patients. Expanding the use case of Theia to other ocular diseases like Strabismus, Amblyopia, and Keratoconus is another goal, which means feeding more inputs to our neural network and making it more complex.
The inspiration for this app arose from two key insights about medical education.
1. Medicine is inherently interdisciplinary. For example, in fields like dermatology, pattern recognition plays a vital role in diagnosis. Previous studies have shown that incorporating techniques from other fields, such as art analysis, can enhance these skills, highlighting the benefits of cross-disciplinary approaches. Additionally, with the rapid advancement of AI, which has its roots in pattern recognition, there is a tremendous opportunity to revolutionize medical training.
2. Second, traditional methods like textbooks and static images often lack the interactivity and personalized feedback needed to develop diagnostic skills effectively. Current education emphasizes the knowledge of various diagnostic features, but not the ability to recognize such features. This app was designed to address these gaps, creating a dynamic, tech-driven solution to better prepare medical students for the complexities of real-world practice.
## What it does
This app provides an interactive learning platform for medical students, focusing on dermatological diagnosis. It presents users with real-world images of skin conditions and challenges them to make a diagnosis. After each attempt, the app delivers personalized feedback, explaining the reasoning behind the correct answer, whether the diagnosis was accurate or not. By emphasizing pattern recognition and critical thinking, in concert with a comprehensive dataset of over 400,000 images, the app helps students refine their diagnostic skills in a hands-on manner. With its ability to adapt to individual performance, the app ensures a tailored learning experience, making it an effective tool for bridging the gap between theoretical knowledge and clinical application.
## How we built it
To build the app, we utilized a variety of tools and technologies across both the frontend and backend. On the frontend, we implemented React with TypeScript and styled the interface using TailwindCSS. To track user progress in real time, we integrated React's Recharts library, allowing us to display interactive statistical visualizations. Axios was employed to handle requests and responses between the frontend and backend, ensuring smooth communication. On the backend, we used Python with Pandas, Scikit-Learn, and NumPy to create a machine learning model capable of identifying key factors for diagnosis. Additionally, we integrated OpenAI's API with Flask to generate large language model (LLM) responses from user input, making the app highly interactive and responsive.
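A minimal sketch of the feedback route in that spirit (the route name, prompt wording, and model choice are illustrative, and it assumes the OpenAI v1 Python SDK with an API key in the environment):

```python
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/feedback")
def feedback():
    data = request.get_json()
    prompt = (
        f"A student looked at a dermatology image of '{data['correct_diagnosis']}' "
        f"and answered '{data['student_answer']}'. Explain the key visual features "
        "that support the correct diagnosis and where the student's reasoning went wrong."
    )
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"feedback": completion.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```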
## Challenges we ran into
One of the primary challenges we encountered was integrating OpenAI’s API to deliver real-time feedback to users, which was critical for enhancing the app's personalized learning experience. Navigating the complexities of API communication and ensuring seamless functionality required significant troubleshooting. Additionally, learning how to use Flask to connect the frontend and backend posed another challenge, as some team members were unfamiliar with this framework. This required us to invest time in researching and experimenting with different approaches to ensure proper integration and communication between the app's components.
## Accomplishments that we're proud of
We are particularly proud of successfully completing our first hackathon, where we built this app from concept to execution. Despite being new to many of the technologies involved, we developed a full-stack application, learning the theory and implementation of tools like Flask and OpenAI's API along the way. Another accomplishment was our ability to work as a cohesive team, bringing together members from diverse, interdisciplinary backgrounds, both in general interests and in past CS experiences. This collaborative effort allowed us to combine different skill sets and perspectives to create a functional and innovative app that addresses key gaps in medical education.
## What we learned
Throughout the development of this app, we learned the importance of interdisciplinary collaboration. By combining medical knowledge, AI, and software development, we were able to create a more effective and engaging tool than any one field could produce alone. We also gained a deeper understanding of the technical challenges that come with working on large datasets and implementing adaptive feedback systems.
## What's next for DermaDrill
Looking ahead, there are many areas our app can expand into. With AI identifying the reasoning behind a certain diagnosis, we can explore the potential for diagnostic assistance, where AI can identify areas that may be abnormal to ultimately support clinical decision-making, giving physicians another tool. Furthermore, in other fields that are based on image-based diagnosis, such as radiology or pathology, we can apply a similar identification and feedback system. Future applications of such an app can enhance clinical diagnostic abilities while acknowledging the complexities of real world practice. | ## Inspiration
The idea was to help people who are blind to be able to discreetly gather context during social interactions and general day-to-day activities.
## What it does
The glasses take a picture and analyze it using Microsoft, Google, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame.
## How I built it
We took an RPi Camera and increased the length of the cable. We then made a hole in the lens of the glasses and fit the camera in there. We added a touch sensor to discreetly control the camera as well.
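A stripped-down sketch of the capture-and-describe loop, showing only the Google Cloud Vision path (the real build combined several vision APIs, the touch-sensor pin number is a placeholder, and the sentence building here is far more naive than the actual NLP step):

```python
import RPi.GPIO as GPIO
from picamera import PiCamera
from google.cloud import vision

TOUCH_PIN = 4                      # placeholder GPIO pin for the touch sensor
GPIO.setmode(GPIO.BCM)
GPIO.setup(TOUCH_PIN, GPIO.IN)

camera = PiCamera()
client = vision.ImageAnnotatorClient()

def describe_scene(path="frame.jpg"):
    camera.capture(path)
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = [l.description for l in client.label_detection(image=image).label_annotations]
    # Very naive sentence building; the real step stitches results from all the APIs.
    return "I can see " + ", ".join(labels[:3]) + "."

while True:
    GPIO.wait_for_edge(TOUCH_PIN, GPIO.RISING)   # block until the touch sensor fires
    print(describe_scene())
```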
## Challenges I ran into
The biggest challenge we ran into was Natural Language Processing, as in trying to parse together a human-sounding sentence that describes the scene.
## What I learned
I learnt a lot about the different vision APIs out there and about creating/training your own neural network.
## What's next for Let Me See
We want to further improve our analysis and reduce our analyzing time. | losing |
## Inspiration
We recognized that many individuals are keen on embracing journaling as a habit, but hurdles like the "all or nothing" mindset often hinder their progress. The pressure to write extensively or perfectly every time can be overwhelming, deterring potential journalers. Consistency poses another challenge, with life's busy rhythm making it hard to maintain a daily writing routine. The common issue of forgetting to journal compounds the struggle, as people find it difficult to integrate this practice seamlessly into their day. Furthermore, the blank page can be intimidating, leaving many uncertain about what to write and causing them to abandon the idea altogether. In addressing these barriers, our aim with **Pawndr** is to make journaling an inviting, effortless, and supportive experience for everyone, encouraging a sustainable habit that fits naturally into daily life.
## What it does
**Pawndr** is a journaling app that connects with you through text and voice. You will receive conversational prompts delivered to your phone, sparking meaningful reflections wherever you are and making journaling more accessible and fun. Simply reply to our friendly messages with your thoughts or responses to our prompts, and watch your personal journey unfold. Your memories are safely stored, easily accessible through our web app, and beautifully organized. **Pawndr** is able to transform your daily moments into a rich tapestry of self-discovery.
## How we built it
The front-end was built using React.js. We built the backend using FastAPI and used MongoDB as our database. We deployed our web application and API to a Google Cloud VM using nginx and uvicorn. We utilized Infobip to build our primary user interaction method. Finally, we made use of OpenAI's GPT-3 and Whisper APIs to power organic journaling conversations.
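A bare-bones sketch of the journaling webhook: Infobip forwards an inbound WhatsApp message here, we generate a conversational follow-up with OpenAI, and hand it back to a send helper. The payload field names, prompt, and `send_whatsapp()` are simplified placeholders rather than the exact Infobip integration:

```python
from fastapi import FastAPI, Request
from openai import OpenAI
from pymongo import MongoClient

app = FastAPI()
ai = OpenAI()
db = MongoClient("mongodb://localhost:27017")["pawndr"]

def send_whatsapp(to: str, text: str) -> None:
    """Placeholder: in production this calls Infobip's WhatsApp send-message API."""
    print(f"-> {to}: {text}")

@app.post("/inbound")
async def inbound(request: Request):
    payload = await request.json()
    sender, text = payload["from"], payload["text"]          # assumed field names
    db.entries.insert_one({"user": sender, "text": text})
    reply = ai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a warm journaling companion. "
             "Reflect back what the user shared and ask one gentle follow-up question."},
            {"role": "user", "content": text},
        ],
    ).choices[0].message.content
    send_whatsapp(sender, reply)
    return {"status": "ok"}
```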
## Challenges we ran into
Our user stories required us to use 10 digit phone numbers for SMS messaging via Infobip. However, Canadian regulations blocked any live messages we sent using the Infobip API. Unfortunately, this was a niche problem that the sponsor reps could not help us with (we still really appreciate all of their help and support!! <3), so we pivoted to a WhatsApp interface instead.
## Accomplishments that we're proud of
We are proud of being able to quickly problem-solve and pivot to a WhatsApp interface upon the SMS difficulties. We are also proud of being able to integrate our project into an end-to-end working demo, allowing hackathon participants to experience our project vision.
## What we learned
We learned how to deploy a web app to a cloud VM using nginx. We also learned how to use Infobip to interface with WhatsApp business and SMS. We learned about the various benefits of journaling, the common barriers to journaling, and how to make journaling rewarding, effortless, and accessible to users.
## What's next for Pawndr
We want to implement more channels to allow our users to use any platform of their choice to journal with us (SMS, Messenger, WhatsApp, WeChat, etc.). We also hope to have more comprehensive sentiment analysis visualization, including plots of mood trends over time. | ## Inspiration
While we were doing preliminary research, we found overwhelming evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self-worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that they only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
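A sketch of that tone-gating rule, written in Python for brevity (the production service described below is Java / Spring Boot); the 0.75 threshold and the venting carve-out are the tunable parts, and the credentials are placeholders:

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

analyzer = ToneAnalyzerV3(version="2017-09-21",
                          authenticator=IAMAuthenticator("YOUR_API_KEY"))
analyzer.set_service_url("YOUR_SERVICE_URL")

BLOCKED_TONES = {"anger"}            # mean-spirited signals
ALLOWED_TONES = {"sadness", "fear"}  # venting should still get through

def should_publish(text: str, threshold: float = 0.75) -> bool:
    result = analyzer.tone({"text": text}, content_type="application/json").get_result()
    tones = {t["tone_id"]: t["score"] for t in result["document_tone"]["tones"]}
    if any(tones.get(t, 0) >= threshold for t in ALLOWED_TONES):
        return True   # clearly a vent: let it through
    return not any(tones.get(t, 0) >= threshold for t in BLOCKED_TONES)
```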
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kind of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call. | ## 💡 Inspiration
Mental health issues are becoming a more prevalent problem each year. Nearly one in five U.S. adults live with a mental illness, but less than 20% of people actually seek mental health treatment. Furthermore, 47% of Americans believe that seeking therapy is a sign of weakness. Our goal with journal4me is to provide an outlet for people to speak their thoughts; to let it all out.
## 🔍 What it does
journal4me is a journaling app that removes the tediousness of typing out your thoughts. It is well known that journaling can improve one's mental health, but many shy away from the practice, intimidated by the monotonous writing involved in jotting down one's feelings. With journal4me, all you need to do is speak! The app automatically processes your words and jots them down for you, so that you can rant about your day effortlessly without having to pause and think about how to spell a word or structure a sentence. At the end of your recording, the app even ranks your mood based on what it detects in your rant using sentiment analysis!
## ⚙️ How it was built
journal4me was built with the following tools and languages:
* HTML
* CSS
* JavaScript
* React
* FastAPI
* Co:here
* AssemblyAI
* MongoDB
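Roughly how the speech-to-text and mood steps fit together, using AssemblyAI's v2 REST endpoints directly (the mood heuristic at the end is a simplification of what the app actually shows, and the exact split of work between AssemblyAI and Co:here in our build differs):

```python
import time
import requests

HEADERS = {"authorization": "YOUR_ASSEMBLYAI_KEY"}

def transcribe_with_sentiment(audio_path: str) -> dict:
    with open(audio_path, "rb") as f:
        upload = requests.post("https://api.assemblyai.com/v2/upload",
                               headers=HEADERS, data=f).json()
    job = requests.post("https://api.assemblyai.com/v2/transcript", headers=HEADERS,
                        json={"audio_url": upload["upload_url"],
                              "sentiment_analysis": True}).json()
    while True:                      # poll until the transcript is ready
        result = requests.get(f"https://api.assemblyai.com/v2/transcript/{job['id']}",
                              headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            return result
        time.sleep(3)

entry = transcribe_with_sentiment("rant.wav")
sentiments = [s["sentiment"] for s in entry.get("sentiment_analysis_results", [])]
mood = max(set(sentiments), key=sentiments.count) if sentiments else "NEUTRAL"
print(entry["text"], "| overall mood:", mood)
```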
## 🚧 Challenges we ran into
In terms of front-end, we ran into a few slight hiccups with using React, such as fixing images and dealing with positioning, but ultimately solved them as we moved forward.
For the back-end, we used Python with FastAPI as the framework, which led us to use a database we were not too comfortable with (MongoDB Atlas). AssemblyAI was also a tough API to configure and get used to, but we managed to figure it out in the end.
## ✔️ Accomplishments that we're proud of
For many of us it was our first time working in a group environment at a hackathon, and we are all proud of what we were able to build in such a short amount of time. Other accomplishments include:
* Learning the ins and outs of React
* Working with and handling APIs
* Putting the whole application together; ultimately connecting the front-end with the back-end
## 📚 What we learned
Throughout Hack the North 2022, we were all able to improve our skills as full stack web developers. From refreshing our React knowledge to handling server-side requests and updates, we all definitely finished Hack the North 2022 with newfound skills and abilities.
## 🔭 What's next for journal4me
To level up journal4me, we could improve the UI to further better the user's journaling experience. Some examples include updating the text as the user speaks, adding a mood board, and creating a larger 'emotion rating scale'; there are many improvements that we are considering making in the future.
# Get the Flight Out helps you GTFO ASAP
## Inspiration
Constantly stuck in meetings, classes, exams, work, with nowhere to go, we started to think. What if we could just press a button, and in a few hours, go somewhere awesome? It doesn't matter where, as long as it's not here. We'd need a plane ticket, a ride to the airport, and someplace to stay. So can I book a ticket?
Every online booking site asks for where to go, but we just want to go. What if we could just set a modest budget, and take advantage of last minute flight and hotel discounts, and have all the details taken care of for us?
## What it does
With a push of a button or the flick of an Apple Watch, we'll find you a hotel at a great location, tickets out of your preferred airport, and an Uber to the airport, then email you the details for reference.
In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs. This adds up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. The data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. The process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
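A compressed sketch of the remote analysis script: pull the newest intake row from MySQL, score it with the trained model, and derive the sha256 lookup hash. The column names, connection details, and saved-model path are placeholders for our actual schema:

```python
import hashlib
import joblib
import mysql.connector

conn = mysql.connector.connect(host="HOST", user="USER", password="PASS",
                               database="infantxpert")
cur = conn.cursor(dictionary=True)
cur.execute("SELECT * FROM intake ORDER BY created_at DESC LIMIT 1")
row = cur.fetchone()

model = joblib.load("regression_model.joblib")   # the trained regression model
features = [[row["temperature"], row["hours_since_meal"], row["fluid_intake_ml"]]]
risk = float(model.predict(features)[0])         # interpreted as probability of needing care

# Unique, hard-to-reverse identifier built from a combination of the user's data.
lookup_hash = hashlib.sha256(
    f"{row['parent_email']}|{row['infant_id']}|{row['created_at']}".encode()).hexdigest()

cur.execute("UPDATE intake SET risk_score=%s, lookup_hash=%s WHERE id=%s",
            (risk, lookup_hash, row["id"]))
conn.commit()
```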
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, python script, and website were all commendable achievements for us. These components on their own are fairly useless, our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using sha256 we securely passed each user a unique and near impossible to reverse hash to allow them to check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set-up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application for iOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
We have many friends living in cities around the world, but we rarely visit them since the flights are usually too expensive. So we built an app which gives you alerts once the flights to your friends drop to a low price.
## What it does
We get your friends list from Facebook and then give you the option to choose the ones you want to track flights for. We then send you a weekly email with the 3 best picks, where the price is much lower than usual for a weekend trip.
## How I built it
We use Node.js with Firebase to do the backend work. The flight prices are queried with the Amadeus API and we then send the weekly mail with Node.js.
## Challenges I ran into
Time, time, time.... | winning |
## Inspiration
Due to our craze for space exploration, we are forgetting how we are polluting outer space and the long-lasting changes that we are making. Just like climate change, we will soon have space change.
Due to today's mistakes, I do not want our future generations to take charge of cleaning space just so a spacecraft or a satellite can fly to the moon or elsewhere in the galaxy. We need a better mechanism to tackle this man-made mess that we have created in space.
**The speed at which these tiny pieces of space debris move gives them the capacity to destroy an entire satellite or spacecraft. Detecting space debris is the need of the hour, and we need to work on it ASAP before it gets too late for mankind to do space exploration because there is too much *GARBAGE IN SPACE*!**
## What it does
The Solution that I would like to propose is in the following three steps:
1. Can we use radar as a payload on a CubeSat that can detect debris of size 1-15 cm? Challenging, but yes we can!
2. Can we design a very High Gain Foldable Antenna into this CubeSat? Challenging, but yes we can!
3. Can we deorbit debris by using a Gun-shoot in Space? Challenging, but yes we can!
## How we built it
The following steps need to be followed in order of priority:
1. Calculate the radar equation (see the quick sanity check after this list)
2. Calculate antenna parameters
3. Design the deployment mechanism for the antenna inside the CubeSat
4. Design the overall system
5. Design the electrical system
6. Design the mechanical structure
7. Estimate how the projectile can hit the debris and deorbit it
8. Build the CubeSat and ensure it will not itself become potential debris in space
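As a quick sanity check for step 1, here is the standard monostatic radar range equation plugged with placeholder numbers (every value below is an assumption for illustration, not our final link budget):

```python
import math

def received_power(pt, gain, wavelength, rcs, r):
    """P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4) for a monostatic radar."""
    return pt * gain**2 * wavelength**2 * rcs / ((4 * math.pi) ** 3 * r**4)

pt = 10.0                 # transmit power [W], assumed
gain = 10 ** (30 / 10)    # 30 dBi antenna gain (why the foldable high-gain antenna matters)
wavelength = 0.03         # 10 GHz -> 3 cm
rcs = 1e-4                # ~1 cm debris radar cross-section [m^2], assumed
r = 10_000                # detection range [m], assumed

pr = received_power(pt, gain, wavelength, rcs, r)
noise = 1.38e-23 * 290 * 1e6   # kTB noise floor for a 1 MHz bandwidth
print(f"received power: {pr:.3e} W, SNR: {10 * math.log10(pr / noise):.1f} dB")
```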
## Challenges we ran into
Estimating how the radar can detect debris in space while also fitting a very large deployable antenna into a very tiny CubeSat. Building this CubeSat could take us at least 4-5 years.
## Accomplishments that we're proud of
I am really very confident that this will soon be a successful mission. Additionally, I am also very proud that, although this system needs a little modification over time, I was still able to personally establish that this system can practically be deployed into space by taking help from my professors at Stanford as well as from friends in the spacecraft design class that I am taking at Stanford as an Electrical Engineering student. Proud of what I have been able to do so far!!! :)
## What we learned
How to determine if the radar can be designed, design the very high gain antenna and establish that the projectile can be used to hit the debris
## What's next for PIERCE
Working on building all the models that are mentioned under "How we built it".
We drew inspiration from recognizing how lazy people can be when they are sitting at their desks yet wish to throw away their garbage. Hence, to solve this problem, we decided to build a robot that can shoot garbage into the trash bin without the user needing to get up.
## What it does
It transforms garbage throwing into a game. You can control the robot with a joystick, moving it forward and backward, side to side, and turning it left and right. After you decide on the position from which you wish the robot to shoot, you can toggle to the arm by pressing the joystick, allowing you to change the angle at which the garbage is aimed. You are able to toggle back and forth. When you are satisfied with the position, you can launch the garbage by pulling the elastic band, which strikes the garbage and sends it to your desired location.
## How we built it
We built it using C/C++ with two Arduino Unos and two NRF24L01 modules so it could have a wireless connection between the joystick controller and the robot itself. We also connected the motors to the motor driver, allowing the motors to actually spin.
## Challenges we ran into
A major challenge we ran into was transmitting values from one NRF24L01 to the other. We kept receiving "0"s as the output at the receiver, which was not what it should have been receiving.
## Accomplishments that we're proud of
We are proud of creating a project using C/C++ and Arduino as it was our first time working with these programming languages.
## What we learned
We learned how to use C/C++ and Arduino in a professional setting. We had little to no prior experience when it comes to coding in Arduino. Yet, during the hackathon, not only did we learn how to write code for Arduino, we also learned how to connect the wires on the hardware, allowing the robot to actually run.
## What's next for Garbage Shooter
The next step for Garbage Shooter is to have a shooting mechanism where the user controls how much they want to stretch the rubber band through the joystick instead of manually launching the garbage.
In recent times, we have witnessed indescribable tragedy occur during the war in Ukraine. The way of life of many citizens has been forever changed. In particular, the attacks on civilian structures have left cities covered in debris and people searching for their missing loved ones. As a team of engineers, we believe we could use our skills and expertise to facilitate the process of search and rescue of Ukrainian citizens.
## What it does
Our solution integrates hardware and software development in order to locate, register, and share the exact position of people who may be stuck or lost under rubble/debris. The team developed a rover prototype that navigates through debris and detects humans using computer vision. A picture and the geographical coordinates of the person found are sent to a database and displayed on a web application. The team plans to use a fleet of these rovers to make the process of mapping the area out faster and more efficient.
## How we built it
On the frontend, the team used React and the Google Maps API to map out the markers where missing humans were found by our rover. On the backend, we had a Python script that used computer vision to detect humans and capture an image.
For the rover, we 3D printed the top and bottom chassis specifically for this design. After 3D printing, we integrated the Arduino and attached the sensors and motors. We then calibrated the sensors for accurate values.
To control the rover autonomously, we used an obstacle-avoider algorithm coded in embedded C.
While the rover is moving and avoiding obstacles, the phone attached to the top is continuously taking pictures. A computer vision model performs face detection on the video stream and then stores the result in the local directory. If a face is detected, the image is stored on IPFS using Estuary's API, and the GPS coordinates and CID are stored in a Firestore database. On the user side of the app, the database is monitored for any new markers on the map. If a new marker has been added, the corresponding image is fetched from IPFS and shown on a map using the Google Maps API.
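The detection-and-upload loop in miniature: Haar-cascade face detection on the video stream, then a hit is pushed to IPFS via Estuary and logged in Firestore. The GPS read is stubbed out, and the Estuary endpoint and field names reflect its public docs at the time (treat them as assumptions):

```python
import cv2
import requests
from google.cloud import firestore

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
db = firestore.Client()
cap = cv2.VideoCapture(0)

def current_gps():
    return {"lat": 43.4723, "lng": -80.5449}   # placeholder for the phone's GPS feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(face_cascade.detectMultiScale(gray, 1.1, 5)) == 0:
        continue
    cv2.imwrite("hit.jpg", frame)
    with open("hit.jpg", "rb") as f:
        resp = requests.post("https://api.estuary.tech/content/add",   # assumed endpoint
                             headers={"Authorization": "Bearer ESTUARY_API_KEY"},
                             files={"data": f}).json()
    db.collection("markers").add({"cid": resp["cid"], **current_gps()})
```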
## Challenges we ran into
As the team attempted to use the CID from the Estuary database to retrieve the file by using the IPFS gateway, the marker that the file was attached to kept re-rendering too often on the DOM. However, we fixed this by removing a function prop that kept getting called when the marker was clicked. Instead of passing in the function, we simply passed in the CID string into the component attributes. By doing this, we were able to retrieve the file.
Moreover, our rover was initially designed to work with three 9V batteries (one to power the Arduino, and two for two different motor drivers). Those batteries would allow us to keep our robot as light as possible, so that it could travel at faster speeds. However, we soon realized that the motor drivers actually ran on 12V, which caused them to run slowly and burn through the batteries too quickly. Therefore, after testing different options and researching solutions, we decided to use a lithium-polymer battery, which supplied 12V. Since we only had one of those available, we connected both the motor drivers in parallel.
## Accomplishments that we're proud of
We are very proud of the integration of hardware and software in our Hackathon project. We believe that our hardware and software components would be complete projects on their own, but the integration of both makes us believe that we went above and beyond our capabilities. Moreover, we were delighted to have finished this extensive project under a short period of time and met all the milestones we set for ourselves at the beginning.
## What we learned
The main technical learning we took from this experience was implementing the Estuary API, considering that none of our team members had used it before. This was our first experience using blockchain technology to develop an app that could benefit from the use of public, decentralized data.
## What's next for Rescue Ranger
Our team is passionate about this idea and we want to take it further. The ultimate goal of the team is to actually deploy these rovers to save human lives.
The team identified areas for improvement and possible next steps. Listed below are objectives we would have loved to achieve but were not possible due to the time constraint and the limited access to specialized equipment.
* Satellite Mapping -> This would be more accurate than GPS.
* LIDAR Sensors -> Can create a 3D render of the area where the person was found.
* Heat Sensors -> We could detect people stuck under debris.
* Better Cameras -> Would enhance our usage of computer vision technology.
* Drones -> Would navigate debris more efficiently than rovers. | losing |
## Inspiration
We wanted to do something fun and exciting, nothing too serious. Slang is a vital component to thrive in today's society. Ever seen Travis Scott go like, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform called "Urban Dictionary" to educate people about today's ways, and to show how today's music is changing with the slang thrown in.
## What it does
You choose your desired song, and it will print out the lyrics for you and even sing them for you in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, replace the original words, and attempt to sing the new version.
## How I built it
We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the front end with the Bootstrap framework, utilizing Kaggle datasets and the Zdict API.
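The core substitution step, boiled down: tokenize each lyric line and swap any slang hit for its dictionary meaning. The lookup dict here is a hard-coded stand-in for the Urban Dictionary / Zdict responses we fetch at runtime:

```python
import re

# Stand-in for definitions fetched from Urban Dictionary / Zdict at runtime.
SLANG = {
    "dawg": "close friend",
    "prolly": "probably",
    "flex": "show off",
}

def translate_line(line: str) -> str:
    def swap(match):
        word = match.group(0)
        return SLANG.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", swap, line)

print(translate_line("My dawg would prolly do it for a Louis belt"))
# -> "My close friend would probably do it for a Louis belt"
```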
## Challenges I ran into
Redirect issues with Flask were frequent, and the excessive API calls made the program super slow.
## Accomplishments that I'm proud of
The excellent UI design along with the amazing outcomes that can be produced from the translation of slang
## What I learned
A lot of things we learned
## What's next for SlangSlack
We are going to transform the way today's millennials keep up with growing trends in slang.
## Inspiration
We live in a world rife with movie and TV show references, and not being able to identify these references is a sign of ignorance in our society. More importantly, the feeling of not being able to remember what movie or show that one really funny clip was from can get really frustrating. We wanted to enable every single human on this planet to be able to seek out and enjoy video-based content easily and efficiently. So, we decided to make **Shazam, but for video clips!**
## What it does
RU Recognized takes a user submitted video and uses state of the art algorithms to find the best match for that clip. Once a likely movie or tv show is found, the user is notified and can happily consume the much desired content!
## How we built it
We took on a **3 pronged approach** to tackle this herculean task:
1. Using **AWS Rekognition's** celebrity detection capabilities, potential celebs are spotted in the user-submitted video. These identifications have a harsh confidence-value cutoff to ensure only the most reliable matches are kept.
2. We scrape the video using **AWS'** Optical Character Recognition (OCR) capabilities to find any on-screen text that could help with identification.
3. **Google Cloud's** Speech-to-Text API allows us to extract the audio into readable plaintext. This info is threaded through Google Cloud Custom Search to find a large unstructured data dump.
To parse and extract useful information from this amorphous data, we also maintained a self-curated, specialized, custom-made dataset built from various data banks, including **Kaggle's** actor info as well as IMDB's incredibly expansive database.
Furthermore, due to the uncertain nature of the recognition APIs, we used **clever tricks** such as cross-referencing celebrities seen together, and only detecting those that had IMDB links.
Correlating the information extracted from the video with the known variables stored in our database, we are able to make an educated guess at the origins of the submitted clip.
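Prong 1 in isolation looks roughly like this: sample frames at a fixed rate and run each through Rekognition's celebrity detection, keeping only high-confidence hits that carry IMDb links. The 90% cutoff and one-frame-per-second rate are illustrative values for the knobs we tuned:

```python
import cv2
import boto3

rekognition = boto3.client("rekognition")

def celebrities_in_clip(path, fps_sample=1, min_conf=90.0):
    cap = cv2.VideoCapture(path)
    step = int(cap.get(cv2.CAP_PROP_FPS) // fps_sample) or 1
    found, frame_idx = set(), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            _, jpg = cv2.imencode(".jpg", frame)
            resp = rekognition.recognize_celebrities(Image={"Bytes": jpg.tobytes()})
            for celeb in resp["CelebrityFaces"]:
                # Keep only confident matches that come with reference URLs (IMDb links).
                if celeb["MatchConfidence"] >= min_conf and celeb.get("Urls"):
                    found.add(celeb["Name"])
        frame_idx += 1
    return found
```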
## Challenges we ran into
Challenges are an obstacle that our team is used to, and they only serve to make us stronger. That being said, some of the (very frustrating) challenges we ran into while trying to make RU Recognized a good product were:
1. As with a lot of new AI/ML algorithms on the cloud, we struggled a lot with getting our accuracy rates up for identified celebrity faces. Since AWS Rekognition is trained on images of celebrities from everyday life, being able to identify a heavily costumed/made-up actor is a massive challenge.
2. Cross-connecting across various cloud platforms such as AWS and GCP led to some really specific and hard-to-debug authorization problems.
3. We faced a lot of obscure problems when trying to use AWS to automatically detect the celebrities in the video, without manually breaking it up into frames. This proved to be an obstacle we weren't able to surmount, and we decided to sample the frames at a constant rate and detect people frame by frame.
4. Dataset cleaning took hours upon hours of work and dedicated picking apart. IMDB datasets were too large to parse completely and ended up costing us hours of our time, so we decided to make our own datasets from this and other datasets.
## Accomplishments that we're proud of
Getting the frame by frame analysis to (somewhat) accurately churn out celebrities and being able to connect a ton of clever identification mechanisms was a very rewarding experience. We were effectively able to create an algorithm that uses 3 to 4 different approaches to, in a way, 'peer review' each option, and eliminate incorrect ones.
## What I learned
* Data cleaning is very, very, very cumbersome and time intensive
* Not all AI/ML algorithms are magically accurate
## What's next for RU Recognized
Hopefully integrate all this work into an app, that is user friendly and way more accurate, with the entire IMDB database to reference. | ## our why
Dialects, Lingoes, Creoles, Acrolects are more than just words, more than just languages - they are a means for cultural immersion, intangible pieces of tradition and history passed down through generations.
Remarkably two of the industry giants lag far behind - Google Translate doesn't support translations for the majority of dialects and ChatGPT's responses can be likened to a dog meowing or a cat barking.
Aiden grew up in Trinidad and Tobago as a native creole (patois) speaker; Nuween in Afghanistan, making memories with his extended family in Hazaragi; and Halle and Savvy, though Canadian, show their love and appreciation at home in Cantonese and Mandarin with their parents, who are both 1st-gen immigrants.
How can we bring dialect speakers and even non-dialect speakers alike together? How can we traverse cultures, when the infrastructure to do so isn’t up to par?
## pitta-patta, our solution
Meet Pitta-Patta: an LLM-powered, voice-to-text web app designed to bridge cultural barriers and bring people together through language, no matter where they are. With our innovative dialect translation system for underrepresented minorities, we enable users to seamlessly convert between standard English and dialects. Currently, we support Trinidadian Creole as our proof of concept, with plans to expand further, championing a cause dear to all of us.
## our building journey
Model:
Our project is built on a Sequence-to-Sequence (Seq2Seq) model, tailored to translate Trinidadian Creole slang to English and back. The encoder compresses the input into a context vector, while the decoder generates the output sequence. We chose Long Short-Term Memory (LSTM) networks to handle the complexity of sequential data.
To prepare our data, we clean it by removing unnecessary prefixes and adding start and end tokens to guide the model. We then tokenize the text, converting words to integers and defining an out-of-vocabulary token for unknown words. Finally, we pad the sequences to ensure they’re uniform in length.
The architecture includes an embedding layer that turns words into dense vectors, capturing their meanings. As the encoder processes each word, it produces hidden states that initialize the decoder, which predicts the next word in the sequence.
Our decode\_sequence() function takes care of translating Trinidadian Creole into English, generating one word at a time until it reaches the end. This allows us to create meaningful connections through language, one sentence at a time.
Frontend:
The Front end was done using stream-lit.
**Challenges we ran into**
1. This was our first time using Databricks and their services - while we did get Tensorflow up, it was pretty painful to utilize spark and also attempting to run llm models within the databricks environment - we eventually abandoned that plan.
2. We had a bit of difficulty connecting the llm to the backend - a small chink along the way, where calling the model would always result in retraining - slight tweaks in the logic fixed this.
3. We had a few issues in training the llm in terms of the data format of the input - this was fixed with the explicit encoder and decoder logic
**Accomplishments that we're proud of**
1. This was our first time using streamlit to build the front-end and in the end it was done quite smoothly.
2. We trained an llm to recognise and complete dialect!
## looking far, far, ahead
We envision an exciting timeline for Pitta-Patta. Our goal is to develop a Software Development Kit (SDK) that small translation companies can utilize, empowering them to integrate our dialect translation capabilities into their platforms. This will not only broaden access to underrepresented dialects but also elevate the importance of cultural nuances in communication.
Additionally, we plan to create a consumer-focused web app that makes our translation tools accessible to everyday users. This app will not only facilitate seamless communication but also serve as a cultural exchange platform, allowing users to explore the richness of various dialects and connect with speakers around the world. With these initiatives, we aim to inspire a new wave of cultural understanding and appreciation.
Made with coffee, red bull, and pizza. | winning |
## Inspiration
The inspiration behind ReflectAI stems from the growing prevalence of virtual behavioral interviews in the modern hiring process. We recognized that job seekers face a significant challenge in mastering these interviews, which require not only the right words but also the right tone and body language. We wanted to empower job seekers by providing them with a platform to practice, improve, and receive personalized feedback on their performance. Our goal is to level the playing field and increase the chances of success for job seekers everywhere.
## What it does
ReflectAI combines language analysis, prosody analysis, and facial expression analysis to offer comprehensive feedback on interview responses. Key features include:
Practice Environment: Users can simulate real interview scenarios, record their responses to common behavioral questions, and receive feedback on their performance.
Multi-Modal Analysis: Our platform assesses not just what you say but how you say it and what your body language conveys.
Personalized Feedback: ReflectAI provides detailed feedback and actionable recommendations to help users improve their communication skills.
![Mockup](https://cdn.discordapp.com/attachments/1166663245285830666/1168069411165454362/First_interview_question.png?ex=65506c69&is=653df769&hm=8754d04503253d35c4d30e968a00d2dd5761010f707aefebc744414af490a092&)
## How we built it
We built a React frontend connected to Firebase for storing intermediate artifacts and a backend that utilizes Hume for facial expression, prosody, and language emotion detection alongside OpenAI for feedback generation.
## Challenges we ran into
The main challenges were building a React frontend from scratch, and understanding all facets of the Hume API and how it would work within our application.
## Accomplishments that we're proud of
We built a full-stack app from scratch that is capable of processing large artifacts (videos) in a performant manner.
## What we learned
We learned how to use tools like Figma and the Hume API, and how to effectively set expectations so that we weren't overly scrunched for time.
## What's next for ReflectAI
Our journey with ReflectAI is just beginning. We have ambitious plans for the future, including:
* Expanding our library of interview questions and scenarios to cover a wide range of industries and job types.
* Enhancing our AI models to provide even more detailed and personalized feedback.
* Exploring partnerships with educational institutions and employers to integrate ReflectAI into training and hiring processes.
* Continuously improving our platform based on user feedback and evolving technology to remain at the forefront of interview preparation. | ## Inspiration
With the number of candidates rising in the industry in comparison to the job positions, conducting interviews and deciding on candidates carefully has become a critical task. Along with this ratio imbalance, there are an increasing number of candidates that are forging their experience to gain an unfair advantage over others.
## What it does
The project provides AI interview solution which conducts a human-like interview and deploy AI agents in the backend to verify the authenticity of the candidate.
## How we built it
The project was building NextJS as frontend, and NodeJS as backend. The AI service was provided by Hume, along with Single Store as the backend database. We also used Fetch AI to deploy the AI agents that verify the authenticity.
## Challenges we ran into
Some challenges we ran into were related to integrating Hume into our frontend. Managing the conversation data and inferring it to provide feedback was also tricky.
## Accomplishments that we're proud of
Being able to build a working MVP within 2 days of hacking. Integrating hume AI and being able to persist and maintain conversation transcripts that was later used to make inference.
## What we learned
We learned about using and integrating AI agents to help us with important tasks. Having used Hume AI also provided us with insights on different emotions captured by the AI service that can be used in a lot of downstream tasks.
## What's next for Candidate Compare
We plan on expanding the scope of our candidate information verification to include more thorough checks. We also plan to partner with a couple of early stage adopters to use Candidate Compare and benefit from reduced hiring loads. | ## Inspiration
An abundance of qualified applicants lose their chance to secure their dream job simply because they are unable to effectively present their knowledge and skills when it comes to the interview. The transformation of interviews into the virtual format due to the Covid-19 pandemic has created many challenges for the applicants, especially students as they have reduced access to in-person resources where they could develop their interview skills.
## What it does
Interviewy is an **Artificial Intelligence** based interface that allows users to practice their interview skills by providing them an analysis of their video recorded interview based on their selected interview question. Users can reflect on their confidence levels and covered topics by selecting a specific time-stamp in their report.
## How we built it
This Interface was built using the MERN stack
In the backend we used the AssemblyAI APIs for monitoring the confidence levels and covered topics. The frontend used react components.
## Challenges we ran into
* Learning to work with AssemblyAI
* Storing files and sending them over an API
* Managing large amounts of data given from an API
* Organizing the API code structure in a proper way
## Accomplishments that we're proud of
• Creating a streamlined Artificial Intelligence process
• Team perseverance
## What we learned
• Learning to work with AssemblyAI, Express.js
• The hardest solution is not always the best solution
## What's next for Interviewy
• Currently the confidence levels are measured through analyzing the words used during the interview. The next milestone of this project would be to analyze the alterations in tone of the interviewees in order to provide a more accurate feedback.
• Creating an API for analyzing the video and the gestures of the the interviewees | losing |
## Inspiration
SpotMe seeks to create a designated social media platform for people to share pictures and short videos of their workouts and other fitness related content. Moreover, SpotMe strives to create a casual, low-stakes social media atmosphere so that everyone, no matter how fit they may be, feels comfortable sharing their fitness journey through SpotMe
## What it does
Spot Me currently has three core functionalities. The first functionality is the ability to upload a picture with a short caption, the second is the ability to see fitness pictures of one’s friends, and the last is the ability to see one’s progress via the pictures they have uploaded in the past.
## How we built it
We implemented the Front End on Expo Go app via React Native in VS Code. Moreover, we were able to implement a basic back-end through the use of Convex.
## Challenges we ran into
1. All of us are relatively new to React
2. New to convex
3. No experience implementing a backend or an app
4. Lack of cohesive plan/implementation strategy
## Accomplishments that we're proud of
We’re proud of how we were able to figure out the backend with Convex and create a semi functioning app given our limited experience and time.
## What we learned
We learned an enormous amount of React Native as well as the process behind creating an iOS application.
## What's next for SpotMe
We are unsure about the future of SpotMe, but we are thinking about adding more features such as workout badges, achievements, team groups and more. | ## Inspiration
Recently I have injured myself in the gym working out, and I decided to do something so it will not happen again. The second source of inspiration for the whole team, was the talk given by the presented at the begging of the hackathon, talking about the future of technology. After discussing it a bit, we decided health and exercise was one of the fields which we wanted to pursue a project in.
## What it does
Using a Microsoft Band, we are collecting the motion of a persons hand. The data is being passed to an Android phone though bluetooth. From there it is being saved and processed on Azure. Using machine learning on Azure, we are able to determine what exercise the person is performing and how well they are performing the exercise in order not to injure themselves.
## Challenges I ran into
There were a few problems with the cloud service, where we had issues with accessing our databases. After a couple hours, and a lot of caffeine, we managed to get around it, and get back to building.
## Accomplishments that I'm proud of
I am extremely impressed with how well the prediction is able to identify the different workouts( 99% accuracy). I am proud of how well our team worked together, and how we managed to put our knowledge and experience in different fields.
## What I learned
The most our team learned is how to use the Azure workframe, it has proven to be extremely powerful. We have also learned a lot about ways to better use artificial intelligence to solve the problems we encounter in our projects.
## What's next for Spotter
The team is planning to support it, and make a more elaborate GUI and ways to interact with the application, and get even better feedback | ## Inspiration
We have a desire to spread awareness surrounding health issues in modern society. We also love data and the insights in can provide, so we wanted to build an application that made it easy and fun to explore the data that we all create and learn something about being active and healthy.
## What it does
Our web application processes data exported by Apple health and provides visualizations of the data as well as the ability to share data with others and be encouraged to remain healthy. Our educational component uses real world health data to educate users about the topics surrounding their health. Our application also provides insight into just how much data we all constantly are producing.
## How we built it
We build the application from the ground up, with a custom data processing pipeline from raw data upload to visualization and sharing. We designed the interface carefully to allow for the greatest impact of the data while still being enjoyable and easy to use.
## Challenges we ran into
We had a lot to learn, especially about moving and storing large amounts of data and especially doing it in a timely and user-friendly manner. Our biggest struggle was handling the daunting task of taking in raw data from Apple health and storing it in a format that was easy to access and analyze.
## Accomplishments that we're proud of
We're proud of the completed product that we came to despite early struggles to find the best approach to the challenge at hand. An architecture this complicated with so many moving components - large data, authentication, user experience design, and security - was above the scope of projects we worked on in the past, especially to complete in under 48 hours. We're proud to have come out with a complete and working product that has value to us and hopefully to others as well.
## What we learned
We learned a lot about building large scale applications and the challenges that come with rapid development. We had to move quickly, making many decisions while still focusing on producing a quality product that would stand the test of time.
## What's next for Open Health Board
We plan to expand the scope of our application to incorporate more data insights and educational components. While our platform is built entirely mobile friendly, a native iPhone application is hopefully in the near future to aid in keeping data up to sync with minimal work from the user. We plan to continue developing our data sharing and social aspects of the platform to encourage communication around the topic of health and wellness. | losing |
## Inspiration
We got lost so many times inside MIT... And no one could help us :( No Google Maps, no Apple Maps, NO ONE. Since now, we always dreamed about the idea of a more precise navigation platform working inside buildings. And here it is. But that's not all: as traffic GPS usually do, we also want to avoid the big crowds that sometimes stand in corridors.
## What it does
Using just the pdf of the floor plans, it builds a digital map and creates the data structures needed to find the shortest path between two points, considering walls, stairs and even elevators. Moreover, using fictional crowd data, it avoids big crowds so that it is safer and faster to walk inside buildings.
## How we built it
Using k-means, we created nodes and clustered them using the elbow diminishing returns optimization. We obtained the hallways centers combining scikit-learn and filtering them applying k-means. Finally, we created the edges between nodes, simulated crowd hotspots and calculated the shortest path accordingly. Each wifi hotspot takes into account the number of devices connected to the internet to estimate the number of nearby people. This information allows us to weight some paths and penalize those with large nearby crowds.
A path can be searched on a website powered by Flask, where the corresponding result is shown.
## Challenges we ran into
At first, we didn't know which was the best approach to convert a pdf map to useful data.
The maps we worked with are taken from the MIT intranet and we are not allowed to share them, so our web app cannot be published as it uses those maps...
Furthermore, we had limited experience with Machine Learning and Computer Vision algorithms.
## Accomplishments that we're proud of
We're proud of having developed a useful application that can be employed by many people and can be extended automatically to any building thanks to our map recognition algorithms. Also, using real data from sensors (wifi hotspots or any other similar devices) to detect crowds and penalize nearby paths.
## What we learned
We learned more about Python, Flask, Computer Vision algorithms and Machine Learning. Also about frienship :)
## What's next for SmartPaths
The next steps would be honing the Machine Learning part and using real data from sensors. | # Project Story: Proactive, AI-Driven Cooling and Power Management
## About the Project
### Inspiration and Market Validation
Our project was inspired by insights gained from the NSF ICorps course in May, where we received strong market validation for the problem we aimed to solve. To ensure we were addressing a real need, we conducted over 20 customer interviews with representatives from big tech firms, industry experts, and data center professionals who specialize in cooling and power distribution services. These interviews confirmed that the problem of inefficient cooling and power management in data centers is prevalent and impactful.
### Learning and Development
Throughout this project, we learned extensively about the transition from CPUs to GPUs in data centers and the associated challenges in cooling and power distribution. Traditional air-cooling systems are becoming inadequate as data centers shift towards liquid cooling to manage the increased heat output from GPUs. Despite this transition, even large tech companies are still experimenting with and refining liquid cooling systems. This gap in the market presents an opportunity for innovative solutions like ours.
### Building the Project
Our hackathon project aimed to create an MVP that leverages AI to make cooling and power management predictive rather than reactive. We used SLURM, a workload manager and job scheduling system, to gather data about GPU availability and job scheduling. Our system predicts when and where jobs will run and proactively triggers the cooling systems before the GPUs heat up, thereby optimizing cooling efficiency.
### Challenges Faced
We faced several challenges during the development of this project:
1. **Data Collection:** Gathering accurate and comprehensive historical job data, node availability, and job scheduling logs from SLURM was time-consuming and required meticulous attention to detail.
2. **Model Accuracy:** Building a predictive model that could accurately forecast job run times and node allocations was complex. We tested various machine learning models, including Random Forest, Gradient Boosting Machines, LSTM, and GRU, to improve prediction accuracy.
3. **Integration with Existing Systems:** Integrating our predictive system with existing data center infrastructure, which traditionally relies on reactive cooling mechanisms, required careful planning and implementation.
## Implementation Details
### Steps to Implement the Project
1. **Data Collection:**
* **Historical Job Data:** Collect data on job submissions, including job ID, submission time, requested resources (CPU, memory, GPUs), priority, and actual start and end times.
* **Node Data:** Gather information on node availability, current workload, and resource usage.
* **Job Scheduling Logs:** Extract SLURM scheduling logs that detail job allocation and execution.
2. **Feature Engineering:**
* **Create Relevant Features:** Include features such as time of submission, day of the week, job priority, resource requirements, and node state (idle, allocated).
* **Time Series Features:** Use lag features (e.g., previous job allocations) and rolling statistics (e.g., average load in the past hour).
3. **Model Selection:**
* **Classification Models:** Random Forest, Gradient Boosting Machines, and Logistic Regression for predicting server allocation.
* **Time Series Models:** LSTM and GRU for predicting the time of allocation.
* **Regression Models:** Linear Regression and Decision Trees for predicting the time until allocation.
4. **Predictive Model Approach:**
* **Data Collection:** Gather historical scheduling data from SLURM logs, including job submissions and their attributes, node allocations, and resource usage.
* **Feature Engineering:** Develop features related to job priority, requested resources, expected runtime, and node state.
* **Modeling:** Use a classification approach to predict the node allocation for a job or a regression approach to predict resource allocation probabilities and select the node with the highest probability.
### Results and Benefits
By implementing this predictive cooling and power management system, we anticipate the following benefits:
1. **Increased Cooling Efficiency:** Proactively triggering cooling systems based on job predictions reduces the power required for cooling by at least 10%, resulting in significant cost savings.
2. **Extended Equipment Life:** Optimized cooling management increases the lifespan of data center equipment by reducing thermal stress.
3. **Environmental Impact:** Reducing the power required for cooling contributes to lower overall energy consumption, aligning with global sustainability goals.
### Future Plans
Post-hackathon, we plan to further refine our MVP and seek early adopters to implement this solution. The transition to GPU-based data centers is an ongoing trend, and our proactive cooling and power management system is well-positioned to address the associated challenges. By continuing to improve our predictive models and integrating more advanced AI techniques, we aim to revolutionize data center operations and significantly reduce their environmental footprint. | ## Inspiration
Walking is a sustainable and effective form of transit whose popularity is negatively impacted by perceived concerns about boredom and safety. People who are choosing between multiple forms of transit might not select walking due to these issues. Our goal was to create a solution that would make walking more enjoyable, encouraging people to follow a more sustainable lifestyle by providing new benefits to the walking experience.
## What it does
Our web app, WalkWithMe, helps connect users to other walkers nearby based on times and routes, allowing them to walk together to their intended destinations. It approximately finds the path that maximizes time spent walking together while also minimizing total travel distance for the people involved. People can create accounts that allows them to become verified users in the network, introducing a social aspect to walking that makes it fun and productive. Additionally, this reduces safety concerns as these are often less pronounced in groups of people versus individuals while walking; this is especially true at night.
## How we built it
We used react.js for the frontend, Sonr and Golang for the backend. We hosted our website using Firebase. Our map data was generated from the Google Maps API.
## Challenges we ran into
Our frontend team had to completely learn react.js for the project. We also did not have prior experience with the Sonr and Google Maps API. We needed to figure out how to integrate Sonr into the backend with Golang and Google Maps API to find the path.
## Accomplishments that we're proud of
We are proud of developing and implementing a heuristic algorithm that finds a reasonable path to walk to the destination and for creating an effective backend and frontend setup despite just learning react and Sonr in the hackathon. We also overcame many bugs relating to Google's geocoding API.
## What we learned
We learned react.js to display our interactive website efficiently, how to integrate Sonr into our project to store profile and location data, and how to use Google Maps to achieve our goals with our program.
## What's next for WalkWithMe
We have many ideas for how we can take the next step with our app. We want to add a tiered verification system that grants you credit for completing walks without issues. The higher you are in the rating system, the more often you will be recommended walks with smaller groups of people (as you are viewed as more trustworthy). We also want to improve the user interface of the app, making it more intuitive to use. We also want to expand on the social aspect of the app, allowing people to form walking groups with others and deepen connections with people they meet. We also want to add geolocation trackers so that users can see where their group members are, in case they don't walk at a similar speed toward the meet-up location. | winning |
## Inspiration
We were inspired by Katie's 3-month hospital stay as a child when she had a difficult-to-diagnose condition. During that time, she remembers being bored and scared -- there was nothing fun to do and no one to talk to. We also looked into the larger problem and realized that 10-15% of kids in hospitals develop PTSD from their experience (not their injury) and 20-25% in ICUs develop PTSD.
## What it does
The AR iOS app we created presents educational, gamification aspects to make the hospital experience more bearable for elementary-aged children. These features include:
* An **augmented reality game system** with **educational medical questions** that pop up based on image recognition of given hospital objects. For example, if the child points the phone at an MRI machine, a basic quiz question about MRIs will pop-up.
* If the child chooses the correct answer in these quizzes, they see a sparkly animation indicating that they earned **gems**. These gems go towards their total gem count.
* Each time they earn enough gems, kids **level-up**. On their profile, they can see a progress bar of how many total levels they've conquered.
* Upon leveling up, children are presented with an **emotional check-in**. We do sentiment analysis on their response and **parents receive a text message** of their child's input and an analysis of the strongest emotion portrayed in the text.
* Kids can also view a **leaderboard of gem rankings** within their hospital. This social aspect helps connect kids in the hospital in a fun way as they compete to see who can earn the most gems.
## How we built it
We used **Xcode** to make the UI-heavy screens of the app. We used **Unity** with **Augmented Reality** for the gamification and learning aspect. The **iOS app (with Unity embedded)** calls a **Firebase Realtime Database** to get the user’s progress and score as well as push new data. We also use **IBM Watson** to analyze the child input for sentiment and the **Twilio API** to send updates to the parents. The backend, which communicates with the **Swift** and **C# code** is written in **Python** using the **Flask** microframework. We deployed this Flask app using **Heroku**.
## Accomplishments that we're proud of
We are proud of getting all the components to work together in our app, given our use of multiple APIs and development platforms. In particular, we are proud of getting our flask backend to work with each component.
## What's next for HealthHunt AR
In the future, we would like to add more game features like more questions, detecting real-life objects in addition to images, and adding safety measures like GPS tracking. We would also like to add an ALERT button for if the child needs assistance. Other cool extensions include a chatbot for learning medical facts, a QR code scavenger hunt, and better social interactions. To enhance the quality of our quizzes, we would interview doctors and teachers to create the best educational content. | ## Inspiration
Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way and that we would create that better way, which brings us to EZ-Med!
## What it does
This web app changes the logistics involved when visiting people in the hospital. Some of our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which is meant to connect the family with the medical team assigned to your loved one and provide resources for any grief or hardships associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit since we know that these small details can be frustrating during a hospital visit. Lastly, we have a patient update and login screen, where they both run on a database we set and populate the information to the patient updates screen and validate the login data.
## How we built it
We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project.
## Challenges we ran into
We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion among redirect challenges, we had to completely change course around halfway in :( In the end, we learnt a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left.
## Accomplishments that we're proud of
We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting.
## What we learned
We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha!
## What's next for EZ-Med
The next step for EZ-Med is to iron out all the bugs and have it fully functioning. | ## Inspiration
We want to fix healthcare! 48% of physicians in the US are burned out, which is a driver for higher rates of medical error, lower patient satisfaction, higher rates of depression and suicide. Three graduate students at Stanford have been applying design thinking to the burnout epidemic. A CS grad from USC joined us for TreeHacks!
We conducted 300 hours of interviews, learned iteratively using low-fidelity prototypes, to discover,
i) There was no “check engine” light that went off warning individuals to “re-balance”
ii) Current wellness services weren’t designed for individuals working 80+ hour weeks
iii) Employers will pay a premium to prevent burnout
And Code Coral was born.
## What it does
Our platform helps highly-trained individuals and teams working in stressful environments proactively manage their burnout. The platform captures your phones’ digital phenotype to monitor the key predictors of burnout using machine learning. With timely, bite-sized reminders we reinforce individuals’ atomic wellness habits and provide personalized services from laundry to life-coaching.
Check out more information about our project goals: <https://youtu.be/zjV3KeNv-ok>
## How we built it
We built the backend using a combination of API's to Fitbit/Googlemaps/Apple Health/Beiwe; Built a machine learning algorithm and relied on an App Builder for the front end.
## Challenges we ran into
API's not working the way we want. Collecting and aggregating "tagged" data for our machine learning algorithm. Trying to figure out which features are the most relevant!
## Accomplishments that we're proud of
We had figured out a unique solution to addressing burnout but hadn't written any lines of code yet! We are really proud to have gotten this project off the ground!
i) Setting up a system to collect digital phenotyping features from a smart phone ii) Building machine learning experiments to hypothesis test going from our digital phenotype to metrics of burnout iii) We figured out how to detect anomalies using an individual's baseline data on driving, walking and time at home using the Microsoft Azure platform iv) Build a working front end with actual data!
Note - login information to codecoral.net: username - test password - testtest
## What we learned
We are learning how to set up AWS, a functioning back end, building supervised learning models, integrating data from many source to give new insights. We also flexed our web development skills.
## What's next for Coral Board
We would like to connect the backend data and validating our platform with real data! | winning |
## Inspiration
TL;DR: Cut Lines, Cut Time.
With the overflowing amount of information and the limited time that we have, it is important to efficiently distribute the time and get the most out of it.
With people scrolling short videos endlessly on the most popular apps such as Tiktok, Instagram, and Youtube, we thought, why not provide a similar service but for texts that can not only be fun but also productive?
As a group of college students occupied with not only school but also hobbies and goals, we envisioned an app that can summarize any kind of long text effectively so that while we can get the essence of the text, we can also spend more time on other important things.
Without having to ask someone to provide a TL;DR for us, we wanted to generate it ourselves in a matter of few seconds, which will help us get the big picture of the text.
TL;DR is applicable anywhere, from social media such as Reddit and Messenger to Wikipedia and academic journals, that are able to pick out the most essentials in just one click.
Ever on a crunch for time to read a 10-page research paper?
Want to stay updated on the news but are too lazy to actually read the whole article?
Got sent a box of texts from a friend and just want to know the gist of it.
TL;DR: this is the app for you!
## What it does
TL;DR helps summarize passages and articles into more short forms of writing, making it easier (and faster) to read on the go.
## How we built it
We started by prototyping the project on Figma and discussing our vision for TL;DR. From there, we separated our unique roles within the team into NLP, frontend, and backend. We utilized a plethora of services provided by the sponsors for CalHacks, using Azure to host much of our API and CockRoachDB Serverless to seamlessly integrate persistent data on the cloud. We also utilized Vercel’s Edge network to allow our application to quickly be visited by all people across the globe.
## Web/Extension
The minimalistic user interface portraying our goal of simplification provides a web interface and a handy extension accessible by a simple right click. Simply select the text, and it will instantly be shortened and stored for future use!
## Backend and connections
The backend was built with Flask via Python and hosted on Microsoft Azure as an App Service. GitHub Actions were also used in this process to deploy our code from GitHub itself to Microsoft Azure. Cockroach Lab’s DB to store our user data (email, phone number, and password) and cached summaries of past TL;DR. Twilio is also used for user authentication as well as exporting a TL;DR from your laptop to your phone.
We utilized Co:here’s APIs extensively, making use of the text summarization and sentiment classifier endpoints. Leveraging Beautiful Soup’s capability to extract information, these pair together to generate the output needed by our app. In addition, we went above and beyond to better the NLP landscape by allowing our users to make modifications to Co:here’s generations, which we can send to Co:here. Through this, we are empowering a community of users that help support the development of accessible ML and get their work done as well - win/win!
## Challenges we ran into
Every successful project comes with its own challenges, and we sure had to overcome some bugs and obstacles along the way! First, we took our time settling on the perfect idea, as we all wanted to create something that really impacts the lives of fellow students and the general population. Although our project is “quick”, we were slow to make sure that everything was thoroughly thought through.
In addition, we spent some time debugging our database connection, where a combination of user error and inexperience stumped our progress. However, with a bit of digging around and pair programming, we managed to solve all these problems and learned so much along the way!
## Accomplishments that we're proud of
The integration of different APIs into one platform was a major accomplishment since the numerous code bases that were brought into play and exchanged data had to be done carefully. It did take a while but felt amazing when it all worked out.
## What we learned
From this experience, we learned a lot about using new technologies, especially the APIs and servers provided by the sponsors, which helped us be creative in how we implement them in each part of our backend and analysis. We have also learned the power of collaboration and creating a better product through team synergy and combining our creativity and knowledge together.
## What's next for TL;DR
We have so much in store for TL;DR! Specifically, we were looking to support generating TL;DR for youtube videos (using the captions API or GCP’s speech-to-text service). In addition, we are always striving for the best user experience possible and will find new ways to make the app more enjoyable. This includes allowing users to make more editions and moving to more platforms! | ## Inspiration 💡
The push behind EcoCart is the pressing call to weave sustainability into our everyday actions. I've envisioned a tool that makes it easy for people to opt for green choices when shopping.
## What it does 📑
EcoCart is your AI-guided Sustainable Shopping Assistant, designed to help shoppers minimize their carbon impact. It comes with a user-centric dashboard and a browser add-on for streamlined purchase monitoring.
By integrating EcoCart's browser add-on with favorite online shopping sites, users can easily oversee their carbon emissions. The AI functionality dives deep into the data, offering granular insights on the ecological implications of every transaction.
Our dashboard is crafted to help users see their sustainable journey and make educated choices. Engaging charts and a gamified approach nudge users towards greener options and aware buying behaviors.
EcoCart fosters an eco-friendly lifestyle, fusing AI, an accessible dashboard, and a purchase-monitoring add-on. Collectively, our choices can echo a positive note for the planet.
## How it's built 🏗️
EcoCart is carved out using avant-garde AI tools and a strong backend setup. While our AI digs into product specifics, the backend ensures smooth data workflow and user engagement. A pivotal feature is the inclusion of SGID to ward off bots and uphold genuine user interaction, delivering an uninterrupted user journey and trustworthy eco metrics.
## Challenges and hurdles along the way 🧱
* Regular hiccups with Chrome add-on's hot reloading during development
* Sparse online guides on meshing Supabase Google Auth with a Chrome add-on
* Encountered glitches when using Vite for bundling our Chrome extension
## Accomplishments that I'am proud of 🦚
* Striking user interface
* Working prototype
* Successful integration of Supabase in our Chrome add-on
* Advocacy for sustainability through #techforpublicgood
## What I've learned 🏫
* Integrating SGID into a NextJS CSR web platform
* Deploying Supabase in a Chrome add-on
* Crafting aesthetically appealing and practical charts via Chart.js
## What's next for EcoCart ⌛
* Expanding to more e-commerce giants like Carousell, Taobao, etc.
* Introducing a rewards mechanism linked with our gamified setup
* Launching a SaaS subscription model for our user base. | ## Inspiration
Frustrated with the overwhelming amount of notes required in AP classes, we decided to make life easier for ourselves. With the development of machine learning and neural networks, automatic text summary generation has become increasingly accurate; our mission is to provide easy and simple access to the service.
## What it does
The web app takes in a picture/screenshot of text and auto-generates a summary and highlights important sentences, making skimming a dense article simple. In addition, keywords and their definitions are provided along with some other information (sentiment, classification, and Flesch-Kincaid readability). Finally, a few miscellaneous community tools (random student-related articles and a link to Stack Exchange) are also available.
## How we built it
The natural language processing was split into two different parts: abstractive and extractive.
The abstractive section was carried out using a neural network from [this paper](https://arxiv.org/abs/1704.04368) by Abigail See, Peter J. Liu, and Christopher D. Manning ([Github](https://github.com/abisee/pointer-generator)). Stanford's CoreNLP, was used to chunk and preprocess text for analysis.
Extractive text summarize was done using Google Cloud Language, and the python modules gensim, word2vec and nltk.
We also used Google Cloud Vision API to extract text from an image. To find random student-related articles, we webscraped using BeautifulSoup4.
The front end was built using HTML, CSS, and Bootstrap.
## Challenges we ran into
We found it difficult to parse/chunk our plain-text into the correct format for the neural net to take in.
In addition, we found it extremely difficult to set up and host our flask app on App Engine/Firestore in the given time; we were unable to successfully upload our model due to our large files and the lack of time. To solve this problem, we decided to keep our project local and use cookies for data retention. Because of this we were able to redirect our efforts towards other features.
## Accomplishments that we're proud of
We're extremely proud of having a working product at the end of a hackathon, especially a project we are so passionate about. We have so many ideas that we haven't implemented in this short amount of time, and we plan to improve and develop our project further afterwards.
## What we learned
We learned how to work with flask, tensorflow models, various forms of natural language processing, and REST (specifically Google Cloud) APIs.
## What's next for NoteWorthy
Although our product is "finished," we have a lot planned for NoteWorthy. Our main goal is to make NoteWorthy a product not only for the individual but for the community (possibly as a tool in the classroom). We want to enable multi-user availability of summarized documents to encourage discussion and group learning. Additionally, we want to personalize NoteWorthy according to the user's actions. This includes utilizing the subjects of summarized articles and their respective reading levels to provide relevant news articles as well as forum recommendations. | partial |
## Inspiration
One of our main inspirations was looking through some of the most engaging presentations that we've seen in the past. One of the best ways to engage an audience is through laughter, and making it easier for users to implement comedic elements is what we are setting out to do. Giggle Gear might sound like a silly name, but we aren't joking around when it comes to optimizing content creation for all users.
## What it does
Giggle Gear is an Adobe Express add on that streamlines the gap between image generation and the creation of graphics, presentations and much more. Giggle Gear allows users to quickly import and caption memes, reaction images, and everything in between straight from an extensive database of internet content. We also have a fully functional AI feature that allows users to enter simple prompts and receive a newly generated meme relevant to the content of their slides.
## How we built it
Giggle gear is based on React, a library that pairs very well with Adobe Express and allows for quick responsiveness. We use imgflip, a large meme database that allows users to to create their own templates on the fly. We bridge the gap using their premium API, and as a result we're able to access their large database of images while making it as efficient as possible for all Express users.
## Challenges we ran into
Our greatest challenge was figuring out the Adobe Express add on process, and implementing a well designed front end took time as we tried our best to make our app look as accessible and easy to use as possible. Thankfully due to the wealth of resources that Adobe provides when it came to documentation, we were able to create a polished end product.
## Accomplishments that we're proud of
We are proud of the ease of use that comes with our add on, considering the amount of steps that it cuts out for the average content creator. We're also very proud of how efficiently our program is able to parse through a wide variety of images while giving a wealth of choices to all of our users. We were able to keep the cost of queries low and when the app is officially pushed to the adobe add on section, most features associated with Giggle Gear will be completely free to use for all users.
## What we learned
We learned that when we are creating content of our own, speed and ease of use comes first and that's exactly what we aimed to do. Allowing users to be as creative as possible while giving them access to the power of generative AI can unlock a treasure trove of content that not only engages audiences more, but lets creators explore areas of their presentation that they may never have thought of utilizing before.
## What's next for Giggle Gear
We're aiming to monetize Giggle Gear in partnership with Adobe Express' Premium feature. This will allow us to keep producing AI prompts at lower cost to us, and hopefully allow us to turn a profit on their platform. Giggle Gear will always be a free to use add-on, but there are some features that will allow Adobe Express Premium members to get the absolute out of our software when it comes to digital content creation. | ## Inspiration
We both came to the hackathon without a team and couldn't generate ideas about our project in the Hackathon. So we went to 9gag to look into memes and it suddenly struck us to make an Alexa skill that generates memes. The idea sounded cool and somewhat within our skill-sets so we went forward with it. We procrastinated till the second day and eventually started coding in the noon.
## What it does
The name says it all, you basically scream at Alexa to make memes (Well not quite, speaking would work too) We have working website [www.memescream.net](http://www.memescream.net) which can generate memes using Alexa or keyboard and mouse. We also added features to download memes and share to Facebook and Twitter.
## How we built it
We divided the responsibilities such that one of us handled the entire front-end, while the other gets his hands dirty with the backend. We used HTML, CSS, jQuery to make the web framework for the app and used Alexa, Node.js, PHP, Amazon Web Services (AWS), FFMPEG to create the backend for the skill. We started coding by the noon of the second day and continued uninterruptible till the project was concluded.
## Challenges we ran into
We ran into challenges with understanding the GIPHY API, Parsing text into GIFs and transferring data from Alexa to web-app. (Well waking up next day was also a pretty daunting challenge all things considered)
## Accomplishments that we're proud of
We're proud our persistence to actually finish the project on time and fulfill all requirements that were formulated in the planning phase. We also learned about GIPHY API and learned a lot from the workshops we attended (Well we're also proud that we could wake up a bit early the following day)
## What we learned
Since we ran into issues when we began connecting the web app with the skill, we gained a lot of insight into using PHP, jQuery, Node.js, FFMPEG and GIPHY API.
## What's next for MemeScream
We're eager to publish the more robust public website and the Alexa skill-set to Amazon. | ## Inspiration
There's something about brief glints in the past that just stop you in your tracks: you dip down, pick up an old DVD of a movie while you're packing, and you're suddenly brought back to the innocent and carefree joy of when you were a kid. It's like comfort food.
So why not leverage this to make money? The ethos of nostalgic elements from everyone's favourite childhood relics turns heads. Nostalgic feelings have been repeatedly found in studies to increase consumer willingness to spend money, boosting brand exposure, conversion, and profit.
## What it does
Large Language Marketing (LLM) is a SaaS built for businesses looking to revamp their digital presence through "throwback"-themed product advertisements.
Tinder x Mean Girls? The Barbie Movie? Adobe x Bob Ross? Apple x Sesame Street? That could be your brand, too. Here's how:
1. You input a product description and target demographic to begin a profile
2. LLM uses the data with the Co:here API to generate a throwback theme and corresponding image descriptions of marketing posts
3. OpenAI prompt engineering generates a more detailed image generation prompt featuring motifs and composition elements
4. DALL-E 3 is fed the finalized image generation prompt and marketing campaign to generate a series of visual social media advertisements
5. The Co:here API generates captions for each advertisement
6. You're taken to a simplistic interface where you can directly view, edit, generate new components for, and publish each social media post, all in one!
7. You publish directly to your business's social media accounts to kick off a new campaign 🥳
## How we built it
* **Frontend**: React, TypeScript, Vite
* **Backend**: Python, Flask, PostgreSQL
* **APIs/services**: OpenAI, DALL-E 3, Co:here, Instagram Graph API
* **Design**: Figma
## Challenges we ran into
* **Prompt engineering**: tuning prompts to get our desired outputs was very, very difficult, where fixing one issue would open up another in a fine game of balance to maximize utility
* **CORS hell**: needing to serve externally-sourced images back and forth between frontend and backend meant fighting a battle with the browser -- we ended up writing a proxy
* **API integration**: with a lot of technologies being incorporated over our frontend, backend, database, data pipeline, and AI services, massive overhead was introduced into getting everything set up and running on everyone's devices -- npm versions, virtual environments, PostgreSQL, the Instagram Graph API (*especially*)...
* **Rate-limiting**: the number of calls we wanted to make versus the number of calls we were allowed was a small tragedy
## Accomplishments that we're proud of
We're really, really proud of integrating a lot of different technologies together in a fully functioning, cohesive manner! This project involved a genuinely technology-rich stack that allowed each one of us to pick up entirely new skills in web app development.
## What we learned
Our team was uniquely well-balanced in that every one of us ended up being able to partake in everything, especially things we hadn't done before, including:
1. DALL-E
2. OpenAI API
3. Co:here API
4. Integrating AI data pipelines into a web app
5. Using PostgreSQL with Flask
6. For our non-frontend-enthusiasts, atomic design and state-heavy UI creation :)
7. Auth0
## What's next for Large Language Marketing
* Optimizing the runtime of image/prompt generation
* Text-to-video output
* Abstraction allowing any user log in to make Instagram Posts
* More social media integration (YouTube, LinkedIn, Twitter, and WeChat support)
* AI-generated timelines for long-lasting campaigns
* AI-based partnership/collaboration suggestions and contact-finding
* UX revamp for collaboration
* Option to add original content alongside AI-generated content in our interface | partial |
![Logo](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/challenge_photos/001/482/029/datas/full_width.png)
## Inspiration
Ever wanted to lead your own choir? Wanted to generate music just by your moving your hands?
Its not some magic! Its Hakuna Matata!
## What it does
It determines what hand gesture you are making from the American Sign Language and plays music corresponding to each of them.
## How we built it
* First, we made a webapp using React js with a cool Music Visualizer.
* Added authentication system to it.
* Developed a Deep Learning model using Cognitive Services.
* Integrated the model with webapp and made an API for the same.
* Added the Keyboard Music Player.
## Challenges we ran into
* Experienced great difficult in sending the image from the frontend to the API as API accepted only URLs
* Faced time constraints with respect to the idea.
## Accomplishments that we're proud of
* Even though in the 24hr hackathon, we made an awesome interface with an authentication system.
* Developed a Highly accurate Deep Learning model and trained over a data of thousands of images.
* We managed the work very well right from the team formation to the providing the MVP.
## What we learned
* About postman API.
* Conversion of Image to DataURL and vice versa
## What's next for Sound Mixer
* At moment, it works on images(kept due to less number of Cloud credits). When brought commercially with huge number of cloud credits, can be expanded to videos as well. | ## Inspiration
In Canada alone, there are over 350,000 Canadians who are deaf and around another 3 million who are hard of hearing. However, picking up sign language for them can often be a challenge, especially for deaf children who are often born to hearing parents who often don't know sign language. The inability to communicate with and understand their peers can lead to isolation and loneliness, effectively walling them off from on of the joys of life - communication. We aim to fix this problem by not only enhancing the sign language learning process for those who are hearing impaired, but also encouraging those who are sound of hearing to pick up sign language - breaking down the communication barrier between hearing impaired people and their peers.
## What it does
Our application leverages the power of AI and neural networks to not only detect and identify sign language gestures in real-time, but also provide feedback to users so they can learn more effectively from their mistakes. We also combine this technology with engaging and interactive lessons, ensuring that learning sign language is not only effective but also enjoyable.
## How we built it
To detect hand gestures, used Python and the OpenCV library to format the images being sent through the user's webcam and MediaPipe and SciKit to detect hands gestures and predict the symbol being signed. For the Frontend, we mainly used React.js and Tailwind for the UI and CSS respectively. Finally, for the Backend, we used Express.js and Flask to handle requests from the React application and Python machine learning model respectively.
## Challenges we ran into
Training the model was a big problem as we spent a lot of time near the start trying to find a pretrained model. However, all of the pretrained models we found had very little documentation, so we weren't able to find out how to use them. We only resorted to building and training our own model very late into the hackathon, giving us very little time to make sure it meshed well with the rest of our project. We spent a lot of time dealing with React's async functions and also had a lot of trouble deploying our application.
## Accomplishments that we're proud of
We are proud of being able to accomplish what we've accomplished given the short time frame and our smaller group size.
## What we learned
To not get stuck trying to fix a single stupid bug for hours and instead move on.
## What's next for Silyntax
We aim to allow Silyntax to not only be able to recognize gestures through singular frames, but also through chaining together multiple frames into larger movements, allowing the detection of more complex gestures. We also aim to implement more game modes, such as a mode where players are given a sequence of letters/words and have to compete with one another to see who signs the sequence the fastest (kind of like Typeracer), and also a maze game mode where the player has to sign different words to move around and navigate through the maze. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The menu design was also done in Canva, with a dash of Harvard colors.
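As a sketch of how the backend might hand a voice order to the AI layer (illustrative only; the helper functions below stand in for whichever transcription and language-model services were actually used, and the route name is made up):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def transcribe_audio(audio_bytes):
    """Stand-in for the speech-to-text service used in the real app."""
    raise NotImplementedError

def extract_order_items(transcript):
    """Stand-in for the LLM call that pulls items and modifications out of the transcript."""
    raise NotImplementedError

@app.route("/order", methods=["POST"])
def order():
    transcript = transcribe_audio(request.files["audio"].read())
    items = extract_order_items(transcript)  # e.g. [{"item": "burger", "size": "large"}]
    return jsonify({"transcript": transcript, "items": items})
```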
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Python data types were difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | losing |
## Inspiration
Having previously volunteered and worked with children with cerebral palsy, we were struck by the monotony and inaccessibility of traditional physiotherapy. We came up with a cheaper, more portable, and more engaging way to deliver treatment by creating virtual reality games geared towards 12-15 year olds. We targeted this age group because puberty is a crucial period for retaining plasticity in a child's limbs. We implemented interactive VR games using the Oculus Rift and Leap Motion controllers.
## What it does
We designed games that target specific hand/elbow/shoulder gestures and used a Leap Motion controller to track those gestures. Our system improves the motor skills, cognitive abilities, emotional growth and social skills of children affected by cerebral palsy.
## How we built it
Our games make use of Leap Motion's hand-tracking technology and the Oculus' immersive system to deliver engaging, exciting physiotherapy sessions that patients will look forward to playing. These games were created using Unity and C#, and can be played using an Oculus Rift with a Leap Motion controller mounted on top. We also used an Alienware computer with a dedicated graphics card to run the Oculus.
## Challenges we ran into
The biggest challenge we ran into was getting the Oculus running. None of our computers had the ports and the capabilities needed to run the Oculus because it needed so much power. Thankfully we were able to acquire an appropriate laptop through MLH, but the Alienware computer we got was locked out of Windows. We then spent the first 6 hours re-installing Windows and repairing the laptop, which was a challenge. We also faced difficulties programming the interactions between the hands and the objects in the games, because it was our first time creating a VR game using Unity, Leap Motion controls, and the Oculus Rift.
## Accomplishments that we're proud of
We were proud of our end result because it was our first time creating a VR game with an Oculus Rift and we were amazed by the user experience we were able to provide. Our games were really fun to play! It was intensely gratifying to see our games working, and to know that it would be able to help others!
## What we learned
This project gave us the opportunity to educate ourselves on the realities of not being able-bodied. We developed an appreciation for the struggles people living with cerebral palsy face, and also learned a lot of Unity.
## What's next for Alternative Physical Treatment
We will develop more advanced games involving a greater combination of hand and elbow gestures, and hopefully get testing in local rehabilitation hospitals. We also hope to integrate data recording and playback functions for treatment analysis.
## Business Model Canvas
<https://mcgill-my.sharepoint.com/:b:/g/personal/ion_banaru_mail_mcgill_ca/EYvNcH-mRI1Eo9bQFMoVu5sB7iIn1o7RXM_SoTUFdsPEdw?e=SWf6PO> | ## Inspiration
Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact in people's everyday lives. We also wanted to make an affordable solution, as most solutions for paralysis cost thousands of dollars and are inaccessible. We wanted something modular that we could 3D print and make open source for others to use.
## What it does and how we built it
The main component is a bionic hand assistant called the PulseGrip. We used an ECG sensor to detect electrical signals. When it detects that your muscles are trying to close your hand, it uses a servo motor to close your hand around an object (a foam baseball, for example). If it stops detecting a signal (you're no longer trying to close), it loosens your hand back to a natural resting position. Along with this, it constantly sends a signal through websockets to our Amazon EC2 server and game. This data is stored in a MongoDB database, and using API requests we can communicate between our games, the server and the PulseGrip. We can track live motor speed, angles, and whether the grip is open or closed. Our website is a full-stack application (React styled with Tailwind on the front end, Node.js on the backend). Our website also has games that communicate with the device to test the project and provide entertainment: one tests continuous holding and another rapid inputs, which could be used in recovery as well.
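A minimal sketch of the control-and-telemetry loop, assuming (purely for illustration) a Python-capable controller; `read_ecg_level` and `set_servo_angle` are hypothetical stand-ins for the real sensor and motor drivers, and the server URL is made up:

```python
import json
import time

from websocket import create_connection  # pip install websocket-client

THRESHOLD = 0.6  # normalized activation level that counts as "trying to close"
ws = create_connection("ws://example-ec2-host:8080")  # hypothetical EC2 endpoint

def read_ecg_level():
    """Stand-in for reading the ECG sensor (returns a 0.0-1.0 activation level)."""
    raise NotImplementedError

def set_servo_angle(angle):
    """Stand-in for driving the servo that closes or opens the grip."""
    raise NotImplementedError

while True:
    level = read_ecg_level()
    angle = 90 if level > THRESHOLD else 0  # closed vs. resting position
    set_servo_angle(angle)
    ws.send(json.dumps({"level": level, "angle": angle, "closed": angle == 90}))
    time.sleep(0.05)
```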
## Challenges we ran into
This project forced us to consider different avenues and work through difficulties. Our main problem was that we fried our EMG sensor, twice! This was a major setback, since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead; it is designed for heartbeats, but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our websocket didn't work. We troubleshot for an hour, looking at the WiFi, the device itself and more. Without it, we couldn't send data from the PulseGrip to our server and games. We decided to ask for some mentors' help and reset the device completely, and after switching libraries we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem.
## Accomplishments that we're proud of
Firstly, just getting the device working was a huge achievement, as we had so many setbacks and moments when we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project: we have a physical side with 3D printed materials, sensors and complicated algorithms, and a game side with two (questionably original) games. They are not just random games, but ones that test the user in two different ways that are critical to using the device: short bursts and long-term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats on the device.
## What's next for PulseGrip
* working to improve sensors, adding more games, seeing how we can help people
We think this project had a ton of potential and we can't wait to see what we can do with the ideas learned here.
## Check it out
<https://hacks.pulsegrip.design>
<https://github.com/PulseGrip> | ## Inspiration
Every year, our school does a Grand Challenges Research project where they focus on an important topic in the world. This year, the focus is on mental health: providing cost-effective treatment and making it accessible to everyone. We all may know someone who has a phobia, came back from a tour in the military, or is living with another mental illness (1 in 5 Americans, to be exact). As mental health awareness rises, the availability of appointments with counselors and treatments lessens. Additionally, we believe this could be used to provide at-home inquiries for people who are hesitant to get help. With Alexa M.D., we hope to use IoT (the Internet of Things) to bring the necessary treatment to the patient for better access and cost, and also to reduce the stigma of mental illness.
## What it does
The user can receive information and various treatment options through Alexa M.D. First, the user speaks to Alexa through the Amazon echo, the central interface, and they can either inquire about various medical information or pick at-home treatment options. Through web-scraping of Web M.D. and other sites, Alexa M.D. provides information simply by asking. Next, Alexa M.D. will prompt the user with various treatment options which are a version of exposure therapy for many of the symptoms. The user will engage in virtual reality treatment by re-enacting various situations that may usually cause them anxiety or distress, but instead in a controlled environment through the Oculus Rift. Treatments will incrementally lessen the user's anxieties; they can use the Leap Motion to engage in another dimension of treatment when they are ready to move to the next step. This virtualizes an interaction with many of the stimuli that they are trying to overcome. When the treatment session has concluded, Alexa M.D. will dispense the user's prescribed medication through the automated medicine dispenser, powered by the Intel Edison. This ensures users take appropriate dosages while also encouraging them to go through their treatment session before taking their medication.
## How we built it
We used the Alexa Skills Kit to teach the Amazon Echo to recognize new commands. This enables communication to both the Oculus and our automated medicine dispenser through our backend on Firebase. We generated various virtual environments through Unity; the Leap Motion is connected to the Oculus, which enables the user to interact with their virtual environment. When prompted with a medical question, Alexa M.D. uses web-scraping of various medical websites, including Web M.D., to produce accurate responses. To make the automated medicine dispenser, we 3D printed the dispensing mechanism and laser cut acrylic to provide the structural support. The dispenser is driven by a servo motor via the Intel Edison and releases medication as prescribed by Alexa M.D.
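A minimal sketch of how an Alexa intent could be routed on the backend (illustrative only; the intent name and helper below are hypothetical, not the skill's actual schema):

```python
def lambda_handler(event, context):
    """Toy handler for the raw Alexa Skills Kit request format."""
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "DispenseMedicationIntent":
        trigger_dispenser()  # hypothetical call that flags the Edison-driven dispenser
        speech = "Dispensing your prescribed dose now."
    else:
        speech = "You can ask me a medical question or start a treatment session."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

def trigger_dispenser():
    """Stand-in for writing a 'dispense' flag that the Intel Edison polls (e.g. via Firebase)."""
    raise NotImplementedError
```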
## Challenges we ran into
We found it difficult to sync the various components (Oculus, Intel Edison, Amazon Alexa) and to communicate between all three pieces.
## Accomplishments that we're proud of
The Internet of Things is the frontier of technology, and we are proud of integrating the 3 very distinct components together. Additionally, the pill dispenser was sketched and created all within the span of the hackathon, and we were able to utilize various new methods such as laser cutting.
## What we learned
Through the weekend, we learned a great deal about working with Amazon Web Services, as well as Amazon Alexa and how to integrate these technologies. Additionally, we learned about using modeling software for both 3D printing and laser cutting. Furthermore, we learned how to set up the Arduino shield for the Intel Edison and how to integrate the Leap Motion with the Oculus Rift.
## What's next for Alexa M.D.
We hope that this can become available for all households, and that it can reduce the cost of treatment as well as improve access to it. Costs for regular treatment include transportation, doctors and nurses, pharmacy visits, and more. It can be a first step for people who are hesitant to consult a specialist, or a main component of long-term treatment. Some mental illnesses, such as PTSD, even prevent patients from being able to interact with the outside world, which presents difficulties when seeking treatment. Additionally, we hope that this can reduce the stigma of treating mental illness by integrating such treatments easily into the daily lives of users. Patients can continue their treatments in the privacy of their own home, where they won't feel any pressure. | winning
## About the Project
### TLDR:
Caught a fish? Take a snap. Our AI-powered app identifies the catch, keeps track of stats, and puts that fish in your 3D, virtual, interactive aquarium! Simply click on any fish in your aquarium, and all its details (its rarity, location, and more) appear, bringing your fishing memories back to life. Also, depending on the fish you catch, reel in achievements, such as your first fish caught (ever!), or your first 20 incher. The cherry on top? All users' catches are displayed on an interactive map (built with Leaflet), where you can discover new fishing spots, or plan to get your next big catch :)
### Inspiration
Our journey began with a simple observation: while fishing creates lasting memories, capturing those moments often falls short. We realized that a picture might be worth a thousand words, but a well-told fish tale is priceless. This spark ignited our mission to blend the age-old art of fishing with cutting-edge AI technology.
### What We Learned
Diving into this project was like casting into uncharted waters – exhilarating and full of surprises. We expanded our skills in:
* Integrating AI models (Google's Gemini LLM) for image recognition and creative text generation
* Crafting seamless user experiences in React
* Building robust backend systems with Node.js and Express
* Managing data with MongoDB Atlas
* Creating immersive 3D environments using Three.js
But beyond the technical skills, we learned the art of transforming a simple idea into a full-fledged application that brings joy and preserves memories.
### How We Built It
Our development process was as meticulously planned as a fishing expedition:
1. We started by mapping out the user journey, from snapping a photo to exploring their virtual aquarium.
2. The frontend was crafted in React, ensuring a responsive and intuitive interface.
3. We leveraged Three.js to create an engaging 3D aquarium, bringing caught fish to life in a virtual environment.
4. Our Node.js and Express backend became the sturdy boat, handling requests and managing data flow.
5. MongoDB Atlas served as our net, capturing and storing each precious catch securely.
6. The Gemini AI was our expert fishing guide, identifying species and spinning yarns about each catch (a rough sketch of this call follows right after this list).
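To make step 6 concrete, here is a minimal sketch of the identification call; Python and the model name are used purely for illustration (our actual backend is Node.js, and the prompt wording here is not the production one):

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # model name assumed for illustration

def identify_catch(photo_path):
    """Ask Gemini to identify the species and write a short 'fish tale'."""
    prompt = (
        "Identify the fish species in this photo and return JSON with keys "
        "'species', 'rarity', and a two-sentence 'story' about the catch."
    )
    response = model.generate_content([prompt, Image.open(photo_path)])
    return response.text
```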
### Challenges We Faced
Like any fishing trip, we encountered our fair share of challenges:
* **Integrating Gemini AI**: Ensuring accurate fish identification and generating coherent, engaging stories required fine-tuning and creative problem-solving.
* **3D Rendering**: Creating a performant and visually appealing aquarium in Three.js pushed our graphics programming skills to the limit.
* **Data Management**: Structuring our database to efficiently store and retrieve diverse catch data presented unique challenges.
* **User Experience**: Balancing feature-rich functionality with an intuitive, streamlined interface was a constant tug-of-war.
Despite these challenges, or perhaps because of them, our team grew stronger and more resourceful. Each obstacle overcome was like landing a prized catch, making the final product all the more rewarding.
As we cast our project out into the world, we're excited to see how it will evolve and grow, much like the tales of fishing adventures it's designed to capture. | ## Inspiration
Our project is inspired by the sister of one of our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT’s Accessible Routes interactive map to see where the wheelchair friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations.
We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files.
We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio.
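A minimal sketch of the coordinate-extraction script described above (the file names and the exact regular expression are illustrative; the real script may have differed):

```python
import re

# Hypothetical input: the JavaScript source scraped from the accessible-routes map page
js_source = open("accessible_routes.js").read()

# Pull out "lat, lng" number pairs, ignoring the surrounding JavaScript syntax
pairs = re.findall(r"(-?\d+\.\d+)\s*,\s*(-?\d+\.\d+)", js_source)

with open("latitudes.txt", "w") as lat_file, open("longitudes.txt", "w") as lng_file:
    for lat, lng in pairs:
        lat_file.write(lat + "\n")
        lng_file.write(lng + "\n")
```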
After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map.
The next step was learning how to change the color and image associated with each marker, which was very time intensive, but led us to having our custom logo for each of the markers.
Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map.
Then we combined both of the apps together atop of the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add.
A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together.
We are proud of how effectively we were able to split up our team’s roles based on everyone’s unique skills. In this way, we were able to be maximally productive and play to our strengths.
We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adopt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap
If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world. | ## Inspiration
Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses such as trend recognition on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, CapitalOne, which has volumes of financial transaction data that are very difficult to parse manually, or even programmatically.
We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data.
## What it does
On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses.
## How we built it
The front-end was very simple: we created the platform using Next.js and React.js and hosted it on Vercel.
The back-end was created using Python, where we employed technologies such as Docker and MLBot to perform the data analyses and return charts, which were then rendered on the front-end using ApexCharts.js.
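As a rough sketch of the kind of upload-and-analyze endpoint this describes (illustrative only; the route name and response shape are ours, and the MLBot training step is not reproduced here):

```python
import pandas as pd
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # allow the Next.js front-end to call this endpoint from another origin

@app.route("/analyze", methods=["POST"])
def analyze():
    df = pd.read_csv(request.files["file"])  # the user-uploaded .csv
    numeric = df.select_dtypes("number")
    summary = {
        "columns": list(df.columns),
        "row_count": len(df),
        "means": numeric.mean().to_dict(),
        "correlations": numeric.corr().round(3).to_dict(),
    }
    return jsonify(summary)  # charted on the front-end with ApexCharts.js
```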
## Challenges we ran into
* It was some of our first times working in live time with multiple people on the same project. This advanced our understand of how Git's features worked.
* There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end.
* Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end.
## Accomplishments that we're proud of
* We were able to create a full-fledged, functional product within the allotted time we were given.
* We utilized our knowledge of how APIs worked to incorporate multiple of them into our project.
* We worked positively as a team even though we had not met each other before.
## What we learned
* Learning how to incorporate multiple APIs into one product with Next.
* Learned a new tech-stack
* Learned how to work simultaneously on the same product with multiple people.
## What's next for DataDaddy
### Short Term
* Broaden applicability to more types of datasets and statistical analyses.
* Add more compatibility with SQL/NoSQL commands from Natural Language.
* Attend more hackathons :)
### Long Term
* Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results.
* Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses. | winning |
## Inspiration
We created ASL Bridgify to address the need for an interactive, real-time, pose-estimation-based learning model for ASL. In a post-pandemic world, we foresee that working from home and more remote experiences signal the need to communicate with individuals with hearing disabilities. It is a feature that is missing from various video conferencing, learning, and entertainment-based platforms. Shockingly, Duolingo, the number 1 language learning platform, does not teach ASL.
## What it does
ASL Bridgify is an educational platform that specifically focuses on learning ASL. We provide comprehensive modules that help you learn in scientifically proven ways, along with an easy-to-follow UI and personalized AI assistance in your learning journey. We realize that the future of AI comes in more than chatbot form, so our AI models are integrated within video to track hand movement using MediaPipe and TensorFlow.
## How we built it
We created the platform by leveraging many technologies. Our frontend uses Next.js, Tailwind and Supabase. Our backend used Python libraries such as PyTorch, TensorFlow, and Keras to train our models, with Intel Developer Cloud GPUs and CPUs to expedite the training. We connected the frontend with the backend using Flask. Moreover, we combined our trained models with the Google Search API and the OpenAI API for Retrieval-Augmented Generation (RAG).
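A minimal sketch of how the Flask layer might expose a trained model to the Next.js frontend (illustrative only; the route, payload shape and model path are assumptions, not the project's actual API):

```python
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
model = torch.load("asl_model.pt", map_location="cpu")  # hypothetical path to the trained model
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a flattened list of hand-pose keypoints sent by the frontend
    keypoints = torch.tensor(request.json["keypoints"], dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        logits = model(keypoints)
    return jsonify({"sign_id": int(logits.argmax(dim=1).item())})
```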
## Challenges we ran into
The biggest challenge was time. The time it took to train one large language model, even when using Intel Developer Cloud GPU capabilities, was immense. It was a roadblock because we couldn't test any other code on that computer until the model was done training. Initially we tried to preprocess both words and sentences, using hand pose to map ASL and an encoder-decoder architecture, but we were not able to complete this because of the time constraint. ASL sentences are something we want to incorporate in the future.
## Accomplishments we are proud of
We successfully trained preliminary large language models from scratch using PyTorch on Intel Developer Cloud GPUs. We're thrilled to integrate this accomplishment into our frontend. Through implementing three AI tools (each using different methods, such as calling an API and building with IPEX), we've gained valuable insights into AI. Our excitement grows as we introduce our one-of-a-kind educational platform for ASL to the world. | Hospital visits can be an uneasy and stressful time for parents and children. Since parents aren't always able to be present during children's long hospital stays, MediFeel tries to facilitate the communication process between the child, parent, and doctors and nurses.
The child is able to send a status update about their feelings, and if they need a doctor's assistance, they are able to call for help. The parents are notified of all of this, and the doctor can easily communicate what happened. If a child is feeling upset, the parents know to reach out to the child earlier than they would have otherwise.
For future implementation, the UI of the website would look something like this:
[link](https://invis.io/J4FBSWXEKZF#/273379990_Desktop_HD) | ## Inspiration
The ability to easily communicate with others is something that most of us take for granted in our everyday lives. However, for the millions of hearing impaired and deaf people all around the world, communicating their wants and needs is a battle they have to go through every day. The desire to make the world a more accessible place by bringing ASL to the general public in a fun and engaging manner was the motivation behind our app.
## What it does
Our app is essentially an education platform for ASL that is designed to also be fun and engaging. We provide lessons for basic ASL such as the alphabet, with plans to introduce more lessons in the future. What differentiates our app and makes it engaging is that users can practice their ASL skills right in the app: for any new letter or word they learn, the app uses their webcam along with AI to instantly tell them whether they are making the correct sign. The app also has a skills game that puts what they learnt to the test in a time trial, letting users earn points for every signed letter/word. There is also a leaderboard so that users can compete globally and with friends.
## How we built it
Our app is a React app that we built with different libraries such as MUI, React Icons, Router, React-Webcam, and most importantly Fingerpose along with TensorFlow.js, which power all our AI capabilities for recognizing sign language gestures in the browser.
## Challenges we ran into
Our main struggle within this app was implementing TensorFlow.js, as none of us had experience with this library prior to this event. Recognizing gestures in the browser in real time initially came with a lot of lag that led to a bad user experience, and so it took a lot of configuring and debugging to get a much more seamless experience.
## Accomplishments that we're proud of
As a team, we were initially building another application with a similar theme that involved hardware components, and we had to pivot quite late due to some unforeseen complications. We're proud of being able to turn things around in such a short amount of time and make a product that we would be proud to show anyone. We're also proud of building a project with real-world usage that we all feel strongly about, and that we think really does need a solution.
## What we learned
Through this experience we all learned more about React as a framework, in addition to real time AI with Tensorflowjs.
## What's next for Battle Sign Language
Battle Sign Language has many more features that we would look to provide in the future. We currently have limited lessons, and our gestures are limited to the alphabet, so we would expand the app to include more complex ASL such as words and sentences. We would also look forward to adding multiplayer games so that people can have fun learning and competing with friends simultaneously. | partial
## Inspiration 💡
In the face of a growing loneliness epidemic, notably among millennial women, 42% of whom fear loneliness more than cancer, and young adults with 61% feeling "serious loneliness,” we built a solution that extends beyond mere digital interaction. For women, the consequences of loneliness raises the risks of heart disease and stroke by nearly a third and is intricately connected with heightened rates of depression, anxiety, and the potential for cognitive decline. Feelings of isolation are also a factor that drive many women to enter toxic relationships.
Our AI boyfriend concept – called DamonAI – is born from the urgent need to combat these alarming trends by providing emotional support and companionship, aiming to mitigate the adverse effects of loneliness. Designed to provide companionship via adaptive conversation and physical gift sending, DamonAI enhances self-worth and fulfillment. It provides users with connection and emotional support in a safe, controlled, and engaging environment.
## What it does 💪
![Knowledge](https://i.imgur.com/DAjI3JE.png)
The primary platform for DamonAI's interaction is through Telegram, where it engages in text-based conversations. However, our AI boyfriend is engineered to serve as more than a chatbot. DamonAI, designed to mimic the evolving nature of human relationships, leverages advanced natural language processing to remember and build upon every conversation. This ensures dynamic and evolving interactions. Central to its design is an integration with Instacart that responds to keywords in conversations to dispatch physical gifts, adding a layer of tangible interaction. This feature mimics the attentiveness found in relationships, making the AI's support feel remarkably genuine and tailored to the user's emotional state.
DamonAI also utilizes WeHead to simulate verbal face-to-face interactions, which can seamlessly transition to Vision Pros. This gives users the option of enjoying a more immersive and mobile interaction experience, enriching the depth of their virtual companionship with DamonAI.
To align with our mission and maintain a healthy user relationship with DamonAI, we've implemented strategies to prevent dependency. First, we've established a daily 2-hour time limit on interactions, preventing excessive use. Approaching the 2 hour mark, DamonAI will start to nudge the user to pursue other activities, before temporarily pausing interactions until it resets. Furthermore, DamonAI is programmed to recognize when to suggest users engage in social activities beyond the screen (via questions or in its responses to user inputs), gently nudging them towards enriching real-world interactions. This dual approach helps DamonAI fulfill its role as a supportive companion, while also encouraging users to develop confidence and self-worth necessary for pursuing and maintaining healthy external relationships.
## UX flow:
Personalized interactions to emulate the understanding and care found in human relationships, addressing the emotional void contributing to mental health issues like depression and anxiety.
Scheduled and spontaneous communications to offer a sense of belonging and reduce feelings of isolation, potentially lowering the risk of heart disease and cognitive decline associated with loneliness.
Instacart integration for delivering thoughtful gifts, reinforcing the tangible aspects of care and connection, crucial in combating the emotional and physical ramifications of social isolation.
## How we built it ⚙️
![Logic Flow](https://i.imgur.com/jJK3oly.png)
DamonAI's architecture integrates various components for a sophisticated user experience. At its core, a Vector Database acts as long-term memory, retaining conversation history for personalized interactions. The AI leverages an Internal Thoughts Model to generate nuanced responses and utilizes Selenium WebDriver for dynamic interaction capabilities, such as orchestrating Instacart deliveries based on conversational cues. Input queries can come through Telegram (text-based) or WeHead (speech-based), with responses delivered via the same channels, ensuring seamless communication whether in text or through augmented reality on WeHead.
## Challenges we ran into 😤
In developing DamonAI's Instacart integration, we encountered a significant hurdle with the platform's captcha system, which was triggered by the high frequency of our automated requests. To overcome this, we engineered a timed request protocol that spaced out interactions to avoid detection as robotic activity, thus maintaining the fluidity of the user experience without interruption. We also faced a recurring challenge in aligning the AI's natural language understanding with Instacart's API to accurately trigger gift deliveries. The difficulty lay in refining the AI's keyword detection within a user's emotional context to initiate an appropriate Instacart action. We developed a solution using a context-aware algorithm that not only parsed trigger words but also assessed the conversational sentiment. This ensured that the gift-sending feature activated under correct emotional circumstances, enhancing the user's experience without compromising on privacy or relevance.
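A minimal sketch of that context-aware trigger check (the keyword list, threshold and helper name are illustrative, not the production values):

```python
GIFT_KEYWORDS = {"flowers", "chocolate", "roses"}  # illustrative trigger words

def maybe_send_gift(message, sentiment_score):
    """Trigger an Instacart order only when a keyword appears AND the mood calls for it.

    `sentiment_score` is assumed to be in [-1, 1]; the -0.3 threshold is an example.
    """
    words = set(message.lower().split())
    matched = words & GIFT_KEYWORDS
    if matched and sentiment_score < -0.3:
        place_instacart_order(matched)  # hypothetical Selenium-driven helper

def place_instacart_order(items):
    raise NotImplementedError
```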
Ethically, the creation of DamonAI brought forth the concern of user overdependence, potentially leading to the erosion of human connections. We proactively addressed this by establishing ethical guidelines that included built-in mechanisms to encourage users to maintain real-world relationships. Time limits and regular prompts within the AI's dialogue framework remind users of the importance of human interaction and suggest offline activities.
## Accomplishments that we're proud of ✨
![Post](https://i.imgur.com/gMO4VMq.jpeg)
In half a day, we achieved significant social media engagement by posting about DamonAI on 2 Instagram pages (collective following of 3M, mostly female audience), and collaborated with a TikTok influencer (named Damon) to use their likeness, name, and audience for the first iteration of the boyfriend. In less than 6 hours we had 120 sign-ups on our waiting list. This early-stage social campaign already shows real consumer demand for on-demand companionship.
## What we learned 🙌
Throughout this journey, we learned the importance of nuanced user engagement, the technical intricacies of integrating third-party services like Instacart, and the ethical considerations in AI development. Balancing technical innovation with ethical responsibility taught us to prioritize user well-being above all, ensuring DamonAI enhances rather than replaces human interaction. This project underscored the significance of continuous learning and adaptation in the rapidly evolving field of AI.
## What's next for our AI Boyfriend 🚀
Moving forward with DamonAI, we're focusing on expanding its capabilities and reach. We plan to refine the AI's emotional intelligence algorithms to better understand and respond to user needs, as well as enhancing the Instacart integration for a wider range of gift options. Furthermore, we're looking to grow our user base by reaching out to more influencers and expanding our social media presence. Lastly, we will continue to iterate on our ethical framework to ensure DamonAI remains a beneficial and supportive presence without fostering overdependence. | ## Inspiration
The motivation behind creating catPhish arises from the unfortunate reality that many non-tech-savvy individuals often fall victim to phishing scams. These scams can result in innocent people losing nearly their entire life savings due to the deceptive tactics employed by cybercriminals. By leveraging both AI technology and various APIs, this tool aims to empower users to identify and prevent potential threats. It serves as a vital resource in helping users recognize whether a website is reputable and trusted, thereby contributing to the prevention of financial and personal data loss.
## What it does
catPhish integrates multiple APIs, including the OpenAI API, to combat phishing schemes effectively. Designed as a user friendly Chrome extension, catPhish unites various services into a single tool. With just a simple click, users can diminish their doubts or avoid potential mistakes, making it an accessible solution for users of all levels of technical expertise.
## How we built it
CatPhish was developed using React for the user interface visible in the browser, while Python and JavaScript were employed for the backend operations. We integrated various tools to enhance its effectiveness in combating phishing attempts. These tools include the Google Safe Browsing API, which alerts users about potentially harmful websites; VirusTotal; and Exerra Anti-Phish, which specializes in detecting phishing threats. In addition, we incorporated OpenAI to leverage advanced technology for identifying malicious websites. To assess the credibility of websites, we employed the IP Quality Score tool, which evaluates factors like risk level. For managing user authentication and data storage, we relied on Firebase, a comprehensive platform that facilitates secure user authentication and data management. By combining these components, CatPhish emerges as a sturdy solution for safeguarding users against online scams, offering enhanced security and peace of mind during web browsing.
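As a rough sketch of how the Flask backend might aggregate those verdicts for the extension (illustrative only; the route and the helper functions are stand-ins for the real API calls, whose request formats are not reproduced here):

```python
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # lets the Chrome extension call the Flask server across origins

def check_safe_browsing(url):
    """Stand-in for the Google Safe Browsing lookup."""
    raise NotImplementedError

def check_ip_quality_score(url):
    """Stand-in for the IP Quality Score risk lookup."""
    raise NotImplementedError

@app.route("/scan", methods=["POST"])
def scan():
    url = request.json["url"]
    verdicts = {
        "safe_browsing_flagged": check_safe_browsing(url),
        "high_risk_score": check_ip_quality_score(url),
    }
    verdicts["suspicious"] = any(verdicts.values())
    return jsonify(verdicts)
```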
## Challenges we ran into
Throughout the development process, we came across various permissions and security related challenges essential to the project. Issues such as CORS (Cross-Origin Resource Sharing) and web-related security hurdles posed a significant amount of obstacles. While there were no straightforward solutions to these challenges, we adopted a proactive approach to address them effectively. One of the strategies we employed involved leveraging Python's Flask CORS to navigate around permission issues arising from cross origin requests. This allowed us to facilitate communication between different domains. Additionally, we encountered security issues such as unauthorized routing, however through careful analysis, we patched up these vulnerabilities to ensure the integrity and security of the application. Despite the complexity of the challenges, our team remained resilient and resourceful, allowing us to overcome them through critical thinking and innovative problem solving techniques. One noteworthy challenge we faced was the limitation of React browser routing within Chrome extensions. We discovered that traditional routing methods didn't work as expected within this environment, which allowed us to explore alternative solutions. Through research and experimentation, we learned about MemoryBrowsing, one of React's components. Implementing this approach enabled us to get around the limitations of Chrome's native routing restrictions.
## Accomplishments that we're proud of
We take great pride in our ability to successfully integrate several functionalities into a single project, despite facing several complexities and challenges along the way. Our team's collaborative effort, resilience, and support for one another have been extremely resourceful in overcoming obstacles and achieving our goals. By leveraging our expertise and working closely together, we were able to navigate through many technical issues, implement sophisticated features, and deliver a solid solution that addresses the critical need for enhanced security against phishing attacks. We take pride in the teamwork and trust among our team members.
## What we learned
Our journey with this project has been an extremely profound learning experience for all of us. As a team, it was our first venture into building a browser extension, which provided valuable insights into the complexity of extension development. We navigated through the process, gaining a deeper understanding of extension architecture and functionality. One of the significant learning points was integrating Python with TypeScript to facilitate communication between different parts of the project. This required us to manage API requests and data fetching efficiently, enhancing our skills in backend/frontend integration. Furthermore, diving into routing mechanisms within the extension environment expanded our knowledge base, with some team members developing a stronger grasp of routing concepts and implementation. The use of Tailwind CSS for styling purposes presented another learning opportunity. We explored its features and capabilities, improving our skills in responsive design and UI development. Understanding how extensions operate and interact with web browsers was another enlightening aspect of the project as it actually differed from how a web application operates. It provided practical insights into the inner workings of browser extensions and their functionalities. Additionally, our hands-on experience with Firebase empowered us to practice database implementation. Leveraging Firebase's user friendly interface, we gained experience in managing and storing data securely. The project also afforded us the chance to integrate multiple APIs using both Python and JavaScript, strengthening our understanding of API integration. Implementing these APIs within the React framework, coupled with TypeScript, improved our ability to build sturdy and scalable applications. Overall, our journey with this project has been marked by continuous learning and growth, furnishing us with valuable skills and insights that will undoubtedly benefit us in future endeavors.
## What's next for catPhish
The future holds exciting possibilities for CatPhish as we continue to enhance its capabilities and expand its offerings. One of our key objectives is to integrate additional trusted APIs to further increase its phishing detection capabilities. By leveraging a wide range of API services, we aim to strengthen CatPhish's ability to identify and reduce phishing threats. We are also exploring the development of a proprietary machine learning model trained specifically on phishing attempts. This dedicated model will allow CatPhish to evolve and adapt to emerging phishing techniques. As the cybersecurity realm grows, cybercriminals are using increasingly advanced techniques, such as man-in-the-middle (MiTM) attacks carried out through sophisticated phishing pages. In addition to refining our machine learning capabilities, we plan to enhance the functionality of the OpenAI GPT chat bot. By using advanced features such as web browsing with Bing and expanding its conversational abilities, we see ourselves creating a more comprehensive and intuitive user experience. | ## Inspiration
hiding
## What it does
## How I built it
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for Spine | losing |
Electric boogaloo lights up the fret/string combo to play for different chords, and can show you how to play songs in realtime by lighting up the banjo neck.
It's also an electronic banjo, as opposed to an electric banjo, meaning that instead of just an amplifier on regular string audio, it knows which string is being strummed and which frets are being held down, and can use those to play whatever audio files you wish, similarly to an electric keyboard. | Duet's music generation revolutionizes how we approach music therapy. We capture real-time brainwave data using Emotiv EEG technology, translating it into dynamic, personalized soundscapes live. Our platform, backed by machine learning, classifies emotional states and generates adaptive music that evolves with your mind. We are all intrinsically creative, but some—whether language or developmental barriers—struggle to convey it. We’re not just creating music; we’re using the intersection of art, neuroscience, and technology to let your inner mind shine.
## About the project
**Inspiration**
Duet revolutionizes the way children with developmental disabilities—approximately 1 in 10 in the United States—express their creativity through music by harnessing EEG technology to translate brainwaves into personalized musical experiences.
Daniel and Justin have extensive experience teaching music to children, but working with those who have developmental disabilities presents unique challenges:
1. Identifying and adapting resources for non-verbal and special needs students.
2. Integrating music therapy principles into lessons to foster creativity.
3. Encouraging improvisation to facilitate emotional expression.
4. Navigating the complexities of individual accessibility needs.
Unfortunately, many children are left without the tools they need to communicate and express themselves creatively. That's where Duet comes in. By utilizing EEG technology, we aim to transform the way these children interact with music, giving them a voice and a means to share their feelings.
At Duet, we are committed to making music an inclusive experience for all, ensuring that every child—and anyone who struggles to express themselves—has the opportunity to convey their true creative self!
**What it does:**
1. Wear an EEG
2. Experience your brain waves as music! Focus and relaxation levels will change how fast/exciting vs. slow/relaxing the music is.
**How we built it:**
We started off by experimenting with Emotiv’s EEGs — devices that feed a stream of brain wave activity in real time! After trying it out on ourselves, the CalHacks stuffed bear, and the Ariana Grande cutout in the movie theater, we dove into coding. We built the backend in Python, leveraging the Cortex library that allowed us to communicate with the EEGs. For our database, we decided on SingleStore for its low latency, real-time applications, since our goal was to ultimately be able to process and display the brain wave information live on our frontend.
Traditional live music is done procedurally, with rules manually fixed by the developer to decide what to generate. On the other hand, existing AI music generators often generate sounds through diffusion-like models and pre-set prompts. However, we wanted to take a completely new approach — what if we could have an AI be a live “composer”, where it decided based on the previous few seconds of live emotional data, a list of available instruments it can select to “play”, and what it previously generated to compose the next few seconds of music? This way, we could have live AI music generation (which, to our knowledge, does not exist yet). Powered by Google’s Gemini LLM, we crafted a prompt that would do just that — and it turned out to be not too shabby!
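A minimal sketch of that live "composer" loop (the prompt wording and model name below are illustrative, not the exact ones we used):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
composer = genai.GenerativeModel("gemini-1.5-pro")  # model name assumed for illustration

def compose_next_bars(focus, relaxation, previous_snippet, instruments):
    """Ask the LLM to 'compose' the next few seconds from live EEG metrics."""
    prompt = (
        f"You are a live composer. Focus={focus:.2f}, relaxation={relaxation:.2f}. "
        f"Available instruments: {', '.join(instruments)}. "
        f"The previous bars were: {previous_snippet}. "
        "Return the next 4 seconds of music as a list of (note, duration) pairs."
    )
    return composer.generate_content(prompt).text  # parsed and handed to Sonic Pi
```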
To play our AI-generated scores live, we used Sonic Pi, a Ruby-based library that specializes in live music generation (think DJing in code). We fed this and our brain wave data to a frontend built in Next.js to display the brain waves from the EEG and sound spectrum from our audio that highlight the correlation between them.
**Challenges:**
Our biggest challenge was coming up with a way to generate live music with AI. We originally thought it was impossible and that the tech wasn’t “there” yet — we couldn’t find anything online about it, and even spent hours thinking about how to pivot to another idea that we could use our EEGs with.
However, we eventually pushed through and came up with a completely new method of doing live AI music generation that, to our knowledge, doesn’t exist anywhere else! It was most of our first times working with this type of hardware, and we ran into many issues with getting it to connect properly to our computers — but in the end, we got everything to run smoothly, so it was a huge feat for us to make it all work!
**What’s next for Duet?**
Music therapy is on the rise – and Duet aims to harness this momentum by integrating EEG technology to facilitate emotional expression through music. With a projected growth rate of 15.59% in the music therapy sector, our mission is to empower kids and individuals through personalized musical experiences. We plan to implement our programs in schools across the states, providing students with a unique platform to express their emotions creatively. By partnering with EEG companies, we’ll ensure access to the latest technology, enhancing the therapeutic impact of our programs. Duet gives everyone a voice to express emotions and ideas that transcend words, and we are committed to making this future a reality!
**Built with:**
* Emotiv EEG headset
* SingleStore real-time database
* Python
* Google Gemini
* Sonic Pi (Ruby library)
* Next.js | ## 💡 Inspiration
We love Piano Tiles, but how about we make it a more satisfying tactile experience AND fun and flashy????? Introducing *it's lit*, a hardware and web game for your local fidgeter to tap physical buttons along with lights and music!!
## 🚨 What it does
it’s lit allows users to take a break anytime and give their fingers a little bit of activity. The game helps improve hand-eye coordination, or just gives the opportunity to vibe to colourful lights and music!! *(hooray!)*
From our website, the user can select the song they’d like to ~vibe~ and play along with! Once gameplay has started, the user will redirect their attention to the physical hardware. They will try to tap the buttons to incoming LEDs as they appear, which also match up with the beats of the song. At the end of the song, the user’s accuracy will be displayed on the website screen, so that they can continue to improve their performance!! No lives lost :)
## ⚡ How we built it
**Hardware**: Circuits were first prototyped using Arduinos before being moved to the Raspberry Pi. We used LEDs to represent the tile-like gameplay and pushbuttons for user input. 8-bit shift registers with parallel out were used to control LEDs. Raspberry Pi was programmed in C.
**Frontend**: Prototyped on Figma, then used React to build the site
## 🐿️ Challenges we ran into
Since this was our first time making a hardware hack, we were having a bit of trouble setting it up. We're grateful for Hack the North's resources that helped us get started!
On the hardware side, we ran into an issue of needing too many LED output pins. To solve this, we used an 8-bit shift register, but reading the documentation for the SN74HC595N 8-bit shift register took a while. This lets us use 6 pins instead of 16 to drive the 16 LEDs.
## 🚀 Accomplishments and what we learned
Being able to try something new with the hardware and make a game that we could play was very rewarding!! We love music and lights, and we had a lot of fun building this project!
## 🏔️ What's next for *it's lit*
* Adding a multiplayer feature so friends can tap to flashing lights together (and see who's better ;) )
* Automated song creation between the music and the lights! | winning |
## Inspiration
When travelling in a new place, it is often the case that one doesn't have an adequate amount of mobile data to search for information they need.
## What it does
Mr.Worldwide allows the user to send queries and receive responses regarding the weather, directions, news and translations over SMS, and therefore without the need for any data.
## How I built it
A natural language understanding model was built and trained with Rasa NLU. This model has been trained to work as well as possible with many variations of query styles so it can act as a chatbot. The queries are sent up to a server by SMS with the Twilio API. A response is then sent back the same way to function as a chatbot.
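A minimal sketch of the SMS round trip (assuming the older rasa_nlu Interpreter API and a Flask webhook; the model path and the `handle_intent` dispatcher are hypothetical):

```python
from flask import Flask, request
from rasa_nlu.model import Interpreter  # older rasa_nlu API assumed here
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
interpreter = Interpreter.load("./models/current")  # hypothetical path to the trained NLU model

def handle_intent(parsed):
    """Stand-in for the weather/directions/news/translation handlers."""
    raise NotImplementedError

@app.route("/sms", methods=["POST"])
def sms_reply():
    query = request.form.get("Body", "")
    parsed = interpreter.parse(query)  # {'intent': {...}, 'entities': [...]}
    response = MessagingResponse()
    response.message(handle_intent(parsed))
    return str(response)
```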
## Challenges I ran into
Implementing the Twilio API was a lot more time consuming than we assumed it would be. This was due to the fact that a virtual environment had to be set up and our connection to the server originally was not directly connecting.
Another challenge was providing the NLU model with adequate information to train on.
## Accomplishments that I'm proud of
We are proud that our end result works as we intended it to.
## What I learned
A lot about NLU models and implementing APIs.
## What's next for Mr.Worldwide
Potentially expanding the scope of what services/information it can provide to the user. | ## Inspiration
After our initial hack failed, and with only 12 hours of time remaining, we decided to create a proof-of-concept that was achievable in the time remaining. As Twilio was a sponsor, we had the idea of using SMS to control a video game. We created Hackermon to demonstrate how this technology has potential, and as a proof-of-concept of more practical uses.
## What it does
Controlled entirely via SMS, two players can select a knockoff Pokemon and fight each other, with the ability to block or attack. The game is turn based, and has checks to ensure the person texting the API is the correct person, so cheating is effectively impossible.
## How we built it
The backend is built with Node.js and Express.js, with SMS controls made possible with Twilio's API. The frontend is built in HTML, CSS, JavaScript and jQuery and uses AJAX to constantly poll the backend for updates.
## Challenges we ran into
Sleep deprivation was a major challenge that affected us. Trying to focus on learning a new API and developing with a new framework was very challenging after being awake for 22 hours. However, having to prototype something so rapidly was very rewarding - we had to carefully prioritise and cut features in order to create a demoable product in time.
## What we learned
Our initial idea for a project involved using Facebook's Instant Game API. We discovered that many of Facebook's APIs aren't as documented as we expected, and some of their post-Cambridge Analytica security features can cause major unexpected issues.
This was the first time we'd ever used the Twilio API, and it was great to learn how powerful the platform is. Initially, we'd never had to handle getting requests from the backend to the frontend in Node.js before, so managing to get this to work consistently was amazing - even though we know it's not done in the most efficient way.
## What's next for Hackermon
While the game itself is only a basic proof-of-concept, the mechanic of using SMS to control a game has many applications. For example, a quiz webapp used in university classes could accept inputs via SMS rather than requiring students to download a clunky and badly designed app. | >
> `2023-10-10 Update`
>
> We've moved all of our project information to our GitHub repo so that it's up to date.
> Our project is completely open source, so please feel free to contribute if you want!
> <https://github.com/soobinrho/BeeMovr>
>
>
> | partial |
## Realm Inspiration
Our inspiration stemmed from our fascination with the growing fields of AR and virtual worlds, from full-body tracking to 3D visualization. We were interested in realizing ideas in this space, specifically with sensors detecting movements and seamlessly integrating 3D gestures. We felt that the best way to show our interest in this technology and its potential was to communicate using it. This is what led us to create Realm, a technology that allows users to create dynamic, collaborative presentations with voice commands, image searches and complete body-tracking for the most customizable and interactive presentations. We envision an increased ease in dynamic presentations and limitless collaborative work spaces with improvements to technology like Realm.
## Realm Tech Stack
Web View (AWS SageMaker, S3, Lex, DynamoDB and ReactJS): Realm's stack relies heavily on its reliance on AWS. We begin by receiving images from the frontend and passing it into SageMaker where the images are tagged corresponding to their content. These tags, and the images themselves, are put into an S3 bucket. Amazon Lex is used for dialog flow, where text is parsed and tools, animations or simple images are chosen. The Amazon Lex commands are completed by parsing through the S3 Bucket, selecting the desired image, and storing the image URL with all other on-screen images on to DynamoDB. The list of URLs are posted on an endpoint that Swift calls to render.
AR View ( AR-kit, Swift ): The Realm app renders text, images, slides and SCN animations in pixel perfect AR models that are interactive and are interactive with a physics engine. Some of the models we have included in our demo, are presentation functionality and rain interacting with an umbrella. Swift 3 allows full body tracking and we configure the tools to provide optimal tracking and placement gestures. Users can move objects in their hands, place objects and interact with 3D items to enhance the presentation.
## Applications of Realm:
In the future we hope to see our idea implemented in real workplaces in the future. We see classrooms using Realm to provide students interactive spaces to learn, professional employees a way to create interactive presentations, teams to come together and collaborate as easy as possible and so much more. Extensions include creating more animated/interactive AR features and real-time collaboration methods. We hope to further polish our features for usage in industries such as AR/VR gaming & marketing. | ## Inspiration
We wanted to get home safe
## What it does
Stride pairs you with walkers just like UBC SafeWalk, but outside of campus grounds, to get you home safe!
## How we built it
React Native, Express JS, MongoDB
## Challenges we ran into
Getting environment setups working
## Accomplishments that we're proud of
Finishing the app
## What we learned
Mobile development
## What's next for Stride
Improve the app | ## Inspiration
Inspired by those with Asperger's, Sentimentex is the idea of Siyuan Liu, Faazilah Mohamed, and Saniyah Shaikh at PennApps xvi (Fall 2017). It aims to help those who have a hard time discerning the emotions from texts, emails, and other text-based media by using machine learning, natural language processing, and webscraping to calculate the percentages of emotions present in the given text.
## What it does
Sentimentex takes in text from the user and returns the probabilities of predictions for emotions present in that text, including joy, disgust, anger, fear, sadness, and guilt. Most of the current sentiment analyzers simply detect whether a sentence is positive, negative, and neutral, and we believe that a more fine-grained classifier can be useful for a variety of reasons, including for those with disabilities.
## How I built it
Over 7,000 entities of labelled data were used from a repo. A machine learning classifier, SVM, was trained on the labelled data and then used to predict the emotions for a user-inputted text. Other scores, such as the sentiment rating (whether a sentence is positive, negative, or neutral) and the sentiment score outputted by Google Natural Language Processing API, were considered as features but ultimately did not turn out to be promising. The code for the project was primarily in Python, though R, Javascript, and HTML/CSS were used as well.
## Challenges I ran into
The biggest challenge was incorporating Python with HTML using Flask. Since none of the members of the team have significant front-end experience, a fair bit of time was spent trying to get the Flask app to be hosted and work.
## Accomplishments that I'm proud of
Our model is able to predict between the 7 emotions in consideration with over 3x the accuracy of predicting at random. We are also proud of managing to integrate Python with the web application since this is the first time we have attempted to do that.
## What I learned
We learned the cases in which to use certain Machine Learning models, interacted with multiple APIs from the Google Cloud service, and learned how to use Flask and Javascript for the very first time.
## What's next for Sentimentex
All of us are very excited for the prospect of using Machine Learning and Natural Language Processing to capture the essence of written text. We plan on continuing to experiment with larger datasets and different features to improve the accuracy score of a given text, as well as to look into Speech APIs for fine-grained sentiment analysis as well. | winning |
## Inspiration
One of our team members, Jack, was inspired by his dad to create Get-It-Done. His dad works as a freelance landscaper and owns a small business in Vancouver, BC. When he first started his work, he had trouble finding and acquiring new clients and had to rely on his network to get contracts. Similarly, those without access to higher education or specialized training often have trouble finding jobs that pay higher than minimum wage. It was important for our team to address this issue because hiring people with specialized training can be time-consuming and costly, when the jobs themselves could be completed by someone without experience.
## What it does
Get-It-Done is an online platform that allows users to work niche, labour jobs and earn extra income on their own time. Users sign up and provide their availability, preferred work locations, and niche skills, such as grocery shopping or mowing lawns. Consumers can then request work from those who are currently available in their area of interest and who possess the skills that they need for the job.
## How we built it
We used Xcode and Swift for the frontend, Node.js for the backend, and Alamofire to connect the two.
Those with less programming experience focused on designing the user interface. For this, we used Figma to create the layout of each page, and Miro to visualize the control flow.
## Challenges we ran into
Our biggest challenge was figuring out how to configure the Alamofire HTTP framework and link it to our Xcode project. To overcome this challenge, we consulted with mentors, watched YouTube tutorials, and read Stack Overflow threads for guidance.
Another challenge we faced was having a mentor tell us that our overarching idea existed through a few other companies. Get-It-Done, however, is different because users are not selling their services as a product—they instead provide their availability, location, and labour skills and are offered jobs by consumers. Users can then focus on doing good work for good reviews rather than searching for job opportunities or marketing their services. Moreover, consumers can immediately see the hours during which users are available, eliminating the need to make time arrangements. They can also set their own hourly rate for the job and negotiate it with the user instead of paying for a service. Allowing the user to negotiate pay rates and say no to job offers puts both parties on an even platform. This makes the hiring and job searching process more efficient and accessible.
## Accomplishments that we're proud of
For this project, we managed to successfully create a working backend with a database of users. Furthermore, we were able to use Alamofire to access the database within the application.
Our team consists of four first-year university students. For all of us, this is our first or second hackathon, so although we do not have much programming experience, we are proud of how well our team worked together and utilized each other’s skills.
## What we learned
We learned how to use Alamofire, third party libraries in Swift, and optionals on the frontend. We also learned how to use Figma and the hackathon basics–Git, Devpost, and Miro. | ## Inspiration
We wanted to do something good for the people and the environment, but had no idea, then suddenly realized that there's no eCommerce that lets you borrow things from other people, so we decided to work on it, since it seemed like a really good idea and something innovative.
## What it does
This platform (iOS app and website) lets you borrow things from other people around you area, the idea is that you can share the things you don't use with other people and get paid for it, this is really good for the environment since it means that not everybody has to buy things. It also makes it possible for people who can't afford some of these things to try them for a way cheaper price.
All the payments will be done by debit/credit card (or PayPal) and the buyer has to pick up and drop off the goods from the owner's place. For every item there's a price that the buyer would have to pay in case he breaks the goods or if he hold onto them, we would also do an id check just to make sure that nobody has double accounts and also to prosecute them.
Furthermore, there's way more human interaction than a regular eCommerce since you get things from other people and you have to talk with them and get out of your house
## How we built it
We built the website with bootstrap and wrote down a lot of the html, the backend is written in PHP and it offers restful API that can be used by the website and the iOS app, both of the projects aren't totally done yet, the website is partially done, we just need to set up everything using the APIs. The iOS app has all the graphics and was written totally from scratch, but just a small part of logic, we need to finish everything up using our own APIs. The authentication is done through Firebase and we've also used Google Maps' and Google Places' APIs because we wanted to make an interactive map with all the good you could borrow .
## Challenges we ran into
Shortage of time, that was definitely the biggest problem, but it also took us a while to figure out an actual plan which we all agreed on and which seemed achievable. Furthermore, we started with some service and languages we didn't know and that made us waste a lot of time.
## Accomplishments that we're proud of
We're proud of our idea, we really think it can be really good for our planet and for human beings. We're also proud of being able to partially realise something so ambitious, overall it was a great experience.
## What we learned
We learned that team-work is the key to success in these type of competitions and we also deepened our knowledge in computer science. We also learnt that having a good gameplay is really important and that's where we lacked, next time we will try to be more organised and make a better plan.
## What's next for Shareal
We might work on this project because it really seems like a good idea, but we need to think about college first, we're all freshmen and we're starting on Monday so we need to focus on that for a while. Later on we will probably continue with this project, we really think that this project can be really big! | ## Inspiration
As roommates, we found that keeping track of our weekly chore schedule and house expenses was a tedious process, more tedious than we initially figured.
Though we created a Google Doc to share among us to keep the weekly rotation in line with everyone, manually updating this became hectic and cumbersome--some of us rotated the chores clockwise, others in a zig-zag.
Collecting debts for small purchases for the house split between four other roommates was another pain point we wanted to address. We decided if we were to build technology to automate it, it must be accessible by all of us as we do not share a phone OS in common (half of us are on iPhone, the other half on Android).
## What it does
**Chores:**
Abode automatically assigns a weekly chore rotation and keeps track of expenses within a house. Only one person needs to be a part of the app for it to work--the others simply receive a text message detailing their chores for the week and reply “done” when they are finished.
If they do not finish by close to the deadline, they’ll receive another text reminding them to do their chores.
**Expenses:**
Expenses can be added and each amount owed is automatically calculated and transactions are automatically expensed to each roommates credit card using the Stripe API.
## How we built it
We started by defining user stories and simple user flow diagrams. We then designed the database where we were able to structure our user models. Mock designs were created for the iOS application and was implemented in two separate components (dashboard and the onboarding process). The front and back-end were completed separately where endpoints were defined clearly to allow for a seamless integration process thanks to Standard Library.
## Challenges we ran into
One of the significant challenges that the team faced was when the back-end database experienced technical difficulties at the tail end of the hackathon. This slowed down our ability to integrate our iOS app with our API. However, the team fought back while facing adversity and came out on top.
## Accomplishments that we're proud of
**Back-end:**
Using Standard Library we developed a comprehensive back-end for our iOS app consisting of 13 end-points, along with being able to interface via text messages using Twilio for users that do not necessarily want to download the app.
**Design:**
The team is particularly proud of the design that the application is based on. We decided to choose a relatively simplistic and modern approach through the use of a simple washed out colour palette. The team was inspired by material designs that are commonly found in many modern applications. It was imperative that the designs for each screen were consistent to ensure a seamless user experience and as a result a mock-up of design components was created prior to beginning to the project.
**Use case:**
Not only that, but our app has a real use case for us, and we look forward to iterating on our project for our own use and a potential future release.
## What we learned
This was the first time any of us had gone into a hackathon with no initial idea. There was a lot of startup-cost when fleshing out our design, and as a result a lot of back and forth between our front and back-end members. This showed us the value of good team communication as well as how valuable documentation is -- before going straight into the code.
## What's next for Abode
Abode was set out to be a solution to the gripes that we encountered on a daily basis.
Currently, we only support the core functionality - it will require some refactoring and abstractions so that we can make it extensible. We also only did manual testing of our API, so some automated test suites and unit tests are on the horizon. | losing |
## Inspiration
We were looking for an innovative solution to keep us aware of what we were eating.
## What it does
Nutlogger is a web and mobile application that tracks nutritional data over a period of the day, month, year. With this data, we hope that users can get a better understanding of their eating habits.
## How I built it
Lots of hard work and patience. The web application was built with MERN and the mobile application was built with Android Studio.
## Challenges I ran into
Parsing the information for Google's vision API was difficult.
## Accomplishments that I'm proud of
Developing a functional application that actually works
## What I learned
* Google cloud platform
* React Typescript
* android camera
## What's next for Nutlogger
* account system and profiles
* admin panel for doctors
* chat with nutritionists
## Credits:
Icons made by Freepik from [www.flaticon.com](http://www.flaticon.com) is licensed by CC 3.0 BY | Check out our project at <http://gitcured.com/>
## Inspiration
* We chose the Treehacks Health challenge about creating a crowdsourced question platform for sparking conversations between patients and physicians in order to increase understanding of medical conditions.
* We really wanted to build a platform that did much more than just educate users with statistics and discussion boards. We also wanted to explore the idea that not many people understand how different medical conditions work conjunctively.
* Often, people don't realize that medical conditions don't happen one at a time. They can happen together, thus raising complications with prescribed medication that, when taken at the same time, can be dangerous together and may lead to unpredictable outcomes. These are issues that the medical community is well aware of but your average Joe might be oblivious to.
* Our platform encourages people to ask questions and discuss the effects of living with two or more common diseases, and take a closer look at the apex that form when these diseases begin to affect the effects of each other on one's body.
## What it does
* In essence, the platform wants patients to submit questions about their health, discuss these topics in a freestyle chat system while exploring statistics, cures and related diseases.
* By making each disease, symptom, and medication a tag rather than a category, the mixing of all topics is what fuels the full potential of this platform. Patients, and even physicians, who might explore the questions raised regarding the overlap between, for example Diabetes and HIV, contribute to the collective curiosity to find out what exactly happens when a patient is suffering both diseases at the same time, and the possible outcomes from the interactions between the drugs that treat both diseases.
* Each explored topic is searchable and the patient can delve quite deep into the many combinations of concepts. GitCured really is fueled by the questions that patients think of about their healthcare, and depend on their curiosity to learn and a strong community to discuss ideas in chat-style forums.
## How we built it
Languages used: Node.js, Sockets.IO, MongoDB, HTML/CSS, Javascript, ChartJS, Wolfram Alpha, Python, Bootstrap
## Challenges we ran into
* We had problems in implementing a multi-user real-time chat using sockets.io for every question that has been asked on our platform.
* Health data is incredibly hard to find. There are certain resources, such as data.gov and university research websites that are available, but there is no way to ensure quality data that can be easily parseable and usable for a health hack. Most data that we did find didn't help us much with the development of this app but it provided an insight to us to understand the magnitude of the health related problems.
* Another issue we faced was to differentiate ourselves from other services that meet part of the criteria of the prompt. Our focus was to critically think how each medical concept affects people, along with providing patients a platform to discuss their healthcare. The goal was to design a space that encourages creative and curious thinking, and ask questions that might never have been previously answered. We wanted to give patients a space to discuss and critically think about how each medical concept affects each other.
## Accomplishments that we're proud of
We were pretty surprised we got this far into the development of this app. While it isn't complete, as apps never are, we had a great experience of putting ideas together and building a health-focused web platform from scratch.
## What we learned
* There is a very big issue that there is no central and reliable source for health data. People may have clear statistics on finance or technology, but there is so much secrecy and inconsistencies that come with working with data in the medical field. This creates a big, and often invisible, problem where computer scientists find it harder and harder to analyze biomedical data compared to other types of data. If we hadn't committed to developing a patient platform, I think our team would have worked on designing a central bank of health data that can be easily implementable in new and important health software. Without good data, development of bio technology will always be slow when developers find themselves trapped or stuck. | ## Inspiration
When it comes to finding solutions to global issues, we often feel helpless: making us feel as if our small impact will not help the bigger picture. Climate change is a critical concern of our age; however, the extent of this matter often reaches beyond what one person can do....or so we think!
Inspired by the feeling of "not much we can do", we created *eatco*. *Eatco* allows the user to gain live updates and learn how their usage of the platform helps fight climate change. This allows us to not only present users with a medium to make an impact but also helps spread information about how mother nature can heal.
## What it does
While *eatco* is centered around providing an eco-friendly alternative lifestyle, we narrowed our approach to something everyone loves and can apt to; food! Other than the plenty of health benefits of adopting a vegetarian diet — such as lowering cholesterol intake and protecting against cardiovascular diseases — having a meatless diet also allows you to reduce greenhouse gas emissions which contribute to 60% of our climate crisis. Providing users with a vegetarian (or vegan!) alternative to their favourite foods, *eatco* aims to use small wins to create a big impact on the issue of global warming. Moreover, with an option to connect their *eatco* account with Spotify we engage our users and make them love the cooking process even more by using their personal song choices, mixed with the flavours of our recipe, to create a personalized playlist for every recipe.
## How we built it
For the front-end component of the website, we created our web-app pages in React and used HTML5 with CSS3 to style the site. There are three main pages the site routes to: the main app, and the login and register page. The login pages utilized a minimalist aesthetic with a CSS style sheet integrated into an HTML file while the recipe pages used React for the database. Because we wanted to keep the user experience cohesive and reduce the delay with rendering different pages through the backend, the main app — recipe searching and viewing — occurs on one page. We also wanted to reduce the wait time for fetching search results so rather than rendering a new page and searching again for the same query we use React to hide and render the appropriate components. We built the backend using the Flask framework. The required functionalities were implemented using specific libraries in python as well as certain APIs. For example, our web search API utilized the googlesearch and beautifulsoup4 libraries to access search results for vegetarian alternatives and return relevant data using web scraping. We also made use of Spotify Web API to access metadata about the user’s favourite artists and tracks to generate a personalized playlist based on the recipe being made. Lastly, we used a mongoDB database to store and access user-specific information such as their username, trees saved, recipes viewed, etc. We made multiple GET and POST requests to update the user’s info, i.e. saved recipes and recipes viewed, as well as making use of our web scraping API that retrieves recipe search results using the recipe query users submit.
## Challenges we ran into
In terms of the front-end, we should have considered implementing Routing earlier because when it came to doing so afterward, it would be too complicated to split up the main app page into different routes; this however ended up working out alright as we decided to keep the main page on one main component. Moreover, integrating animation transitions with React was something we hadn’t done and if we had more time we would’ve liked to add it in. Finally, only one of us working on the front-end was familiar with React so balancing what was familiar (HTML) and being able to integrate it into the React workflow took some time. Implementing the backend, particularly the spotify playlist feature, was quite tedious since some aspects of the spotify web API were not as well explained in online resources and hence, we had to rely solely on documentation. Furthermore, having web scraping and APIs in our project meant that we had to parse a lot of dictionaries and lists, making sure that all our keys were exactly correct. Additionally, since dictionaries in Python can have single quotes, when converting these to JSONs we had many issues with not having them be double quotes. The JSONs for the recipes also often had quotation marks in the title, so we had to carefully replace these before the recipes were themselves returned. Later, we also ran into issues with rate limiting which made it difficult to consistently test our application as it would send too many requests in a small period of time. As a result, we had to increase the pause interval between requests when testing which made it a slow and time consuming process. Integrating the Spotify API calls on the backend with the frontend proved quite difficult. This involved making sure that the authentication and redirects were done properly. We first planned to do this with a popup that called back to the original recipe page, but with the enormous amount of complexity of this task, we switched to have the playlist open in a separate page.
## Accomplishments that we're proud of
Besides our main idea of allowing users to create a better carbon footprint for themselves, we are proud of accomplishing our Spotify integration. Using the Spotify API and metadata was something none of the team had worked with before and we're glad we learned the new skill because it adds great character to the site. We all love music and being able to use metadata for personalized playlists satisfied our inner musical geek and the integration turned out great so we're really happy with the feature. Along with our vast recipe database this far, we are also proud of our integration! Creating a full-stack database application can be tough and putting together all of our different parts was quite hard, especially as it's something we have limited experience with; hence, we're really proud of our service layer for that. Finally, this was the first time our front-end developers used React for a hackathon; hence, using it in a time and resource constraint environment for the first time and managing to do it as well as we did is also one of our greatest accomplishments.
## What we learned
This hackathon was a great learning experience for all of us because everyone delved into a tool that they'd never used before! As a group, one of the main things we learned was the importance of a good git workflow because it allows all team members to have a medium to collaborate efficiently by combing individual parts. Moreover, we also learned about Spotify embedding which not only gave *eatco* a great feature but also provided us with exposure to metadata and API tools. Moreover, we also learned more about creating a component hierarchy and routing on the front end. Another new tool that we used in the back-end was learning how to perform database operations on a cloud-based MongoDB Atlas database from a python script using the pymongo API. This allowed us to complete our recipe database which was the biggest functionality in *eatco*.
## What's next for Eatco
Our team is proud of what *eatco* stands for and we want to continue this project beyond the scope of this hackathon and join the fight against climate change. We truly believe in this cause and feel eatco has the power to bring meaningful change; thus, we plan to improve the site further and release it as a web platform and a mobile application. Before making *eatco* available for users publically we want to add more functionality and further improve the database and present the user with a more accurate update of their carbon footprint. In addition to making our recipe database bigger, we also want to focus on enhancing the front-end for a better user experience. Furthermore, we also hope to include features such as connecting to maps (if the user doesn't have a certain ingredient, they will be directed to the nearest facility where that item can be found), and better use of the Spotify metadata to generate even better playlists. Lastly, we also want to add a saved water feature to also contribute into the global water crisis because eating green also helps cut back on wasteful water consumption! We firmly believe that *eatco* can go beyond the range of the last 36 hours and make impactful change on our planet; hence, we want to share with the world how global issues don't always need huge corporate or public support to be solved, but one person can also make a difference. | partial |
This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and a scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things and Han solidified the front end and Antoine wrote + tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS brought us some painful pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | ## Inspiration: We're trying to get involved in the AI chat-bot craze and pull together cool pieces of technology -> including Google Cloud for our backend, Microsoft Cognitive Services and Facebook Messenger API
## What it does: Have a look - message Black Box on Facebook and find out!
## How we built it: SO MUCH PYTHON
## Challenges we ran into: State machines (i.e. mapping out the whole user flow and making it as seamless as possible) and NLP training
## Accomplishments that we're proud of: Working NLP, Many API integrations including Eventful and Zapato
## What we learned
## What's next for BlackBox: Integration with google calendar - and movement towards a more general interactive calendar application. Its an assistant that will actively engage with you to try and get your tasks/events/other parts of your life managed. This has a lot of potential - but for the sake of the hackathon, we thought we'd try do it on a topic that's more fun (and of course, I'm sure quite a few us can benefit from it's advice :) ) | ## Inspiration
Mental health has become one of the most prominent issues today that has impacted a high percentage of people. This has taken a negative toll on their lives and have made people feel like they do not belong anywhere. Due to this, our group decided to assist these people by creating a phone application that minimizes these negative feeling by providing a helping hand and guiding them to additional aid if necessary.
## What it does
Our application utilizes a chat bot using voice recognition to communicate with users while responding accordingly to their mood. It strives to help their overall mentality and guide them to a greater overall personal satisfaction.
## How we built it
We used Android Studio to create an android application that incorporates Firebase for user authentication and data management using their database. In addition, the chat bot uses Dialogflow's machine learning capabilities and dialog intents to simulate a real life conversation while providing the option for anonymity. In conjunction to Dialogflow, Avaya's API was utilized for its voice recognition and connection to emergency situation through SMS and phone calling.
## Challenges we ran into
It was very challenging for us to implement the Avaya API because of its compatibility with Java DK, making it difficult to get the correct HTTP connection needed. This required specific Java versions as well as maven to be able to integrate it in conjunction with the data output from Avaya's API. In addition, the Firebase implementation provided difficulties because of it is NoSQL database which made it tough to retrieve and interact with the data.
## Accomplishments that we're proud of
Despite the challenges faced, we were still able to implement both the Avaya API, which is now able to both call and sent text messages, and the Firebase database to store all the user data. This all came together with our final product where the chat bot is able to interact with, call, and send text messages when required.
## What we learned
The biggest takeaway from this is learning to think outside the box and understand that there is always another way around a seemingly unsolvable goal. For example, the Avaya API library was difficult to implement because it required downloading a library and using an intermediary such as maven to access the library. However, despite this obstacle, our team was still able to find an alternative in accessing the API through Curl calls and access the needed data. A similar obstacle happened for Firebase database where the pull requests would not process as required, but we were able to find an alternative way to connect to the Firebase and still retrieve the needed data.
## What's next for ASAP Assistance
The more the chat bot is utilized, the better the communication will be between the user and the bot. Further training
will improve the the bot's capabilities which means it could use many more intents to improve overall user experience. With continued contribution to the logical capabilities of the bot, a wider range of communication can be supported between the user and the bot. | winning |
## Inspiration
The inspiration of our project is rooted in relatable and often amusing real-life situations that many of us have experienced. It's those moments of almost forgetting a friend's birthday, the accidental inclusion of an ingredient they're allergic to or prefer to avoid in a shared meal, and even the occasional mental lapse in recalling their address despite numerous visits. These instances serve as the driving force behind our idea to create a solution that not only addresses these common slip ups but also enhances the overall quality of our interpersonal relationships.
## What it does
Introducing our innovative mobile application LifeLens. LifeLens is designed to streamline interpersonal connections and is the comprehensive friend management tool. Users have the ability to input personal information such as birthdays, allergies/food restrictions, and residential addresses. The uniqueness of our app lies in its group functionality, enabling users to join multiple groups comprised of their friends. Within these groups, individuals can readily access and peruse the relevant information of their friends, fostering a more informed and considerate social environment. There are also additional features such as profile editing and showing the number of days before their friend's birthday. Gone are the days of forgetting birthdays or inadvertently overlooking dietary preferences. This app serves as a sophisticated yet user-friendly solution to enhance the quality of social interactions and facilitate seamless communication among friends.
## How we built it
Our project boasts a user-friendly interface crafted with Flutter, ensuring a seamless and enjoyable user experience. The backend functionalities were implemented with Python FastAPI, providing a robust foundation for efficient operations. To securely store and manage users' information, we seamlessly integrated Kintone's database, emphasizing reliability and scalability. Authentication and data security are paramount, and for this, we employed Auth0. This not only facilitates a secure login process but also ensures the protection of users' sensitive information.
## Challenges we ran into
One notable hurdle was encountered while working with Kintone, where handling multiple keys for different tables posed a complex puzzle that took some time and dedication to unravel. Debugging became a significant aspect, especially when fine-tuning API calls and scrutinizing JSON bodies to ensure seamless communication between the front and back end. Connecting these vital components required meticulous troubleshooting and collaboration.
## Accomplishments that we're proud of
Some notable accomplishments that our team is proud of regarding this project include:
-successful implementation of a user-friendly interface using Flutter, providing a smooth and intuitive experience for our users
-development of a robust and efficient backend utilizing Python FastAPI, ensuring the seamless operation of our application
-integration of Auth0 for user authentication, enhancing the security of the login process and safeguarding users' sensitive information
## What we learned
We acquired proficiency in developing user-friendly interfaces using Flutter, developed a strong understanding of backend development with Python FastAPI, gained valuable experience in structuring and managing databases with Kintone, and deepened our understanding of Auth0's role in identity verification and data protection.
## What's next for LifeLens
Here are some features and improvements we want LifeLens incorporate:
* Emergency Contacts: Integrate a section for users to input and manage emergency contacts. This addition can be crucial for ensuring the safety and well-being of users and their friends.
* Notifications and Reminders: Implement a notification system for important events such as birthdays or upcoming gatherings. Reminders can help users stay connected and engaged with their friends.
* Privacy Controls: Introduce customizable privacy settings, enabling users to choose what information is visible to different friend groups. This adds a layer of personalization and control over shared details.
* Integration with Other Platforms: Explore integration with other social media platforms or calendar apps for a more seamless user experience and connectivity.
* Enhanced Security Measures: Continuously evaluate and enhance security measures, ensuring the protection of user data and privacy. | ## Inspiration
If you're lucky enough to enjoy public speaking, we're jealous of you. None of us like public speaking, and we realized that there are not a lot of ways to get real-time feedback on how we can improve without boring your friends or family to listen to you.
We wanted to build a tool that would help us practice public-speaking - whether that be giving a speech or doing an interview.
## What it does
Stage Fight analyzes your voice, body movement, and word choices using different machine learning models in order to provide real-time constructive feedback about your speaking. The tool can give suggestions on whether or not you were too stiff, used too many crutch words (umm... like...), or spoke too fast.
## How we built it
Our platform is built upon the machine learning models from Google's Speech-to-Text API and using OpenCV and trained models to track hand movement. Our simple backend server is built on Flask while the frontend is built with no more than a little jQuery and Javascript.
## Challenges we ran into
Streaming live audio while recording from the webcam and using a pool of workers to detect hand movements all while running the Flask server in the main thread gets a little wild - and macOS doesn't allow recording from most of this hardware outside of the main thread. There were lots of problems where websockets and threads would go missing and work sometimes and not the next. Lots of development had to be done pair-programming style on our one Ubuntu machine. Good times!
## Accomplishments that we're proud of
Despite all challenges, we overcame them. Some notable wins include stringing all components together, using efficient read/writes to files instead of trying to fix WebSockets, and cool graphs.
## What we learned
A lot of technology, a lot about collaboration, and the Villager Puff matchup (we took lots of Smash breaks). | ## Inspiration
One of us works at a coffeeshop here in Montreal. There is no centralized system for organizing restocking of supplies, so we decided to fill the need with a technological solution.
## What it does
Low on Beans allows employees and managers to keep track of when stock is running low, and automatically alerts the designated restocker of each item.
## How I built it
We used Xcode to build an iOS app, as well as Sketch to design graphics.
## Challenges I ran into
Learning a new language and interface overnight.
## Accomplishments that I'm proud of
Making our first app at our first hackathon!!
## What I learned
That hackathons are really fun and full of food and helpful, kind folks.
## What's next for Low on Beans
More functionality and features, making it more appealing and useful for businesses and users.
Logo credit: In Stock by Dmitry Orlov from the Noun Project | partial |
The Book Reading Bot (brb) programmatically flips through physical books, and using TTS reads the pages aloud. There are also options to download the pdf or audiobook.
I read an article on [The Spectator](http://columbiaspectator.com/) how some low-income students cannot afford textbooks, and actually spend time at the library manually scanning the books on their phones. I realized this was a perfect opportunity for technology to help people and eliminate repetitive tasks. All you do is click start on the web app and the software and hardware do the rest!
Another use case is for young children who do not know how to read yet. Using brb, they can read Dr. Seuss alone! As kids nowadays spend too much time on the television, I hope this might lure kids back to children books.
On a high level technical overview, the web app (bootstrap) sends an image to a flask server which uses ocr and tts. | ## Inspiration
Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book.
Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars!
Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible.
## What it does
Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud.
## How we built it
**Software (Alex K)** -
On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS.
The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words.
Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation.
**Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses.
## Challenges we ran into
The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging!
Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time.
Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length.
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together.
## Acknowledgements
The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon. | ## Inspiration
Bookshelves are worse than fjords to navigate. There is too much choice, and indecision hits when trying to pick out a cool book at a library or bookstore. Why isn’t there an easy way to compare the ratings of different books from just the spine? That’s where BookBud comes in. Paper books are a staple part of our lives - everyone has a bookshelf, hard to find them, very manual organisation
## What it does
Bookbud is Shazam but for books. Bookbud allows users to click on relevant text relating to their book in a live video stream while they scan the shelves. Without needing to go through the awkward process of googling long book titles or finding the right resource, readers can quickly find useful information on their books.
## How we built it
We built it from the ground up using Swift. The first component involves taking in camera camera input. We then implement Apple’s Vision ML framework to retrieve the text recognised within the scene. This text is passed into the second component that deals with calling the Google Books API to retrieve the data to be displayed.
## Challenges we ran into
We ran into an unusual bug in the process of combining the two halves of our project. The first half was the OCR piece that takes in a photo of a bookshelf and recognises text such as title, author and publisher, and the second half was the piece that speaks directly to the Google client to retrieve details such as average rating, maturity\_level and reviews from text. More generally, we ran into compatibility issues as Apple recently shifted from the pseudo-deprecated UIKit to SwiftUI and this required many hours of tweaking to finally ensure the different components played well together.
We also initially tried to separate each book’s spine from a bookshelf can be tackled easily through openCV but we did not initially code in objective c++ so it was not compatible with the rest of our code.
## Accomplishments that we're proud of
We were able to successfully learn how to use and implement Apple Vision ML framework to run OCR on camera input to extract a book title. We also successfully interacted with the Google API to retrieve average ratings and title for a book, integrating the two into an interface.
## What we learned
For 3 of 4 on the team, it was the first time working with Swift or mobile app development. This proved to be a steep learning curve, but one that was extremely rewarding. Not only was simulation a tool we drew on extensively in our process, but we also learned about different objects and syntax that Swift uses compared to C.
## What's next for BookBud
There are many technical details BookBud could improve on:
Improved UI
Basic improvements and features include immediately prompting a camera,
Booklovers need an endearing UI. Simple, intuitive - but also stylish and chic.
Create a recommendation system of books for the reader depending on the books that readers have looked at/wanted more information on in the past or their past reading history
Do this in AR, instead of having it be a photo, overlaying each book with a color density that corresponds to the rating or even the “recommendation score” of each book.
Image Segmentation through Bounding Boxes
Automatically detect all books in the live stream and suggest which book has the highest recommendation score.
Create a ‘find your book’ feature that allows you to find a specific book amidst the sea of books in a bookshelf.
More ambitious applications…
Transfer AR overlay of the bookshelf into a metaversal library of people and their books. Avid readers can join international rooms to give book recommendations and talk about their interpretations of material in a friendly, communal fashion.
I can imagine individuals wanting NFTs of the bookshelves of celebrities, their families, and friends. There is a distinct intellectual flavor of showing what is on your bookshelf.
NFT book?
Goodreads is far superior to Google Books, so hopefully they start issuing developer keys again! | winning |
We have the best app, the best. A tremendous app. People come from all over the world to tell us how great our app is. Believe us, we know apps.
With Trump Speech Simulator, write a tweet in Donald Trump's voice and our app will magically stitch a video of Trump speaking the words you wrote. Poof!
President Trump often holds long rallies with his followers, where he makes speeches that are then uploaded on Youtube and feature detailed subtitles. We realized that we could parse these subtitles to isolate individual words. We used ffmpeg to slice rally videos and then intelligently stitch them back together. | ## Inspiration
Gone are the days of practicing public speaking in a mirror. You shouldn’t need an auditorium full of hundreds of people to be able to visualize giving a keynote speech. This app allows people to put themselves in public speaking situations that are difficult to emulate in every day life. We also wanted to give anyone who wants to improve their speech, including those with speech impediments, a safe space to practice and attain feedback.
## What it does
The Queen’s Speech allows users to use Google Cardboard with a virtual reality environment to record and analyze their audience interaction while giving a virtual speech. Using 3D head tracking, we are able to give real time feedback on where the speaker is looking during the speech so that users can improve their interaction with the audience. We also allow the users to play their speech back in order to listen to pace, intonation, and content. We are working on providing immediate feedback on the number of "um"s and "like"s to improve eloquence and clarity of speech.
## How we built it
Incorporating Adobe After Effects and the Unity game engine, we used C# scripting to combine the best of 360 degree imagery and speech feedback.
## Challenges we ran into
Connecting to the Microsoft Project Oxford proved more difficult than expected on our Mac laptops than the typical PC. We couldn't integrate real 360 footage due to lack of Unity support.
## Accomplishments that we're proud of
Being able to provide a 3D like video experience through image sequencing, as well as highlighting user focus points, and expanding user engagement. Hosting on Google Cardboard makes it accessible to more users.
## What's next for The Queen's Speech
Currently working on word analysis to track "Ums" and "Likes" and incorporating Project Oxford, as well as more diverse 3D videos. | ## Inspiration
Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book.
Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars!
Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible.
## What it does
Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud.
## How we built it
**Software (Alex K)** -
On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS.
The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words.
Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation.
**Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses.
## Challenges we ran into
The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging!
Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time.
Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length.
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together.
## Acknowledgements
The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon. | partial |
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)
Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.
![image](https://user-images.githubusercontent.com/65676392/215330789-e38f2b41-1d7b-45b9-bb4f-09be3ffb9bf8.png)
## 💫 Inspiration
>
> "Art is the signature of civilizations." - Beverly Sills
>
>
>
Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***
We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in art, interact with the real world digitally, and share your creations with your friends!
>
> "Art is the reflection of life, and life is the reflection of art." - Unknow
>
>
>
Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.
![image](https://user-images.githubusercontent.com/65676392/215330736-0e670fe9-4b35-47f5-a948-a8cc107e78e1.png)
![4](https://user-images.githubusercontent.com/65676392/215330565-568a319a-6175-434e-b2de-5017ea4853c5.png)
![5](https://user-images.githubusercontent.com/65676392/215330572-36799049-dc33-430d-b59b-59ad50eb9e7a.png)
Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.
![](https://user-images.githubusercontent.com/65676392/215331038-055999cb-85ad-4383-8373-f47d3534457d.png)
>
> "Art is not what you see, but what you make others see." - Claude Monet
>
>
>
The features I listed above are great! But what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.
![image](https://user-images.githubusercontent.com/65676392/215331079-f676ea67-5e5c-4164-9c92-969919ef285b.png)
![image](https://user-images.githubusercontent.com/65676392/215331103-10c5a04c-f4f8-48a1-b40c-a1ff06202ffa.png)
I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, Make good art.
# ❤️ Use Cases
### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece, using the computer vision features such as the color picker and eraser.
### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.
### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.
### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real-time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.
# 👋 Gestures Tutorial
![image](https://user-images.githubusercontent.com/65676392/215335093-d911eaa1-0cc6-4e78-adc7-b63b323b2f74.png)
![image](https://user-images.githubusercontent.com/65676392/215335107-09c394a4-4811-4199-b692-74ef7377b23c.png)
![image](https://user-images.githubusercontent.com/65676392/215335122-8a517c4a-1374-42f0-ac71-6372a63a7075.png)
![image](https://user-images.githubusercontent.com/65676392/215335137-61a1bd8a-a95c-4e0d-806c-53c443dcdd9d.png)
![image](https://user-images.githubusercontent.com/65676392/215335143-93bc8edb-c2b2-4a8f-b562-d67b8524ac66.png)
# ⚒️ Engineering
Ah, this is where even more fun begins!
## Stack
### Frontend
We designed the frontend with Figma and after a few iterations, we had an initial design to begin working with. The frontend was made with React and Typescript and styled with Sass.
### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails we simply use a filesystem database.
## Computer Vision AI
We use MediaPipe to grab the coordinates of the joints and upload images. WIth the coordinates, we plot with CanvasRenderingContext2D on the canvas, where we use algorithms and vector calculations to determinate the gesture. Then, for image generation, we use the DeepAI open source library.
# Experimentation
We were using generative AI to generate images, however we ran out of time.
![image](https://user-images.githubusercontent.com/65676392/215340713-9b4064a0-37ac-4760-bd35-e6a30c2f4613.png)
![image](https://user-images.githubusercontent.com/65676392/215340723-ee993e2b-70bb-4aa3-a009-ac4459f23f72.png)
# 👨💻 Team (”The Sprint Team”)
@Sheheryar Pavaz
@Anton Otaner
@Jingxiang Mo
@Tommy He | ## Inspiration
We want to have some fun and find out what we could get out of Computer Vision API from Microsoft.
## What it does
This is a web application that allows the user to upload an image, generates an intelligent poem from it and reads the poem out load with different chosen voices.
## How we built it
We used Python interface of Cognitive Service API from Microsoft and built a web application with django. We used a public open source tone generator to play different tones reading the poem to the users.
## Challenges we ran into
We learned django from scratch. It's not very easy to use. But we eventually made all the components connect together using Python.
## Accomplishments that we're proud of
It’s fun!
## What we learned
It's difficult to combine different components together.
## What's next for PIIC - Poetic and Intelligent Image Caption
We plan to make an independent project with different technology than Cognitive Services and published to the world. | ## Inspiration
Our journey to creating this project stems from a shared realization: the path from idea to execution is fraught with inefficiencies that can dilute even the most brilliant concepts. As developers with a knack for turning visions into reality, we've faced the slow erosion of enthusiasm and value that time imposes on innovation. This challenge is magnified for those outside the technical realm, where a lack of coding skills transforms potential breakthroughs into missed opportunities. Harvard Business Review and TechCrunch analyzed Y Combinator startups and found that around 40% of founders are non-technical.
Drawing from our experiences in fast-paced sectors like health and finance, we recognized the critical need for speed and agility. The ability to iterate quickly and gather user feedback is not just beneficial but essential in these fields. Yet, this process remains a daunting barrier for many, including non-technical visionaries whose ideas have the potential to reshape industries.
With this in mind, we set out to democratize the development process. Our goal was to forge a tool that transcends technical barriers, enabling anyone to bring their ideas to life swiftly and efficiently. By leveraging our skills and insights into the needs of both developers and non-developers alike, we've crafted a solution that bridges the gap between imagination and tangible innovation, ensuring that no idea is left unexplored due to the constraints of technical execution.
This project is more than just a tool; it's a testament to our belief that the right technology can unlock the potential within every creative thought, transforming fleeting ideas into impactful realities.
## What it does
Building on the foundation laid by your vision, MockupMagic represents a leap toward democratizing digital innovation. By transforming sketches into interactive prototypes, we not only streamline the development process but also foster a culture of inclusivity where ideas, not technical prowess, stand in the spotlight. This tool is a catalyst for creativity, enabling individuals from diverse backgrounds to participate actively in the digital creation sphere.
The user can upload a messy sketch on paper to our website. MockupMagic will then digitize your low-fidelity prototype into a high-fidelity replica with interactive capabilities. The user can also see code alongside the generated mockups, which serves as both a bridge to tweak the generated prototype and a learning tool, gently guiding users toward deeper technical understanding. Moreover, the integration of a community feedback mechanism through the Discussion tab directly within the platform enhances the iterative design process, allowing for real-time user critique and collaboration.
MockupMagic is more than a tool; it's a movement towards a future where the digital divide is narrowed, and the translation of ideas into digital formats is accessible to all. By empowering users to rapidly prototype and refine their concepts, we're not just accelerating the pace of innovation; we're ensuring that every great idea has the chance to be seen, refined, and realized in the digital world.
## How we built it
Conceptualization: The project began with brainstorming sessions where we discussed the challenges non-technical individuals face in bringing their ideas to life. Understanding the value of quick prototyping, especially for designers and founders with creative but potentially fleeting ideas, we focused on developing a solution that accelerates this process.
Research and Design: We conducted research to understand the needs of our target users, including designers, founders, and anyone in between who might lack technical skills. This phase helped us design a user-friendly interface that would make it intuitive for users to upload sketches and receive functional web mockups.
Technology Selection: Choosing the right technologies was crucial. We decided on a combination of advanced image processing and AI algorithms capable of interpreting hand-drawn sketches and translating them into HTML, CSS, and JavaScript code. We leveraged and finetuned existing AI models from MonsterAPI and GPT API and tailored them to our specific needs for better accuracy in digitizing sketches.
Development: The development phase involved coding the backend logic that processes the uploaded sketches, the AI model integration for sketch interpretation, and the frontend development for a seamless user experience. We used the Reflex platform to build out our user-facing website, capitalizing on their intuitive Python-like web development tools.
Testing and Feedback: Rigorous testing was conducted to ensure the accuracy of the mockups generated from sketches. We also sought feedback from early users, including designers and founders, to understand how well the tool met their needs and what improvements could be made.
## Challenges we ran into
We initially began by building off our own model, hoping to aggregate quality training data mapping hand-drawn UI components to final front-end components, but we quickly realized this data was very difficult to find and hard to scrape for. Our model performs well for a few screens however it still struggles to establish connections between multiple screens or more complex actions.
## Accomplishments that we're proud of
Neither of us had much front-end & back-end experience going into this hackathon, so we made it a goal to use a framework that would give us experience in this field. After learning about Reflex during our initial talks with sponsors, we were amazed that Web Apps could be built in pure Python and wanted to jump right in. Using Reflex was an eye-opening experience because we were not held back by preconceived notions of traditional web development - we got to enjoy learning about Reflex and how to build products with it. Reflex’s novelty also translates to limited knowledge about it within LLM tools developers use to help them while coding, this helped us solidify our programming skills through reading documentation and creative debugging methodologies - skills almost being abstracted away by LLM coding tools. Finally, our favorite part about doing hackathons is building products we enjoy using. It helps us stay aligned with the end user while giving us personal incentives to build the best hack we can.
## What we learned
Through this project, we learned that we aren’t afraid to tackle big problems in a short amount of time. Bringing ideas on napkins to full-fledged projects is difficult, and it became apparent hitting all of our end goals would be difficult to finish in one weekend. We quickly realigned and ensured that our MVP was as good as it could get before demo day.
## What's next for MockupMagic
We would like to fine-tune our model to handle more edge cases in handwritten UIs. While MockupMagic can handle a wide range of scenarios, we hope to perform extensive user testing to figure out where we can improve our model the most. Furthermore, we want to add an easy deployment pipeline to give non-technical founders even more autonomy without knowing how to code. As we continue to develop MockupMagic, we would love to see the platform being used even at TreeHacks next year by students who want to rapidly prototype to test several ideas! | winning |
## Inspiration
We wanted to take an ancient video game that jump-started the video game industry (Asteroids) and be able to revive it using Virtual Reality.
## What it does
You are spawned in a world that has randomly generated asteroids and must approach one of four green asteroids to beat the stage. Lasers are utilized to destroy asteroids after a certain amount of collisions. Forty asteroids are always present during gameplay and are attracted to you via gravity.
## How we built it
We utilized Unity (C#) alongside an HTC Vive libraries. The Asset Store was utilized to have celestial images for our skybox environment.
## Challenges we ran into
Our Graphical User Interface is not able to be projected to the HTC Vive. Each asteroid has an attractive force towards the player; it was difficult to optimize how all of these forces were rendered and preventing them from interfering with each other. Generalizing projectile functionality across game and menu scenes was also difficult.
## Accomplishments that we're proud of
For most of the group, this was the first time we had experienced using Unity and the first time using an HTC Vive for all members. Learning the Unity workflow and development environment made us all proud and solving the problem of randomly generated asteroids causing interference with any other game objects.
## What we learned
We understood how to create interactive 3D Video Games alongside with a Virtual Reality environment. We learned how to map inputs from VR Controllers to the program.
## What's next for Estar Guars
Applying a scoring system and upgrades menu would be ideal for the game. Improve controls and polishing object collision animations. Figuring out a GUI that displays on to the HTC Vive. Having an interactive story mode to create more dynamic objects and environments. | ## Inspiration
People struggle to work effectively in a home environment, so we were looking for ways to make it more engaging. Our team came up with the idea for InspireAR because we wanted to design a web app that could motivate remote workers be more organized in a fun and interesting way. Augmented reality seemed very fascinating to us, so we came up with the idea of InspireAR.
## What it does
InspireAR consists of the website, as well as a companion app. The website allows users to set daily goals at the start of the day. Upon completing all of their goals, the user is rewarded with a 3-D object that they can view immediately using their smartphone camera. The user can additionally combine their earned models within the companion app. The app allows the user to manipulate the objects they have earned within their home using AR technology. This means that as the user completes goals, they can build their dream office within their home using our app and AR functionality.
## How we built it
Our website is implemented using the Django web framework. The companion app is implemented using Unity and Xcode. The AR models come from echoAR. Languages used throughout the whole project consist of Python, HTML, CSS, C#, Swift and JavaScript.
## Challenges we ran into
Our team faced multiple challenges, as it is our first time ever building a website. Our team also lacked experience in the creation of back end relational databases and in Unity. In particular, we struggled with orienting the AR models within our app. Additionally, we spent a lot of time brainstorming different possibilities for user authentication.
## Accomplishments that we're proud of
We are proud with our finished product, however the website is the strongest component. We were able to create an aesthetically pleasing , bug free interface in a short period of time and without prior experience. We are also satisfied with our ability to integrate echoAR models into our project.
## What we learned
As a team, we learned a lot during this project. Not only did we learn the basics of Django, Unity, and databases, we also learned how to divide tasks efficiently and work together.
## What's next for InspireAR
The first step would be increasing the number and variety of models to give the user more freedom with the type of space they construct. We have also thought about expanding into the VR world using products such as Google Cardboard, and other accessories. This would give the user more freedom to explore more interesting locations other than just their living room. | ## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D Api. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game. Our "Killer Tetris" game went through a few iterations and getting the blocks to move and generate took some trouble. Cutting back on how many details we add into the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and work efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldly gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to Unity is used to render 3D environments, how assets and textures are edited in Blender, what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g. as your switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibility are endless | partial |
## Inspiration
Our inspiration for this project was our experience as students. We believe students need more a digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time consuming. StudyHedge is here to lift the scheduling burden off your shoulders!
## What it does
StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you.
## How we built it
We built the project using React (Front-End), Flask (Back-End), Firebase (Database), and Google Cloud Run.
## Challenges we ran into
Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this. After exploring various branches of mathematics, we settled on a simple formula (Rank= weight/time^2).
## Accomplishments that we're proud of
We are incredibly proud that we have a functional Back-End and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group.
## What we learned
Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front-end. She learned about input validation and designing features to meet functionality requirements. Eli coded the back-end using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API.
## What's next for StudyHedge
We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features. We are also considering implementing AI suggestions. | ## Inspiration
We know the struggles of students. Trying to get to that one class across campus in time. Deciding what to make for dinner. But there was one that stuck out to all of us: finding a study spot on campus. There have been countless times when we wander around Mills or Thode looking for a free space to study, wasting our precious study time before the exam. So, taking inspiration from parking lots, we designed a website that presents a live map of the free study areas of Thode Library.
## What it does
A network of small mountable microcontrollers that uses ultrasonic sensors to check if a desk/study spot is occupied. In addition, it uses machine learning to determine peak hours and suggested availability from the aggregated data it collects from the sensors. A webpage that presents a live map, as well as peak hours and suggested availability .
## How we built it
We used a Raspberry Pi 3B+ to receive distance data from an ultrasonic sensor and used a Python script to push the data to our database running MongoDB. The data is then pushed to our webpage running Node.js and Express.js as the backend, where it is updated in real time to a map. Using the data stored on our database, a machine learning algorithm was trained to determine peak hours and determine the best time to go to the library.
## Challenges we ran into
We had an **life changing** experience learning back-end development, delving into new frameworks such as Node.js and Express.js. Although we were comfortable with front end design, linking the front end and the back end together to ensure the web app functioned as intended was challenging. For most of the team, this was the first time dabbling in ML. While we were able to find a Python library to assist us with training the model, connecting the model to our web app with Flask was a surprising challenge. In the end, we persevered through these challenges to arrive at our final hack.
## Accomplishments that we are proud of
We think that our greatest accomplishment is the sheer amount of learning and knowledge we gained from doing this hack! Our hack seems simple in theory but putting it together was one of the toughest experiences at any hackathon we've attended. Pulling through and not giving up until the end was also noteworthy. Most importantly, we are all proud of our hack and cannot wait to show it off!
## What we learned
Through rigorous debugging and non-stop testing, we earned more experience with Javascript and its various frameworks such as Node.js and Express.js. We also got hands-on involvement with programming concepts and databases such as mongoDB, machine learning, HTML, and scripting where we learned the applications of these tools.
## What's next for desk.lib
If we had more time to work on this hack, we would have been able to increase cost effectiveness by branching four sensors off one chip. Also, we would implement more features to make an impact in other areas such as the ability to create social group beacons where others can join in for study, activities, or general socialization. We were also debating whether to integrate a solar panel so that the installation process can be easier. | # Are You Taking
It's the anti-scheduling app. 'Are You Taking' is the no-nonsense way to figure out if you have class with your friends by comparing your course schedules with ease. No more screenshots, only good vibes!
## Inspiration
The fall semester is approaching... too quickly. And we don't want to have to be in class by ourselves.
Every year, we do the same routine of sending screenshots to our peers of what we're taking that term. It's tedious, and every time you change courses, you have to resend a picture. It also doesn't scale well to groups of people trying to find all of the different overlaps.
So, we built a fix. Introducing "Are You Taking" (AYT), an app that allows users to upload their calendars and find event overlap.
It works very similar to scheduling apps like when2meet, except with the goal of finding where there *is* conflict, instead of where there isn't.
## What it does
The flow goes as follows:
1. Users upload their calendar, and get a custom URL like `https://areyoutaking.tech/calendar/<uuidv4>`
2. They can then send that URL wherever it suits them most
3. Other users may then upload their own calendars
4. The link stays alive so users can go back to see who has class with who
## How we built it
We leveraged React on the front-end, along with Next, Sass, React-Big-Calendar and Bootstrap.
For the back-end, we used Python with Flask. We also used CockroachDB for storing events and handled deployment using Google Cloud Run (GCR) on GCP. We were able to create Dockerfiles for both our front-end and back-end separately and likewise deploy them each to a separate GCR instance.
## Challenges we ran into
There were two major challenges we faced in development.
The first was modelling relationships between the various entities involved in our application. From one-to-one, to one-to-many, to many-to-many, we had to write effective schemas to ensure we could render data efficiently.
The second was connecting our front-end code to our back-end code; we waited perhaps a bit too long to pair them together and really felt a time crunch as the deadline approached.
## Accomplishments that we're proud of
We managed to cover a lot of new ground!
* Being able to effectively render calendar events
* Being able to handle file uploads and store event data
* Deploying the application on GCP using GCR
* Capturing various relationships with database schemas and SQL
## What we learned
We used each of these technologies for the first time:
* Next
* CockroachDB
* Google Cloud Run
## What's next for Are You Taking (AYT)
There's a few major features we'd like to add!
* Support for direct Google Calendar links, Apple Calendar links, Outlook links
* Edit the calendar so you don't have to re-upload the file
* Integrations with common platforms: Messenger, Discord, email, Slack
* Simple passwords for calendars and users
* Render a 'generic week' as the calendar, instead of specific dates | winning |
## Inspiration
* Inspired by Tints clothing brand ([www.tintsstreetwear.com](http://www.tintsstreetwear.com)), we saw how personalized video thank-yous and abandoned cart follow-ups significantly boosted conversion rates and lowered CAC.
* Building relationships before sales proved effective but was extremely time-consuming, requiring manual efforts like social media interactions, content reposting, and connecting over shared interests.
* We are on a mission to assist millions of small business owners in accelerating the growth of their companies through AI-powered technology that decreases cost, and saves time
## What it does
* Gesture AI introduces an AI-Powered Video Personalization Tool that closes the gap in human element AI sales tools, improving business follow-ups and nurturing relationships through video automation, facial cloning, and synthetic media integration.
## How we built it
* By harnessing advanced AI technologies, we developed a system that transforms text into speech for video automation and uses lip-sync technology for realistic personalization. From uploading a contact list to deploying personalized videos through a batch API on GCP, our process uses OpenAI for script customization, ElevenLabs for audio generation, and a fine-tuned model for video synthesis, all invoked from the front-end.
## Challenges we ran into
* We faced challenges in creating ultra-realistic personalized videos and ensuring the seamless integration of synthetic media for genuine interactions.
## Accomplishments that we're proud of
* We are proud to have expanded our AI tool's capabilities across languages, enabling brands to personalize ads for customers in different languages, such as Spanish.
## What we learned
* We learned about the crucial importance of personalization in the modern digital landscape and AI's potential to revolutionize traditional marketing strategies, fostering authentic customer connections.
## What's next for Gesture AI
* We plan to broaden our market reach in B2B e-commerce and further optimize our platform to make it realistic. Additionally, we plan on expanding our influence beyond the e-commerce space into areas such as advocacy where personalized outreach can create societal change. | ## Inspiration
Going to Stanford, we see all the investor hype around tech but we also see that small businesses are growing faster than ever, and many require capital to expand. We talked to small businesses and realized they often struggle to raise money (and investors are losing out on growing companies). Especially because these small businesses are central to our communities—people want to help and our website helps them do that in a market-driven approach.
## What it does
We built a tool that connects small businesses with investors—firms, nonprofits, and accredited individuals—who want to invest in their community's "mom and pop shops" and make money while they're at at. Our platform has a few key features:
1) When small business entrepreneurs submit a request for funding that includes comprehensive questions about them—and a chatbot to make sure we're asking the questions that matter most to investors—we pass that request through our investor-like LLM agent that creates an analyst report and scores potential investments based on growth potential and risk.
2) In the investor portal, potential investors can find promising small businesses that want funding.
3) We create custom NFTs that act as fictional "fractional shares" of small businesses. The businesses are sent a link to control these NFTs so they can control who gets sent the shares (therefore controlling who gets stake in their company).
4) In the backend, we've incorporated a tool where investors can search for things that they want in an investment using augmented vector retrieval (example: "find me a burger shop in Palo Alto that's growing fast" will match them with just that if one exists in the data). In MVP, we have this almost completed fully.
## How we built it
1) Front-End: we used Reflex to build our front-end. We built 3 pages: chatbot page (where small business entrepreneurs can interact with our investor agents), business page (where businesses can fill out a form for initial funding), and investor page (where investors can search for good matches for them).
2) Generated of synthetic training dataset using OpenAI LLMs (simulating economics models in this process)
3) OpenAI LLM for investor analysis report and scoring
4) Pinecone for vector db and search
5) Crossmint API for creating collections and tokens unique to each company
## Challenges we ran into
First and foremost, none of us had much experience with front-end (which is where our project hit a roadblock). We had to figure out how to use Reflex to integrate with backend. Some Reflex features didn't function as expected, so navigating around that was difficult.
How we structured the project is that each team member generally focused a sector of the project, and linking these sectors together (due to Reflex's compatibility with all of our systems) proved to be difficult. The team's support was helpful but we remained challenged by this.
Crossmint was initially to get used to the Crossmint platform because we wanted a feature that they didn't directly support, but then the Crossmint team was very helpful in adding that feature for us.
## Accomplishments that we're proud of
We think our product solves a legitimate market need. We're proud of our progress during the 2 day period, and we're excited to continue pursuing the project as a startup after the Hackathon ends.
We're also proud of the fact that we were able to create each individual component of our project and work very nicely together to problem-solve and build something cool together.
## What we learned
We learned a lot about the front-end aspect of development, using LLMs to our benefit, and about the use-cases of blockchain.
## What's next for BizToken
We want to continue with this idea and work on it as a startup. We want to use this idea as a means to grow our knowledge about AI, LLMs, NFT, and investing, and we want to talk to more customers about how we can make it better for them. | ## Inspiration
**Introducing Ghostwriter: Your silent partner in progress.** Ever been in a class where resources are so hard to come by, you find yourself practically living at office hours? As teaching assistants on **increasingly short-handed course staffs**, it can be **difficult to keep up with student demands while making long-lasting improvements** to your favorite courses.
Imagine effortlessly improving your course materials as you interact with students during office hours. **Ghostwriter listens intelligently to these conversations**, capturing valuable insights and automatically updating your notes and class documentation. No more tedious post-session revisions or forgotten improvement ideas. Instead, you can really **focus on helping your students in the moment**.
Ghostwriter is your silent partner in educational excellence, turning every interaction into an opportunity for long-term improvement. It's the invisible presence that delivers visible results, making continuous refinement effortless and impactful. With Ghostwriter, you're not just tutoring or bug-bashing - **you're evolving your content with every conversation**.
## What it does
Ghostwriter hosts your class resources, and supports searching across them in many ways (by metadata, semantically by content). It allows adding, deleting, and rendering markdown notes. However, Ghostwriter's core feature is in its recording capabilities.
The record button starts a writing session. As you speak, Ghostwriter will transcribe and digest your speech, decide whether it's worth adding to your notes, and if so, navigate to the appropriate document and insert them at a line-by-line granularity in your notes, integrating seamlessly with your current formatting.
## How we built it
We used Reflex to build the app full-stack in Python, and support the various note-management features including addition, deleting, selecting, and rendering. As notes are added to the application database, they are also summarized and then embedded by Gemini 1.5 Flash-8B before being added to ChromaDB with a shared key. Our semantic search is also powered by Gemini-embedding and ChromaDB.
The recording feature is powered by Deepgram's threaded live-audio transcription API. The text is processed live by Gemini, and chunks are sent to ChromaDB for queries. Distance metrics are used as thresholds to not create notes, add to an existing note, or create a new note. In the latter two cases, llama3-70b-8192 is run through Groq to write on our (existing) documents. It does this through a RAG on our docs, as well as some prompt-engineering. To make insertion granular we add unique tokens to identify candidate insertion-points throughout our original text. We then structurally generate the desired markdown, as well as the desired point of insertion, and render the changes live to the user.
## Challenges we ran into
Using Deepgram and live-generation required a lot of tasks to run concurrently, without blocking UI interactivity. We had some trouble reconciling the requirements posed by Deepgram and Reflex on how these were handled, and required us redesign the backend a few times.
Generation was also rather difficult, as text would come out with irrelevant vestiges and explanations. It took a lot of trial and error through prompting and other tweaks to the generation calls and structure to get our required outputs.
## Accomplishments that we're proud of
* Our whole live note-generation pipeline!
* From audio transcription process to the granular retrieval-augmented structured generation process.
* Spinning up a full-stack application using Reflex (especially the frontend, as two backend engineers)
* We were also able to set up a few tools to push dummy data into various points of our process, which made debugging much, much easier.
## What's next for GhostWriter
Ghostwriter can work on the student-side as well, allowing a voice-interface to improving your own class notes, perhaps as a companion during lecture. We find Ghostwriter's note identification and improvement process very useful ourselves.
On the teaching end, we hope GhostWriter will continue to grow into a well-rounded platform for educators on all ends. We envision that office hour questions and engagement going through our platform can be aggregated to improve course planning to better fit students' needs.
Ghostwriter's potential doesn't stop at education. In the software world, where companies like AWS and Databricks struggle with complex documentation and enormous solutions teams, Ghostwriter shines. It transforms customer support calls into documentation gold, organizing and structuring information seamlessly. This means fewer repetitive calls and more self-sufficient users! | losing |
## Background
Being a team of four first year computer science students who are all participating in a Hackathon for their first times, we had no idea what to expect going in. Our only goals in mind were to make friends, learn things, and get some stickers.
## The Journey
For a solid few hours after the hacking started, we had no idea what we were going to make. We ended up singling out 3 ideas, a microwave time calculating app, an idea/teamwork posting app, and a Github for everything app. After calm and peaceful discussions, we decided upon the microwave app, because the idea of using computer vision was just too cool to pass up.
With that, we went straight to work. After a few hours of blood, sweat, and yes, a few tears, we finally got a basic website up and running. At big problem we had was figuring out how to calculate the microwave time accurately, since we couldn't find any data on the microwave time of each food. Our solution for now was to group foods into different categories and just return the time based on what category it was in.
After attending the lecture on the Microsoft bot service, we integrated the technology into our app to create a reheat bot. Instead of just being a web app, users could now send an image through messenger to the bot and get the time that they should microwave the food for. The bot was quite a challenge, since none of us had used Microsoft Azure before and none of us knew C# neither. Somehow, we managed to finish the bot in the end.
Around 3 AM, when we were all half dead, we came up with the solution to implement basic machine learning into our app to get more accurate times. Instead of just getting the time data from a static server file, the data would now modify itself depending on user feedback. With this implementation, the microwave times of each food item would be crowd sourced instead of just defaulting on some preset value, thus making it more accurate.
## The Conclusion
After 125 commits, we finally brought the project to an end. Find the Github repository [here](https://github.com/jackzheng06/microwave-time). We used Clarifai for image recognition and Microsoft Azure to host our back end as well as our chat bot.
Reheat is a web application that takes a picture of food and determines how long it should be microwaved for. The website is mobile friendly, and there is a messenger chat bot for those who don't want to use the website. The app uses basic machine learning with user feedback to adjust the microwave times of each food item. Reheat is the future of microwaves, perfectly heated food is just one click away! | ## Our Inspiration
Have you ever looked inside the fridge, saw a particular piece of food, and wondered if it was still edible? Maybe you felt it, or looked it up online, or even gave it the old sniff test! Yet, at the end of the day, you still threw it away.
Did you know that US citizens waste an average of 200lbs of food per year, per person? Much of that waste can be traced back to food casually tossed out of the fridge without a second thought. Additionally, 80% of Americans rely on the printed ‘best by’ dates to determine if their food is still fresh, when in reality it is almost never a true indicator of expiration. We aim to address these problems and more.
## What do we do?
NoWaste.ai is a two-pronged approach to the same issue, allowing us to extend our support to users all over the world. At its core, NoWaste allows users to snap photos of an edible product, describe its context, and harness the power of generative AI to receive a gauge on the quality of their food. An example could be the following: "Pizza left on a warm counter for 6 hours, then refrigerated for 8" + [Picture of the pizza], which would return the AI's score of the food's safety.
In developed countries, everyone has a smartphone, which is why we provide a mobile application to make this process as convenient as possible. In order to create a product that users will be more likely to consistently use, the app has been designed to be extremely streamlined, rapidly responsive, and very lightweight.
However, in other parts of the world, this luxury is not so common. Hence, we have researched extensively to design and provide a cheap and scalable solution for communities around the world via Raspberry Pi Cameras. At just $100 per assembly and $200 per deployment, these individual servers have potential to be dispersed all across the world, saving thousands of lives, millions of pounds of waste, and billions of dollars.
## How we built it
Our hardware stack is hosted completely on the Raspberry Pi in a React/Flask environment. We utilize cloud AI models, such as LLAMA 70B on Together.ai and GPT4 Vision, to offload computation and make our solution as cheap and scalable as possible. Our software stack was built using Swift and communicates with similar APIs.
We began by brainstorming the potential services and technologies we could use to create lightweight applications as quickly as possible. Once we settled on a general idea, we split off and began implementing our own responsibilities: while some of us prototyped the frontend, others were experimenting with AI models or doing market research. This delegation of responsibility allowed us to work in parallel and design a comprehensive solution for a problem as large (yet seemingly so clear) as this.
Of course, our initial ideas were far from what we eventually settled on.
## Challenges we ran into
Finalizing and completing our stack was one of our greatest challenges. Our frontend technologies changed the most over the course of the weekend as we experimented with Django, Reflex, React, and Flask, not to mention the different APIs and LLM hosts that we researched. Additionally, we were ambitious in wanting to only use open-source solutions to further drive home the idea of world-wide collaboration for sustainability and the greater good, but we failed to identify a solution for our Vision model. We attempted to train LLAVA using Intel cloud machines, but our lack of time made it difficult as beginners. Our team also faced hardware issues, from broken cameras to faulty particle sensors. However, we were successful in remedying each of these issues in their own unique ways, and we are happy to present a product within the timeframe we had.
## Accomplishments that we're proud of
We are incredibly proud of what we were able to accomplish in such a short amount of time. We were passionate about both the underlying practicalities of our application as well as the core implementation. We created not only a webapp hosted on a Raspberry Pi, equipped with odor sensors and a camera, but a mobile app prototype and a mini business plan as well. We were able to target multiple audiences and clear areas where this issue prevails, and we have proposed solutions that suit all of them.
## What's next for NoWaste.ai
The technology that we propose is infinitely extensible. Beyond this weekend at TreeHacks, there is room for fine-tuning or training more models to produce more accurate results on rotting and spoiled food. Dedicated chips and board designs can bring the cost of production and extension down, making it even easier to provide solutions across the world. Augmented Reality is becoming more prevalent every day, and there is a clear spot for NoWaste to streamline our technology to work seamlessly with humans. The possibilities are endless, and we hope you can join and support us on our journey! | ## Inspiration
As college who have all recently moved into apartments for the first time, we found that we were wasting more food than we could've expected. Having to find rotten yet untouched lettuce in the depths of the fridge is not only incredibly wasteful for the environment but also harmful to our nutrition. We wanted to create this app to help other students keep track of the items in their fridge, without having to wrack their brains for what to cook everyday. Our goal was to both streamline mealtime preparations and provide a sustainable solution to everyday food waste.
## What it does
Our app is meant to be simple and intuitive. Users are able to upload a photo of their receipt directly from our app, which we then process and extract the food items. Then, we take these ingredients, calculate expiration dates, and produce recipes for the user using the ingredients that they already have, prioritizing ingredients that are expiring sooner.
## How we built it
Our tech stack consisted of React-Native, Express, MongoDB, Open AI API, and OCR. We used React-Native for our frontend and Express for our backend support. MongoDB was used to store the data we parsed from user receipts, so that way our app would not be memoryless. To actually process and recognize the text on the receipt, we used OCR. To generate recipes, we utilized Open AI API and engineered prompts that would yield the best results.
## Challenges we ran into
For this project, we wanted to really challenge ourselves by using a tech stack we had never used before, such as React-Native, Express, OpenAI API, and OCR. Since essentially our entire tech stack was unfamiliar, we faced many challenges in understanding syntax, routing, and communication between the frontend and backend. Additionally, we faced issues with technology like Multer in the middleware when it came to sending image information from the front end to backend, as we had never used Multer before either. However, we are incredibly proud of ourselves for being able to persevere and find solutions to our problems, to both learn new skills as well as produce our MVP.
## Accomplishments that we're proud of
We are incredibly proud of being able to produce our final product. Though it may not be the best, we hope that it symbolizes our learning, development, and perseverance. From getting our MongoDB database set up to getting our frontend to properly communicate with our backend, we will be taking away many accomplishments with us.
## What we learned
As previously mentioned, we learned an entirely new tech stack. We got to experience React-Native, Express, OpenAI API, and OCR for the first time. It's hard to verbalize what we have learned without talking about our entire project process, since we truly learned something new every time we implemented something.
## What's next for Beat the Receipt
Originally, we wanted to implement our in-app camera, but due to an unfamiliar tech stack, we didn't get a chance to implement it for this iteration, but are already working on it. Additionally for the future, we hope to allow users to choose recipes that better cater to their tastes while still using soon-to-expire ingredients. Eventually, we would also like to implement a budgeting option, where users can visualize how much of their budget has been spent on their groceries, with our app handling the calculations. | partial |
## DAASH (Disaster Assistance and Safety Helper)
Storms and floods and wild-fires, oh my!
Need help with recovery solutions after a calamity? Need a plan before the event even happens? You're in the right place!
## Inspiration:
The recent increase in natural disasters such as the fires in California and tropical storms around the world has caused damage that have cost a lot to many families; for some, even their lives. We created something that will help everyone plan for the next disaster and take precautions so that they are sooner able to return to their normal routines after these events occur..
## What it does:
DAASH is a web app that provides useful disaster information in a user-friendly way. It pulls disaster data from FEMA and overlays it with news to give the user helpful insights into disasters happening around them. DAASH helps you find the current disasters occurring in different parts of the world and understand your home using reliable data.
## How it was built:
We used Node.js for the backend, using APIs from FEMA, NewsAPI, and PredictHQ. We used a combination of JavaScript, HTML, and CSS to craft the frontend.
## Challenges:
One of our main challenges was finding the best disaster dataset to use to populate our map. We also had to dynamically generate disaster markers from a dataset, which proved to be more challenging than we initially anticipated. Another issue we ran into was our accessing of past FEMA data. Since it was such a huge dataset we decided to create a database and store all disaster information in there.
## What we learned along the way:
Ben - Learned more about interacting with Google APIs, also learned a lot from the interesting workshops & chats held during the time at PennApps.
Tanya - Learned where and how to find resources to achieve the goal and how to code on JavaScript.
Ronke - Learned about using different frond-end platforms and data manipulation with JavaScript.
Yuxi - Learned about utilizing services of Google APIs (especially Firebase that hosts our database).
## Future:
Later on we plan on expanding the dataset and recreate it for many other neglected sections of the world and other different kinds of natural disasters. We also plan on making our map more specific, such as targeting U.S. counties for disaster frequency. | ## Inspiration
Our inspiration for the disaster management project came from living in the Bay Area, where earthquakes and wildfires are constant threats. Last semester, we experienced a 5.1 magnitude earthquake during class, which left us feeling vulnerable and unprepared. This incident made us realize the lack of a comprehensive disaster management plan for our school and community. We decided to take action and develop a project on disaster management to better prepare ourselves for future disasters.
## What it does
Our application serves as a valuable tool to help manage chaos during disasters such as earthquakes and fire. With features such as family search, location sharing, searching for family members, an AI chatbot for first aid, and the ability to donate to affected individuals and communities, our app can be a lifeline for those affected by a crisis.
## How we built it
Our disaster management application was built with Flutter for the Android UI, Dialogflow for the AI chat assistant, and Firebase for the database. The image face similarity API was implemented using OpenCV in Django REST.
## Challenges we ran into
We are proud of the fact that, as first-time participants in a hackathon, we were able to learn and implement a range of new technologies within a 36-hour time frame.
## Accomplishments that we're proud of
* Our disaster management application has a valuable feature that allows users to search for their family members during a crisis. By using an image similarity algorithm API (OpenCV), users can enter the name of a family member and get information about their recent location. This helps to ensure the safety of loved ones during a disaster, and can help identify people who are injured or unconscious in hospitals. The image is uploaded to Firebase, and the algorithm searches the entire database for a match. We're proud of this feature, and will continue to refine it and add new technologies to the application.
## What we learned
We were not able to implement the live location sharing feature due to time constraints, but we hope to add it in the future as we believe it could be valuable in emergency situations.
## What's next for -
We plan to improve our AI chatbot, implement an adaptive UI for responders, and add text alerts to the application in the future. | ## Inspiration
As the lines between AI-generated and real-world images blur, the integrity and trustworthiness of visual content have become critical concerns. Traditional metadata isn't as reliable as it once was, prompting us to seek out groundbreaking solutions to ensure authenticity.
## What it does
"The Mask" introduces a revolutionary approach to differentiate between AI-generated images and real-world photos. By integrating a masking layer during the propagation step of stable diffusion, it embeds a unique hash. This hash is directly obtained from the Solana blockchain, acting as a verifiable seal of authenticity. Whenever someone encounters an image, they can instantly verify its origin: whether it's an AI creation or an authentic capture from the real world.
## How we built it
Our team began with an in-depth study of the stable diffusion mechanism, pinpointing the most effective point to integrate the masking layer. We then collaborated with blockchain experts to harness Solana's robust infrastructure, ensuring seamless and secure hash integration. Through iterative testing and refining, we combined these components into a cohesive, reliable system.
## Challenges we ran into
Melding the complex world of blockchain with the intricacies of stable diffusion was no small feat. We faced hurdles in ensuring the hash's non-intrusiveness, so it didn't distort the image. Achieving real-time hash retrieval and embedding while maintaining system efficiency was another significant challenge.
## Accomplishments that we're proud of
* Successfully integrating a seamless masking layer that does not compromise image quality.
* Achieving instantaneous hash retrieval from Solana, ensuring real-time verification.
* Pioneering a solution that addresses a pressing concern in the AI and digital era.
* Garnering interest from major digital platforms for potential integration.
## What we learned
The journey taught us the importance of interdisciplinary collaboration. Bringing together experts in AI, image processing, and blockchain was crucial. We also discovered the potential of blockchain beyond cryptocurrency, especially in preserving digital integrity.
## What's next for The Mask
We envision "The Mask" as the future gold standard for digital content verification. We're in talks with online platforms and content creators to integrate our solution. Furthermore, we're exploring the potential to expand beyond images, offering verification solutions for videos, audio, and other digital content forms. | losing |
## Inspiration
Ever wish you could hear your baby cry wherever you are in the world? Probably not, but it's great to know anyways! Did you know that babies often cry when at least one of their needs is not met? How could you possibly know about your baby's needs without being there, watching the baby sleep?
## What it does
Our team of 3 visionaries presents to you **the** innovation of the 21st century. Using just your mobile phone and an internet connection, you can now remotely receive updates on whether or not your baby is crying, and whether your baby has reached a dangerously high temperature.
## How we built it
We used Android Studio to build the app that receives the updates. We used Socket.IO for the backend communication between the phone and the Intel Edison.
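For illustration, here is a minimal sketch of the kind of event stream this involves, assuming a python-socketio client on the Edison side (the backend address and event names here are our own placeholders):

```python
# Hypothetical Edison-side sketch: push cry/temperature events to the backend.
import socketio  # python-socketio client, assumed for illustration

sio = socketio.Client()
sio.connect("http://example-backend:3000")  # placeholder backend address

def report(is_crying, temperature_c):
    # The phone app listens for this event and shows an update to the parent.
    sio.emit("baby_status", {"crying": is_crying, "temperature": temperature_c})

report(True, 38.2)
```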
## Challenges we ran into
Attempting to make push notifications work accounted for a large portion of the time spent building this prototype. In future versions, push notifications will be included.
## Accomplishments that we're proud of
We are proud of paving the future of baby-to-mobile communications for fast footed parents around the globe.
## What we learned
As software people, we are proud that we were able to communicate with the Intel Edison.
## What's next for Baby Monitor
Push notifications. Stay tuned!! | ## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend, along with OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API to perform all of the background analyses. On the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
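As a rough illustration of how the video and audio analyses can run side by side, here is a simplified threading sketch (not our exact pipeline; the analysis calls are placeholders):

```python
# Hypothetical sketch: analyze video frames and audio chunks in parallel threads.
import threading
import queue
import cv2

results = queue.Queue()  # both workers push partial results for the dashboard

def video_worker():
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # placeholder for the OpenCV/TensorFlow gesture and expression models
        results.put({"type": "video", "faces_detected": 1})

def audio_worker():
    # placeholder for streaming audio chunks to Google Cloud Speech-to-Text
    results.put({"type": "audio", "transcript": "..."})

threading.Thread(target=video_worker, daemon=True).start()
threading.Thread(target=audio_worker, daemon=True).start()
```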
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.
![Image](https://i.imgur.com/aehDk3L.gif)
Tracks movement of hands and face to provide real-time analysis on expressions and body-language.
![Image](https://i.imgur.com/tZAM0sI.gif) | ## Inspiration
I present *Peanuts*, a baby entertainment app that helps out babysitters and parents alike.
## What it does
It listens to babies and "talks" back to them. Once the baby gets tired of talking, *Peanuts* plays a sweet, tranquil lullaby, *Baby* by Justin Bieber, to put them to sleep. [File not included in GitHub]
## How I built it
I built a web app using Javascript and some audio libraries.
## Challenges I ran into
My main obstacle was JavaScript syntax because I was not familiar with it. Building my app required a lot of googling and debugging.
## Accomplishments that I'm proud of
My main goal at PennApps was to learn Javascript and build a web app for the first time. I used different packages to input, output, and edit sound in my web app.
## What I learned
I now know the basics (and hopefully some advanced parts) of Javascript.
## What's next for Entertain Me
Video features! Use Angular.js to have cleaner and more organized code. | winning |
## Inspiration
Security and facial recognition were our main interests when planning our project for this Makeathon. We were inspired by a documentary in which a man was convicted of murder by the police department in Los Angeles. The man was imprisoned for six months, away from his daughter and wife. He was wrongfully convicted, and this was discovered through a video that showed evidence of him being present at an LA Dodgers game at the time of the alleged murder. Thus, he was set free. This story truly impacted us from an emotional standpoint because the man had to pay the hefty price of six months in prison for no reason. It exposed us to the world of facial recognition and software that can help identify faces that are not explicitly shown. We wanted to employ software that would help identify faces based on preloaded neural networks.
## What it does
The webcam takes a picture of the user's face, and it compares it to preloaded images of the user's face from the database. The algorithm will then draw boxes around the user's face and eyes.
## How I built it
To build this project, we used a PYNQ board, a computer connected over Ethernet, and several cables to power the board, along with pre-trained recognition models (XML files) to identify faces and Python code to drive the software. We used a microprocessor, an Ethernet cable, an HDMI cable, and a webcam as the peripherals for the PYNQ board. The Python code, coupled with the XML files trained to recognize faces and eyes, ran on a Jupyter platform to display the captured picture with boxes drawn around the face and eyes.
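The core of that detection step looks roughly like the snippet below (a simplified sketch of the cascade-classifier flow we ran in Jupyter; the file paths are placeholders):

```python
# Simplified sketch of the face/eye detection run on the PYNQ board's Jupyter notebook.
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

frame = cv2.imread("webcam_capture.jpg")          # picture taken by the webcam
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)   # box around the face
    roi = gray[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("annotated.jpg", frame)
```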
## Challenges I ran into
We faced a plethora of problems while completing this project. These range from technical gaps in knowledge to hardware malfunctions that were unexpected by the team.
The first issue we ran into was being given an SD card for the PYNQ board that was not preloaded with the required information. This meant that we had to download a 1.5 GB PYNQ image from pynq.io. Since this would hinder our progress and could lead to further difficulties, we decided to swap the SD card for one that was preloaded. This caused us to lose valuable time debugging the PYNQ board.
Another issue we had was that the SD card became corrupted. This happened because we unintentionally and ignorantly uploaded files to the Jupyter platform by clicking “Upload” and choosing the files from our personal computer. What we should have done was use network mapping to load the files from our personal computer to Jupyter successfully. That way, we would have been able to load pictures for recognition.
Finally, the last issue we had was trying to import the face recognition API developed by the Massachusetts Institute of Technology. We did not know how to import the library for use, and given more time, we would explore that avenue more, as this was our very first hackathon. We would export it into the PYNQ folder rather than the data folder, a detail that was elaborated upon by the Xilinx representative.
## Accomplishments that I'm proud of
Loading code and images from our computers onto the PYNQ board. We were also able to link a web camera to the board and analyse the pictures it captured.
## What I learned
As a team we were able to learn more about neural networks and how the PYNQ board technology could be incorporated into various areas including our intended purpose in security. To be specific, we learned how to use Jupyter and Python as tools to create these possible embedded systems and even got to explore ideas of possible machine learning.
## What's next for PYNQ EYE
Our project is able to recognize users by their facial features. With that being said, there is a huge application in the security industry. In instances where employees have security check them and their ID to enter company premises, this technology could prove useful. Automating facial recognition would allow employees to show their face to a camera and be granted access to the building, removing the need for extra security detail and for identification that could easily be falsified, making the premises much safer. Another application would be home security, where the facial recognition system would disable home alarms based on the faces of the property's residents. Such applications show that this project has the potential to boost security in the workplace and at home. | ## Inspiration
Our inspiration came from our curiosity and excitement to make a simple house-hold item into a high-tech tool that changes the way we interact with everyday things. We wanted to maximize the functionality of an everyday object while implementing innovative technology. We specifically wanted to focus on a mirror by designing and creating a smart mirror that responds and adapts to its diverse environment.
## What it does
It gives a modern and aesthetic appeal to the room it is installed in. What makes our intelligent mirror system smart is its ability to display any information you want on it. The mirror can be customized to display local weather forecasts, news bulletins, upcoming notifications, and daily quotes. One crucial feature is that it can store data about the medication inside a cupboard and remind the user to take specific pills at their predetermined times. It uses facial recognition to detect when a user is in front of the mirror, which triggers the mirror to start up.
## How I built it
We built our project using Mind Studio for Huawei, Github for sharing the code, Raspian to program the Raspberry Pi, and a JavaScript environment.
## Challenges I ran into
During the initial brainstorming phase, our group thought of multiple ideas. Unfortunately, due to the large pool of possibilities, we had a difficult time committing to a single idea. Once we began working on an idea, we ended up switching to another after only a couple of hours.
At last, we settled on utilizing the Telus Dev Shield. After making a major stride of progress, we were left stuck and disappointed when we were unable to set up the environment for the Telus Dev Shield. Although we tried our best to figure out a solution, we were ultimately unable to do so, forcing us to abandon our idea once again.
Another major barrier we faced was having the Raspberry Pi and the Huawei Atlas 200 DK communicate with each other. Unfortunately, we used all the ports possible on the Raspberry Pi, giving us no room to connect the Atlas.
## Accomplishments that I'm proud of
Finishing the vision recognition model was our group's standout accomplishment, achieved while using a trained AI model for facial recognition along with Huawei's Atlas 200 DK.
## What I learned
An important ability we learned over the course of 24 hours was to be patient and overcome challenges that appear nearly impossible at first. For roughly the first half of our making session, our group was unable to make substantial progress. After frequent reflection and brainstorming sessions, we were able to find an idea that our whole team agreed on. With the remaining time, we were able to finalize a product that satisfied our desire of turning an everyday item into something truly unique. The second thing we learned was more on the technical side. Since our team consisted primarily of first years, we were unfamiliar with the hardware offered at the event, and thus, the related software. However, after going through the tutorials and attending the workshops, we were able to learn something new about the Huawei Atlas 200 DK and also the Raspbian OS.
## What's next for I.M.S (Intelligent, Mirror, System)
* Create a better physical model with a two-way mirror and LED lights behind the frame
* Integrate with a smart home system (Alexa, Google Home...)
* Add personal profiles based on facial recognition so the mirror can be personalized for each person
* Add voice control to give commands to the mirror, and a speaker for responsive feedback | ## Inspiration
We wanted to make the interactions with our computers more intuitive while giving people with special needs more options to navigate in the digital world. With the digital landscape around us evolving, we got inspired by scenes in movies featuring Tony Stark, where he interacts with computers within his high-tech office. Instead of using a mouse and computer, he uses hand gestures and his voice to control his work environment.
## What it does
Instead of a mouse, Input/Output Artificial Intelligence, or I/OAI, uses the user's webcam to move their cursor to wherever their face OR hand is pointing, through machine learning.
Additionally, I/OAI allows users to map their preferred hand movements for commands such as "click", "minimize", "open applications", "navigate websites", and more!
I/OAI also allows users to input data using their voice, so they don't need to use a keyboard and mouse. This increases accessibility for those who don't readily have access to these peripherals.
## How we built it
* Face tracker -> Dlib
* Hand tracker -> Mediapipe
* Voice Recognition -> Google Cloud
* Graphical User Interface -> tkinter
* Mouse and Keyboard Simulation -> pyautogui
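A stripped-down sketch of how the hand tracker can drive the cursor, assuming the MediaPipe Hands and pyautogui APIs (smoothing, thresholds, and click gestures are omitted):

```python
# Hypothetical sketch: move the cursor to where the index fingertip points.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]  # index fingertip
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
```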
## Challenges we ran into
Running this many programs at the same time slows things down considerably, so we had to choose selectively which ones to keep during the implementation. We solved this by using multithreading and carefully investigating efficiency.
We also had a hard time mapping the face because of the head's angles of rotation, which increased the complexity of the matching algorithm.
## Accomplishments we're proud of
We were able to implement everything we set out to do in a short amount of time, even though there were a lot of integrations across multiple frameworks and our own algorithms.
## What we learned
How to use multithreading for multiple trackers, using openCV for easy camera frames, tkinter GUI building and pyautogui for automation.
## What's next for I/OAI
We need to figure out a way to incorporate features more efficiently or get a supercomputer like Tony Stark!
By improving the features, we can give people more accessibility at their computers simply by downloading a program instead of buying expensive products like an eye tracker. | partial
## Inspiration
BThere emerged from a genuine desire to strengthen friendships by addressing the subtle challenges in understanding friends on a deeper level. The team recognized that nuanced conversations often go unnoticed, hindering meaningful support and genuine interactions. In the context of the COVID-19 pandemic, the shift to virtual communication intensified these challenges, making it harder to connect on a profound level. Lockdowns and social distancing amplified feelings of isolation, and the absence of in-person cues made understanding friends even more complex. BThere aims to use advanced technologies to overcome these obstacles, fostering stronger and more authentic connections in a world where the value of meaningful interactions has become increasingly apparent.
## What it does
BThere is a friend-assisting application that utilizes cutting-edge technologies to analyze conversations and provide insightful suggestions for users to connect with their friends on a deeper level. By recording conversations through video, the application employs Google Cloud's facial recognition and speech-to-text APIs to understand the friend's mood, likes, and dislikes. The OpenAI API generates personalized suggestions based on this analysis, offering recommendations to uplift a friend in moments of sadness or providing conversation topics and activities for neutral or happy states. The backend, powered by Python Flask, handles data storage using Firebase for authentication and data persistence. The frontend is developed using React, JavaScript, Next.js, HTML, and CSS, creating a user-friendly interface for seamless interaction.
## How we built it
BThere involves a multi-faceted approach, incorporating various technologies and platforms to achieve its goals. The recording feature utilizes WebRTC for live video streaming to the backend through sockets, but also allows users to upload videos for analysis. Google Cloud's facial recognition API identifies facial expressions, while the speech-to-text API extracts spoken content. The combination of these outputs serves as input for the OpenAI API, generating personalized suggestions. The backend, implemented in Python Flask, manages data storage in Firebase, ensuring secure authentication and persistent data access. The frontend, developed using React, JavaScript, Next.js, HTML, and CSS, delivers an intuitive user interface.
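To make the mood-to-suggestion step concrete, here is a heavily simplified sketch assuming the Google Cloud Vision face-detection client and the OpenAI Python client; the prompt, model name, and error handling are illustrative only, not our production code:

```python
# Hypothetical sketch: detect the friend's mood from a frame, then ask the LLM for suggestions.
from google.cloud import vision
from openai import OpenAI

def detect_mood(image_bytes):
    client = vision.ImageAnnotatorClient()
    faces = client.face_detection(image=vision.Image(content=image_bytes)).face_annotations
    if not faces:
        return "neutral"
    face = faces[0]
    # Compare the joy vs. sorrow likelihood enums as a rough mood signal.
    return "happy" if face.joy_likelihood >= face.sorrow_likelihood else "sad"

def suggest(mood, transcript):
    llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompt = f"My friend seems {mood}. They said: {transcript}. Suggest how I can support them."
    reply = llm.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```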
## Accomplishments that we're proud of
* Successfully integrating multiple technologies into a cohesive and functional application
* Developing a user-friendly frontend for a seamless experience
* Implementing real-time video streaming using WebRTC and sockets
* Leveraging Google Cloud and OpenAI APIs for advanced facial recognition, speech-to-text, and suggestion generation
## What's next for BThere
* Continuously optimizing the speech to text and emotion analysis model for improved accuracy with different accents, speech mannerisms, and languages
* Exploring advanced natural language processing (NLP) techniques to enhance conversational analysis
* Enhancing user experience through further personalization and more privacy features
* Conducting user feedback sessions to refine and expand the application's capabilities | ## Inspiration
We were inspired by all the people who go along their days thinking that no one can actually relate to what they are experiencing. The Covid-19 pandemic has taken a mental toll on many of us and has kept us feeling isolated. We wanted to make an easy to use web-app which keeps people connected and allows users to share their experiences with other users that can relate to them.
## What it does
Alone Together connects two matching people based on mental health issues they have in common. When you create an account you are prompted with a list of the general mental health categories that most fall under. Once your account is created you are sent to the home screen and entered into a pool of individuals looking for someone to talk to. When Alone Together has found someone with matching mental health issues you are connected to that person and forwarded to a chat room. In this chat room there is video-chat and text-chat. There is also an icebreaker question box that you can shuffle through to find a question to ask the person you are talking to.
## How we built it
Alone Together is built with React as frontend, a backend in Golang (using Gorilla for websockets), WebRTC for video and text chat, and Google Firebase for authentication and database. The video chat is built from scratch using WebRTC and signaling with the Golang backend.
## Challenges we ran into
This is our first remote Hackathon and it is also the first ever Hackathon for one of our teammates (Alex Stathis)! Working as a team virtually was definitely a challenge that we were ready to face. We had to communicate a lot more than we normally would to make sure that we stayed consistent with our work and that there was no overlap.
As for the technical challenges, we decided to use WebRTC for our video chat feature. The documentation for WebRTC was not the easiest to understand, since it is still relatively new and obscure. This also means that it is very hard to find resources on it. Despite all this, we were able to implement the video chat feature! It works; we just ran out of time to host it on a cloud server with SSL, meaning the video is only sent over localhost (no encryption). Google App Engine also doesn't allow websockets in standard mode, and doesn't allow `go.mod` in `flex` mode, which was inconvenient, and we didn't have time to rewrite parts of our webapp.
## Accomplishments that we're proud of
We are very proud for bringing our idea to life and working as a team to make this happen! WebRTC was not easy to implement, but hard work pays off.
## What we learned
We learned that whether we work virtually together or physically together we can create anything we want as long as we stay curious and collaborative!
## What's next for Alone Together
In the future, we would like to allow our users to add other users as friends. This would mean that, in addition to meeting new people with the same mental health issues as them, they could build stronger connections with people that they have already talked to.
We would also allow users to have the option to add moderation with AI. This would offer a more "supervised" experience to the user, meaning that if our AI detects any dangerous change of behavior we would provide the user with tools to help them or (with the authorization of the user) we would give the user's phone number to appropriate authorities to contact them. | ## ✨ Inspiration
Quarantining is hard, and during the pandemic, symptoms of anxiety and depression are shown to be at their peak 😔[[source]](https://www.kff.org/coronavirus-covid-19/issue-brief/the-implications-of-covid-19-for-mental-health-and-substance-use/). To combat the negative effects of isolation and social anxiety [[source]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7306546/), we wanted to provide a platform for people to seek out others with similar interests. To reduce any friction between new users (who may experience anxiety or just be shy!), we developed an AI recommendation system that can suggest virtual, quarantine-safe activities, such as Spotify listening parties🎵, food delivery suggestions 🍔, or movie streaming 🎥 at the comfort of one’s own home.
## 🧐 What it Friendle?
Quarantining alone is hard😥. Choosing fun things to do together is even harder 😰.
After signing up for Friendle, users can create a deck showing their interests in food, games, movies, and music. Friendle matches similar users together and puts together some hangout ideas for those matched. 🤝💖
## 🧑💻 How we built Friendle?
To start off, our designer created a low-fidelity mockup in Figma to get a good sense of what the app would look like. We wanted it to have a friendly and inviting look to it, with simple actions as well. Our designer also created all the vector illustrations to give the app a cohesive appearance. Later on, our designer created a high-fidelity mockup for the front-end developer to follow.
The frontend was built using React Native.
![arch diagram](https://raw.githubusercontent.com/christinetrac/uofthacks-frontend/master/docs/uofthacks-backend.jpg)
We split our backend tasks into two main parts: 1) API development for DB accesses and 3rd-party API support and 2) similarity computation, storage, and matchmaking. Both the APIs and the batch computation app use Firestore to persist data.
### ☁️ Google Cloud
For the API development, we used Google Cloud Platform Cloud Functions with the API Gateway to manage our APIs. The serverless architecture allows our service to automatically scale up to handle high load and scale down when there is little load to save costs. Our Cloud Functions run on Python 3, and access the Spotify, Yelp, and TMDB APIs for recommendation queries. We also have a NoSQL schema to store our users' data in Firebase.
### 🖥 Distributed Computer
The similarity computation and matching algorithm are powered by a node.js app which leverages the Distributed Computer for parallel computing. We encode the user's preferences and Myers-Briggs type into a feature vector, then compare similarity using cosine similarity. The cosine similarity algorithm is a good candidate for parallelizing since each computation is independent of the results of others.
We experimented with different strategies to batch up our data prior to slicing & job creation to balance the trade-off between individual job compute speed and scheduling delays. By selecting a proper batch size, we were able to reduce our overall computation time by around 70% (this varies based on the status of the DC network, distribution scheduling, etc.).
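The actual workers run as JavaScript jobs on the Distributed Computer, but the similarity math itself is simple; an equivalent illustrative sketch in Python/NumPy:

```python
# Illustrative Python/NumPy version of the matching math (the real jobs run in JS on DCP).
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matches(user_vec, candidate_vecs, top_k=5):
    """candidate_vecs: {user_id: feature_vector} encoding food/movie/music prefs plus MBTI flags."""
    scores = [(cosine_similarity(user_vec, vec), uid) for uid, vec in candidate_vecs.items()]
    return sorted(scores, reverse=True)[:top_k]
```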
## 😢 Challenges we ran into
* We had to be flexible with modifying our API contracts as we discovered more about 3rd-party APIs and our front-end designs became more fleshed out.
* We spent a lot of time designing for features and scalability problems that we would not necessarily face in a Hackathon setting. We also faced some challenges with deploying our service to the cloud.
* Parallelizing load with DCP
## 🏆 Accomplishments that we're proud of
* Creating a platform where people can connect with one another, alleviating the stress of quarantine and social isolation
* Smooth and fluid UI with slick transitions
* Learning about and implementing a serverless back-end allowed for quick setup and iterating changes.
* Designing and Creating a functional REST API from scratch - You can make a POST request to our test endpoint (with your own interests) to get recommended quarantine activities anywhere, anytime 😊
e.g.
`curl -d '{"username":"turbo","location":"toronto,ca","mbti":"entp","music":["kpop"],"movies":["action"],"food":["sushi"]}' -H 'Content-Type: application/json' 'https://recgate-1g9rdgr6.uc.gateway.dev/rec'`
## 🚀 What we learned
* Balancing the trade-off between computational cost and scheduling delay for parallel computing can be a fun problem :)
* Moving from a server-based architecture (Flask) to serverless in the cloud ☁
* How to design and deploy APIs and structure good schema for our developers and users
## ⏩ What's next for Friendle
* Make a web-app for desktop users 😎
* Improve matching algorithms and architecture
* Adding a messaging component to the app | partial |
## Inspiration
At the University of Toronto, accessibility services are always in need of more volunteer note-takers for students who are unable to attend classes. Video lectures are not always available, and most profs either don't post notes or post very imprecise, sparsely detailed notes. Without a doubt, the best way for students to learn is to attend in person, but what is the next best option? That is the problem we tried to tackle this weekend with notepal. Other applications include large-scale presentations such as corporate meetings, or use by regular students who learn better through visuals and audio rather than note-taking.
## What it does
notepal is an automated note-taking assistant that uses both computer vision and speech-to-text NLP to generate nicely typed LaTeX documents. We made a built-in file management system, and everything syncs with the cloud on command. We hope to provide users with a smooth, integrated experience that lasts from the moment they start notepal to the moment they see their notes on the cloud.
## Accomplishments that we're proud of
Being able to integrate so many different services, APIs, and command-line SDKs was the toughest part, but also the part we tackled really well. This was the hardest project in terms of the number of services/tools we had to integrate, but a rewarding one nevertheless.
## What's Next
* Better command/cue system to avoid having to use direct commands each time the "board" refreshes.
* Create our own word editor system so the user can easily edit the document, then export and share with friends.
## See For Your Self
Primary: <https://note-pal.com>
Backup: <https://danielkooeun.lib.id/notepal-api@dev/> | ## Inspiration
In the work-from-home era, many are missing the social aspect of in-person work. And what time of the workday most provided that social interaction? The lunch break. culina aims to bring the social aspect back to work-from-home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded, and it diversifies their palate by exposing them to international cuisine (that uses food they already have on hand)!
## What it does
First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and displayed a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate the means by which they want to remotely enjoy their meal together.
## How we built it
The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe.
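A simplified sketch of the pantry-matching idea (ingredient handling and thresholds here are illustrative; the real service does this server-side against MongoDB):

```python
# Hypothetical sketch: match two users by pantry overlap, then filter recipes they can both cook.
def pantry_overlap(pantry_a, pantry_b):
    return set(map(str.lower, pantry_a)) & set(map(str.lower, pantry_b))

def shared_recipes(recipes, pantry_a, pantry_b, min_overlap=3):
    """recipes: list of dicts like {"name": ..., "ingredients": [...]}."""
    mutual = pantry_overlap(pantry_a, pantry_b)
    if len(mutual) < min_overlap:
        return []  # not a good match; keep searching for another user
    return [r for r in recipes if set(map(str.lower, r["ingredients"])) <= mutual]
```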
## Challenges we ran into
We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also overcame technical roadblocks on their machines so they had to think outside the box for solutions.
## Accomplishments that we're proud of.
We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!) Furthermore, we are proud of the novel dataset that we created for our application.
## What we learned
Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some basics of frontend, as well as explore image crawling techniques. For Camilla and David, React was a new skill for them to learn and this hackathon improved their styling techniques using CSS. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators.
## What's next for culina
Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition. | ## Inspiration
* Everyone and their dog have a big idea for the next hit app but most people lack the skills or resources to build them.
* Having used some commercial app-building and prototyping tools in the past, we consider them inefficient as they don't reflect what the app is actually going to look like until it is run on a real device.
## What it does
Appception allows you to build mobile apps on your iPhone through a simple drag and drop interface. While building your app, it is always running and it has its own behaviors and states. With Appception, anyone can build apps that use the device's sensors, persist and retrieve data locally or remotely, and interact with third party services. If you are pursuing more complex development, with just a tap of a button, we'll generate the source code of your app and send it you.
## How we built it
We built Appception with React Native, a new open-source framework by Facebook for building mobile cross platform native apps using JavaScript.
Using Redux, a predictable state container JavaScript library, we can handle the state of the user created app.
We designed a software architecture that allowed us to effectively isolate the states of the app builder and the user generated app, within the same iOS app. (hence App-ception)
Appception communicates with a server application in order to deliver a generated app to the user.
## Challenges I ran into
We ran into a lot of challenges with creating barriers, keeping the building app and the dynamic app separate, while at the same time expanding the possible functionality that a user can build.
## Accomplishments that I'm proud of
We're proud to have built a proof of concept app that, if deployed at scale will lower the barrier of entry for people to build apps that create value for their users.
Everyone, including your grandma, can build the dumb ideas that never got built, because uber for cats actually isn’t a good idea.
## What I learned
Today we all learned React Native. Although some of us were familiar with it beforehand, creating an app with JavaScript was a whole new experience for some others.
## What's next for Appception
Expanding the range of apps that you can build with Appception by providing more draggable components and integrations.
Integrate deployment facilities within the Appception iPhone app to allow users to ship the app to beta users and push updates directly to their devices instantly. | winning |
## Inspiration
From our daily use of Spotify and from our friends.
## What it does
It displays the songs shared between two Spotify accounts on a webpage.
## How we built it
We used the open-source Spotipy library to handle the connection with the Spotify API. We wrote Python code to retrieve users' saved songs, compare them, and store the result. We wrote an HTML page to display the result on a webpage.
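The core comparison reduces to pulling both libraries through Spotipy and intersecting the track IDs. A trimmed sketch (authentication details omitted; client credentials are assumed to be set via environment variables, with the `user-library-read` scope):

```python
# Simplified sketch of fetching saved tracks with Spotipy and finding the overlap.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

def saved_track_ids(sp):
    ids, offset = set(), 0
    while True:
        page = sp.current_user_saved_tracks(limit=50, offset=offset)
        ids.update(item["track"]["id"] for item in page["items"])
        if page["next"] is None:
            break
        offset += 50
    return ids

# Two clients authorized against the two accounts (OAuth flow details omitted).
sp_a = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-library-read"))
sp_b = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-library-read"))
shared = saved_track_ids(sp_a) & saved_track_ids(sp_b)
```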
## Challenges we ran into
Understanding how the API works, and binding the Python code with the HTML page code.
## Accomplishments that we're proud of
It was a great experience working with public APIs. We are proud of learning and using these resources for the first time and achieving some results. We learned not only how to use an API but also more about databases and networking. Since our project is split between HTML and Python, it was really nice to work across different programming languages and make them work together.
## What we learned
As mentioned above, we learned how to use APIs and gained more knowledge of databases and networks. We also became more skilled in the programming languages we used to write the project.
## What's next for Spotify Shared Songs Displayer
Given the limits of the Spotify API and privacy restrictions, it is hard to improve the implementation of the core function much further. But the front end can be improved to bring a better experience to users. | ## Inspiration
**Inspiration and perspiration combine together to form an unwieldy yet grandiose abomination: [genius](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSX9EOgzlpavyXrIm-E_d5DAdeKlLKeM35lkA&s).** The inspiration for Customerify came from the need to enhance customer service experiences with quick, accurate, and human-like interactions. Traditional customer service methods often lead to long wait times and inconsistent responses. We wanted to leverage AI technology to create a solution that provides instant, reliable support, ensuring customer satisfaction and efficiency.
## What it does
**What you do in life is always more important than what life does to you, and this also applies to [robots](https://images.pexels.com/photos/45201/kitty-cat-kitten-pet-45201.jpeg?auto=compress&cs=tinysrgb&dpr=1&w=500).** Customerify is an AI-driven application that allows businesses to deploy lifelike customer service agents. Users simply specify their data sources, such as online websites and documents, and select a phone number. Customers can then call this number to get fast and accurate answers to their questions, thanks to our advanced Text-to-Speech (TTS) and Speech-to-Text (STT) technology.
## How we built it
**While we may have signed an [NDA](https://www.shutterstock.com/image-photo/magnificent-killer-whale-jumping-drops-260nw-2340299829.jpg) so that we can turn this into a startup, we can still tell you exactly how we built this.** We built Customerify using a combination of state-of-the-art AI technologies, including large language models (LLMs => Groq) for natural language understanding and generation. We integrated TTS (LMNT) and STT (Deepgram) capabilities to ensure seamless voice interactions (Twilio). The backend (Flask + Express) is designed to efficiently handle data sources, while the frontend (Streamlit) provides a user-friendly interface for easy setup and deployment.
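As an illustration of the inbound call flow, here is a minimal Flask + Twilio voice webhook; the Groq/LMNT/Deepgram calls are stubbed out as a placeholder, and the route names are our own:

```python
# Hypothetical sketch of the inbound-call webhook; the LLM/TTS/STT pipeline is stubbed.
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

def generate_answer(question):
    # placeholder for retrieval over the business's documents + the Groq-backed LLM call
    return "Here is what I found: ..."

@app.route("/voice", methods=["POST"])
def voice():
    resp = VoiceResponse()
    gather = Gather(input="speech", action="/answer", method="POST")
    gather.say("Hi, thanks for calling. How can I help you today?")
    resp.append(gather)
    return str(resp)

@app.route("/answer", methods=["POST"])
def answer():
    question = request.form.get("SpeechResult", "")
    resp = VoiceResponse()
    resp.say(generate_answer(question))
    return str(resp)
```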
## Challenges we ran into
**Our team members do not often get challenged by the inconsequential trivialities of life, but when we do, we write about them on [Devpost](https://img.freepik.com/free-photo/funny-monkey-with-glasses-studio_23-2150844104.jpg).** One of the main challenges we faced was ensuring the AI's responses were both accurate and contextually appropriate. Integrating TTS and STT technologies to maintain a natural and human-like conversation flow also required significant fine-tuning. Additionally, making the setup process intuitive for users with varying technical skills posed a challenge.
## Accomplishments that we're proud of
**Our parents are proud of their children; their children are proud of their [code](https://t3.ftcdn.net/jpg/05/69/30/42/360_F_569304262_RGVohUth9wyR5Msa3CoR4XFvMYE8VG1k.jpg).** We are proud of creating a robust and reliable system that delivers on its promise of instant, lifelike customer service. Achieving high accuracy in both voice recognition and response generation, and ensuring a smooth user experience for setup and deployment, are significant accomplishments. We are also proud of the positive feedback from beta testers such as the people at our table.
## What we learned
**When it comes to hacking, there is a lot to learn, and a lot to lose [(dignity)](https://i.ytimg.com/vi/oDG4truqCDo/maxresdefault.jpg).** Through this project, we learned the importance of balancing technical complexity with user-friendliness. We gained insights into optimizing AI models for real-time interactions and the critical role of user feedback in refining our application. The experience also highlighted the potential of AI to transform traditional business processes.
## What's next for Customerify
**We cannot let anybody ever know our next moves, but we can on [Devpost](https://t3.ftcdn.net/jpg/05/39/70/38/360_F_539703832_o8YCdMU7qnXEYAho58N3br0VdyWx7nS6.jpg).** Next, we aim to expand Customerify's capabilities by adding support for more languages and integrating with additional communication channels such as chat and email. We also plan to continuously improve the AI's conversational abilities and expand the range of data sources it can utilize. Our goal is to make Customerify an indispensable tool for businesses worldwide, providing unparalleled customer service experiences. | A "Tinder-fyed" Option for Adoption!
## Inspiration
All too often, I seem to hear of friends or family flocking to pet stores or specialized breeders in the hope of finding the exact new pet that they want. This is especially true when an animal reaches the forefront of the pop culture scene. Many new pet owners adopt for the wrong reasons, many later become overwhelmed, and we see these pets enter the shelter system with little chance of getting out.
## What it does
PetSwipe is designed to bring the ease and variety of pet stores and pet breeders to local shelter adoption. Users can sign up with a profile and, based on its contents, will be presented with available pets from local shelters, one at a time. Each animal is shown with its name, sex, age, and breed if applicable, along with a primary image. The individual can choose to accept or reject each potential pet. From here, the activity is loaded into a database, where the shelter can pull out the information for users who accepted the pet and email those whose profiles best suit it for an in-person meet and greet. The shelters can effectively accept or reject these users from their end.
## How I built it
A browsable API was built on a remote server with Python, Django, and the Django REST Framework. We built a website, using PHP, JavaScript, and HTML/CSS with Bootstrap, where users could create an account, configure their preferences, and browse and swipe through current pets in their location. These interactions are saved to the databases accessible through our API.
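For flavor, a minimal Django REST Framework sketch of how a swipe could be persisted (the model and field names here are assumptions, not our exact schema):

```python
# Hypothetical DRF sketch: record a user's accept/reject swipe on a pet.
from django.db import models
from rest_framework import serializers, viewsets

class Swipe(models.Model):
    user_id = models.CharField(max_length=64)
    pet_id = models.CharField(max_length=64)      # PetFinder pet identifier
    accepted = models.BooleanField()
    created_at = models.DateTimeField(auto_now_add=True)

class SwipeSerializer(serializers.ModelSerializer):
    class Meta:
        model = Swipe
        fields = "__all__"

class SwipeViewSet(viewsets.ModelViewSet):
    queryset = Swipe.objects.all()
    serializer_class = SwipeSerializer
```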
## Challenges I ran into
The PetFinder API came with a whole host of unnecessary challenges. From MD5 hashing to location restrictions, this API is actually in beta, and it requires quite a bit of patience to use.
## What I learned
Attractive, quick front end website design, paired with a remote server was new for most of the team.
We feel PawSwipe could become a fantastic way to promote the "Adopt, Don't Shop" motto of local rescue agencies. As long as the responsibilities of pet ownership are conveyed to users, paired with the adoptee vetting performed by the shelters, a lot of lovable pets could find their way back into caring homes! | losing
## Inspiration
In an unprecedented time of fear, isolation, and *can everyone see my screen?*, no one's life has been the same since COVID.
We saw people come together to protect others, but also those who refused to wear cloth over their noses. We’ve come up with cutting edge, wearable technology to protect ourselves against the latter, because in 2022, no one wants anyone invading their personal space.
Introducing the anti anti-masker mask, the solution to all your pandemic related worries.
## What it does
The anti anti-masker mask is a wearable defense mechanism to protect yourself from COVID-19 mandate breakers. It detects if someone within 6 feet of you is wearing a mask or not, and if they dare be mask-less in your vicinity, the shooter mechanism will fire darts at them until they leave.
Never worry about anti-maskers invading your personal space again!
## How we built it
The mask can be split into 3 main subsystems.
**The shooter/launcher**
The frame and mechanisms are entirely custom modeled and built using SolidWorks and FDM 3D Printing Technology. We also bought a NERF Gun, and the NERF launcher is powered by a small lipo battery and uses 2 brushless drone motors as flywheels.
The darts are automatically loaded into the launcher by a rack and pinion mechanism driven by a servo, and the entire launcher is controlled by an Arduino Nano which receives serial communications from the laptop.
**Sensors and Vision**
We used a single-point lidar to detect whether a non-mask-wearer is within 6 ft of the user. For the mask detection system, we use a downloadable app to stream live video to a web server where the processing takes place. Finally, for the vision processing, our OpenCV pipeline reads the data from the web server.
**Code**
Other than spending 9 hours trying to install OpenCV on a Raspberry Pi 🤡 the software was one of the most fun parts. To program the lidar, we used an open-source library with premade methods that return the distance from the lidar to the next closest object. By checking if the reading is between 500 and 1500 mm, we can ensure that a target who is not wearing a mask is within cough range (6 ft) before punishing them.
The mask detection with OpenCV allowed us to find those public anti-maskers and then send a signal to the serial port. The Arduino then takes the signals and runs the motors to shoot the darts until the offender is gone.
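The laptop-side logic boils down to a short loop; a simplified sketch using pyserial (the port name and single-byte protocol are placeholders, and the mask-detection result is assumed to come from the OpenCV pipeline):

```python
# Simplified laptop-side sketch: if no mask is detected, tell the Arduino to fire.
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600)   # placeholder serial port

def handle_frame(mask_detected):
    if not mask_detected:
        arduino.write(b"F")   # Arduino reads this byte and spins up the flywheels
```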
## Challenges we ran into
The biggest challenge was working with the Pi Zero.
Installing OpenCV was a struggle, the camera FPS was a struggle, the lidar was a struggle, you get the point.
Because of this, we changed the project from the Raspberry Pi to an Arduino, but neither the Arduino Uno nor the Arduino Nano supported dual serial communication, so we had to downgrade to a VL53L0X lidar, which uses I2C, a protocol that the Nano supports. After downloading DFRobot’s VL53L0X lidar library, we used their sample code to gather the distance measurement, which was used in the final project.
Another challenge we faced was designing the feeding mechanism for our darts. We originally wanted to use a slider-crank mechanism; however, it was designed to be quite compact, and as a result the crank caused too much friction with the servo mount and the printed piece cracked. In our second iteration we used a rack and pinion design, which significantly reduced the lateral forces and actuated very accurately and linearly; this was ultimately used in our final design.
## Accomplishments that we're proud of
We have an awesome working product that's super fun to play with / terrorize your friends with. The shooter, albeit after many painful hours of getting it working, worked SO WELL, and the fact that we adapted and ended up with robust and consistently working software was a huge W as well.
## What we learned
Install ur python libraries before the hackathon starts 😢 but also interfacing with lidars, making wearables, all that good stuff.
## What's next for Anti Anti-Masker Mask
We would want to add dart targeting and a turret to track victims.
During our prototyping process we explored running the separate flywheels at different speeds to try to curve the dart; this would have ensured more accurate shots at our 2 meter range. Ultimately we did not have time to finish this process, but we would love to explore it in the future.
Improve wearability → reduce the laptop by using something like a Jetson or a Pi, and maybe try to shrink the dart shooter or create a more compact “punishment” device. Try to mount it all to one clothing item instead of 2.5. | ## Inspiration
Want to take advantage of the AR and object detection technologies to help people to gain safer walking experiences and communicate distance information to help people with visual loss navigate.
## What it does
Augment the world with the beeping sounds that change depending on your proximity towards obstacles and identifying surrounding objects and convert to speech to alert the user.
## How we built it
ARKit; RealityKit uses Lidar sensor to detect the distance; AVFoundation, text to speech technology; CoreML with YoloV3 real time object detection machine learning model; SwiftUI
## Challenges we ran into
Computational efficiency. Going through all pixels in the LiDAR sensor in real time wasn’t feasible. We had to optimize by cropping sensor data to the center of the screen
## Accomplishments that we're proud of
It works as intended.
## What we learned
We learned how to combine AR, AI, LiDar, ARKit and SwiftUI to make an iOS app in 15 hours.
## What's next for SeerAR
Expand to Apple watch and Android devices;
Improve the accuracy of object detection and recognition;
Connect with Firebase and Google cloud APIs; | ## Inspiration
We were inspired to create this as being computer science students we are always looking for opportunities to leverage our passion for technology by helping others.
## What it does
Think In Sync is a platform that uses groundbreaking AI to make learning more accessible. It has features that enable it to generate images and audio with a selected text description. It works with a given audio as well, by generating the equivalent text or image. This is done so that children have an easier time learning according to their primary learning language.
## How we built it
We built an interface and medium fidelity prototype using Figma. We used Python as our back end to integrate open AI's API.
## Challenges we ran into
None of us have worked with API keys and authentication previously so that was new for all of us.
## Accomplishments that we're proud of
We are proud of what we have accomplished given the short amount of time.
## What we learned
We have extended our computer science knowledge out of the syllabus and we have learned more about collaboration and teamwork.
## What's next for Think In Sync
Creating a high-fidelity prototype along with integrating the front end to the back end. | winning |
## Inspiration
Concrete, potholes, and traffic jams. We've created a way to find the diamonds in the rough amongst an ever-growing urban metropolis.
## What it does
Simply open up the app, and check for beautiful murals in the vicinity.
The app opens up to a map with all of the current murals present in Montreal. Find murals near you by clicking on the red tags for a preview and some interesting facts.
## How we built it
The city of Montreal provides open data for all of its current murals, found in this JSON file: <http://donnees.ville.montreal.qc.ca/dataset/murales/resource/d325352b-1c06-4c3a-bf5e-1e4c98e0636b>
Java was used to parse the JSON open data and create a list of Mural objects. The methods were called in Android Studio to obtain the list of murals and populate them on the google map using the Google Maps API. Clicking on a tag on the map opens up a page with the address and some additional information about each mural.
## Challenges we ran into
The UI for the Android application required extensive use of Google search and Stack Overflow.
## Accomplishments that we're proud of
Getting the app to work. We ended up with a simple, yet functional interface that is easy to navigate. The application met our original specifications.
## What we learned
None of us had extensive experience in Android Studio or UI design, so learning on the fly was a tedious but fruitful process, through which we learned the mobile development framework and got XML and Java files to work together.
## What's next for MTLMural
If we can do this for murals, we can do this for restaurants, museums, Bixi bike stations, etc. Everything will feel just a little closer. Maybe even avoid construction? ;) | ## Inspiration
Crime rates are on the rise across America, and many people, especially women, fear walking alone at night, even in their own neighborhoods. When we first came to MIT, we all experienced some level of harassment on the streets. Current navigation apps do not include the necessary safety precautions that pedestrians need to identify and avoid dimly-lit, high-crime areas.
## What it does
Using a combination of police crime reports and the Mapbox API, VIA offers users multiple secure paths to their destination and a user-friendly display of crime reports within the past few months. Ultimately, VIA is an app that provides up-to-date data about the safety of pedestrian routes.
## How we built it
We built our interactive map with the Mapbox API, programming functions with HTML and Javascript which overlays Boston police department crime data on the map and generates multiple routes given start and end destinations.
## Challenges we ran into
We had some difficulty with instruction banners at the end of the hackathon that we will definitely work on in the future.
## Accomplishments that we're proud of
None of us had much experience with frontend programming or working with APIs, and a lot of the process was trial and error. Creating the visuals for the maps in such a short period of time pushed us to step out of our comfort zones. We'd been ideating this project for quite some time, so actually creating an MVP is something we are very proud of.
## What we learned
This project was the first time that any of us actually built tangible applications outside of school, so coding this in 24 hours was a great learning experience. We learned about working with APIs, how to read the documentation involved in using them, and how to break down data files into workable data structures. With all of us having busy schedules this weekend, it was also important to communicate properly so that we each knew what our tasks were for the day, as we weren't all together for a majority of the hackathon. However, we were all able to collaborate well, and we learned how to communicate effectively and work together to overcome our project challenges.
## What's next for VIA
We plan on working outside of school on this project to hone some of the designs and make the navigation features with the data available beyond Boston. There are many areas that we can improve the design, such as making the application a mobile app instead of a web app, which we will consider working on in the future. | ## Presentation + Award
See the presentation and awards ceremony here: <https://www.youtube.com/watch?v=jd8-WVqPKKo&t=351s&ab_channel=JoshuaQin>
## Inspiration
Back when we first came to the Yale campus, we were stunned by the architecture and the public works of art. One monument in particular stood out to us - the *Lipstick (Ascending) on Caterpillar Tracks* in the Morse College courtyard, for its oddity and its prominence. We learned from fellow students about the background and history behind the sculpture, as well as more personal experiences on how students used and interacted with the sculpture over time.
One of the great joys of traveling to new places is to learn about the community from locals, information which is often not recorded anywhere else. From monuments to parks to buildings, there are always interesting fixtures in a community with stories behind them that would otherwise go untold. We wanted to create a platform for people to easily discover and share those stories with one another.
## What it does
Our app allows anybody to point their phone camera at an interesting object, snap a picture of it, and learn more about the story behind it. Users also have the ability to browse interesting fixtures in the area around them, add new fixtures and stories by themselves, or modify and add to existing stories with their own information and experiences.
In addition to user-generated content, we also wrote scripts that scraped Wikipedia for geographic location, names, and descriptions of interesting monuments from around the New Haven community. The data we scraped was used both for testing purposes and to serve as initial data for the app, to encourage early adoption.
## How we built it
We used a combination of GPS location data and Google Cloud's image comparison tools to take any image snapped of a fixture and identify in our database what the object is. Our app is able to identify any fixture by first considering all the known fixtures within a fixed radius around the user, and then considering the similarity between known images of those fixtures and the image sent in by the user. Once we have identified the object, we provide a description of the object to the user. Our app also provides endpoints for members of the community to contribute their knowledge by modifying descriptions.
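The geographic narrowing step is just a radius filter applied before the image comparison; a sketch using the haversine formula (field names and the radius value are illustrative):

```python
# Illustrative sketch: keep only fixtures within `radius_m` of the user before image matching.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))   # Earth radius in meters

def nearby_fixtures(fixtures, user_lat, user_lon, radius_m=200):
    """fixtures: iterable of dicts like {"name": ..., "lat": ..., "lon": ...}."""
    return [f for f in fixtures
            if haversine_m(user_lat, user_lon, f["lat"], f["lon"]) <= radius_m]
```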
Our client application is a PWA written in React, which allows us to quickly deploy a lightweight and mobile-friendly app on as many devices as possible. Our server is written in Flask and Python, and we use Redis for our data store.
We used GitHub for source control and collaboration and organized our project by breaking it into three layers and providing each their separate repository in a GitHub organization. We used GitHub projects and issues to keep track of our to-dos and assign roles to different members of the team.
## Challenges we ran into
The first challenge that we ran into is that Google Cloud's image comparison tools were designed to recognize products rather than arbitrary images, which still worked well for our purposes but required us to implement workarounds. Because products couldn't be tagged by geographic data and could only be tagged under product categories, we were unable to optimize our image recognition to a specific geographic area, which could pose challenges to scaling. One workaround that we discussed was to implement several regions with overlapping fixtures, so that the image comparisons could be limited to any given user's immediate surroundings.
This was also the first time that many of us had used Flask, and we had a difficult time choosing an appropriate architecture and structure. As a result, the integration between the frontend, middleware, and AI engine has not been completely finished, although each component is fully functional on its own. In addition, our team faced various technical difficulties throughout the hackathon.
## Accomplishments that we're proud of
We're proud of completing a fully functional PWA frontend, effectively scraping 220+ locations from Wikipedia to populate our initial set of data, and successfully adapting Google Cloud's image comparison tools to meet our requirements despite their limitations.
## What we learned
Many of the tools that we worked on in this hackathon were new to the members working on them. We learned a lot about Google Cloud's image recognition tools, progressive web applications, and Flask with Python-based web development.
## What's next for LOCA
We believe that our project is both unique and useful. Our next steps are to finish the integration between our three layers, add authentication and user roles, and implement a Wikipedia-style edit history record in order to keep track of changes over time. We would also want to add features to the app that would reward members of the community for their contributions, to encourage active participants. | losing |
## Inspiration
We were inspired by the many instances of fraud in how donations are handled. It is heartbreaking to see how mismanagement can lead to victims of disasters not receiving the help that they desperately need.
## What it does
TrustTrace introduces unparalleled levels of transparency and accountability to charities and fundraisers. Donors will feel more comfortable donating to causes such as helping earthquake victims, since they will know how their money will be spent and where every dollar raised goes.
## How we built it
We created a smart contract that allowed organisations to specify how much money they want to raise and how they want to segment their spending into specific categories. This is then enforced by the smart contract, which puts donors at ease as they know that their donations will not be misused, and will go to those who are truly in need.
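The contract itself lives on-chain, but the bookkeeping rule it enforces is easy to illustrate off-chain. The Python sketch below models that rule only (declared per-category budgets, with withdrawals rejected when they exceed a category or the total raised); it is not the Solidity contract, and the numbers are made up.

```python
class CampaignModel:
    """Off-chain model of the spending rule the TrustTrace contract enforces."""

    def __init__(self, category_budgets):
        self.budgets = dict(category_budgets)      # e.g. {"food": 5000, "shelter": 3000}
        self.spent = {c: 0 for c in self.budgets}
        self.raised = 0

    def donate(self, amount):
        self.raised += amount                      # every donation is recorded publicly

    def withdraw(self, category, amount):
        if category not in self.budgets:
            raise ValueError("unknown spending category")
        if self.spent[category] + amount > self.budgets[category]:
            raise ValueError("exceeds the declared budget for this category")
        if sum(self.spent.values()) + amount > self.raised:
            raise ValueError("cannot spend more than has been raised")
        self.spent[category] += amount
```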
## Challenges we ran into
The integration of smart contracts and the web development frameworks were more difficult than we expected, and we overcame them with lots of trial and error.
## Accomplishments that we're proud of
We are proud of being able to create this in under 36 hours for a noble cause. We hope that this creation can eventually evolve into a product that can raise interest, trust, and participation in donating to humanitarian causes across the world.
## What we learned
We learnt a lot about how smart contracts work, how they are deployed, and how they can be used to enforce trust.
## What's next for TrustTrace
Expanding towards serving a wider range of fundraisers such as GoFundMes, Kickstarters, etc. | ## Inspiration
Many students rely on scholarships to attend college. As students in different universities, the team understands the impact of scholarships on people's college experiences. When scholarships fall through, it can be difficult for students who cannot attend college without them. In situations like these, they have to depend on existing crowdfunding websites such as GoFundMe. However, platforms like GoFundMe are not necessarily the most reliable solution as there is no way of verifying student status and the success of the campaign depends on social media reach. That is why we designed ScholarSource: an easy way for people to donate to college students in need!
## What it does
ScholarSource harnesses the power of blockchain technology to enhance transparency, security, and trust in the crowdfunding process. Here's how it works:
Transparent Funding Process: ScholarSource utilizes blockchain to create an immutable and transparent ledger of all transactions and donations. Every step of the funding process, from the initial donation to the final disbursement, is recorded on the blockchain, ensuring transparency and accountability.
Verified Student Profiles: ScholarSource employs blockchain-based identity verification mechanisms to authenticate student profiles. This process ensures that only eligible students with a genuine need for funding can participate in the platform, minimizing the risk of fraudulent campaigns.
Smart Contracts for Funding Conditions: Smart contracts, powered by blockchain technology, are used on ScholarSource to establish and enforce funding conditions. These self-executing contracts automatically trigger the release of funds when predetermined criteria are met, such as project milestones or the achievement of specific research outcomes. This feature provides donors with assurance that their contributions will be used appropriately and incentivizes students to deliver on their promised objectives.
Immutable Project Documentation: Students can securely upload project documentation, research papers, and progress reports onto the blockchain. This ensures the integrity and immutability of their work, providing a reliable record of their accomplishments and facilitating the evaluation process for potential donors.
Decentralized Funding: ScholarSource operates on a decentralized network, powered by blockchain technology. This decentralization eliminates the need for intermediaries, reduces transaction costs, and allows for global participation. Students can receive funding from donors around the world, expanding their opportunities for financial support.
Community Governance: ScholarSource incorporates community governance mechanisms, where participants have a say in platform policies and decision-making processes. Through decentralized voting systems, stakeholders can collectively shape the direction and development of the platform, fostering a sense of ownership and inclusivity.
## How we built it
We used React and Nextjs for the front end. We also integrated with ThirdWeb's SDK that provided authentication with wallets like Metamask. Furthermore, we built a smart contract in order to manage the crowdfunding for recipients and scholars.
## Challenges we ran into
We had trouble integrating with MetaMask and Thirdweb after writing the Solidity contract. Our configuration kept throwing errors, and we had to correctly configure the HTTP/HTTPS link.
## Accomplishments that we're proud of
Our team is proud of building a full end-to-end platform that incorporates the very essence of blockchain technology. We are very excited that we are learning a lot about blockchain technology and connecting with students at UPenn.
## What we learned
* Aleo
* Blockchain
* Solidity
* React and Nextjs
* UI/UX Design
* Thirdweb integration
## What's next for ScholarSource
We are looking to expand to other blockchains and incorporate multiple blockchains like Aleo. We are also looking to onboard users as we continue to expand and add new features. | ## Inspiration
**Addressing a Dual-Faceted Issue in Philanthropy to Support UN Sustainability Goals**
Charity Chain was inspired by the pervasive issues of mistrust and inefficiency in the philanthropic sector - a problem affecting both donors and charities. By recognizing this gap, we aimed to create a solution that not only empowers individual giving but also aligns with the broader vision of the United Nations Sustainable Development Goals, particularly Goal 17: Revitalize the global partnership for sustainable development.
## What it does
**Charity Chain: A Platform for Transparent and Effective Giving**
For Donors: Provides a transparent, traceable path for their donations, ensuring that their contributions are used effectively and for the intended purposes.
For Charities: Offers a platform to demonstrate their commitment to transparency and effectiveness, helping them build trust and secure more funding.
## How we built it
**Tech Stack**: Utilized React for a dynamic front-end, Node.js/Express.js for a robust server-side solution, and blockchain technology for secure, transparent record-keeping.
**Collaborative Design**: Engaged with charity organizations, donors, and technology experts to create a user-friendly platform that meets the needs of all stakeholders.
## Challenges we ran into
**Integrating blockchain**: As we were all new to blockchain and web3, we didn't have a solid basis for where to start.
**Onramping fiat currency**: Something we wished to do was for donors to be able to donate using their government currency (CAD, USD, etc.) and have it converted to a stable cryptocurrency, such as USDT. This way, any donation that was made would be tracked and made visible on the blockchain, and donors would be able to see where their money was being used. However, to get access to API keys we would have to apply, regardless of the API we were using (thirdfy, changelly, ramp network, kraken, etc.), which would take several business days for approval.
## Accomplishments that we're proud of
Accomplishing the various aspects separately:

* Learned what a blockchain is
* Learned how to create our own cryptocurrencies
* Learned React and Tailwind CSS
* Learned ethers.js with Solidity to connect the frontend to web3
## What we learned
**Blockchaining** and how we could use and create our own cryptocurrency using Solidity and the Remix IDE.
**Ethers.js**: We learned how to use ethers.js in order to connect our frontend to web3. This allowed us to incorporate Smart Contracts into our front-end
**How to query live transactions** from a front-end interface to a backend blockchain.
## What's next for Charity Chain
**Onramping fiat currency**: This would allow for a simpler, more inclusive donor experience that wouldn't require any knowledge of blockchain; donors could give simply with PayPal or a bank card.
**Purchases through CharityChain**: Hosting purchases on behalf of hosted charities (similar to how Rakuten works). | winning
## Inspiration
Large Language Models (LLMs) are limited by a token cap, making it difficult for them to process large contexts, such as entire codebases. We wanted to overcome this limitation and provide a solution that enables LLMs to handle extensive projects more efficiently.
## What it does
LLM Pro Max intelligently breaks a codebase into manageable chunks and feeds only the relevant information to the LLM, ensuring token efficiency and improved response accuracy. It also provides an interactive dependency graph that visualizes the relationships between different parts of the codebase, making it easier to understand complex dependencies.
## How we built it
Our landing page and chatbot interface were developed using React. We used Python and Pyvis to create an interactive visualization graph, while FastAPI powered the backend for dependency graph content. We've added third-party authentication using the GitHub Social Identity Provider on Auth0. We set up our project's backend using Convex and also added a Convex database to store the chats. We implemented Chroma for vector embeddings of GitHub codebases, leveraging advanced Retrieval-Augmented Generation (RAG) techniques, including query expansion and re-ranking. This enhanced the Cohere-powered chatbot’s ability to respond with high accuracy by focusing on relevant sections of the codebase.
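At its core, the retrieval step looks roughly like the sketch below: index pre-chunked source files in Chroma, then pull only the top-matching chunks into the prompt. The chunking, query expansion, and re-ranking steps are omitted, and the snippets and collection name are illustrative rather than our actual code.

```python
import chromadb

# A few pre-chunked source snippets; in practice these come from splitting the repo.
chunks = [
    {"id": "utils-1", "path": "utils.py",
     "text": "def slugify(s):\n    return s.lower().replace(' ', '-')"},
    {"id": "routes-1", "path": "routes.py",
     "text": "@app.get('/posts/{slug}')\ndef get_post(slug): ..."},
]

client = chromadb.Client()
collection = client.create_collection("codebase")
collection.add(
    documents=[c["text"] for c in chunks],
    metadatas=[{"path": c["path"]} for c in chunks],
    ids=[c["id"] for c in chunks],
)

# At question time: retrieve only the most relevant chunks and hand them to the LLM.
question = "Where are URL slugs generated?"
results = collection.query(query_texts=[question], n_results=2)
context = "\n\n".join(results["documents"][0])
prompt = f"Answer using only this code:\n{context}\n\nQuestion: {question}"
```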
## Challenges we ran into
We faced a learning curve with vector embedding codebases and applying new RAG techniques. Integrating all the components—especially since different team members worked on separate parts—posed a challenge when connecting everything at the end.
## Accomplishments that we're proud of
We successfully created a fully functional repo agent capable of retrieving and presenting highly relevant and accurate information from GitHub repositories. This feat was made possible through RAG techniques, surpassing the limits of current chatbots restricted by character context.
## What we learned
We deepened our understanding of vector embedding, enhanced our skills with RAG techniques, and gained valuable experience in team collaboration and merging diverse components into a cohesive product.
## What's next for LLM Pro Max
We aim to improve the user interface and refine the chatbot’s interactions, making the experience even smoother and more visually appealing. (Please Fund Us) | ## Inspiration
it's really fucking cool that big LLMs (ChatGPT) are able to figure out on their own how to use various tools to accomplish tasks.
for example, see Toolformer: Language Models Can Teach Themselves to Use Tools (<https://arxiv.org/abs/2302.04761>)
this enables a new paradigm of self-assembling software: machines controlling machines.
what if we could harness this to make our own lives better -- a lil LLM that works for you?
## What it does
i made an AI assistant (SMS) using GPT-3 that's able to access various online services (calendar, email, google maps) to do things on your behalf.
it's just like talking to your friend and asking them to help you out.
## How we built it
a lot of prompt engineering + few shot prompting.
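roughly what that looks like (a sketch using the pre-1.0 openai python sdk; the tool names and example exchanges here are made up for illustration, not the real prompt):

```python
import openai  # pre-1.0 SDK style; reads OPENAI_API_KEY from the environment

FEW_SHOT = """You can use tools: CALENDAR(query), MAPS(query), EMAIL(to, body).

User: am i free thursday at 3?
Assistant: CALENDAR(events on Thursday 15:00-16:00)

User: how long to get to SFO right now?
Assistant: MAPS(driving time from home to SFO)

User: {message}
Assistant:"""

def route(message: str) -> str:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=FEW_SHOT.format(message=message),
        max_tokens=64,
        temperature=0,
        stop=["User:"],
    )
    return resp.choices[0].text.strip()  # e.g. "CALENDAR(...)" -> dispatch to the real service
```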
## What's next for jarbls
shopping, logistics, research, etc -- possibilities are endless
* more integrations !!!
the capabilities explode exponentially with the number of integrations added
* long term memory
come by and i can give you a demo | ## Inspiration
Some of our team members are hobbyist DJs, playing shows for college parties and other local events. As smaller-size DJs we've always dreamed of having amazing live visuals for our sets, at the same quality as the largest EDM festivals; however, the cost, effort, and skillset was clearly infeasible for us, and infeasible for any musicians but the higher-profile ones who have the budget to commission lengthy visualizations and animations from third-party digital artists. That was a few years ago; recently, since generative vision models in ML have reached such a high quality of output, and open-source models (namely, stable diffusion) allow for intricate, transformative engineering at the level of model internals, we've realized our music live visuals dream has become possible.
We've come up with an app that takes a song and a prompt, and generates a music video in real-time that's perfectly synced to the music. The app will allow DJs and other musicians to have professional-level visuals for their live shows without the need for expensive commissions or complex technical setups. With our app, any musician can have access to stunning visuals that will take their performances to the next level.
## What it does
Mixingjays is an innovative video editing tool powered by machine learning that's designed specifically for creating stunning visual accompaniments to music. Our app allows clients to upload their music tracks and specify various parameters such as textures, patterns, objects, colors, and effects for different sections of the song.
Our system then takes these specifications and translates them into prompts for our advanced ML stable diffusion model, which generates a unique video tailored to the uploaded music track. Once the video is generated, the user can easily edit it using our intuitive and user-friendly interface. Our app provides users with a range of editing options, from basic graphical edits (similar to popular photo editing apps like Instagram and VSCO) to advanced generative edits that are handled by our ML pipeline.
The user experience of Mixingjays is similar to that of well-known video editing software like Adobe Premiere or Final Cut, but with the added power of our machine learning technology driving the video outputs. Our app provides users with the ability to create professional-quality music videos with ease and without the need for extensive technical knowledge. Whether you're a professional musician or an amateur DJ, Mixingjays can help you create stunning visuals that will take your music to the next level while giving the musicians full creative control over the video.
## How we built it
One of our baseline innovations is generating video from stable diffusion and engineering on the model internals of stable diffusion. As a standalone, stable diffusion outputs static images based on natural language prompts. Given a list of images that stable diffusion outputs from several prompts, we're able to generate video between the images by interpolating between images in CLIP latent space. A detail of stable diffusion is that it actually computes its output on a latent space, a compressed coordinate system that the model abstractly uses to represent images and text; the latent vector in latent space is then decoded into the image we see. This latent space is well-behaved in Euclidean space, so we can generate new image outputs and smooth transitions between images, strung into a video, by linearly interpolating between the latent vectors of two different images before decoding every interpolated vector. If the two images we interpolate between are from two different prompts from the same seed of stable diffusion, or the same prompt across two different seeds of stable diffusion, the resulting interpolations and video end up being semantically coherent and appealing.[1] A nice augmentation to our interpolation technique is perturbing the latent vector in coordinate space, making a new latent vector anywhere in a small Lp-norm ball surrounding the original vector. (This resembles the setting of adversarial robustness in computer vision research, although our application is not adversarial.) As a result, given a list of stable diffusion images and their underlying latent vectors, we can generate video by latent-space-interpolating between the images in order.
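In code, the interpolation step is small. The sketch below uses Hugging Face diffusers: grab the final latents for two prompts generated from the same seed, linearly interpolate, and decode each in-between latent with the VAE. The model ID, prompts, and frame count are illustrative of the idea rather than our exact pipeline, and it assumes a CUDA GPU.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def latents_for(prompt, seed=42):
    gen = torch.Generator("cuda").manual_seed(seed)   # same seed -> coherent transition
    return pipe(prompt, output_type="latent", generator=gen).images

lat_a = latents_for("a misty forest at dawn, hd, 4k")
lat_b = latents_for("a neon city at night, hd, 4k")

frames = []
for t in torch.linspace(0, 1, steps=24):              # 24 in-between frames
    lat = torch.lerp(lat_a, lat_b, t.item())          # straight line in latent space
    with torch.no_grad():
        img = pipe.vae.decode(lat / pipe.vae.config.scaling_factor).sample
    frames.append(img)                                 # stitch frames into video afterwards
```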
We generate videos directly from the music, which relies on a suite of algorithms for music analysis. The demucs model from Facebook AI Research (FAIR) performs "stem separation," or isolation of certain individual instruments/elements in a music track; we pass our music track into the model and generate four new music tracks of the same length, containing only the vocals, drums, bass, and melodic/harmonic instruments of the track, respectively. With OpenAI Whisper, we're able to extract all lyrics from the isolated vocal track, as well as bucket lyrics into corresponding timestamps at regular intervals. With more classical music analysis algorithms, we're able to:
* convert our drum track and bass track into timestamps for the rhythm/groove of the track
* and convert our melody/harmony track into a timestamped chord progression, essentially the formal musical notation of harmonic information in the track.
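Concretely, the stem-separation and lyric-extraction steps can be driven from Python roughly as below; the file paths and model size are illustrative, and the rhythm and chord analysis is omitted.

```python
import subprocess
import whisper

# Split the track into vocals / drums / bass / other stems using the demucs CLI.
subprocess.run(["demucs", "--out", "stems", "track.mp3"], check=True)

# Transcribe the isolated vocal stem; Whisper returns timestamped segments.
model = whisper.load_model("base")
result = model.transcribe("stems/htdemucs/track/vocals.wav")
for seg in result["segments"]:
    print(f'{seg["start"]:6.2f}s - {seg["end"]:6.2f}s  {seg["text"]}')
```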
These aspects of music analysis directly interface into our stable diffusion and latent-space-interpolation pipeline. In practice, the natural language prompts into stable diffusion usually consist of long lists of keywords specifying:
* objects, textures, themes, backgrounds
* physical details
* adjectives influencing tone of the picture
* photographic specs: lighting, shadows, colors, saturation/intensity, image quality ("hd", "4k", etc)
* artistic styles
Many of these keywords are exposed to the client at the UI level of our application, as small text inputs, sliders, knobs, dropdown menus, and more. As a result, the client retains many options for influencing the video output, independently of the music. These keywords are concatenated with natural language encapsulations of musical aspects:
* Lyrics are directly added to the prompts.
* Melodic/harmonic information and chord progressions map to different colors and themes, based on different chords' "feel" in music theory.
* Rhythm and groove is added to the interpolation system! We speed up, slow down, and alter our trajectory of movement though latent space in time with the rhythm.
The result is a high-quality visualizer that incorporates both the user's specifications/edits and the diverse analyzed aspects of the music track.
Theoretically, with good engineering, we'd be able to run our video generation and editing pipelines very fast, essentially in real time! Because interpolation occurs at the very last layer of the stable diffusion model internals, and video output from it depends only on a simple decoder instead of the entire stable diffusion model, video generation from interpolation is very fast and the runtime bottleneck depends only on stable diffusion. We generate one or two dozen images from stable diffusion per video, so by generating each image in parallel on a wide GPU array, as well as incorporating stable diffusion speedups, we're able to generate the entire video for a music track in a few seconds!
## Challenges we ran into
Prompt generation: Generating good quality prompts is hard. We tried different ways of prompting the model to see their effect on the quality of generated images and output video, and discovered we needed more description of the song in addition to the manually inputted prompt. This motivated us to look into music analysis.
Music Analysis: Our music analysis required pulling together a lot of disparate libraries and systems for digital signal processing that aren't typically used together, or popular enough to be highly supported. The challenge here was wrangling these libraries into a single system without the pipeline crumbling to obscure bugs.
Stable Diffusion: The main challenge in using stable diffusion for multiple image generation is the overhead in cost, compute, and time. Primitive implementations of stable diffusion took 30s to generate an image, which made things hard for us since we needed to generate multiple images for each song and prompt pair. The Modal workshop at Treehacks was very helpful for navigating this issue. We used Modal containers and stubs to bake our functions into images such that they run only when the functions are called, freeing the GPUs otherwise for other functions to run. It also helped us parallelise our code, which made things faster. However, since Modal is new, it was difficult to get it to recognise other files, add data types to function methods, etc.
Interpolation: This was by far the hardest part. Interpolation is not only slow but also hard to implement or reproduce. After extensive research and trials with different libraries, we used Google Research's Frame-interpolation method. It is implemented in TensorFlow, in contrast to the rest of our code, which was more PyTorch-heavy. In addition, it is slow and scales exponentially with the number of images: a 1-minute video took 10 minutes to generate. Given the 36-hour time limitation, we had to generate the videos in advance for our app, but there is plenty of scope to make it faster.
Integration: The biggest bottlenecks came about when trying to generate visuals for arbitrary input mp3 files. Because of this, we had to reduce our expectation of on-the-fly, real-time rendering of any input file, to a set of input files that we generated offline. Render times ranged anywhere from 20 seconds to many minutes, which means that we’re still a little bit removed from bringing this forward as a real-application for users around the world to toy with. Real-time video editing capabilities also proved difficult to implement given the lack of time for building substantial infrastructure, but we see this as a next step in building out a fully functional product that anyone can use!
## Accomplishments that we're proud of
We believe we've made significant progress towards a finalized, professional-grade application for our purposes. We showed that the interpolation idea produces video output that is high-quality, semantically coherent, granularly editable, and closely linked to the concurrent musical aspects of rhythm, melody, harmony, and lyrics. This makes our video offerings genuinely usable in professional live performances. Our hackathon result is a more rudimentary proof-of-concept, but our results lead us to believe that continuing this project in a longer-term setting would lead to a robust, fully-featured offering that directly plugs into the creative workflow of hobbyist and professional DJs and other musical live performers.
## What we learned
We found ourselves learning a ton about the challenges of implementing cutting-edge AI/ML techniques in the context of a novel product. Given the diverse set of experiences that each of our team members have, we also learned a ton through cross-team collaboration between scientists, engineers, and designers.
## What's next for Mixingjays
Moving forward, we have several exciting plans for improving and expanding our app's capabilities. Here are a few of our top priorities:
Improving speed and latency: We're committed to making our app as fast and responsive as possible. That means improving the speed and latency of our image-to-video interpolation process, as well as optimizing other parts of the app for speed and reliability.
Video editing: We want to give our users more control over their videos. To that end, we're developing an interactive video editing tool that will allow users to regenerate portions of their videos and choose which generation they want to keep. Additionally, we're exploring the use of computer vision to enable users to select, change, and modify objects in their generated videos.
Script-based visualization: We see a lot of potential in using our app to create visualizations based on scripts, particularly for plays and music videos. By providing a script, our app could generate visualizations that offer guidance on stage direction, camera angles, lighting, and other creative elements. This would be a powerful tool for creators looking to visualize their ideas in a quick and efficient manner.
Overall, we're excited to continue developing and refining our app to meet the needs of a wide range of creators and musicians. With our commitment to innovation and our focus on user experience, we're confident that we can create a truly powerful and unique tool for music video creation.
## References
Seth Forsgren and Hayk Martiros, "Riffusion - Stable diffusion for real-time music generation," 2022. <https://riffusion.com/about> | winning
## Inspiration
When most of our team became students here at the University of Waterloo, many of us had our first experience living in a shared space with roommates. Without the constant nagging from parents to clean up after ourselves that we had at home, and with some slightly disorganized roommates, many shared spaces in our residences and apartments, like kitchen counters, became cluttered and unusable.
## What it does
CleanCue is a hardware product that tracks clutter in shared spaces using computer vision. By tracking unused items taking up valuable counter space and issuing speech and notification reminders, CleanCue encourages roommates to clean up after themselves. This product promotes individual accountability and respect, repairing relationships between roommates and filling the need some of us have for the nagging and reminders we used to get from parents.
## How we built it
The current iteration of CleanCue is powered by a Raspberry Pi with a Camera Module sending a video stream to an Nvidia CUDA-enabled laptop/desktop. The laptop is responsible for running our OpenCV object detection algorithms, which enable us to log how long items are left unattended and send appropriate reminders to a speaker or notification services. We used Cohere to create unique messages with personality, making CleanCue feel more like a maternal figure, and we used some TTS APIs to emulate a mother's voice.
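The reminder logic that sits on top of the detections is simple. A rough sketch is below, where `detect_items(frame)` is a hypothetical placeholder for our OpenCV detection step that returns the labels currently visible.

```python
import time

CLUTTER_THRESHOLD_S = 30 * 60          # nag after an item sits out for 30 minutes
first_seen = {}                         # label -> timestamp when it first appeared

def check_clutter(frame, now=None):
    now = now or time.time()
    visible = set(detect_items(frame))  # hypothetical wrapper around the OpenCV model
    # Forget items that were put away.
    for label in list(first_seen):
        if label not in visible:
            del first_seen[label]
    # Track new items and nag about long-standing ones.
    reminders = []
    for label in visible:
        first_seen.setdefault(label, now)
        if now - first_seen[label] > CLUTTER_THRESHOLD_S:
            reminders.append(f"The {label} has been on the counter for a while, please put it away!")
    return reminders
```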
## Challenges we ran into
Our original idea was to create a more granular product which would customize decluttering reminders based on the items detected. For example, this version of the product could detect perishable food items and make reminders to return items to the fridge to prevent food spoilage. However, the pre-trained OpenCV models that we used did not have enough variety in trained items and precision to support this goal, so we settled for this simpler version for this limited hackathon period.
## Accomplishments that we're proud of
We are proud of our planning throughout the event, which allowed us to both complete our project while also enjoying the event. Additionally, we are proud of how we broke down our tasks at the beginning, and identified what our MVP was, so that when there were problems, we knew what our core priorities were. Lastly, we are glad we submitted a working project to Hack the North!!!!
## What we learned
The core frameworks that our project is built on were all new to the team. We had never used OpenCV or Taipy before, but had a lot of fun learning these tools. We also learned how to create improvised networking infrastructure to enable hardware prototyping in a public hackathon environment. On the non-technical side, we learned the importance of regularly re-assessing whether our solution was actually solving the problem we intended to solve, and making adjustments based on our priorities. Also, this was our first hardware hack!
## What's next for CleanCue
We definitely want to improve our prototype so it can more accurately describe a wide array of kitchen objects, enabling us to tackle more important issues like food waste prevention. We also realized that the technology in this project can aid individuals with dementia. We would love to explore the mobile app development space, and to use CleanCue to flag dangers in the kitchen, for example a young child getting too close to the stove or an open flame left on for a long time. Additionally, we had constraints based on hardware availability; ideally, we would use an Nvidia Jetson based platform for hardware compactness and flexibility. | We present a blockchain agnostic system for benchmarking smart contract execution times. To do this we designed a simple programming language capable of running small performance benchmarks. We then implemented an interpreter for that language on the Ethereum, Solana, and Polkadot blockchains in the form of a smart contract. To perform a measurement we then submit the same program to each chain and time its execution.
Deploying new smart contracts is expensive and learning the tooling and programming languages required for their deployment is time consuming. This makes a single blockchain agnostic language appealing for developers as it cuts down on cost and time. It also means that new blockchains can be added later and all of the existing tests easily run after the deployment of a single smart contract.
You can think of this as "a JVM for performance measurements." To demonstrate how this can be used to measure non-blockchain runtimes we also implemented an interpreter on Cloudflare Workers and present some benchmarks of that. Cloudflare Workers was an order of magnitude faster than the fastest blockchain we tested.
Our results show that network and mining time dominate smart contract execution time. Despite considerable effort we were unable to find a program that notably impacted the execution time of a smart contract while remaining within smart contract execution limits. These observations suggest three things:
1. Once a smart contract developer has written a functional smart contract there is little payoff to optimizing the code for performance as network and mining latency will dominate.
2. Smart contract developers concerned about performance should look primarily at transaction throughput and latency when choosing a platform to deploy their contracts.
3. Even blockchains like Solana which bill themselves as being high performance are much, much slower than their centralized counterparts.
### Results
We measured the performance of three programs:
1. An inefficient, recursive fibonacci number generator computing the 12th fibonacci number.
2. A program designed to "thrash the cache" by repeatedly making modifications to disparate memory locations.
3. A simple program consisting of two instructions to measure cold start times
In addition to running these programs on our smart contracts we also wrote a runtime on top of Cloudflare Workers as a point of comparison. Like these smart contracts Cloudflare Workers run in geographically distributed locations and feature reasonably strict limitations on runtime resource consumption.
To compute execution time we measured the time between when the transaction to run the start contract was sent and when it was confirmed by the blockchain. Due to budgetary constraints our testing was done on test networks.
We understand that this is an imperfect proxy for actual code execution time. Due to determinism requirements on all of the smart contract platforms that we used, access to the system time is prohibited to smart contracts. This makes measuring actual code execution time difficult. Additionally as smart contracts are executed and validated on multiple miners it is not clear what a measurement of actual code execution time would mean. This is an area that we would like to explore further given the time.
In the meantime we imagine that most users of a smart contract benchmarking system care primarily about total transaction time. This is the time delay that users of their smart contracts will experience and also the time that we measure.
![](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/714/760/datas/original.png)
Our results showed that Solana and Polkadot significantly outperformed Ethereum with Solana being the fastest blockchain we measured.
### Additional observations
While Solana was faster than Polkadot and Ethereum in our benchmarks it also had the most restrictive computational limits. The plot below shows the largest fibonacci number computable on each blockchain before computational limits were exceeded. Once again we include Cloudflare Workers as a non-blockchain baseline.
![](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/714/761/datas/original.png)
### The benchmarking language
To provide a unified interface for performance measurements we have designed and implemented a 17 instruction programming language called Arcesco. For each platform we then implement a runtime for Arcesco and time the execution of a standard suite of programs.
Each runtime takes assembled Arcesco bytecode through stdin and prints the execution result to stdout. An example invocation might look like this:
```
cat program.bc | assembler | runtime
```
This unified runtime interface means that very different runtimes can be plugged in and run the same way. As testament to the simplicity of runtime implementations we were able to implement five different runtimes over the course of the weekend.
Arcesco is designed as a simple stack machine which is as easy as possible to implement an interpreter for. An example Arcesco program that computes the 10th fibonacci number looks like this:
```
pi 10
call fib
exit
fib:
copy
pi 3
jlt done
copy
pi 1
sub
call fib
rot 1
pi 2
sub
call fib
add
done:
ret
```
To simplify the job of Arcesco interpreters we have written a very simple bytecode compiler for Arcesco which replaces labels with relative jumps and encodes instructions into 40 bit instructions. That entire pipeline for the above program looks like this:
```
text | assembled | bytecode
----------------|---------------|--------------------
| |
pi 10 | pi 10 | 0x010a000000
call fib | call 2 | 0x0e02000000
exit | exit | 0x1100000000
fib: | |
copy | copy | 0x0200000000
pi 3 | pi 3 | 0x0103000000
jlt done | jlt 10 | 0x0b0a000000
copy | copy | 0x0200000000
pi 1 | pi 1 | 0x0101000000
sub | sub | 0x0400000000
call fib | call -6 | 0x0efaffffff
rot 1 | rot 1 | 0x0d01000000
pi 2 | pi 2 | 0x0102000000
sub | sub | 0x0400000000
call fib | call -10 | 0x0ef6ffffff
add | add | 0x0300000000
done: | |
ret | ret | 0x0f00000000
| |
```
Each bytecode instruction is five bytes. The first byte is the instruction's opcode and the next four are its immediate. Even instructions without immediates are encoded this way to simplify instruction decoding in interpreters. We understand this to be a small performance tradeoff, but as much as possible we were optimizing for ease of interpretation.
```
0 8 40
+--------+-------------------------------+
| opcode | immediate |
+--------+-------------------------------+
```
The result of this is that an interpreter for Arcesco bytecode is just a simple while loop and switch statement. Each bytecode instruction being the same size and format makes decoding instructions very simple.
```
while True:
    match opcode:
        case 1:
            stack.append(immediate)
        # etc..
```
This makes it very simple to implement an interpreter for Arcesco bytecode which is essential for smart contracts where larger programs are more expensive and less auditable.
A complete reference for the Arcesco instruction set is below.
```
opcode | instruction | explanation
-----------------------------------
1 | pi <value> | push immediate - pushes VALUE to the stack
2 | copy | duplicates the value on top of the stack
3 | add | pops two values off the stack and adds them pushing
the result back onto the stack.
4 | sub | like add but subtracts.
5 | mul | like add but multiplies.
6 | div | like add but divides.
7 | mod | like add but modulus.
8 | jump <label> | moves program execution to LABEL
9 | jeq <label> | moves program execution to LABEL if the top two
stack values are equal. Pops those values from the
stack.
10 | jneq <label> | like jeq but not equal.
11 | jlt <label> | like jeq but less than.
12 | jgt <label> | like jeq but greater than.
13 | rot <value> | swaps stack item VALUE items from the top with the
stack item VALUE-1 items from the top. VALUE must
be >= 1.
14 | call <label> | moves program execution to LABEL and places the
current PC on the runtime's call stack
15 | ret | sets PC to the value on top of the call stack and
pops that value.
16 | pop | pops the value on top of the stack.
17 | exit | terminates program execution. The value at the top
of the stack is the program's return value.
```
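As a concrete illustration, an interpreter for a subset of these opcodes fits in a few dozen lines of Python. This is a sketch written for this write-up, not one of the deployed smart-contract runtimes; the operand order for jlt/sub and the return-address handling follow our reading of the fib example above.

```python
import struct

def run(bytecode: bytes):
    """Interpret a subset of Arcesco bytecode (each instruction is 5 bytes)."""
    stack, call_stack, pc = [], [], 0
    while True:
        opcode = bytecode[pc * 5]
        (imm,) = struct.unpack_from("<i", bytecode, pc * 5 + 1)  # signed little-endian immediate
        if opcode == 1:                      # pi: push immediate
            stack.append(imm); pc += 1
        elif opcode == 2:                    # copy
            stack.append(stack[-1]); pc += 1
        elif opcode in (3, 4):               # add / sub
            a, b = stack.pop(), stack.pop()
            stack.append(b + a if opcode == 3 else b - a); pc += 1
        elif opcode == 11:                   # jlt: relative jump if next-from-top < top
            a, b = stack.pop(), stack.pop()
            pc += imm if b < a else 1
        elif opcode == 13:                   # rot 1 (the only case fib uses): swap top two
            stack[-1], stack[-2] = stack[-2], stack[-1]; pc += 1
        elif opcode == 14:                   # call: save return address, relative jump
            call_stack.append(pc + 1); pc += imm
        elif opcode == 15:                   # ret
            pc = call_stack.pop()
        elif opcode == 17:                   # exit: top of stack is the return value
            return stack[-1]
        else:
            raise ValueError(f"opcode {opcode} not implemented in this sketch")

# The assembled fib program from the table above, concatenated into bytes.
fib = bytes.fromhex(
    "010a000000" "0e02000000" "1100000000" "0200000000" "0103000000" "0b0a000000"
    "0200000000" "0101000000" "0400000000" "0efaffffff" "0d01000000" "0102000000"
    "0400000000" "0ef6ffffff" "0300000000" "0f00000000"
)
print(run(fib))  # fib of 10 under the program's own convention
```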
### Reflections on smart contract development
Despite a lot of hype about smart contracts we found that writing them was quite painful.
Solana was far and away the most pleasant to work with as its `solana-test-validator` program made local development easy. Solana's documentation was also approachable and centralized. The process of actually executing a Solana smart contract after it was deployed was very low level and required a pretty good understanding of the entire stack before it could be done.
Ethereum comes in at a nice second. The documentation was reasonably approachable and the sheer size of the Ethereum community meant that there was almost too much information. Unlike Solana though, we were unable to set up a functional local development environment which meant that the code -> compile -> test feedback loop was slow. Working on Ethereum felt like working on a large C++ project where you spend much of your time waiting for things to compile.
Polkadot was an abject nightmare to work with. The documentation was massively confusing and what tutorials did exist failed to explain how one might interface with a smart contract outside of some silly web UI. This was surprising given that Polkadot has a $43 billion market cap and was regularly featured in "best smart contract" articles that we read at the beginning of this hackathon.
We had a ton of fun working on this project. Externally, it can often be very hard to tell the truth from marketing fiction when looking in the blockchain space. It was fun to dig into the technical details of it for a weekend.
### Future work
On our quest to find the worst performing smart contract possible, we would like to implement a fuzzer that integrates with Clockchain to generate adversarial bytecode. We would also like to explore the use of oracles in blockchains for more accurate performance measurements. Finally, we would like to flesh out our front-end to be dynamically usable for a wide audience. | ## Inspiration
It's pretty common that you come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that, you'll throw away food that used to be perfectly good. But that's only the food you and I are wasting. What about everything that Walmart or Costco trashes on a day-to-day basis?
Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year.
About 30 percent of food in American grocery stores is thrown away. US retail stores generate about 16 billion pounds of food waste every year.
But, if there was a solution that could ensure that no food would be needlessly wasted, that would change the world.
## What it does
PantryPuzzle scans images of food items, extracts their expiration dates, and adds them to an inventory of items that users can manage. When food nears expiration, it notifies users to incentivize action. The app suggests actions to take with any particular food item, like recipes that use the items in a user's pantry according to their preferences. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers.
## How we built it
We built it with a React frontend and a Python Flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of Google Vision API's object detection and optical character recognition (OCR), respectively. For the recipe recommendation feature, we used OpenAI's GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used Nominatim from OpenStreetMap.
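A trimmed-down sketch of the expiration-date step is below: run Google Vision OCR on the photo and pull out anything that looks like a date. The regex and error handling are simplified illustrations, not our production parsing.

```python
import re
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

def extract_expiration(image_bytes: bytes):
    image = vision.Image(content=image_bytes)
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    text = annotations[0].description if annotations else ""
    # Match common label formats such as "EXP 2023/10/31" or "BEST BY 10/31/23".
    match = re.search(r"\d{1,4}[/-]\d{1,2}[/-]\d{1,4}", text)
    return match.group(0) if match else None
```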
## Challenges we ran into
* Getting React to display our components properly
* Storing multiple values in the database at once (food item, expiration date)
* Displaying all Firebase elements (we did a proof of concept with console.log)
* Donated food being displayed before the button was even clicked (fixed by using an onclick handler function)
* Getting the user's location accessed and stored as an address, not just longitude/latitude
* Needing to log the day a food item was added
* Deleting an item when it expires
* Syncing a user's stash with donations (we don't want an item listed if the user no longer wants to donate it)
* Figuring out how to delete food entries from Firebase (tricky because of the document IDs)
* Predicting when non-labeled foods expire (using OpenAI)
## Accomplishments that we're proud of
* We were able to get a good computer vision algorithm that is able to detect the type of food and a very accurate expiry date.
* Integrating the API that helps us figure out our location from the latitudes and longitudes.
* Used a scalable database like firebase, and completed all features that we originally wanted to achieve regarding generative AI, computer vision and efficient CRUD operations.
## What we learned
We learnt how big a problem food waste disposal is, and were surprised to find that so much food is being thrown away.
## What's next for PantryPuzzle
We want to add user authentication, so every user in every home and grocery has access to their personal pantry, and also maintains their access to the global donations list to search for food items others don't want.
We want to integrate this app with the Internet of Things (IoT) so refrigerators can come with this product built in to detect food and its expiry date.
We also want to add a feature where if the expiry date is not visible, the app can predict what the likely expiration date could be using computer vision (texture and color of food) and generative AI. | partial |
## Inspiration
My college friends and brother inspired me to do such a good project. This is mainly an addictive game, the same one we used to play on keypad phones.
## What it does
This is a 2D game that includes tunes, graphics, and much more. We can command the snake to move up, down, right, and left.
## How we built it
I built it using the pygame module in Python.
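A small sketch of the input handling, which is the heart of commanding the snake (the movement, food, and scoring logic are omitted, and the cell size here is just an example):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))
clock = pygame.time.Clock()
direction = (20, 0)                         # start moving right, one 20px cell per tick

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:  # command the snake: up, down, left, right
            if event.key == pygame.K_UP and direction != (0, 20):
                direction = (0, -20)
            elif event.key == pygame.K_DOWN and direction != (0, -20):
                direction = (0, 20)
            elif event.key == pygame.K_LEFT and direction != (20, 0):
                direction = (-20, 0)
            elif event.key == pygame.K_RIGHT and direction != (-20, 0):
                direction = (20, 0)
    # ...move the snake by `direction`, draw, and check collisions here...
    clock.tick(10)
pygame.quit()
```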
## Challenges we ran into
Many bugs arose, such as runtime errors, but I finally managed to fix all of these problems.
## Accomplishments that we're proud of
I am proud of my own project: I built a user-interactive program.
## What we learned
I learned to use pygame in Python, and this project also attracted me to Python programming.
## What's next for Snake Game using pygame
Next, I am doing various Python projects such as an alarm, a virtual assistant program, a Flappy Bird program, a health management system, and a library management system. | ## Inspiration
This is a project that was given to me by an organization, and my colleagues inspired me to do it.
## What it does
It can remind us of what we have to do in the future and also set a time for when it is to be done.
## How we built it
I built it as a command-line utility in Python.
## Challenges we ran into
Many challenges arose, such as storing data in a file, and many bugs came up in the middle of this program.
## Accomplishments that we're proud of
I am proud that I made this real-time project, which reminds a person to do their tasks.
## What we learned
I learned more about command-line utilities in Python.
## What's next for Todo list
Next, I am doing various projects such as a virtual assistant and game development. | ## Inspiration
Instead of a traditional keyboard, hand-held equipment can bring players a more interactive gaming experience. However, current devices like the Nintendo Joy-Con are platform-exclusive, and these platforms usually do not come at a very cheap price. This project aims to make these interactive games universal.
## What it does
The hand-held module detects movement based on readings from the MPU6050 accelerometer inside. The feedback signal goes back to the rhythm game built with Python as input.
## How we built it
The GUI was built with the Pygames library. The serial library was used to read the input data from the Arduino. Arduino IDE sketch uses the MPU6050 library to collect the data from the accelerometer, which was wired according to the datasheet.
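A sketch of the Python side of that serial link, assuming the Arduino sketch prints comma-separated accelerometer readings; the port name, baud rate, and threshold are placeholders rather than our exact values:

```python
import serial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # port/baud depend on the setup
SWING_THRESHOLD = 15000                                  # raw MPU6050 units, tuned by hand

def read_motion():
    """Return True when the accelerometer reading looks like a 'hit' gesture."""
    line = ser.readline().decode(errors="ignore").strip()
    if not line:
        return False
    try:
        ax, ay, az = (int(v) for v in line.split(","))   # e.g. "1200,-300,16500"
    except ValueError:
        return False
    return abs(ax) > SWING_THRESHOLD or abs(ay) > SWING_THRESHOLD
```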
## Challenges we encountered
Installing the MPU6050 library in the Arduino IDE resulted in a library-not-found error during compilation on one of the laptops, even though the same steps worked on the other three. In the end, the cause turned out to be a path in the IDE settings, which was hard to notice. Occasionally, the Python code times out when running due to problems with data transfer from the Arduino IDE.
## What we are proud of
Creating a working mini version of the game from scratch within the given time limit.
## Inspiration
College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user.
Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health.
## What it does
Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline.
The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity.
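A simplified sketch of that greedy core is below; the real algorithm also handles dependencies and preference-based rebalancing, which are omitted here, and the example tasks are made up.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_left: float
    due: int            # day index of the deadline

def greedy_schedule(tasks, availability):
    """availability: free hours per day. Returns {day: [(task name, hours)]}."""
    plan = {day: [] for day in range(len(availability))}
    free = list(availability)
    for task in sorted(tasks, key=lambda t: t.due):          # earliest deadline first
        for day in range(min(task.due + 1, len(free))):      # only schedule before the deadline
            if task.hours_left <= 0:
                break
            chunk = min(free[day], task.hours_left)
            if chunk > 0:
                plan[day].append((task.name, chunk))
                free[day] -= chunk
                task.hours_left -= chunk
    return plan

# Example: two tasks, three days with 2, 4, and 3 free hours.
print(greedy_schedule(
    [Task("PSet 3", 5, due=2), Task("Essay draft", 2, due=1)],
    availability=[2, 4, 3],
))
```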
The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress.
## How we built it
The scheduling algorithm uses a greedy algorithm and is implemented with Python, Object Oriented Programming, and MatPlotLib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python.
## Challenges we ran into
The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug various bugs that arose. We initially struggled setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple times before reaching the greedy algorithm solution presented here.
## Accomplishments that we're proud of
We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. We also feel strongly that this product has real-world usability, and are proud of validating the idea during YHack.
## What we learned
We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package also was a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.
## What's next for Clockwork
Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy. | ## Inspiration
All of us college students can relate to having a teacher who was not engaging enough during lectures or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves, create better lecture sessions, and earn better ratemyprofessor ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor's lecture audio in order to differentiate between various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection and analysis software. We then store everything in a database and show the data on a dashboard, which the professor can access and use to improve their body and voice engagement with students. This is all in hopes of allowing professors to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
I used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them to work for our purpose instead of starting from scratch, to save time on styling to a theme. I needed to add a couple of new original functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens) and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a phpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to seamlessly visualize it using a pie chart, providing an easy and seamless integration with our database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher's movements and hear the volume while not disturbing the flow of class. Currently, the main sensors being utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow it to change from a stationary position to a mobile one. It's able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is scan in using their school ID, then either check their lecture data or start the lecture. Overall, the professor is able to see if the device is tracking their movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through Solidworks and Figma (which I was using for the first time). I had to do a lot of research, watching Amazon videos to see how they created their Amazon Echo model and looking back at my UI/UX notes from the Google Coursera certification course that I’m taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method, as the logic was specific to our software. Fortunately, these problems just needed time and a lot of debugging with some help from peers and existing resources, and since React is JavaScript-based, I was able to use past experience with JS and Django to help despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run smoothly and interact in the correct manner. Often I ended up in dependency hell and had to rethink the architecture of the whole project to avoid over-engineering it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to leverage his strengths in that area to create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman at his first hackathon and was able to use the experience to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just with coding but with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, we would add motion-tracking data feedback so the professor can get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand where the students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms. | ## Inspiration
As double majors in college, we understood and felt the stress that students go through when trying to decide how to allocate their time efficiently. Oftentimes, it feels like there just isn't enough time to be able to do all your work. We wanted to create a platform that specifically helps college students manage their course load through scheduling, which improves time management. 87% of students say that better time management and organization skills would help them get better grades, and there are additional important benefits such as reducing stress. In fact, college students see ineffective time management as one of their biggest barriers to efficacy and success; specifically, 26% of college students stated that neglecting to create a schedule of their time was one of those barriers. Therefore, we wanted to make it easier for college students to create their weekly schedules by creating a platform that can create students' schedules for them.
## What it does
Our app takes in your current schedule (classes, assignments, free time), and allocates your assignments to the free slots in order to come up with an efficient way for you to complete your work by the deadlines.
## How we built it
Our platform uses backend machine learning to determine the most optimal order and plan for you to do your work by their deadlines. We use decision trees to determine on what day of the week a certain assignment should be completed. By factoring in the amount of time needed to complete the project, the number of days left to complete the project, and potentially including some historical data on the rigor of certain classes, we came up with boundaries that classify the data into different days of the week. Our MVP uses a Python script to analyze the rigor of an assignment in comparison to the amount of time left to complete it in order to compute a "score" for this assignment. This score is then compared to the scores of other assignments in order to determine an efficient prioritization of work.
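A minimal sketch of the scoring idea described above (the course names, rigor weights, and dates are illustrative placeholders rather than our actual decision-tree boundaries):

```
from datetime import date

# Hypothetical per-class rigor factors -- the real model derives these from historical data.
RIGOR = {"CS 101": 1.3, "MATH 53": 1.2, "HIST 10": 0.9}

def assignment_score(course, est_hours, due, today=None):
    """Higher score = the assignment should be scheduled sooner."""
    today = today or date.today()
    days_left = max((due - today).days, 1)   # guard against past-due / zero days
    rigor = RIGOR.get(course, 1.0)           # default rigor for unknown classes
    return est_hours * rigor / days_left     # urgency-style ratio

tasks = [
    ("CS 101", 6, date(2021, 10, 22)),
    ("HIST 10", 3, date(2021, 10, 25)),
    ("MATH 53", 4, date(2021, 10, 20)),
]
for course, hours, due in sorted(tasks, key=lambda t: -assignment_score(*t)):
    print(course, round(assignment_score(course, hours, due), 2))
```

Sorting by this score gives the prioritization order; the day-of-week classification sits on top of it.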
## Challenges we ran into
We had to run through some iterations of our algorithm in order to find one that worked the best. We are also still in the process of seamlessly integrating our backend with our frontend, but hope to be done soon!
## Accomplishments that we're proud of
We are proud to have the opportunity to help other students like us reduce the stress in their already over-booked lives. We hope that our platform can be used by students to allow them to remove one of the big pain points in their lives and let them focus on the actual work and learning rather than just trying to figure out what to do.
## What we learned
We were surprised to learn about the impact that scheduling has on the lives of college students when we dived deeper into the statistics and we realized that it’s a lot more exciting to build a product that solves a problem we’re passionate about compared to building something that is simply technically challenging. We learned from each other since Niki is more experienced in the frontend and Aadhrik is more experienced in the backend; however, we both worked on both parts in order to have the most diverse ideas possible. The mentors were also amazing in teaching us about software development, product management, and leadership in general!
## What's next for Weekly
We hope to add teachers onto the app so that students enrolled in a class can directly see what assignments they have coming up and when they are due. Teachers understand how long the assignments should take, which helps the students plan ahead. Teachers can also receive feedback on how long it is actually taking students to complete certain assignments and adjust the future difficulty level accordingly. | winning
## Inspiration
The inspiration behind this comes from our realization that women are less involved in the financial world and thus have less financial freedom. We aim to promote the equality of women in finance and encourage them to invest their money, instead of leaving it in a bank account, if they have the means. This calculator aims to provide financial advice based on one's circumstances.
## What it does
It is a website that shows our mission, and includes a budget calculator where the user can input each factor of their income and expenses. The calculator will calculate their total monthly income, total monthly expenses, and total monthly savings. This will show the user their financial health and advise them on financial tips they can follow to invest and increase their financial wellbeing.
## How we built it
We built the website using HTML and CSS for the design and flow of the website, and Javascript to deal with the input/output of the budget calculator.
## Challenges we ran into
We ran into many challenges while building our project, one being that we didn't have prior experience with Javascript. It was a challenging project due to our limited web development experience. It was very difficult to learn it in a short amount of time, and we ran into many bugs that were difficult to solve, such as getting the input properly.
## Accomplishments that we're proud of
We are proud of eventually figuring out how to align the boxes with the text in the budget calculator, as well as handling the input properly.
## What we learned
We learned that we must persevere even when times are tough, and that it is extremely important to have a strict plan to ensure that we get things done in a timely manner.
## What's next for WinInvest
We hope to finish this project and eventually make it a workable product that can give tailored financial advice based on the results. | ## Inspiration
As students who are soon going to start living on our own, handling our finances and budgeting are important tasks to learn. We decided to make this app in order to make all this budgeting easier.
## What it does
Provides an easy way to organize and budget your finances.
## How we built it
This was built completely in vanilla JavaScript, HTML, and CSS.
## Challenges we ran into
Using vanilla JavaScript to insert elements into an HTML document without the help of a framework like React was quite cumbersome, but with practice, we were able to get the hang of it.
## Accomplishments that we're proud of
We are proud of creating something complete in a short time period for the first time!
## What we learned
We deepened our skills with JavaScript, and learned how to collaborate with other programmers.
## What's next for financials app
There are still many potential bugs to fix, and we also want to improve the overall appearance of the app. | ## Inspiration
Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product with several socially impactful use cases. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the many use cases associated with these technologies. These include group insurance, financial literacy, and personal investment.
## What it does
Allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with decreased/shared risk.
## How we built it
* Smart Contract for the transfer of funds on the blockchain made using Solidity
* A robust backend and authentication system made using node.js, express.js, and MongoDB.
* Elegant front end made with react-native and Celo's SDK.
## Challenges we ran into
We were unfamiliar with the tech stack used to create this project and with blockchain technology.
## What we learned
We learned many new languages and frameworks. This includes building cross-platform mobile apps with React Native and the underlying principles of blockchain technology, such as smart contracts and decentralized apps.
## What's next for *PoolNVest*
Expanding our API to select low-risk stocks and allow the community to vote on where to invest the funds.
Refine and improve the proof of concept into a marketable MVP and tailor the UI towards the specific use cases as mentioned above. | losing |
## Inspiration:
The inspiration for Kisan Mitra came from the realization that Indian farmers face a number of challenges in accessing information that can help them improve their productivity and incomes. These challenges include:
* Limited reach of extension services
* Lack of awareness of government schemes
* Difficulty understanding complex information
* Language barriers
Kisan Mitra is designed to address these challenges by providing farmers with timely and accurate information in a user-friendly and accessible manner.
## What it does:
Kisan Mitra is a chatbot that can answer farmers' questions on a wide range of topics, including:
* Government schemes and eligibility criteria
* Farming techniques and best practices
* Crop selection and pest management
* Irrigation and water management
* Market prices and weather conditions
Kisan Mitra can also provide farmers with links to additional resources, such as government websites and agricultural research papers.
## How we built it:
Kisan Mitra is built using the PaLM API, which is a large language model from Google AI. PaLM is trained on a massive dataset of text and code, which allows it to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
Kisan Mitra is also integrated with a number of government databases and agricultural knowledge bases. This ensures that the information that Kisan Mitra provides is accurate and up-to-date.
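A minimal sketch of how a farmer's question could be sent to the PaLM API from Python; the model name, prompt wording, and grounding text below are illustrative assumptions rather than our production setup:

```
import google.generativeai as palm  # PaLM API client

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

def answer_farmer_question(question, context=""):
    # In the real system, context comes from the government databases and
    # agricultural knowledge bases mentioned above.
    prompt = (
        "You are Kisan Mitra, a helpful assistant for Indian farmers.\n"
        f"Reference material: {context}\n"
        f"Farmer's question: {question}\n"
        "Answer simply, in plain language:"
    )
    response = palm.generate_text(model="models/text-bison-001", prompt=prompt)
    return response.result

print(answer_farmer_question("Which crops suit low-rainfall regions?"))
```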
## Challenges we ran into:
One of the biggest challenges we faced in developing Kisan Mitra was making it accessible to farmers of all levels of literacy and technical expertise. We wanted to create a chatbot that was easy to use and understand, even for farmers who have never used a smartphone before.
Another challenge was ensuring that Kisan Mitra could provide accurate and up-to-date information on a wide range of topics. We worked closely with government agencies and agricultural experts to develop a knowledge base that is comprehensive and reliable.
## Accomplishments that we're proud of:
We are proud of the fact that Kisan Mitra is a first-of-its-kind chatbot that is designed to address the specific needs of Indian farmers. We are also proud of the fact that Kisan Mitra is user-friendly and accessible to farmers of all levels of literacy and technical expertise.
## What we learned:
We learned a lot while developing Kisan Mitra. We learned about the challenges that Indian farmers face in accessing information, and we learned how to develop a chatbot that is both user-friendly and informative. We also learned about the importance of working closely with domain experts to ensure that the information that we provide is accurate and up-to-date.
## What's next for Kisan Mitra:
We are committed to continuing to develop and improve Kisan Mitra. We plan to add new features and functionality, and we plan to expand the knowledge base to cover more topics. We also plan to work with more government agencies and agricultural experts to ensure that Kisan Mitra is the best possible resource for Indian farmers.
We hope that Kisan Mitra will make a positive impact on the lives of Indian farmers by helping them to improve their productivity and incomes. | ## Introduction
[Best Friends Animal Society](http://bestfriends.org)'s mission is to **bring about a time when there are No More Homeless Pets**
They have an ambitious goal of **reducing the death of homeless pets by 4 million/year**
(they are doing some amazing work in our local communities and definitely deserve more support from us)
## How this project fits in
Originally, I was only focusing on a very specific feature (adoption helper).
But after conversations with awesome folks at Best Friends came a realization that **bots can fit into a much bigger picture in how the organization is being run** to not only **save resources**, but also **increase engagement level** and **lower the barrier of entry points** for strangers to discover and become involved with the organization (volunteering, donating, etc.)
This "design hack" comprises of seven different features and use cases for integrating Facebook Messenger Bot to address Best Friends's organizational and operational needs with full mockups and animated demos:
1. Streamline volunteer sign-up process
2. Save human resource with FAQ bot
3. Lower the barrier for pet adoption
4. Easier donations
5. Increase visibility and drive engagement
6. Increase local event awareness
7. Realtime pet lost-and-found network
I also "designed" ~~(this is a design hack right)~~ the backend service architecture, which I'm happy to have discussions about too!
## How I built it
```
def design_hack():
    s = get_sketch()
    m = s.make_awesome_mockups()
    k = get_apple_keynote()
    return k.make_beautiful_presentation(m)
```
## Challenges I ran into
* Coming up with a meaningful set of features that can organically fit into the existing organization
* ~~Resisting the urge to write code~~
## What I learned
* Unique organizational and operational challenges that Best Friends is facing
* How to use Sketch
* How to create ~~quasi-~~prototypes with Keynote
## What's next for Messenger Bots' Best Friends
* Refine features and code :D | # Doc.Care
Our revolutionary platform for medical professionals and patients: a centralized data hub for accessing medical history. Our platform provides easy access to medical history data, allowing doctors and other medical professionals worldwide to access and reference patient records in real-time with a unique search feature. Patients can now have their medical history at their fingertips with our conversational chat function, powered by OpenAI's GPT-3. With this feature, patients can easily ask specific questions about their medical history, such as "when was their last flu shot?" and receive an answer immediately. Say goodbye to the hassle of reading through cryptic records and finally get quick answers for any questions patients have.
Leveraging Figma's web interface, we successfully crafted an initial design for our platform. The incorporation of key functionalities, including the organizational search tool and chat box, was executed utilizing a combination of React, CSS, and OpenAI's Chat GPT.
Our implementation of OpenAI's technology enabled us to develop a sophisticated model capable of generating a simulated medical record history for patients. Using this model, we can effectively recall and relay requested patient information in a conversational format.
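A minimal sketch of that conversational recall, assuming the OpenAI chat completion API and a made-up simulated record (the record text, model name, and prompt are placeholders):

```
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# A made-up simulated history in the spirit of the generated records above.
RECORD = "2021-10-03: influenza vaccine administered. 2022-06-14: annual physical, normal."

def ask_record(question):
    messages = [
        {"role": "system",
         "content": "Answer the patient's questions using only this medical history: " + RECORD},
        {"role": "user", "content": question},
    ]
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return reply.choices[0].message["content"]

print(ask_record("When was my last flu shot?"))
```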
As first-time participants in the hacking community, we take great pride in our Treehacks 2023 project. We are excited to continue developing this project further in the near future. | winning |
## Inspiration
**Do you find yourself easily distracted while watching online educational videos? Introducing Edumedia.**
Our idea began with an interaction I had with my lecturer in a small math class. My lecturer became sick before an exam and had to cancel office hours, which was something that the class needed. He decided to try streaming his office hours on Twitch as a way to be available to more students while also keeping everyone safe. This was very successful, and now he streams on Twitch every week for math.
A week later, I was talking to him about how this idea of streaming on Twitch was very beneficial for the students and how hard it is for teachers like him to have live interactions with their students. He made a good point that there's no website for this kind of niche, as most live educational websites are paid, private tutoring services.
Our idea stems from this and is about opening a space in educational streaming where there's no barrier to entry and you can just start streaming educational content, with interactive mathematical tools that students can use alongside a lecture.
**We wanted to create an engaging online platform that mimicked real-classroom experiences. You know- the feel of sketching graphs in your notepad while listening to your algebra lecture.**
## What it does
Edumedia lets its users seamlessly transition from educational videos/recorded lectures to applications about the subject such as desmos, thus enhancing the student learning experience. We wanted to build a product that eliminated the hassle of toggling between multiple tabs so that the student can work more efficiently. Our user-friendly interface is accessible to all with multiple triggers such as keyboard shortcuts, voice, etc. We make sure to retain the delightful in-classroom experience with interactive educational tools and collaboration with comments.
Our product development principles are simple:
* it enhances learning through an engaging & delightful interface.
* it simulates the authenticity of a real classroom and translates this digitally.
* it’s economical and ensures the accessibility of quality education to all.
## How we built it
*Research*
We started by listing use cases for our product. We studied the challenges faced by professors and students while using current educational streaming platforms. Professors cited issues such as live streaming difficulties and students faced distracted learning and lack of involvement.
*Ideation & Design*
We scrambled the components of current learning modules to sketch a more usable and interactive interface. We synthesized from our research that students/professors use tools such as Desmos while learning and hence tried to integrate these in our layout. We wire-framed a low fidelity version to test our app flow before adding the major design and branding.
*Development*
We made a plan of the needed programming languages and related APIs that had to be implemented to make the development process more efficient. We used React for the front-end coding and Python Flask for the back-end coding. Firebase was utilized for the storage of videos, and it was useful for updating information in real time.
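As a rough illustration of that backend, here is how a Flask endpoint might record an uploaded lecture's metadata in Firebase (the route name, database URL, and credential file are assumptions, not our exact code):

```
import firebase_admin
from firebase_admin import credentials, db
from flask import Flask, request, jsonify

app = Flask(__name__)
cred = credentials.Certificate("serviceAccountKey.json")  # placeholder credentials file
firebase_admin.initialize_app(cred, {"databaseURL": "https://example-project.firebaseio.com"})

@app.route("/videos", methods=["POST"])
def add_video():
    # Store metadata for an uploaded lecture; the video itself lives in storage,
    # with only its URL recorded here so clients can update in real time.
    meta = {"title": request.json["title"], "url": request.json["url"]}
    ref = db.reference("videos").push(meta)
    return jsonify({"id": ref.key}), 201
```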
## Challenges we ran into
The idea of integrating an interactive mechanism such as Desmos was difficult to execute while maintaining the user experience of multiple viewports. We approached this issue by sketching out multiple layouts and testing at low-fidelity to see which was navigation-friendly.
## Accomplishments that we’re proud of
We challenged ourselves to adhere to our product principles whilst delivering a quality product. The task was to transform the currently monotonous experience of one-dimensional video streaming to a high-involvement experience and Edumedia has managed to achieve that.
We also feel accomplished for being able to collaborate as a team of developers and designers and achieve our individual goals in the process.
## What we learned
With the implementation of an interactive website, we were able to learn new topics such as using REST APIs and incorporating Firebase.
The product development process taught us a great deal about user-experience and customer psychology. As a team, we were all able to elevate and enhance our team working, coordination, and time management skills. We each possess different competencies and were able to teach each other in the process.
## What’s next for EDUMEDIA
Design and development is an iterative process and we’d like to test our current prototype on target users to gather feedback and refine our creation. We plan to do this till we’ve reached a stage of outstanding user satisfaction.
Secondly, we’d like to expand product experience to multiple types of subjects such as music and design as we believe that our interactive platforms can be customized to suit different learning objectives.
From a business perspective, we’d like to collaborate with educational institutions and form a network of channels and users - one big happy Edumedia family! | ## Inspiration
The classroom experience has *drastically* changed over the years. Today, most students and professors prefer to conduct their course organization and lecture notes electronically. Although there are applications that enabled a connected classroom, none of them are centered around measuring students' understanding during lectures.
The inspiration behind Enrich was driven by the need to create a user-friendly platform that expands the possibilities of electronic in-class course lectures: for both the students and the professors. We wanted to create a way for professors to better understand the student's viewpoint, recognize when their students need further help with a concept, and lead a lecture that would best provide value to students.
## What it does
Enrich is an interactive course organization platform. The essential idea of the app is that professor can create "classrooms" to which students can add themselves using a unique key provided by the professor. The professor has the ability to create multiple such classrooms for any class that he/she teaches. For each classroom, we provide a wide suite of services to enable a productive lecture.
An important feature in our app is a "learning ratio" statistic, which lets the professor know how well he/she is teaching the topics. As the teacher is going through the material, students can anonymously give real-time feedback on how they are responding to the lecture. The aggregation of this data is used to determine a color gradient from red (the lecture is going poorly) to green (the lecture is very clear and understandable). This allows the teacher to slow down if he/she recognizes that students are getting lost.
We also have a speech-to-text translation service that transcribes the lecture as it is going, providing students with the ability to read what the teacher is saying. This not only provides accessibility to those who can't hear, but also allows students to go back over what the teacher has said in the lecture.
Lastly, we have a messaging service that connects the students to Teaching Assistants during the lecture. This allows them to ask questions to clarify their understanding without disrupting the class.
## How we built it
Our platform consists of two sides: Learners and Educators. We used React.js as the front-end for both the Learner side and Educator side of our application. The whole project revolves around an effectively organized Firebase Realtime Database, which stores the hierarchy of professor-class-student relationships. The React components interface with Firebase to update students as and when they enter and leave a classroom. We also used Pusher to develop the chat service for the classrooms.
For the speech-to-text detection, we used the Google Speech-to-Text API to detect speech from the Educator's computer, transcribe it, and update the Firebase Realtime Database with the transcript. The web application then updates the user-facing site with the transcript.
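A condensed sketch of that transcription step (the classroom path and audio settings are assumptions, and the Firebase app is assumed to be initialized elsewhere):

```
from google.cloud import speech
from firebase_admin import db  # assumes firebase_admin.initialize_app(...) ran at startup

def transcribe_chunk(wav_bytes, classroom_id):
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=speech.RecognitionAudio(content=wav_bytes))
    text = " ".join(result.alternatives[0].transcript for result in response.results)
    # Append the transcript under the classroom so the web app can render it live.
    db.reference(f"classrooms/{classroom_id}/transcript").push(text)
    return text
```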
## Challenges we ran into
The database structure on Firebase is quite intricate.
Figuring out the best design for the Firebase database was challenging, because we wanted a seamless way to structure classes, students, their responses, and recordings. The speech-to-text transcription was also very challenging. We worked through using various APIs for the service, before finally settling on the Google Speech-to-Text API. Once we got the transcription service to work, it was hard to integrate it into the web application.
## Accomplishments that we're proud of
We were proud of getting the speech-to-text transcription service to work, as it took a while to connect to the API, get the transcription, and then transfer that over to our web application.
## What we learned
Despite using React for previous projects, we utilized new ways of state management through Redux that made things much simpler than before. We have also learned to integrate different services within our React application, such as the Chatbox in our application.
## What's next for Enrich - an education platform to increase collaboration
The great thing about Enrich is that it has a massive scope to expand! We had so many ideas to implement, but only such little time. We could have added a camera that tracks the expressions of students to analyze how they are reacting to lectures. This would have been a hands-off approach to getting feedback. We could also have added a progress bar for how far the lecture is going, a screen-sharing capability, and interactive whiteboard. | ## Inspiration
In our journey as students, we have come to recognize the profound predicament that ensues when attempting to glean meaningful insights from lengthy video content. The struggle to synthesize information effectively from such resources is palpable. In light of this predicament, we embarked on a mission to craft a solution that not only ameliorates this issue but also bestows a boon to our fellow students by saving their precious time. We sought to empower them with a tool that not only facilitates the curation of their educational content but also ensures the utmost convenience by allowing them to revisit videos pertinent to specific topics.
## What it does
Our innovative product, EduScribe, is endowed with multifaceted capabilities aimed at enhancing the educational experience of its users. It excels in the art of distilling high-quality summaries from YouTube videos, thereby rendering the extraction of essential insights an effortless endeavor. Furthermore, EduScribe takes the onus of maintaining a comprehensive record of the user's viewing history, facilitating easy reference to previously watched videos. This feature is invaluable for those seeking to revisit content related to a specific subject matter. In addition, EduScribe offers a succinct overview of search results based on the user's content history, thereby streamlining the process of finding pertinent information.
## How we built it
The development of EduScribe was an endeavor that harnessed the capabilities of three cutting-edge technologies, namely Vectara AI, Together.ai, and Convex. The fusion of these technological marvels resulted in the creation of our robust product. Vectara AI contributed its prowess in frontend development, ensuring a seamless user interface. Together.ai played a pivotal role in facilitating data migration, a task often fraught with challenges. Convex, while a potent tool, presented its own set of challenges, particularly in comprehending the syntax for database querying. Additionally, the process of fine-tuning an AI model for summary generation and implementing zero-shot prompting for the LLaMa 2 models posed intricate challenges that we had to surmount.
## Challenges we ran into
Our journey in developing EduScribe was not without its share of obstacles. Foremost among these was the formidable challenge of querying from the Convex database. The intricacies of database querying, coupled with the necessity of understanding and implementing the correct syntax, proved to be a daunting task. On the frontend, integrating Vectara AI presented its own set of difficulties, albeit surmountable. Data migration, while a crucial step in the development process, was not without its complications. Fine-tuning an AI model for summary generation, a task requiring precision and expertise, demanded a significant investment of time and effort. Moreover, implementing zero-shot prompting for the LLaMa 2 models, though a promising avenue, presented its own unique challenges that required creative problem-solving.
## Accomplishments that we're proud of
Our journey in developing EduScribe culminated in a series of accomplishments that we hold in high esteem. Most notably, we successfully crafted a full-stack project, reflecting our proficiency in both frontend and backend development. The crowning achievement of our endeavor was the creation of a sophisticated machine learning model for summary generation, a feat that showcases our commitment to innovation and excellence in the field of artificial intelligence.
## What we learned
The development of EduScribe was a crucible of learning, where we immersed ourselves in a plethora of new and diverse technologies. Throughout this journey, we gained invaluable insights and knowledge, transcending our previous boundaries and expanding our technological horizons. Our acquisition of expertise in a variety of domains and the cultivation of our problem-solving skills have left an indelible mark on our development team.
## What's next for EduScribe
The horizon of possibilities for EduScribe is broad and promising. Our future endeavors include the transformation of EduScribe into a browser extension, thereby increasing its accessibility and usability. This expansion will further enhance the educational experience of our users, making EduScribe an indispensable tool for the academic journey. | losing |
## Inspiration
* Professors and TAs spend lots of time answering relatively trivial questions about assignments and the material, whose answers are usually found in the book or on the website but can be hard for students to find. In addition, up to 25% of new questions on community-based forums like Piazza and EdStem are duplicate questions. I wanted to create something like Ask.com that could be used in the same way for assignments and course material.
## What it does
* In "Ask Any Question", I have created a text box where people can ask any general question to content that may be available on Google, or can upload their textbook/assignment to easily query GPT-3 with it.
## How we built it
* I experimented with the OpenAI API until I found settings that answered questions to my satisfaction, then integrated the code for the API into a website that I built on Heroku, using Flask.
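A minimal sketch of that Flask + GPT-3 integration (the route, prompt format, and settings shown here are illustrative, not the tuned values I landed on):

```
import openai
from flask import Flask, request

app = Flask(__name__)
openai.api_key = "YOUR_API_KEY"  # placeholder

@app.route("/ask", methods=["POST"])
def ask():
    question = request.form["question"]
    material = request.form.get("material", "")  # pasted/uploaded textbook or assignment text
    prompt = f"Course material:\n{material}\n\nQuestion: {question}\nAnswer:"
    completion = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=256, temperature=0.2
    )
    return completion.choices[0].text.strip()
```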
## Challenges we ran into
* I tried using other hosting services and website stacks, but ran into difficulties integrating the OpenAI API and adding upload file buttons.
* In addition, OpenAI has limits on how much data can be read in.
## Accomplishments that we're proud of
* The website works pretty well!
## What we learned
* How to build a website using Flask on Heroku
* Using the OpenAI GPT-3 API
* The power of transformers (no, not the robots)
## What's next for Ask Any Question
* Integrating with APIs for forums like Facebook, Piazza, Quora and StackOverflow to answer trivial questions so that moderators would not have to spend as much time answering duplicate questions or handle questions that could be handled by GPT-3 | ## Inspiration
Presentation inspired by classic chatbots
## What it does
Interfaces with GPT through MindsDB and engineers the prompt to steer toward leading questions. Saves the queries entered by the user and, at a regular interval, generates a quiz on topics related to the entries, at the same skill level.
## How we built it
Using Reflex as the framework and MindsDB to interface with GPT.
## Challenges we ran into
Over the duration of this challenge, we noticed significant swings in productivity, especially at night. This was due to multiple factors, but the most apparent one was a lack of preparation: we needed to download large files during the peak hours of the day.
## Accomplishments that we're proud of
We are extremely satisfied with our use of the Reflex framework; this time our team comprised only 2 members with no significant web development history. So we are proud that we optimized our time management so that we could learn while creating.
## What we learned
Python, Git, Reflex, CSS
## What's next for Ai-Educate
We want to get to the point where we can save inputs into a large database so that our program is not as linear. If we were to implement this, older topics would appear less often but would not disappear outright. We also want a better way to determine the similarity between two inputs; we had significant trouble with that due to our reliance on GPT. We believe that the next best solution is to create our own machine learning engine, combined with user ratings of the correctness of its assessments.
We were also looking into Ripple. As we understand it, we could use it to assign a number of points to our users, and with those points we can limit their access to this resource. We can also distribute points through our quizzes, which would foster a greater incentive to absorb the content, as it would enable users to have more inputs. | ## Inspiration
Textbooks have not fundamentally changed since their invention in the 16th century. Although there are now digital textbooks (ePubs and the like), they're still just pictures and text. From educational literature, we know that discussion and interactivity is crucial for improving student outcomes (and, particularly, those of marginalized students). But we still do the majority of our learning with static words and images on a page.
## What it does
How do we keep students engaged? Introducing *Talk To History*. This is a living textbook, where students can read about a historical figure, click on their face on the side of the page, and have an immersive conversation with them. Or, read a primary text and then directly engage to ask questions about the writing. This enables a richer, multimodal interaction. It makes history more immersive. It creates more places to engage and retain knowledge. Most importantly, it makes the textbook fun. If Civ5 can make history fun, why can’t textbooks?
## How we built it
*Talk To History* was built using TypeScript, React, Next.js, Vercel, Chakra, Python, Google Text-To-Speech, Wav2Lip, GPT, and lots of caffeine :) The platform has several components, including a frontend for students and a backend to handle user data and text analysis. We also used Google's Text-To-Speech (TTS) API to generate high-quality speech output, which we then fed into Wav2Lip, a deep generative adversarial network, to produce realistic lip movements for the characters. For accelerated inference, we deployed Wav2Lip on an NVIDIA A40 GPU server.
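A rough sketch of how the speech and lip-sync stages can be chained (voice settings, checkpoint paths, and file names here are placeholders, and the GPT step that writes the character's reply is omitted):

```
import subprocess
from google.cloud import texttospeech

def speak_and_lipsync(reply_text, face_image, out_path="results/reply.mp4"):
    # 1) Synthesize the character's reply with Google Text-to-Speech.
    tts = texttospeech.TextToSpeechClient()
    audio = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=reply_text),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US", ssml_gender=texttospeech.SsmlVoiceGender.MALE),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.LINEAR16),
    )
    with open("reply.wav", "wb") as f:
        f.write(audio.audio_content)

    # 2) Drive Wav2Lip's bundled inference script to animate the portrait.
    subprocess.run([
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
        "--face", face_image,
        "--audio", "reply.wav",
        "--outfile", out_path,
    ], check=True)
```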
## Challenges we ran into
* Dealing with CUDA memory leaks when performing inference using the Wav2Lip model
* Finetuning hyperparameters of the Wav2Lip model and optimizing PyTorch loading to reduce latency
* Connecting and deploying all of the different services (TTS, GPT, Wav2Lip) into a unified product
## Accomplishments we're proud of
We're most proud of building a platform that makes learning fun and engaging for students. On the technical side, we're proud of seamlessly integrating several cutting-edge technologies, such as Wav2Lip and GPT, to create a more immersive experience; this project required advanced techniques in full-stack engineering, multi-processing, and latency optimization. The end result was more than worth the effort, as we successfully created a platform that makes education more engaging and immersive. With *Talk To History*, we hope to transform the way students learn history.
## What we learned
We learned how to integrate multiple services and optimize our code to handle large amounts of data, but perhaps more importantly, we gained a deep appreciation for the importance of creating an exciting experience for students.
## What's next
* Scalability and speed improvements for Wav2Lip GPU instances for more realtime chats
* Improved robustness against adversarial prompts
* Broader selection of articles and speakers organized into different domains, such as "Pioneers in Environmental Sustainability", "Female Heroes in Science", and "Diverse Voices in Literature"
* *Talk to History* as a platform: ability for any educational content author to add their own character (subject to content approval) given some context and voice and integrate it on their website or e-reader | losing |
## Slooth
Slooth.tech was born from the combined laziness of four Montréal-based hackers and their shared frustration with hard-to-navigate school websites.
When faced with the task of creating a hack for McHacks 2016, the creators of Slooth found the perfect opportunity to solve a problem they faced for a long time: navigating tediously complicated school websites.
Inspired by Natural Language Processing technologies and personal assistants such as Google Now and Siri, Slooth was aimed at providing an easy and modern way to access important documents on their school websites.
The Chrome extension Slooth was built with two main features in mind: customization and ease of use.
# Customization:
Slooth is based on user recorded macros. Each user will record any actions they which to automate using the macro recorder and associate an activation phrase to it.
# Ease of use:
Slooth is intended to simplify its user's workflow. As such, it was implemented as an easily accessible Chrome extension and utilizes voice commands to lead its user to their destination.
# Implementation:
Slooth is a Chrome extension built in JS and HTML.
The speech recognition part of Slooth is based on the Nuance ASR API kindly provided to all McHacks attendees.
# Features:
-Fully customizable macros
-No background spying. Slooth's speech recognition is done completely server side and notifies the user when it is recording their speech.
-Minimal server side interaction. Slooth's data is stored entirely locally, never shared with any outside server. Thus you can be confident that your personal browsing information is not publicly available.
-Minimal UI. Slooth is designed to simplify one's life. You will never need a user guide to figure out Slooth.
# Future
While Slooth reached its set goals during McHacks 2016, it still has room to grow.
In the future, the Slooth creators hope to implement the following:
-Full compatibility with single page applications
-Fully encrypted autofill forms synced with the user's Google account for cross-platform use.
-Implementation of the Nuance NLU api to add more customization options to macros (such as verbs with differing parameters).
# Thanks
Special thanks to the following companies for their help and support in providing us with resources and APIs:
-Nuance
-Google
-DotTech | ### Overview
Resililink is a node-based mesh network leveraging LoRa technology to facilitate communication in disaster-prone regions where traditional infrastructure, such as cell towers and internet services, is unavailable. The system is designed to operate in low-power environments and cover long distances, ensuring that essential communication can still occur when it is most needed. A key feature of this network is the integration of a "super" node equipped with satellite connectivity (via Skylo), which serves as the bridge between local nodes and a centralized server. The server processes the data and sends SMS notifications through Twilio to the intended recipients. Importantly, the system provides acknowledgment back to the originating node, confirming successful delivery of the message. This solution is aimed at enabling individuals to notify loved ones or emergency responders during critical times, such as natural disasters, when conventional communication channels are down.
### Project Inspiration
The inspiration for Resililink came from personal experiences of communication outages during hurricanes. In each instance, we found ourselves cut off from vital resources like the internet, making it impossible to check on family members, friends, or receive updates on the situation. These moments of helplessness highlighted the urgent need for a resilient communication network that could function even when the usual infrastructure fails.
### System Capabilities
Resililink is designed to be resilient, easy to deploy, and scalable, with several key features:
* **Ease of Deployment**: The network is fast to set up, making it particularly useful in emergency situations.
* **Dual Connectivity**: It allows communication both across the internet and in peer-to-peer fashion over long ranges, ensuring continuous data flow even in remote areas.
* **Cost-Efficiency**: The nodes are inexpensive to produce, as each consists of a single LoRa radio and an ESP32 microcontroller, keeping hardware costs to a minimum.
### Development Approach
The development of Resililink involved creating a custom communication protocol based on Protocol Buffers (protobufs) to efficiently manage data exchange. The core hardware components include LoRa radios, which provide long-range communication, and Skylo satellite connectivity, enabling nodes to transmit data to the internet using the MQTT protocol.
On the backend, a server hosted on Microsoft Azure handles the incoming MQTT messages, decrypts them, and forwards the relevant information to appropriate APIs, such as Twilio, for further processing and notification delivery. This seamless integration of satellite technology and cloud infrastructure ensures the reliability and scalability of the system.
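A simplified sketch of that server-side hop from MQTT to Twilio (topic names, message fields, and phone numbers are assumptions, and a JSON payload stands in for the actual Protocol Buffers message):

```
import json
import paho.mqtt.client as mqtt
from twilio.rest import Client

twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def on_message(client, userdata, msg):
    data = json.loads(msg.payload)  # stand-in for decoding the protobuf message
    sms = twilio.messages.create(
        body=f"Resililink message from node {data['node_id']}: {data['text']}",
        from_="+15550001111", to=data["recipient"],
    )
    # Publish an acknowledgment so the originating node learns the SMS was delivered.
    client.publish(f"resililink/ack/{data['node_id']}", sms.sid)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com")
client.subscribe("resililink/uplink")
client.loop_forever()
```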
### Key Challenges
Several challenges arose during the development process. One of the most significant issues was the lack of clear documentation for the AT commands on the Mutura evaluation board, which made it difficult to implement some of the core functionalities. Additionally, given the low-level nature of the project, debugging was particularly challenging, requiring in-depth tracing of system operations to identify and resolve issues. Another constraint was the limited packet size of 256 bytes, necessitating careful optimization to ensure efficient use of every byte of data transmitted.
### Achievements
Despite these challenges, we successfully developed a fully functional network, complete with a working demonstration. The system proved capable of delivering messages over long distances with low power consumption, validating the concept and laying the groundwork for future enhancements.
### Lessons Learned
Through this project, we gained a deeper understanding of computer networking, particularly in the context of low-power, long-range communication technologies like LoRa. The experience also provided valuable insights into the complexities of integrating satellite communication with terrestrial mesh networks.
### Future Plans for Resililink
Looking ahead, we plan to explore ways to scale the network, focusing on enhancing its reliability and expanding its reach to serve larger geographic areas. We are also interested in further refining the underlying protocol and exploring new applications for Resililink beyond disaster recovery scenarios, such as in rural connectivity or industrial IoT use cases. | ## Inspiration
Oftentimes we find ourselves not understanding the content that has been taught in class and rarely remembering what exactly was conveyed. Some of us also have the habit of misplacing notes and forgetting where we put them. So, to help all the struggling students, we had the idea to make an app that automatically generates curated content from the notes which they upload online.
## What it does
A student uploads their notes to the application. The application creates a summary of the notes, additional information on the subject of the notes, flashcards for easy memorization, and quizzes to test their knowledge. There is also the option to view other students' notes (uploaded to the same platform) and do all of the above with them as well. We made an interactive website that can help students digitize and share notes!
## How we built it
Google Cloud Vision was used to convert images into text files. We used the Google Cloud NLP API for the formation of questions from the plain text by identifying the entities and syntax of the notes. We also identified the most salient features of the text and assumed them to be the topic of interest. By doing this, we are able to scrape more detailed information on the topic using the Google Custom Search Engine API. We also scrape information from Wikipedia. Then we make flashcards based on the questions and answers and also make quizzes to test the knowledge of the student. We used Django as the backend to create a web app. We also made a chatbot in Google Dialogflow to inherently enable the use of Google Assistant skills.
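The OCR step looks roughly like this (the file name is a placeholder, and `document_text_detection` is chosen here because it copes well with dense photographed notes):

```
from google.cloud import vision

def notes_to_text(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)  # dense-text OCR
    return response.full_text_annotation.text

plain_text = notes_to_text("lecture_notes.jpg")  # placeholder file name
```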
## Challenges we ran into
Extending the platform to a collaborative domain was tough. Connecting the chatbot framework to the backend and sending back dynamic responses using webhooks was more complicated than we expected.
Also, we had to go through multiple iterations to get our question formation framework right. We used the assumption that the main topic would be the noun at the beginning of the sentence. Also, we had to replace pronouns in order to keep track of the conversation.
## Accomplishments that we're proud of
We have only 3 members on the team, and one of them has a background in electronics engineering and no experience in computer science; we also started with only an idea of what we were planning to make and no idea of how to make it. We are very proud to have achieved a fully functional application at the end of this 36-hour hackathon. We learned a lot of concepts regarding UI/UX design, backend logic formation, connecting the backend and frontend in Django, and general software engineering techniques.
## What we learned
We learned a lot about the problems of integrations and deploying an application. We also had a lot of fun making this application because it is motivated by helping a large number of people in day-to-day life. Also, we learned about NLP, UI/UX, and the importance of having a well-set plan.
## What's next for Noted
In the best-case scenario, we would want to convert this into an open-source startup and help millions of students with their studies. So that they can score good marks in their upcoming examinations. | winning |
## Inspiration
The fame of IBM Watson and the potential of AI to shape the future of the music industry.
## What it does
It makes IBM Watson sing any famous song, given the singer and song name as input, with background music.
## How I built it
I used the IBM Watson API from a web application that I coded in Python. Once a song is sung, it is saved for the future so that the API doesn't need to be called every time. The calls are only needed for new song inputs.
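A bare-bones sketch of the Watson text-to-speech call behind this (the API key, service URL, voice, and lyric line are placeholders; mixing in the backing track happens separately):

```
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url("https://api.us-south.text-to-speech.watson.cloud.ibm.com")  # placeholder

def render_lyrics(lyrics, out_file):
    audio = tts.synthesize(lyrics, voice="en-US_AllisonV3Voice",
                           accept="audio/wav").get_result().content
    with open(out_file, "wb") as f:
        f.write(audio)  # cached so repeat requests skip the API call

render_lyrics("Is this the real life? Is this just fantasy?", "song.wav")
```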
## Challenges I ran into
Coming up with a single day, single person futuristic project that benefits from AI.
Unfortunately, I wasn't able to join a team.
## Accomplishments that I'm proud of
Finishing the project.
## What I learned
There is still progress that needs to be made for an AI to take over the singing industry. However, IBM Watson certainly has some potential, especially if a speech model for singing gets generated.
## What's next for WhatSing
Change the reading pitches by doing some sentiment analysis of the songs.
Take custom song text from the user. | ## Inspiration
Our team originally wanted to build AI-generated sounds for music production. However, to our disappointment, the labeled data we needed for that idea wasn’t sufficient. As a result, we pivoted to trying to make better, more accessible dataset labeling and creation for these smaller, underserved industries.
For this project, we settled on audio CAPTCHAs as an example method of online authentication. Why?
* We saw there was a pressing need for accessible and comprehensive datasets in many underserved industries seeking to leverage machine learning.
* To empower these niche markets and opportunities by supporting and enhancing their data labeling processes.
* In the future we can bring the ability to create similar simple tasks for a variety of unharnessed datasets.
## What it does
An audio authentication system that labels ML datasets through the reCAPTCHA authentication system. Users label descriptive themes and keywords that the audio files evoke as their verification process, which can help benefit other programs overall. As users complete more CAPTCHAs, a dataset of keywords that describe the audio clips is built. For every new audio clip, users are prompted to note one word that describes the clip. Once a cache of repeated tags/themes exists and some of these keywords’ counts exceed a certain threshold, users who are given that audio clip as a CAPTCHA from then on will select/rank keywords from the word bank. This allows the backend dataset of word descriptors to be fully robust and automatically crowdsourced.
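The promotion logic behind that word bank can be sketched in a few lines (the threshold and the in-memory dictionaries are stand-ins for what actually lives behind our Flask API):

```
from collections import Counter

PROMOTE_AT = 5   # hypothetical agreement threshold for promoting a tag

tag_counts = {}  # clip_id -> Counter of free-text tags
word_banks = {}  # clip_id -> list of promoted keywords

def record_answer(clip_id, tag):
    counts = tag_counts.setdefault(clip_id, Counter())
    counts[tag.strip().lower()] += 1
    # Once enough users agree on a tag, future CAPTCHAs for this clip switch
    # from free-text entry to selecting/ranking from the word bank.
    word_banks[clip_id] = [t for t, n in counts.items() if n >= PROMOTE_AT]

def next_challenge(clip_id):
    bank = word_banks.get(clip_id, [])
    return {"mode": "rank", "options": bank} if bank else {"mode": "free_text"}
```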
## How we built it
* Our backend runs on Flask and Python, hosted on an Oracle Cloud Infrastructure Compute Instance.
* Our frontend is powered by Retool.
## Challenges we ran into
* How to continually label data using crowdsourcing without any ground truth
* How to get good initial datasets of unlabeled data
* How to make the reCAPTCHA as least frustrating as possible for the users
* Distinguishing from existing services
## Accomplishments that we're proud of
Creating a frontend, simple in nature, that allows for the creation of a sophisticated dataset.
## What we learned
* How to use Retool’s prebuilt UI components and utilize their API to connect to our backend server.
* How to design and build automated data labeling through simple user input.
* Running Flask on the Oracle Instance to create our own custom RESTful API
## What's next for
* Validation of ease of use with customer basis (and potentially also visually impaired as a better audio CAPTCHA than current ones)
* Potential integration to replace mobile gaming ads
* Music instead of sound effects
* Stronger Security Measures (More robustness toward bot attacks, too) | ## Inspiration
Our inspiration comes from the desire to democratize music creation. We saw a gap where many people, especially youth, had a passion for making music but lacked the resources, knowledge, or time to learn traditional instruments and complex DAW software. We wanted to create a solution that makes music creation accessible and enjoyable for everyone, allowing them to express their musical ideas quickly and easily.
## What it does
IHUM is an AI application that converts artists' vocals or hums into MIDI instruments. It simplifies the music creation process by allowing users to record their vocal melodies and transform them into instrumental tracks with preset instruments. This enables users to create complex musical pieces without needing to know how to play an instrument or use a DAW.
## How we built it
For the frontend, we used HTML, CSS and React JS to develop IHUM. Using React JS and its libraries such as Pitchy, we were able to process, change, and output the sound waves of audio inputs. For the backend, we used Auth0's API to create a login/signup system, which stores and verifies user emails and passwords.
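IHUM does its audio processing in the browser with Pitchy, but the core step of turning a detected pitch into the MIDI note that drives a preset instrument boils down to one formula, shown here in Python for brevity:

```
import math

def freq_to_midi(freq_hz):
    """Map a detected fundamental frequency to the nearest MIDI note number."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(freq_to_midi(440.0))   # 69 -> A4
print(freq_to_midi(261.63))  # 60 -> C4 (middle C)
```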
## Challenges we ran into
One of the main challenges we faced was ensuring the AI's accuracy in interpreting vocal inputs and converting them into MIDI data that sounds natural and musical. Furthermore, the instruments that we converted had a ton of issues in how they sounded, especially regarding pitch, tone, etc. However, we were able to troubleshoot our way through most of them.
## Accomplishments that we're proud of
Through all the hours of hard work and effort, an accomplishment we are extremely proud of is the fact that our program is able to process the audio. By using Pitchy JS, we were able to change the audio to fit how we want it to sound. On top of this, we are also proud that we were able to implement a fully working login/signup system using Auth0's API and integrate it within the program.
## What we learned
As this was our first time working with audio in web development and many of our group's first hackathons, we faced many issues that we had to overcome and learn from. From processing to setting up APIs to modifying the sound waves, it definitely provided us valuable insight and expanded our skillsets.
## What's next for IHUM
Our future updates allow running multiple audio samples simultaneously and increasing our instrument libraries. By doing so, IHUM can essentially be a simple and easy-to-use DAW which permits the user to create entire beats out of their voice on our web application. | losing |
## Inspiration
Our project is inspired by the sister of one of our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT’s Accessible Routes interactive map to see where the wheelchair friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations.
We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files.
We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio.
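For illustration, a minimal Python sketch of this kind of extraction step could look like the following (the regex, file names, and Java-style output here are illustrative assumptions, not the exact script used):

```python
# Minimal sketch of the extraction step. The regex and file names are
# illustrative; the real map script may format its coordinates differently.
import re

with open("accessible_routes.js") as f:
    source = f.read()

# Match "lat, lng" style decimal pairs, e.g. 42.359055, -71.093500
pairs = re.findall(r"(-?\d{2}\.\d+)\s*,\s*(-?\d{2,3}\.\d+)", source)

with open("latitudes.txt", "w") as lat_file, open("longitudes.txt", "w") as lng_file:
    for lat, lng in pairs:
        lat_file.write(lat + "\n")
        lng_file.write(lng + "\n")

# Quick check: print the points with Java-style syntax before moving to Android Studio.
for lat, lng in pairs:
    print(f"new LatLng({lat}, {lng}),")
```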
After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map.
The next step was learning how to change the color and image associated with each marker, which was very time intensive, but led us to having our custom logo for each of the markers.
Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map.
Then we combined both of the apps together atop of the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add.
A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together.
We are proud of how effectively we were able to split up our team’s roles based on everyone’s unique skills. In this way, we were able to be maximally productive and play to our strengths.
We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adopt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap
If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world. | ## Presentation + Award
See the presentation and awards ceremony here: <https://www.youtube.com/watch?v=jd8-WVqPKKo&t=351s&ab_channel=JoshuaQin>
## Inspiration
Back when we first came to the Yale campus, we were stunned by the architecture and the public works of art. One monument in particular stood out to us - the *Lipstick (Ascending) on Caterpillar Tracks* in the Morse College courtyard, for its oddity and its prominence. We learned from fellow students about the background and history behind the sculpture, as well as more personal experiences on how students used and interacted with the sculpture over time.
One of the great joys of traveling to new places is to learn about the community from locals, information which is often not recorded anywhere else. From monuments to parks to buildings, there are always interesting fixtures in a community with stories behind them that would otherwise go untold. We wanted to create a platform for people to easily discover and share those stories with one another.
## What it does
Our app allows anybody to point their phone camera at an interesting object, snap a picture of it, and learn more about the story behind it. Users also have the ability to browse interesting fixtures in the area around them, add new fixtures and stories by themselves, or modify and add to existing stories with their own information and experiences.
In addition to user-generated content, we also wrote scripts that scraped Wikipedia for geographic location, names, and descriptions of interesting monuments from around the New Haven community. The data we scraped was used both for testing purposes and to serve as initial data for the app, to encourage early adoption.
## How we built it
We used a combination of GPS location data and Google Cloud's image comparison tools to take any image snapped of a fixture and identify in our database what the object is. Our app is able to identify any fixture by first considering all the known fixtures within a fixed radius around the user, and then considering the similarity between known images of those fixtures and the image sent in by the user. Once we have identified the object, we provide a description of the object to the user. Our app also provides endpoints for members of the community to contribute their knowledge by modifying descriptions.
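As a rough sketch of that two-step matching logic, the radius filter plus best-similarity pick could look something like this (the similarity function below is only a placeholder for the Google Cloud image comparison call, and all names are hypothetical):

```python
# Hypothetical sketch: filter fixtures by distance from the user, then rank the
# remaining candidates by image similarity.
from math import radians, sin, cos, asin, sqrt

def image_similarity(photo, fixture):
    # Placeholder: in the real app this would wrap the cloud image-comparison API.
    return 0.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def identify_fixture(photo, user_lat, user_lon, fixtures, radius_km=0.5):
    """Return the nearby fixture whose reference images best match the photo."""
    nearby = [f for f in fixtures
              if haversine_km(user_lat, user_lon, f["lat"], f["lon"]) <= radius_km]
    if not nearby:
        return None
    return max(nearby, key=lambda f: image_similarity(photo, f))
```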
Our client application is a PWA written in React, which allows us to quickly deploy a lightweight and mobile-friendly app on as many devices as possible. Our server is written in Flask and Python, and we use Redis for our data store.
We used GitHub for source control and collaboration and organized our project by breaking it into three layers and providing each their separate repository in a GitHub organization. We used GitHub projects and issues to keep track of our to-dos and assign roles to different members of the team.
## Challenges we ran into
The first challenge that we ran into is that Google Cloud's image comparison tools were designed to recognize products rather than arbitrary images, which still worked well for our purposes but required us to implement workarounds. Because products couldn't be tagged by geographic data and could only be tagged under product categories, we were unable to optimize our image recognition to a specific geographic area, which could pose challenges to scaling. One workaround that we discussed was to implement several regions with overlapping fixtures, so that the image comparisons could be limited to any given user's immediate surroundings.
This was also the first time that many of us had used Flask before, and we had a difficult time choosing an appropriate architecture and structure. As a result, the integration between the frontend, middleware, and AI engine has not been completely finished, although each component is fully functional on its own. In addition, our team faced various technical difficulties throughout the duration of the hackathon.
## Accomplishments that we're proud of
We're proud of completing a fully functional PWA frontend, of effectively scraping 220+ locations from Wikipedia to populate our initial set of data, and of successfully implementing Google Cloud's image comparison tools to meet our requirements despite their limitations.
## What we learned
Many of the tools that we worked on in this hackathon were new to the members working on them. We learned a lot about Google Cloud's image recognition tools, progressive web applications, and Flask with Python-based web development.
## What's next for LOCA
We believe that our project is both unique and useful. Our next steps are to finish the integration between our three layers, add authentication and user roles, and implement a Wikipedia-style edit history record in order to keep track of changes over time. We would also want to add features to the app that would reward members of the community for their contributions, to encourage active participants. | ## Inspiration
As first-year students, we have experienced the difficulties of navigating our way around our new home. We wanted to facilitate the transition to university by helping students learn more about their university campus.
## What it does
A social media app for students to share daily images of their campus and earn points by accurately guessing the locations of their friend's image. After guessing, students can explore the location in full with detailed maps, including within university buildings.
## How we built it
The Mapped-in SDK was used to display user locations in relation to surrounding buildings and help identify different campus areas. React.js was used to build a mobile website, as the SDK was unavailable for native mobile. Express and Node power the backend, and MongoDB Atlas serves as the database, chosen for its flexible data types.
## Challenges we ran into
* Developing in an unfamiliar framework (React) after learning that Android Studio was not feasible
* Bypassing CORS permissions when accessing the user's camera
## Accomplishments that we're proud of
* Using a new SDK purposely to address an issue that was relevant to our team
* Going through the development process, and gaining a range of experiences over a short period of time
## What we learned
* Planning time effectively and redirecting our goals accordingly
* How to learn by collaborating with everyone from team members to SDK experts, as well as by reading documentation.
* Our tech stack
## What's next for LooGuessr
* creating more social elements, such as a global leaderboard/tournaments to increase engagement beyond first years
* considering freemium components, such as extra guesses, 360-view, and interpersonal wagers
* showcasing 360-picture view by stitching together a video recording from the user
* addressing privacy concerns with image face blur and an option for delaying showing the image | winning |
## Inspiration
As victims, bystanders and perpetrators of cyberbullying, we felt it was necessary to focus our efforts this weekend on combating an issue that impacts 1 in 5 Canadian teens. As technology continues to advance, children are being exposed to vulgarities online at a much younger age than before.
## What it does
**Prof**(ani)**ty** searches through any webpage a child may access, censors black-listed words and replaces them with an appropriate emoji. This easy to install chrome extension is accessible for all institutional settings or even applicable home devices.
## How we built it
We built a Google chrome extension using JavaScript (JQuery), HTML, and CSS. We also used regular expressions to detect and replace profanities on webpages. The UI was developed with Sketch.
## Challenges we ran into
Every member of our team was a first-time hacker, with little web development experience. We learned how to use JavaScript and Sketch on the fly. We’re incredibly grateful for the mentors who supported us and guided us while we developed these new skills (shout out to Kush from Hootsuite)!
## Accomplishments that we're proud of
Learning how to make beautiful webpages.
Parsing specific keywords from HTML elements.
Learning how to use JavaScript, HTML, CSS and Sketch for the first time.
## What we learned
The manifest.json file is not to be messed with.
## What's next for PROFTY
Expand the size of our black-list.
Increase robustness so it parses pop-up messages as well, such as live-stream comments. | ## What it does
"ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points.
## How We built it
Our project is comprised of many interconnected components, which we detail below:
#### Formatting Engine
To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom Javascript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required.
#### Voice-to-text
We use Google’s speech-to-text capability to process audio from the microphone of the presenter’s laptop. Mobile phones currently do not support the continuous audio implementation of the spec, so we process audio on the presenter’s laptop instead. The transcribed text is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over websockets to be processed.
#### Topic Analysis
Fundamentally we needed a way to determine whether a given sentence included a request for an image or not. So we gathered a repository of sample sentences from BBC news articles as “no” examples, and manually curated a list of “yes” examples. We then used Facebook’s deep learning text classification library, FastText, to train a custom NN that could perform text classification.
#### Image Scraping
Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen.
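As an illustration of that subject-extraction idea, here is a minimal sketch using spaCy purely as a stand-in for whichever part-of-speech tagger and dependency rules are actually used:

```python
# Illustrative only: spaCy stands in for whatever POS/dependency tooling is used.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_image_subject(sentence):
    """Pull out the noun phrase after 'picture/image/photo of ...'."""
    doc = nlp(sentence)
    for token in doc:
        if token.lemma_ in {"picture", "image", "photo"}:
            for child in token.children:
                if child.dep_ == "prep" and child.lemma_ == "of":
                    for obj in child.children:
                        if obj.dep_ == "pobj":
                            # The full subtree keeps modifiers too, e.g. "golden retriever".
                            return " ".join(t.text for t in obj.subtree)
    return None

print(extract_image_subject("and here you can see a picture of a golden retriever"))
# -> roughly "a golden retriever"; the exact span depends on the parse
```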
#### Graph Generation
Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time.
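A minimal example of this kind of programmatic matplotlib generation (the labels, values, and output path below are hypothetical):

```python
# Hypothetical example of generating a chart image for a slide on the server.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def make_bar_chart(labels, values, title, out_path="slide_chart.png"):
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.bar(labels, values)
    ax.set_title(title)
    ax.set_ylabel("Value")
    fig.tight_layout()
    fig.savefig(out_path, dpi=150)
    plt.close(fig)
    return out_path

make_bar_chart(["2019", "2020", "2021"], [12, 30, 45], "Sales growth")
```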
#### Sentence Segmentation
When we receive text back from the Google speech-to-text API, it doesn’t naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis) some trouble because the text is grammatically incorrect. We use a sequence to sequence transformer architecture, *seq2seq*, and transfer learned a new head that was capable of classifying the borders between sentences. This was then able to add punctuation back into the text before the rest of the processing pipeline ran.
#### Text Title-ification
Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title.
#### Text Summarization
When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous.
#### Mobile Clicker
Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets.
#### Internal Socket Communication
In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides.
## Challenges We ran into
* Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis.
* The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop.
## Accomplishments that we're proud of
* Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
* Working on an unsolved machine learning problem (sentence simplification)
* Connecting a mobile device to the laptop browser’s mic using WebSockets
* Real-time text analysis to determine new elements
## What's next for ImpromPPTX
* Predict what the user intends to say next
* Scraping Primary sources to automatically add citations and definitions.
* Improving text summarization with word reordering and synonym analysis. | ## Inspiration
Despite the advent of the information age, misinformation remains a big issue in today's day and age. Yet, mass media accessibility for newer language speakers, such as younger children or recent immigrants, remains lacking. We want these people to be able to do their own research on various news topics easily and reliably, without being limited by their understanding of the language.
## What it does
Our Chrome extension allows users to shorten and simplify any article of text to a basic reading level. Additionally, if a user is not interested in reading the entire article, it comes with a tl;dr feature. Lastly, if a user finds the article interesting, our extension will find and link related articles that the user may wish to read later. We also include warnings to the user if the content of the article contains potentially sensitive topics, or comes from a source that is known to be unreliable.
Inside of the settings menu, users can choose a range of dates for the related articles which our extension finds. Additionally, users can also disable the extension from working on articles that feature explicit or political content, alongside being able to disable thumbnail images for related articles if they do not wish to view such content.
## How we built it
The front-end Chrome extension was developed in pure HTML, CSS and JavaScript. The CSS was done with the help of [Bootstrap](https://getbootstrap.com/), but still mostly written on our own. The front-end communicates with the back-end using REST API calls.
The back-end server was built using [Flask](https://flask.palletsprojects.com/en/2.0.x/), which is where we handled all of our web scraping and natural language processing.
We implemented text summaries using various NLP techniques (SMMRY, TF-IDF), which were then fed into the OpenAI API in order to generate a simplified version of the summary. Source reliability was determined using a combination of research data provided by [Ad Fontes Media](https://www.adfontesmedia.com/) and [Media Bias Check](https://mediabiasfactcheck.com/).
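As a sketch of the TF-IDF side of that pipeline (not the exact implementation), sentences can be scored and the highest-scoring ones kept before the shortened text is handed to the language model:

```python
# Rough sketch of TF-IDF sentence scoring; the real pipeline may differ.
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_summary(article_text, max_sentences=5):
    sentences = re.split(r"(?<=[.!?])\s+", article_text.strip())
    if len(sentences) <= max_sentences:
        return article_text
    matrix = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(matrix.sum(axis=1)).ravel()      # one score per sentence
    top = sorted(np.argsort(scores)[-max_sentences:])    # keep original order
    return " ".join(sentences[i] for i in top)
```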
To save time (and spend less on API tokens), parsed articles are saved in a [MongoDB](https://www.mongodb.com/) database, which acts as a cache and saves considerable time by skipping all the NLP for previously processed news articles.
Finally, [GitHub Actions](https://github.com/features/actions) was used to automate our builds and deployments to [Heroku](https://www.heroku.com/), which hosted our server.
## Challenges we ran into
Heroku was having issues with API keys, causing very confusing errors which took a significant amount of time to debug.
In regards to web scraping, news websites have wildly different formatting which made extracting the article's main text difficult to generalize across different sites. This difficulty was compounded by the closure of many prevalent APIs in this field, such as Google News API which shut down in 2011.
We also faced challenges with tuning the prompts in our requests to OpenAI to generate the output we were expecting. A significant amount of work done in the Flask server is pre-processing the article's text, in order to feed OpenAI a more suitable prompt, while retaining the meaning.
## Accomplishments that we're proud of
This was everyone on our team's first time creating a Google Chrome extension, and we felt that we were successful at it. Additionally, we are happy that our first attempt at NLP was relatively successful, since none of us have had any prior experience with NLP.
Finally, we slept at a Hackathon for the first time, so that's pretty cool.
## What we learned
We gained knowledge of how to build a Chrome extension, as well as various natural language processing techniques.
## What's next for OpBop
Increasing the types of text that can be simplified, such as academic articles. Making summaries and simplifications more accurate to what a human would produce.
Improving the hit rate of the cache by web crawling and scraping new articles while idle.
## Love,
## FSq x ANMOL x BRIAN | winning |
## Inspiration
As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse through the items in the receipts and add the items into the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make.
## How We Built It
On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data.
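A rough sketch of what such a Flask endpoint could look like (the route name, form field, and post-processing rule are illustrative assumptions, and the Vision call follows the standard text-detection usage):

```python
# Hypothetical receipt-OCR endpoint: accept an image, OCR it, return item lines.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
vision_client = vision.ImageAnnotatorClient()

@app.route("/upload-receipt", methods=["POST"])
def upload_receipt():
    content = request.files["receipt"].read()
    response = vision_client.text_detection(image=vision.Image(content=content))
    full_text = response.text_annotations[0].description if response.text_annotations else ""
    # Simple post-processing: keep lines that contain a price-like digit and drop the rest.
    items = [line for line in full_text.splitlines() if any(c.isdigit() for c in line)]
    return jsonify({"items": items})
```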
On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase.
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask server to Google App Engine, and styling in React. We found that it was not possible to write into Google App Engine storage; instead, we had to write into Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation. | ## Inspiration
Let’s face it: getting your groceries is hard. As students, we’re constantly looking for food that is healthy, convenient, and cheap. With so many choices for where to shop and what to get, it’s hard to find the best way to get your groceries. Our product makes it easy. It helps you plan your grocery trip, helping you save time and money.
## What it does
Our product takes your list of grocery items and searches an automatically generated database of deals and prices at numerous stores. We collect this data by collecting prices from both grocery store websites directly as well as couponing websites.
We show you the best way to purchase items from stores nearby your postal code, choosing the best deals per item, and algorithmically determining a fast way to make your grocery run to these stores. We help you shorten your grocery trip by allowing you to filter which stores you want to visit, and suggesting ways to balance trip time with savings. This helps you reach a balance that is fast and affordable.
For your convenience, we offer an alternative option where you could get your grocery orders delivered from several different stores by ordering online.
Finally, as a bonus, we offer AI generated suggestions for recipes you can cook, because you might not know exactly what you want right away. Also, as students, it is incredibly helpful to have a thorough recipe ready to go right away.
## How we built it
On the frontend, we used **JavaScript** with **React, Vite, and TailwindCSS**. On the backend, we made a server using **Python and FastAPI**.
In order to collect grocery information quickly and accurately, we used **Cloudscraper** (Python) and **Puppeteer** (Node.js). We processed data using handcrafted text searching. To find the items that most relate to what the user desires, we experimented with **Cohere's semantic search**, but found that an implementation of the **Levenshtein distance string algorithm** works best for this case, largely since the user only provides one to two-word grocery item entries.
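For reference, a minimal version of that matching step (the product names below are made up):

```python
# Sketch of the string-matching idea: classic Levenshtein distance, then pick
# the catalogue entry closest to what the user typed.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def best_match(query, product_names):
    return min(product_names, key=lambda name: levenshtein(query.lower(), name.lower()))

print(best_match("whole milk", ["2% Milk 4L", "Whole Milk 2L", "Almond Beverage"]))
```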
To determine the best travel paths, we combined the **Google Maps API** with our own path-finding code. We determine the path using a **greedy algorithm**. This algorithm, though heuristic in nature, still gives us a reasonably accurate result without exhausting resources and time on simulating many different possibilities.
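A simplified sketch of the greedy step (in the real app the travel times come from the Google Maps API; here they are hard-coded minutes and the store names are placeholders):

```python
# Hypothetical greedy routing: visit the nearest unvisited store next.
def greedy_route(start, stores, travel_minutes):
    route, current, remaining = [], start, set(stores)
    while remaining:
        nearest = min(remaining, key=lambda s: travel_minutes[(current, s)])
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest
    return route

minutes = {("home", "Costco"): 14, ("home", "Walmart"): 6, ("home", "NoFrills"): 9,
           ("Walmart", "Costco"): 10, ("Walmart", "NoFrills"): 5,
           ("NoFrills", "Costco"): 7, ("NoFrills", "Walmart"): 5,
           ("Costco", "Walmart"): 10, ("Costco", "NoFrills"): 7}

print(greedy_route("home", ["Costco", "Walmart", "NoFrills"], minutes))
# -> ['Walmart', 'NoFrills', 'Costco']
```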
To process user payments, we used the **Paybilt API** to accept Interac E-transfers. Sometimes it is more convenient for us to just have the items delivered than to go out and buy them ourselves.
To provide automatically generated recipes, we used **OpenAI’s GPT API**.
## Challenges we ran into
Everything.
Firstly, as Waterloo students, we are facing midterms next week. Throughout this weekend, it has been essential to balance working on our project with our mental health, rest, and last-minute study.
Collaborating in a team of four was a challenge. We had to decide on a project idea, scope, and expectations, and get working on it immediately. Maximizing our productivity was difficult when some tasks depended on others. We also faced a number of challenges with merging our Git commits; we tended to overwrite one another's code, and bugs resulted. We all had to learn new technologies, techniques, and ideas to make it all happen.
Of course, we also faced a fair number of technical roadblocks working with code and APIs. However, with reading documentation, speaking with sponsors/mentors, and admittedly a few workarounds, we solved them.
## Accomplishments that we’re proud of
We felt that we put forth a very effective prototype given the time and resource constraints. This is an app that we ourselves can start using right away for our own purposes.
## What we learned
Perhaps most of our learning came from the process of going from an idea to a fully working prototype. We learned to work efficiently even when we didn’t know what we were doing, or when we were tired at 2 am. We had to develop a team dynamic in less than two days, understanding how best to communicate and work together quickly, resolving our literal and metaphorical merge conflicts. We persisted towards our goal, and we were successful.
Additionally, we were able to learn about technologies in software development. We incorporated location and map data, web scraping, payments, and large language models into our product.
## What’s next for our project
We’re very proud that, although still rough, our product is functional. We don’t have any specific plans, but we’re considering further work on it. Obviously, we will use it to save time in our own daily lives. | ## Inspiration
As college students who have all recently moved into apartments for the first time, we found that we were wasting more food than we could've expected. Having to find rotten yet untouched lettuce in the depths of the fridge is not only incredibly wasteful for the environment but also harmful to our nutrition. We wanted to create this app to help other students keep track of the items in their fridge, without having to wrack their brains for what to cook every day. Our goal was to both streamline mealtime preparations and provide a sustainable solution to everyday food waste.
## What it does
Our app is meant to be simple and intuitive. Users are able to upload a photo of their receipt directly from our app, which we then process and extract the food items. Then, we take these ingredients, calculate expiration dates, and produce recipes for the user using the ingredients that they already have, prioritizing ingredients that are expiring sooner.
## How we built it
Our tech stack consisted of React-Native, Express, MongoDB, Open AI API, and OCR. We used React-Native for our frontend and Express for our backend support. MongoDB was used to store the data we parsed from user receipts, so that way our app would not be memoryless. To actually process and recognize the text on the receipt, we used OCR. To generate recipes, we utilized Open AI API and engineered prompts that would yield the best results.
## Challenges we ran into
For this project, we wanted to really challenge ourselves by using a tech stack we had never used before, such as React-Native, Express, OpenAI API, and OCR. Since essentially our entire tech stack was unfamiliar, we faced many challenges in understanding syntax, routing, and communication between the frontend and backend. Additionally, we faced issues with technology like Multer in the middleware when it came to sending image information from the front end to backend, as we had never used Multer before either. However, we are incredibly proud of ourselves for being able to persevere and find solutions to our problems, to both learn new skills as well as produce our MVP.
## Accomplishments that we're proud of
We are incredibly proud of being able to produce our final product. Though it may not be the best, we hope that it symbolizes our learning, development, and perseverance. From getting our MongoDB database set up to getting our frontend to properly communicate with our backend, we will be taking away many accomplishments with us.
## What we learned
As previously mentioned, we learned an entirely new tech stack. We got to experience React-Native, Express, OpenAI API, and OCR for the first time. It's hard to verbalize what we have learned without talking about our entire project process, since we truly learned something new every time we implemented something.
## What's next for Beat the Receipt
Originally, we wanted to implement our in-app camera, but due to the unfamiliar tech stack we didn't get a chance to include it in this iteration; we are already working on it. Additionally, for the future, we hope to allow users to choose recipes that better cater to their tastes while still using soon-to-expire ingredients. Eventually, we would also like to implement a budgeting option, where users can visualize how much of their budget has been spent on their groceries, with our app handling the calculations. | partial |
# **Cough It**
#### COVID-19 Diagnosis at Ease
## Inspiration
The pandemic has nearly crippled every nation, and in many countries people are still in lockdown. Over these two years, many innovations have come up in order to find effective ways of tackling the issues of COVID-19. Out of all the problems, detecting the COVID-19 strain has been the hardest so far, as it is always mutating due to rapid infections.
Just like many others, we started to work on an idea to detect COVID-19 with the help of cough samples provided by patients. What makes this app useful is its simplicity and scalability: users can record a cough sample, wait for the results to load, and get an accurate indication of whether they are likely to have COVID-19 or not.
## Objective
The current COVID-19 diagnostic procedures are resource-intensive, expensive and slow. They therefore lack scalability and reduce the efficiency of mass testing during the pandemic. In many cases even the physical distancing protocol has to be violated in order to collect a subject's samples. Disposing of biohazardous samples after diagnosis is also not eco-friendly.
To tackle this, we aim to develop a mobile-based application COVID-19 diagnostic system that:
* provides a fast, safe and user-friendly way to detect COVID-19 infection just from users' cough audio samples
* is accurate enough that it can be scaled up to cater to a large population, thus eliminating dependency on resource-heavy labs
* makes frequent testing and result tracking efficient, inexpensive and free of human error, thus eliminating economic and logistic barriers and reducing the workload of medical professionals
Our [proposed CNN](https://dicova2021.github.io/docs/reports/team_Brogrammers_DiCOVA_2021_Challenge_System_Report.pdf) architecture also secured Rank 1 at the [DiCOVA](https://dicova2021.github.io/) Challenge 2021, held by IISc Bangalore researchers, amongst 85 teams spread across the globe. Despite being trained on a small dataset of only 1,040 cough samples, our model reported:
* Accuracy: 94.61%
* Sensitivity: 80% (20% false negative rate)
* AUC of ROC curve: 87.07% (on blind test set)
## What it does
The working of **Cough It** is simple. The user can install the app and tap to open it. The app will then ask for permission to use external storage and the microphone. The user can simply tap the record button, which leads to a countdown-timer-like interface. Pressing the play button starts recording a 7-second clip of the user's cough, and upon completion the app navigates to the result screen to predict the chances of the user having COVID-19.
## How we built it
Our project is divided into three different modules -->
#### **ML Model**
Our machine learning model (CNN architecture) is trained and deployed using the SageMaker API, which is part of AWS, to predict positive or negative infection from the pre-processed audio samples. The training data also contains noisy and bad-quality audio samples, so that the model is robust for practical applications.
#### **Android App**
At first, we prepared the wireframe for the app and decided the architecture of the app which we will be using for our case. Then, we worked from the backend part first, so that we can structure our app in proper android MVVM architecture. We constructed all the models, Retrofit Instances and other necessary modules for code separation.
The Android app is built in Kotlin and follows the MVVM architecture for scalability. The app uses the MediaRecorder class to record the patient's cough samples and store them locally. The saved file is then accessed by the app, converted to a byte array, Base64 encoded, and sent to the web backend through Retrofit.
#### **Web Backend**
The web backend is a Node.js application deployed on an EC2 instance in AWS. We chose this type of architecture for our backend service because we wanted a more reliable connection between our ML model and our Node.js application.
At first, we created a backend server using Node.js and Express.js and deployed it on an AWS EC2 instance. The server receives the audio file in Base64-encoded form from the Android client through a POST request. The file is then converted to a .wav file by invoking a command-line tool. After successfully generating the .wav file, we pass it as an argument to the pre-processor, which is a Python script. Then we call the AWS SageMaker API to get the predictions, and the Node.js application sends the predictions back to the Android client.
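As an illustration of what such a pre-processing script could look like (the real script may differ in the exact features and I/O), it could load the .wav produced by ffmpeg and emit MFCC features for the SageMaker endpoint:

```python
# Illustrative pre-processing step, not the exact script: .wav in, MFCC vector out.
import sys
import json
import librosa

def extract_mfcc(wav_path, n_mfcc=13):
    signal, sample_rate = librosa.load(wav_path, sr=None)   # keep original rate
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                                # one vector per clip

if __name__ == "__main__":
    features = extract_mfcc(sys.argv[1])
    print(json.dumps(features.tolist()))   # the Node.js caller reads this from stdout
```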
## Challenges we ran into
#### **Android**
Initially, in Android, we faced a lot of issues recording a cough sample, as there are two recording APIs available to Android developers: MediaRecorder and AudioRecord. Since the ML model required a .wav file of the cough sample to pre-process, we initially looked at generating it on-device. This is possible with the AudioRecord class but requires heavy customization, and saving and writing the file is a really tedious and buggy process. So, on the Android side, we used the MediaRecorder class, which handles saving the file and all of that boilerplate code; we then just access the file and send it to our API endpoint, which converts it into a .wav file for the pre-processor.
#### **Web Backend**
On the web backend side, we faced a lot of issues in deploying the ML model and in getting the Node.js application to communicate with it.
Initially, we deployed the Node.js application on AWS Lambda, but processing the audio file also required a Python environment, so we could not continue with Lambda, which only gave us a Node.js environment. To get the Python environment, we had to use an AWS EC2 instance to deploy the backend server.
Also, since we process the audio file with the ffmpeg module, we had to downgrade from the latest version of the numpy library in Python to an older version.
#### **ML Model**
The most difficult challenge for our ML model was getting it deployed so that it could be accessed directly from the Node.js server and fed the MFCC values for prediction. Due to the complexity of the SageMaker API and its integration with the Node.js application, this was a real challenge for us. At last, through a lot of documentation and guidance, we were able to deploy the model on SageMaker, and we also tested some sample data through Postman.
## Accomplishments that we're proud of
Through this project, we are proud that we were able to get an accurate prediction on real sample data. We were able to send a successful query to the ML model hosted on SageMaker, and the prediction was accurate.
It also made us really happy that, in a very short amount of time, we were able to overcome so many difficulties, get the app and web backend running, and set up the whole system with the proper architecture we had planned.
## What we learned
Cough It is a really interesting project to work on. It has so much potential to be one of the best diagnostic tools for COVID-19, which keeps us motivated to work on it and make it better.
In Android, working with APIs like MediaRecorder has always been difficult for us, but after doing this project, and in Kotlin at that, we feel more confident in making a production-quality Android app. Also, developing an ML-powered app is difficult, and we are happy that we finally made it.
On the web side, we learnt about the scenarios in which an EC2 instance can be more reliable than AWS Lambda; running various script files from a Node.js server was also a good lesson to learn.
In machine learning, we learnt to deploy the ML model in Sagemaker and after that, how to handle the pre-processing file in various types of environments.
## What's next for Cough It
As of now, our project is focused on our core idea, i.e., predicting from the user's sample data. So our app is currently limited to a single user, but for the future we have already planned to add a database for user management and to show users a report of their daily tests and their likelihood of COVID-19 on a weekly basis, as per the diagnosis.
## Final Words
There is a lot of scope for this project, and we don't want to stop innovating. We would like to take our idea to more platforms, and we might also launch the app on the Play Store soon, once everything is stable enough for the general public.
Our hopes for this project are high, and we won't leave it until it reaches perfection. | ## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options, the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second option allows the user to upload an audio file that will automatically be played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can chose an art theme, such as abstract, modern, or impressionist, which will each produce different images for the same audio input.
## How we built it:
Our first task was using an Arduino sound sensor to detect the voltages produced by an audio file. We began this process by applying Firmata to our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino we used python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours to certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value that was recorded by the Arduino.
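A condensed sketch of that pipeline is shown below (the serial port, sample count, and colour thresholds are placeholders, not the exact values used):

```python
# Simplified sketch: read voltages from analog pin 2 via Firmata, then map
# value ranges to shapes and colours with Pygame.
import time
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyACM0")          # serial port of the Arduino (placeholder)
it = util.Iterator(board)
it.start()                               # keeps analog readings up to date
board.analog[2].enable_reporting()

readings = []
for _ in range(200):                     # sample ~200 voltage readings
    value = board.analog[2].read()       # float in 0.0-1.0, or None right after startup
    if value is not None:
        readings.append(value)
    time.sleep(0.05)

pygame.init()
screen = pygame.display.set_mode((800, 600))
for i, value in enumerate(readings):
    # The chosen theme would decide the actual colour/shape mapping.
    color = (255, 80, 80) if value > 0.5 else (80, 80, 255)
    pygame.draw.circle(screen, color, (20 + (i * 25) % 760, 100 + int(value * 400)), 10)
pygame.display.flip()
```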
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would be to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future, we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece, and to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces. | ## Inspiration
Learning never ends. It's the cornerstone of societal progress and personal growth. It helps us make better decisions, fosters further critical thinking, and facilitates our contribution to the collective wisdom of humanity. Learning transcends the purpose of solely acquiring knowledge.
## What it does
Understanding the importance of learning, we wanted to build something that can make learning more convenient for anyone and everyone. Being students in college, we often find ourselves meticulously surfing the internet in hopes of relearning lectures/content that was difficult. Although we can do this, spending half an hour to sometimes multiple hours is simply not the most efficient use of time, and we often leave our computers more confused than we were when we started.
## How we built it
A typical scenario goes something like this: you begin a Google search for something you want to learn about or were confused by. As soon as you press search, you are confronted with hundreds of links to different websites, videos, articles, news, images, you name it! But having such a vast quantity of information thrown at you isn’t ideal for learning. What ends up happening is that you spend hours surfing through different articles and watching different videos, all while trying to piece together bits and pieces of what you understood from each source into one cohesive generalization of knowledge. What if learning could be made easier by optimizing search? What if you could get a guided learning experience to help you self-learn?
That was the motivation behind Bloom. We wanted to leverage generative AI to optimize search specifically for learning purposes. We asked ourselves and others, what helps them learn? By using feedback and integrating it into our idea, we were able to create a platform that can teach you a new concept in a concise, understandable manner, with a test for knowledge as well as access to the most relevant articles and videos, thus enabling us to cover all types of learners. Bloom is helping make education more accessible to anyone who is looking to learn about anything.
## Challenges we ran into
We faced many challenges when it came to merging our frontend and backend code successfully. At first, there were many merge conflicts in the editor but we were able to find a workaround/solution. This was also our first time experimenting with LangChain.js so we had problems with the initial setup and had to learn their wide array of use cases.
## Accomplishments that we're proud of/What's next for Bloom
We are proud of Bloom as a service. We see just how valuable it can be in the real world. It is important that society understands that learning transcends the classroom. It is a continuous, evolving process that we must keep up with. With Bloom, our service to humanity is to make the process of learning more streamlined and convenient for our users. After all, learning is what allows humanity to progress. We hope to continue to optimize our search results, maximizing the convenience we bring to our users. | winning |
## Inspiration
We see technology progressing rapidly in cool fields like virtual reality, social media and artificial intelligence but often neglect those who really need tech to make a difference in their lives.
SignFree aims to bridge the gap between the impaired and the general public by making it easier for everyone to communicate.
## What it does
SignFree is a smart glove that is able to detect movements and gestures to translate sign language into speech or text.
## How we built it
SignFree was built using a glove with embedded sensors to track finger patterns. The project relies on an Arduino board with a small logic circuit to detect which fingers are activated for each sign. This information is relayed over to a database and is collected by a script that converts this information into human speech.
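As a hypothetical sketch of that last hop, a script could poll the database for the most recent finger pattern and speak the mapped phrase (the table schema and phrase mapping below are made up for illustration):

```python
# Hypothetical: read the latest finger pattern from a local database,
# map it to a phrase, and speak it aloud.
import sqlite3
import pyttsx3

PHRASES = {
    "10000": "Hello",
    "11000": "Thank you",
    "11111": "Help",
}

def latest_pattern(db_path="gestures.db"):
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT pattern FROM gestures ORDER BY id DESC LIMIT 1").fetchone()
    conn.close()
    return row[0] if row else None

pattern = latest_pattern()
if pattern in PHRASES:
    engine = pyttsx3.init()
    engine.say(PHRASES[pattern])
    engine.runAndWait()
```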
## Challenges we ran into
Coming up with the logic behind sensing different finger patterns was difficult and took some planning
The speech API used on the web server was tricky to implement as well
## Accomplishments that we are proud of
We feel our hack has real world potential and this is something we aimed to accomplish at this hackathon.
## What we learned
Basic phrases in sign language. We used a bunch of new API's to get things working.
## What's next for SignFree
More hackathons. More hardware. More fun | ## Inspiration 💡
**Due to rising real estate prices, many students are failing to find proper housing, and many landlords are failing to find good tenants**. Students looking for houses often have to hire some agent to get a nice place with a decent landlord. The same goes for house owners who need to hire agents to get good tenants. *The irony is that the agent is totally motivated by sheer commission and not by the wellbeing of any of the above two.*
Lack of communication is another issue as most of the things are conveyed by a middle person. It often leads to miscommunication between the house owner and the tenant, as they interpret the same rent agreement differently.
Expensive and time-consuming background checks of potential tenants are also prevalent, as landowners try to use every tool at their disposal to know if the person is really capable of paying rent on time, etc. Considering that current rent laws give tenants considerable power, it's very reasonable for landlords to perform background checks!
Existing online platforms can help us know which apartments are vacant in a locality, but they don't help either party know if the other person is really good! Their ranking algorithms aren't trusted by tenants. The landlords are also reluctant to use these services as they need to manually review applications from thousands of unverified individuals or even bots!
We observed that we are still using these age-old, non-scalable methods to match home seekers and homeowners willing to rent their place in this digital world! And we wish to change it with **RentEasy!**
![Tech-Stack](https://ipfs.infura.io/ipfs/QmRco7zU8Vd9YFv5r9PYKmuvsxxL497AeHSnLiu8acAgCk)
## What it does 🤔
In this hackathon, we built a cross-platform mobile app that is trustable by both potential tenants and house owners.
The app implements a *rating system* where the students/tenants can give ratings for a house/landlord (ex: did not pay security deposit back for no reason), & the landlords can provide ratings for tenants (the house was not clean). In this way, clean tenants and honest landlords can meet each other.
This platform also helps the two stakeholders build an easily understandable contract that will establish better trust and mutual harmony. The contract is stored on the InterPlanetary File System (IPFS) and cannot be tampered with by anyone.
![Tech-Stack](https://ipfs.infura.io/ipfs/QmezGvDFVXWHP413JFke1eWoxBnpTk9bK82Dbu7enQHLsc)
Our application also has an end-to-end encrypted chatting module powered by @ Company. The landlords can filter through all the requests and send requests to tenants. This chatting module powers our contract generator module, where the two parties can discuss a particular agreement clause and decide whether to include it or not in the final contract.
## How we built it ️⚙️
Our beautiful and elegant mobile application was built using a cross-platform framework flutter.
We integrated the Google Maps SDK to build a map where the users can explore all the listings and used geocoding API to encode the addresses to geopoints.
We wanted to give our clients a sleek experience with minimal overhead, so we exported all network-heavy and resource-intensive tasks to Firebase cloud functions. Our application also has a dedicated **end-to-end encrypted** chatting module powered by the **@-Company** SDK. The contract generator module is built with best practices in mind, and the users can use it to draw up a contract after having thorough private discussions. Once both parties are satisfied, we create the contract in PDF format and use the Infura API to upload it to IPFS via the official [Filecoin gateway](https://www.ipfs.io/ipfs).
![Tech-Stack](https://ipfs.infura.io/ipfs/QmaGa8Um7xgFJ8aa9wcEgSqAJZjggmVyUW6Jm5QxtcMX1B)
## Challenges we ran into 🧱
1. It was the first time we were trying to integrate the **@-company SDK** into our project. Although the SDK simplifies end-to-end encryption, we still had to explore a lot of resources and ask for assistance from representatives to get the final working build. It was very gruelling at first, but in the end, we are all really proud of having a dedicated end-to-end messaging module on our platform.
2. We used Firebase functions to build scalable serverless functions and used Express.js as a framework for convenience. Things were working fine locally, but our middleware functions like multer, urlencoder, and jsonencoder weren't working on the server. It took us more than 4 hours to figure out that "Firebase performs a lot of implicit parsing", and that Firebase had already stripped the data before these middleware functions received it. As a result, we had to write the low-level encoding logic ourselves! After deploying these, the sense of satisfaction we got was immense, and now we appreciate millions of open source packages much more than ever.
## Accomplishments that we're proud of ✨
We are proud of finishing the project on time which seemed like a tough task as we started working on it quite late due to other commitments and were also able to add most of the features that we envisioned for the app during ideation. Moreover, we learned a lot about new web technologies and libraries that we could incorporate into our project to meet our unique needs. We also learned how to maintain great communication among all teammates. Each of us felt like a great addition to the team. From the backend, frontend, research, and design, we are proud of the great number of things we have built within 36 hours. And as always, working overnight was pretty fun! :)
---
## Design 🎨
We were heavily inspired by the revised version of **Iterative** design process, which not only includes visual design, but a full-fledged research cycle in which you must discover and define your problem before tackling your solution & then finally deploy it.
![Double-Diamond](https://ipfs.infura.io/ipfs/QmPDLVVpsJ9NvJZU2SdaKoidUZNSDJPhC2SQAB8Hh66ZDf)
This time we went for a minimalist **Material UI** design. We utilized design tools like Figma, Photoshop & Illustrator to prototype our designs before doing any coding. Through this, we were able to get iterative feedback so that we spent less time re-writing code.
![Brand-identity](https://ipfs.infura.io/ipfs/QmUriwycp6S98HtsA2KpVexLz2CP3yUBmkbwtwkCszpq5P)
---
# Research 📚
Research is the key to empathizing with users: we found our specific user group early, and that paved the way for our whole project. Here are a few of the resources that were helpful to us —
* Legal Validity Of A Rent Agreement : <https://bit.ly/3vCcZfO>
* 2020-21 Top Ten Issues Affecting Real Estate : <https://bit.ly/2XF7YXc>
* Landlord and Tenant Causes of Action: "When Things go Wrong" : <https://bit.ly/3BemMtA>
* Landlord-Tenant Law : <https://bit.ly/3ptwmGR>
* Landlord-tenant disputes arbitrable when not covered by rent control : <https://bit.ly/2Zrpf7d>
* What Happens If One Party Fails To Honour Sale Agreement? : <https://bit.ly/3nr86ST>
* When Can a Buyer Terminate a Contract in Real Estate? : <https://bit.ly/3vDexWO>
**CREDITS**
* Design Resources : Freepik, Behance
* Icons : Icons8
* Font : Semibold / Montserrat / Roboto / Recoleta
---
# Takeaways
## What we learned 🙌
**Sleep is very important!** 🤐 Well, jokes apart, this was an introduction to **Web3** & **Blockchain** technologies for some of us and an introduction to mobile app development for others. We managed to improve our teamwork by actively discussing how we planned to build it and how to make the best of our time. We learned a lot about the atsign API and end-to-end encryption and how it works in the backend. We also practiced utilizing cloud functions to automate and ease the process of development.
## What's next for RentEasy 🚀
**We would like to make it a default standard of the housing market** and consider all the legal aspects too! It would be great to see the rental application system become more organized in the future. We are planning to implement additional features such as a landlord's view, where landlords can go through the applicants and filter them, giving the landlord more options. Furthermore, we are planning to launch it near university campuses, since this is where people with the least housing experience live. Since the framework we used works on any type of operating system, it gives us the flexibility to test and learn.
**Note** — **API credentials have been revoked. If you want to run this locally, use your own credentials.** | ## Inspiration
Coming from a small school like Fordham LC to a school with such a huge campus made me question how different life would be here. At Fordham, there are days when I want to go to the gym when it is not crowded; since it is a very small campus, I just walk over to the gym and check. But if I had to do that at Yale and walk 15 minutes back to my room because there was a stampede of people in the gym at that time, I would be pissed!
## What it does
College Stampede is useful for both students and administrators at any college campus. It tracks the number of users connected to a specific router through a Raspberry Pi Zero, which makes it possible to determine how crowded each location on campus is, and then sends that information to the cloud, where it is displayed in real time so students know how crowded the cafeteria, the gym, or the library is before going.
The information is also stored in Google Cloud Storage. From there it is sent to BigQuery for regressions with machine learning and then displayed through Datalab for administrators to discover things such as the places on campus that require more or less staffing.
The Machine Learning Analysis is also used to display an estimated number of people that will visit a certain place in the coming hours.
## How I built it
The Raspberry Pis send information directly to the website's JavaScript and to Google Cloud Storage through JSON files.
The main website, with real-time information and predictions, was written from scratch with HTML and JavaScript. The data it receives to build the graphs comes from the query-scheduled machine learning analysis of the data in BigQuery.
Various Datalab notebooks receive the BigQuery data and include different options for analyzing it appropriately.
## Challenges I ran into
At first, my intention was to create the product in Datalab, since it runs on Python's Jupyter notebooks, which I was already familiar with; I was also going to create the graphs by sending JSON or CSV files to the notebook. However, a Datalab Python notebook is impossible to embed into a website, and the way it is built is very oriented toward passing the data through BigQuery.
## Accomplishments that I'm proud of
Despite the adversities mentioned above, I was able to make the best out of both. I learned JavaScript, and I was able to implement everything I had wanted to implement in Python, but using a new language. I also learned how to use BigQuery and its machine learning capability; as a result, my final idea, including the expected number of users, turned out to be more appealing than the original.
## What I learned
JavaScript, Google Cloud APIs
## What's next for College Stampede
Test complete integration with a network of Raspberry Pis
## Note from Tenzin:
Be sure to check out the website! Since we couldn't integrate the Raspberry Pi into the project, please press the button and press send for the data to dynamically change. A button push represents another person entering a router network. Toward the end, we all agreed that a live Twitter feed is also really helpful for students in a rush. Many universities have transport across campus and Fordham is no different: Fordham's Ram Van social media is really interactive and gives live updates for whoever can't take the stress of not knowing about a possible delay. Fordham IT is also always on Twitter responding to students desperate to hand in a last-minute assignment on a Blackboard that keeps crashing. So we embedded the Twitter timelines for these two university services to help any visitors in a rush. | winning
## Inspiration
Therapy is all about creating a trusting relationship between the clients and their therapist. Building rapport, or trust, is the main job of a therapist, especially at the beginning. But in the current practices, therapists have to take notes throughout the session to keep track of their clients. This does 2 things:
* Distracts therapists from getting fully involved in the sessions.
* Makes clients feel disconnected from their therapists (due to minimal or no eye contact, more focus on note-taking than on "connecting" with patients, etc.)
## What it does
Enter **MediScript**.
MediScript is an AI-powered android application that:
* documents the conversation in therapy sessions
* supports speaker diarization (multiple speaker labeling)
* determines the theme of the conversation (eg: negative news, drug usage, health issues, etc.)
* transparently shares session transcripts with clients or therapists, with their consent
With MediScript, we aim to automate the tedious note-taking procedures in therapy sessions and as a result, make therapy sessions engaging again!
## How we built it
We built an Android application, adhering to Material Design UI guidelines, and integrated it with the Chaquopy module to run Python scripts directly from the Android app. The audio recording of each session is stored within the app, which sends the recorded audio files to an AWS S3 bucket. We made the AssemblyAI API calls from the Python scripts and pointed the API at the session recordings in that same S3 bucket.
Documenting conversations, multi-speaker labeling, and conversation theme detection - all of this was made possible by using the brilliant API by **AssemblyAI**.
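As an illustration of that flow, here is a rough TypeScript sketch of the submit-and-poll pattern against AssemblyAI's v2 REST API. The team calls it from Python via Chaquopy, and the `content_safety` flag (used here for the theme/sensitive-topic labels) is our best recollection of the parameter name rather than something stated above:

```typescript
const API = "https://api.assemblyai.com/v2/transcript";

async function transcribeSession(audioUrl: string, apiKey: string) {
  // Submit the S3-hosted recording with diarization and content-safety labels.
  const submit = await fetch(API, {
    method: "POST",
    headers: { authorization: apiKey, "content-type": "application/json" },
    body: JSON.stringify({ audio_url: audioUrl, speaker_labels: true, content_safety: true }),
  });
  const { id } = await submit.json();

  // Poll until the transcript is ready.
  while (true) {
    const poll = await fetch(`${API}/${id}`, { headers: { authorization: apiKey } });
    const result = await poll.json();
    if (result.status === "completed") return result; // result.utterances holds per-speaker text
    if (result.status === "error") throw new Error(result.error);
    await new Promise((r) => setTimeout(r, 3000));
  }
}
```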
## Challenges we ran into
Configuring Python scripts with the Android application proved to be a big challenge initially. We had to experiment with lots of modules before finding Chaquopy, which was a perfect fit for our use case. AssemblyAI was quite easy to use, but we had to figure out a way to host our .mp3 files over the internet so that the API could access them instantly.
## Accomplishments that we're proud of
None of us had developed an Android app before so this was certainly a rewarding experience for all 3 of us. We weren't sure we'd be able to build a functioning prototype in time but we're delighted with the results!
## What's next for MediScript
* Privacy inclusion: we wish to use more privacy-centric methods to share session transcripts with the therapists and their clients
* Make a more easy-to-use and clean UI
* Integrate emotion detection capabilities for better session logging. | ## Inspiration
Ever found yourself struggling to keep up during a lecture, caught between listening to the professor and scrambling to scribble down notes? It's all too common to miss key points while juggling the demands of note-taking – that's why we made a tool designed to do the hard work for you!
## What it does
With a simple click, you can start a recording of your lecture, and NoteHacks will start generating clear, summarized notes in real time. The summary conciseness parameter can be fine-tuned depending on how you want your notes written, and the app will take note of when it looks like you've been distracted so that you can catch all the details you would have missed. These notes are stored for future review, where you can directly ask the AI about the content without having to provide background details.
## How we built it
* Backend + database using Convex
* Frontend using Next.js
* Image, speech, and text models by Groq (a sketch of the summarization call is shown below)
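Since Groq exposes an OpenAI-compatible REST endpoint, the summarization step could look roughly like this TypeScript sketch; the model id, prompt, and conciseness scale are placeholders, not the team's actual values:

```typescript
// Summarize one chunk of streamed transcript at a chosen conciseness level.
async function summarizeChunk(transcriptChunk: string, conciseness: number, apiKey: string) {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama-3.1-8b-instant", // placeholder model id
      messages: [
        { role: "system", content: `Summarize lecture transcripts as concise bullet notes. Conciseness: ${conciseness}/10.` },
        { role: "user", content: transcriptChunk },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```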
## Challenges we ran into
* Chunking audio to stream and process it in real-time
* Summarizing a good portion of the text, without it being weirdly chopped off and losing context
* Merge conflicts T-T
* Windows can't open 2 cameras simultaneously
## Accomplishments that we're proud of
* Real-time speech processing that displays on the UI
* Gesture recognition
## What we learned
* Real-time streaming audio and video
* Convex & Groq APIs
* Image recognition
## What's next for NoteHacks
* Support capturing images and adding them to the notes
* Allow for text editing within the app (text formatting, adding/removing text, highlighting) | ## Inspiration
We were originally considering creating an application that would take a large amount of text and summarize it using natural language processing. In addition, Shirley Wang felt an awkward obligation to incorporate IBM's Watson into the project. As a result, we came up with the concept of putting in an image and getting a summary of the corresponding Wikipedia article.
## What it does
You can input the URL of a picture into the web app, and it will return a brief summary, in bullet-point form, of the Wikipedia article on the object identified within the picture.
## How we built it
We originally considered using Android Studio, but ran into a lot of problems trying to make the software work with it, so we switched over to Google App Engine. We then used Python to build the underlying logic, along with IBM's Watson to identify and classify photos, the Wikipedia API to get information from Wikipedia articles, and Google's Natural Language API to extract only the key sentences and shorten them down to bullet-point form while maintaining their original meaning.
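The team's stack is Python on App Engine, but conceptually the Wikipedia lookup boils down to one call to the public REST summary endpoint; a hedged TypeScript sketch of that step:

```typescript
// Hypothetical helper: fetch the lead-section summary for the label Watson returned.
async function wikipediaSummary(label: string): Promise<string> {
  const title = encodeURIComponent(label.trim().replace(/ /g, "_"));
  const res = await fetch(`https://en.wikipedia.org/api/rest_v1/page/summary/${title}`);
  if (!res.ok) throw new Error(`No article found for "${label}"`);
  const page = await res.json();
  return page.extract; // summary text, before key-sentence extraction and bulleting
}
```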
## Challenges we ran into
We spent over 2 hours trying to fix a Google account authentication problem, which occurred because we didn't know how to properly write the path to a file, and because running apps from PyCharm is different from running them in its own terminal. We also spent another 2 hours trying to deploy the app, because PyCharm had a screwed-up import statement and requirements file that messed up a lot of it.
## Accomplishments that we're proud of
This is our first hackathon and our first time creating a web app, and we're really happy that we managed to actually successfully create something that works.
## What I learned
Sometimes reading the API documentation carefully will save you over half of your debugging time in the long run.
## What's next for Image Summarizer
Maybe we'll be able to add a way for users to input a photo directly from their camera or from their computer's saved photos. | partial
## Inspiration
Many people on our campus use an app called When2Meet to schedule meetings, but their UI is terrible, their features are limited, and overall we thought it could be done better. We brainstormed what would make When2Meet better and thought the biggest thing would be a simple new UI as well as a proper account system to see all the meetings you have.
## What it does
Let's Meet is an app that allows people to schedule meetings effortlessly. "Make an account and make scheduling a breeze." A user can create a meeting and share it with others. Then everyone with access can choose which times work best for them.
## How we built it
We used a lot of Terraform! We really wanted to go with a serverless microservice architecture and thus chose to deploy via AWS. Since we were already using Lambdas for the backend, it made sense to add Amplify for the frontend, Cognito for logging in, and DynamoDB for data storage. We wrote over 900 lines of Terraform to get our Lambdas deployed, API Gateway properly configured, permissions correct, and everything else we did in AWS configured. Other than AWS, we utilized React with Ant Design components. Our Lambdas ran on Python 3.12.
## Challenges we ran into
The biggest challenge we ran into was a bug with AWS. For roughly 5 hours we fought intermittent 403 responses. Initially we had an authorizer on the API Gateway, but after a short time we removed it. We confirmed it was deleted by searching for it via the CLI, and double-checked in the web console because we suspected the authorizer, but it wasn't there either. In the end, everything around the API Gateway had to be manually deleted and rebuilt. Thanks to Terraform, restoring everything was relatively easy.
Another challenge was using Terraform and AWS itself. We had almost no knowledge of it going in and coming out we know there is so much more to learn, but with these skills we feel confident to set up anything in AWS.
## Accomplishments that we're proud of
We are so proud of our deployment and cloud architecture. We think that having built a cloud project of this scale in this time frame is no small feat. Even with some challenges our determination to complete the project helped us get through. We are also proud of our UI as we continue to strengthen our design skills.
## What we learned
We learned that implementing Terraform can sometimes be difficult depending on the scope and complexity of the task. This was our first time using a component library for frontend development and we now know how to design, connect, and build an app from start to finish.
## What's next for Let's Meet
We would add more features such as syncing the meetings to a Google Calendar. More customizations and features such as location would also be added so that users can communicate where to meet through the web app itself. | ## Inspiration
As roommates, we found that keeping track of our weekly chore schedule and house expenses was a tedious process, more tedious than we initially figured.
Though we created a Google Doc to share among us to keep the weekly rotation in line with everyone, manually updating this became hectic and cumbersome--some of us rotated the chores clockwise, others in a zig-zag.
Collecting debts for small purchases for the house split between four other roommates was another pain point we wanted to address. We decided if we were to build technology to automate it, it must be accessible by all of us as we do not share a phone OS in common (half of us are on iPhone, the other half on Android).
## What it does
**Chores:**
Abode automatically assigns a weekly chore rotation and keeps track of expenses within a house. Only one person needs to be a part of the app for it to work--the others simply receive a text message detailing their chores for the week and reply “done” when they are finished.
If they do not finish by close to the deadline, they’ll receive another text reminding them to do their chores.
**Expenses:**
Expenses can be added, each amount owed is automatically calculated, and transactions are automatically charged to each roommate's credit card using the Stripe API.
## How we built it
We started by defining user stories and simple user flow diagrams. We then designed the database where we were able to structure our user models. Mock designs were created for the iOS application and was implemented in two separate components (dashboard and the onboarding process). The front and back-end were completed separately where endpoints were defined clearly to allow for a seamless integration process thanks to Standard Library.
## Challenges we ran into
One of the significant challenges that the team faced was when the back-end database experienced technical difficulties at the tail end of the hackathon. This slowed down our ability to integrate our iOS app with our API. However, the team fought back while facing adversity and came out on top.
## Accomplishments that we're proud of
**Back-end:**
Using Standard Library, we developed a comprehensive back-end for our iOS app consisting of 13 endpoints, along with the ability to interface via text message using Twilio for users who do not necessarily want to download the app.
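Since Standard Library runs Node, the Twilio piece likely boils down to something like the sketch below; the helper name, environment variables, and message text are illustrative assumptions:

```typescript
import twilio from "twilio";

const client = twilio(process.env.TWILIO_SID!, process.env.TWILIO_TOKEN!);

// Hypothetical helper: text a roommate their chores for the week.
async function sendChoreReminder(to: string, chores: string[]) {
  await client.messages.create({
    from: process.env.TWILIO_NUMBER!, // the Twilio phone number
    to,
    body: `Your chores this week: ${chores.join(", ")}. Reply "done" when finished!`,
  });
}
```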
**Design:**
The team is particularly proud of the design that the application is based on. We chose a relatively simple and modern approach through the use of a washed-out colour palette, inspired by the material designs commonly found in many modern applications. It was imperative that the designs for each screen were consistent to ensure a seamless user experience, so a mock-up of design components was created prior to beginning the project.
**Use case:**
Not only that, but our app has a real use case for us, and we look forward to iterating on our project for our own use and a potential future release.
## What we learned
This was the first time any of us had gone into a hackathon with no initial idea. There was a lot of startup-cost when fleshing out our design, and as a result a lot of back and forth between our front and back-end members. This showed us the value of good team communication as well as how valuable documentation is -- before going straight into the code.
## What's next for Abode
Abode was set out to be a solution to the gripes that we encountered on a daily basis.
Currently, we only support the core functionality - it will require some refactoring and abstractions so that we can make it extensible. We also only did manual testing of our API, so some automated test suites and unit tests are on the horizon. | ## Inspiration
My teammate and I grew up in Bolivia, where recycling has not taken much of a hold in society unfortunately. As such, once we moved to the US and had to deal with properly throwing away the trash to the corresponding bin, we were a bit lost sometimes on how to determine which bin to use. What better way to solve this problem than creating an app that will do it for us?
## What it does
By opening EcoSnap, you can take a picture of a piece of trash using the front camera, after which the image is processed by a machine learning algorithm that classifies the primary object, gives the user a confidence estimate, and tells them which bin the trash should go in.
## How we built it
We decided to use Flutter to make EcoSnap because of its ability to run on multiple platforms with only one main source file. We also really liked its "Hot Reload" feature, which allowed us to see changes in our app instantly. After creating the basic UI and implementing image-capturing capabilities, we connected to Google's Cloud Vision and OpenAI's GPT APIs. With this done, we fed Vision the captured image, which returned its classification. Then we fed this output to GPT, which told us which bin it should go in. Once all of this information was acquired, a new screen popped up informing the user of the relevant information!
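The app itself is written in Dart/Flutter, but the Vision call reduces to a single REST request; this TypeScript sketch shows the payload shape (API-key handling and the label post-processing are illustrative assumptions):

```typescript
// Classify a base64-encoded photo with Cloud Vision label detection.
async function classifyTrash(base64Image: string, apiKey: string) {
  const res = await fetch(`https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      requests: [{
        image: { content: base64Image },
        features: [{ type: "LABEL_DETECTION", maxResults: 5 }],
      }],
    }),
  });
  const data = await res.json();
  const top = data.responses[0].labelAnnotations[0];
  // e.g. { label: "Plastic bottle", confidence: 0.93 }, then passed on to GPT for the bin decision
  return { label: top.description, confidence: top.score };
}
```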
## Challenges we ran into
Given that this was our first hackathon and we did not come into it with an initial idea, we spent a lot of time deciding what we should do. After coming up with the idea and deciding to use Flutter, we had to learn it, as well as Dart, from scratch, which also took a long time. Afterwards, we had issues implementing multiple pages in our app, acquiring the right information from the APIs, feeding correct state variables, creating a visually appealing UI, and other lesser problems.
## Accomplishments that we're proud of
This is the first app we have created, a huge step toward our careers in the industry and a nice project we can add to our resumes. Our dedication and resilience to keep pushing and absorbing information created an experience we will never forget. It was great to learn Flutter given its extreme flexibility in front-end development. Last but not least, we are proud of our dedication to the end goal of never having to doubt whether the trash we are throwing away is going into the wrong bin.
## What we learned
We learned Flutter. We learned Dart. We learned how to implement multiple APIs into one application to provide the user with very relevant information. We learned how to read documentation. We learned how to learn a language quickly.
## What's next for EcoSnap
Hopefully, we'll win some prizes at the hackathon and keep developing the app for an App Store release over Thanksgiving! We were also thinking of adding a hardware component in the future: a tiny microprocessor connected to a tiny camera and an LED light/display. This hardware would be placed on top of trash bins so that people can know very quickly where to throw their trash! | partial
## Inspiration
Recently we have noticed an influx of elaborate spam calls, emails, and texts. Although these phishing attempts are a mere inconvenience for a technologically literate native English speaker in Canada, susceptible individuals who fall for them may suffer heavy personal or financial loss. We aim to reduce this with our hack.
We created PhishBlock to address the disparity in financial opportunities faced by minorities and vulnerable groups like the elderly, visually impaired, those with limited technological literacy, and ESL individuals. These groups are disproportionately targeted by financial scams. The PhishBlock app is specifically designed to help these individuals identify and avoid scams. By providing them with the tools to protect themselves, the app aims to level the playing field and reduce their risk of losing savings, ultimately giving them the same financial opportunities as others.
## What it does
PhishBlock is a web application that leverages LLMs to parse and analyze email messages and recorded calls.
## How we built it
We leveraged the following technologies to create a pipeline that separates potentially malicious emails from safe ones (a minimal sketch of the Gmail step follows the list):
* Gmail API: Integrated reading a user's email.
* Cloud tech: Enabled voice recognition, data processing, and model training.
* Google Cloud Enterprise (Vertex AI): Leveraged for secure cloud infrastructure.
* GPT: Employed for natural language understanding and generation.
* NumPy, Pandas: Data collection and cleaning.
* Scikit-learn: Applied for efficient model training.
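The Gmail step mentioned above reduces to listing and fetching messages for the authorized user; a hedged TypeScript sketch using the `googleapis` client (the query, fields, and helper name are assumptions):

```typescript
import { google } from "googleapis";
import type { OAuth2Client } from "google-auth-library";

// Hypothetical helper: pull recent inbox messages to feed into the classifier.
async function fetchRecentEmails(auth: OAuth2Client) {
  const gmail = google.gmail({ version: "v1", auth });
  const list = await gmail.users.messages.list({ userId: "me", maxResults: 20, q: "in:inbox" });

  return Promise.all(
    (list.data.messages ?? []).map(async ({ id }) => {
      const msg = await gmail.users.messages.get({ userId: "me", id: id!, format: "full" });
      // The snippet (or decoded body parts) becomes the input text for phishing detection.
      return { id, snippet: msg.data.snippet };
    })
  );
}
```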
## Challenges we ran into
None of our team members had worked with Google's authentication process or the Gmail API, so much of Saturday was devoted to hashing out technical difficulties with them. On the AI side, data collection is an important step in training and fine-tuning, and ensuring the quality of the data was essential.
## Accomplishments that we're proud of
We are proud of coming together as a group and creating a demo of our project in such a short time frame.
## What we learned
The hackathon was just one day, but we realized we could get much more done than we initially expected. Our goal seemed tall when we planned it on Friday, but by Saturday night all the functionality we imagined had fallen into place. On the technical side, we didn't use any frontend frameworks and built the interactivity the classic way, which was incredibly challenging. However, we discovered a lot about what we're capable of under extreme time pressure!
## What's next for PhishBlock
We used the closed-source OpenAI API to fine-tune a GPT-3.5 model. This has obvious privacy concerns, but as a proof of concept it demonstrates the ability of LLMs to detect phishing attempts. With more computing power, open-source models could be used instead. | ## Inspiration
In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol.
## What it does
Our app allows users to search for a "hub" using the Google Maps API and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating.
## How I built it
We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and an integrated Firebase API.
## Challenges I ran into
Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency, working across 3 different time zones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of!
## Accomplishments that I'm proud of
We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity.
## What I learned
Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane.
## What's next for SafeHubs
Our next steps for SafeHubs include personalizing user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic. | # Inspiration
Every day, numerous individuals, including ourselves, receive a plethora of emails. We frequently find ourselves sifting through this abundance, trying to identify emails with crucial information, those needing a response, and those of lesser importance. But what if there existed an automated solution for this? Well, that's where our project, Email Insight, comes in.
# What it does
Our program works by having users log in to our webpage with their Google accounts. Our AI then extracts their emails, summarizes them, and ranks them from most crucial to least important. Finally, it presents these emails on our webpage in a neat and prioritized order, with the most important ones at the top.
# How we built it
To develop our project, we began by utilizing Google Cloud's Authentication and API to enable users to log in using their Gmail accounts. We then retrieved their emails and utilized Google Cloud Functions in conjunction with Cohere's API to summarize and rank these emails from most crucial to least important. Finally, we presented the emails in a user-friendly format on our React webpage.
# Challenges we ran into
Our main challenge arose when we attempted to use Google Cloud for the first time. Initially, we saw an advantage in developing our web app without a backend, relying solely on Cloud Functions to host our AI code. However, we encountered numerous network errors while trying to upload user emails to the cloud, which we later identified as CORS errors. After hours of troubleshooting, we reached a point where we were close to giving up and not submitting our project. Fortunately, with the help of the Hack the North mentors, we managed to resolve the issue.
# Accomplishments that we're proud of
The achievement that brings our team the most pride is our collaborative success in building an app. Initially, we had doubts about our ability to complete the project, fearing we might abandon it halfway. However, whenever one of us encountered an error or challenge, we started brainstorming solutions and would review each other's code, resulting in the resolution of numerous issues.
# What we learned
Hack the North was the first hackathon for all of us on the team. This shared first-time experience led us to brainstorm ambitious ideas, only to realize that they weren't feasible, and that we should prioritize simplicity and usefulness above all else for our project.
# What's next for Email Insight
Email Insight was developed in less than 24 hours, so it is still in its early stages relative to our envisioned potential. In the future, we hope to implement various enhancements, including a Chrome extension that prioritizes emails based on the significance of their subject lines, and a mobile application that notifies you at a chosen time each day with summarized and prioritized emails from the previous 24 hours. | partial
## Inspiration
**We love to play D&D**
It just has everything! Friends, food, fun, and a whole lot of writing down numbers in the margins of a sheet of notebook paper while you try and figure out who had the highest initiative roll. As a DM (Dungeon Master, or game manager) and a player, we've tried using all sorts of online D&D tools to streamline our games, but have found them to either be insufficient for our needs, or too stifling, digitizing away the purpose of a tabletop game among friends. We wanted to create something in the middle ground, and that's what we ended up making!
## What it does
**Any projector, any size**
Our D&D projector software automatically detects a connected projector, opening a dashboard window on your desktop and a display window on the projector screen. The desktop dashboard provides tools for you, the DM, to run your game! You can create and load worlds, characters, and monsters. You can hide the nooks and treasures of a darkened cave, or show off the majesty of an ancient temple. You can easily and rapidly perform a variety of computerized die rolls, and apply damage to players and monsters alike!
## How we built it
**Good old reliable Java**
We used Java and the Swing library to create our windows and display the game world. The display window is projected onto a table or floor with any kind of connected projector. This allows the players to interact with real game pieces in real time, while allowing the DM to alter the game board cleanly and rapidly in response to the state of the game.
## Challenges we ran into
**The bigger the application, the harder it crashes**
Creating such a large application with so many different features is something we hadn't done before, so the scope definitely posed a challenge. Additionally, organizing the code into a robust, scalable codebase proved difficult, but we feel we did a fine job in the end.
## Accomplishments that we're proud of
**A game board you can touch but you can't break**
We've created a fun and interesting way for people to play D&D with nothing more than their laptop and a projector! We've removed many of the annoying or time-wasting aspects of the game, while simultaneously maintaining the authenticity of a tabletop RPG played with real pieces in real life.
## What we learned
**Collaboration? More like code-llaboration!**
We learned a lot about how to put together a multi-windowed, expansive application, with support for many different features during program execution and the need for advanced file storage afterward. We also got better at communicating between the frontend and the backend throughout development.
## What's next for D&D Projector
**Game on!**
D&D is a ridiculously large game, with a vast set of rules, regulations, and variations. With the scalable interface we've built so far, we plan on expanding our dashboard much further to include spells, interactions between players and NPCs, hunger/thirst, exhaustion, and many other parts of the game we feel could be streamlined. We fully intend to be careful about what we add, as retaining the authentic feel of the game is important to us.
Our next D&D session is going to be a very interesting one indeed, and we're really looking forward to it! | ## Inspiration
Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades.
## What it does
Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own.
An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling.
Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate.
## How we built it
* **Backend:** We used Socket.IO to handle real-time updates of the whiteboard (a minimal relay sketch follows this list). We also have a Firebase database to store user accounts and details.
* **Frontend:** We used React to create the application and Socket.IO to connect it to the backend.
* **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com.
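A minimal sketch of the Socket.IO relay described above, in TypeScript; the room and event names are illustrative, not Lecturely's actual protocol:

```typescript
import { Server } from "socket.io";

const io = new Server(3001, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // Each whiteboard (teacher, student, or group) lives in its own room.
  socket.on("join-board", (boardId: string) => socket.join(boardId));

  // Relay pen strokes to everyone else viewing the same board in real time.
  socket.on("draw", (boardId: string, stroke: { x: number; y: number; color: string }) => {
    socket.to(boardId).emit("draw", stroke);
  });

  // Notify the teacher's dashboard when a student clicks "I'm Done".
  socket.on("im-done", (studentId: string) => io.emit("student-done", studentId));
});
```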
## Challenges we ran into
Understanding and planning an architecture for the application was a challenge. We went back and forth about whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the drawing functionality was also an issue we faced.
## Accomplishments that we're proud of
We successfully were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO and were successfully able to use it in our project.
## What we learned
This was the first time we used Socket.IO to handle realtime database connections. We also learned how to create mouse strokes on a canvas in React.
## What's next for Lecturely
This product can be useful even beyond digital schooling, as it can save schools money since they would not have to purchase supplies. It could therefore benefit from building out more features.
Currently, Lecturely doesn’t support audio but it would be on our roadmap. Thus, classes would still need to have another software also running to handle the audio communication. | ## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
* The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
* The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
* The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
* The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a Mesh-Networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this we ended up switching to using a centralized server that hosts the apps data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold fake pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps, such as: a visual-programming tool that supports simple block programming, a text editor, a messaging system, and a more in-depth UI for everything. | partial
## Inspiration
Everyone learns in a different way. Whether it be from watching that YouTube tutorial series or scouring the textbook, each person responds to and processes knowledge very differently. We hoped to identify students’ learning styles and tailor educational material to the learner for two main reasons: one, so that students can learn more efficiently, and two, so that educators may understand a student’s style and use it to motivate or teach a concept to a student more effectively.
## What it does
EduWave takes live feedback from the Muse Headband while a person is undergoing a learning process using visual, auditory, or haptic educational materials, and it recognizes when the brain is more responsive to a certain method of learning than others. Using this data, we then create a learning profile of the user.
With this learning profile, EduWave tailors educational material to the user by taking any topic that the user wants to learn and finding resources that apply to the type of learner they are. For instance, if the user is a CS major learning different types of elementary sorts and wants to learn specifically how insertion sort works, and if EduWave determines that the user is a visual learner, EduWave will output resources and lesson plans that teach insertion sort with visual aids (e.g. with diagrams and animations).
## How we built it
We used the Muse API and Muse Direct to obtain data from the user while they were solving the initial assessment tests, and used data analysis in Python to check which method the brain was most responsive to. We added an extra layer to this by using the xLabs Gaze API, which tracked eye movements and contributed to the analysis. We then sent this data back with a percentage determination of a learning profile. Finally, we parsed a lesson plan on a given topic and outputted its elements based on the percentage split of learning type.
## Challenges we ran into
The Muse Headband was somewhat difficult to use, and we had to go through a lot of testing and make sure that the data we were using was accurate. We also ran into some roadblocks proving the correlation between the data and specific learning types. Besides this, we also had to do deep research on what brain waves are most engaged during learning and why, and then subsequently determine a learning profile. Another significant challenge was the creation of lesson plans as we not only had to keep in mind the type of learner but also manage the content itself so that it could be presented in a specific way.
## Accomplishments that we're proud of
We are most proud of learning how to use the Muse data and creating a custom API that was able to show the data for analysis.
## What we learned
How to use Muse API, Standard Library, Muse Direct, how brainwaves work, how people learn and synthesizing unrelated data.
## What's next for EduWave
Our vision for EduWave is to improve it over time. By determining one's most preferred way of learning, we hope to devise custom lesson plans for the user for any topics that they wish to learn – that is, we want a person to be able to have resources for whatever they want to learn made exclusively for them. In addition, we hope to use EduWave to benefit educators, as they can use the data to better understand their students' learning styles. | ## Inspiration
Public speaking is an incredibly important skill that many seek but few master. This is in part due to the high level of individualized attention and feedback needed to improve when practicing. Therefore, we want to solve this with AI! We have created a VR application that allows you to get constructive feedback as you present, debate, or perform by analyzing your arguments and speaking patterns.
While this was our starting motivation for ArticuLab, we quickly noticed the expansive applications and social impact opportunities for it. ArticuLab could be used by people suffering from social anxiety to help improve their confidence in speaking in front of crowds and responding to contrasting opinions. It could also be used by people trying to become more fluent in a language, since it corrects pronunciation and word choice.
## What it does
ArticuLab uses AI in a VR environment to recommend changes to your pace, argument structure, clarity, and body language when speaking. It holds the key to individualized public speaking practice. In ArticuLab you also have the opportunity to debate directly against the AI, which will point out all the flaws in your arguments and make counterarguments so you can make your defense rock-solid.
## How we built it
For our prototype, we used Meta's Wit.AI natural language processing software for speech recognition, built a VR environment on Unity, and used OpenAI's powerful ChatGPT to base our feedback system on argument construction and presenting ability. Embedding this into an integrated VR App results in a seamless, consumer-ready experience.
## Challenges we ran into
The biggest challenge we ran into is using the VR headset microphone as input for the speech recognition software, and then directly inputting that to our AI system. What made this so difficult was adapting the formatting from each API onto the next. Within the same thread, we ran into an issue where the microphone input would only last for a few seconds, limiting the dialogue between the user and the AI in a debate. These issues were also difficult to test because of the loud environment we were working in.
Additionally, we had to create a VR environment from scratch, since there were no free assets to fit our needs.
## Accomplishments that we're proud of
We're especially proud of accomplishing such an ambitious project with a team that is mostly beginners! TreeHacks is the first hackathon for three of our members, so everyone had to step up, do more work, and learn new skills to implement in our project.
## What we learned
We learned a lot about speech to text software, designing an environment and programming in Unity, adapting the powerful ChatGPT to our needs, and integrating a full-stack VR application.
## What's next for ArticuLab
Naturally, there would be lots more polishing of the cosmetics and user interface of the program, which are currently restricted by financial resources and the time available. Among these improvements would be making the environment higher definition with better-quality assets, crowd responses, ChatGPT responses with ChatGPT Plus, etc.
ArticuLab could be useful both academically and professionally in a variety of fields, education, project pitches like Treehacks, company meetings, event organizers… the list goes on! We would also seek to expand the project to alternate versions adapted for the comfort of the users, for example, a simplified iOS version could be used by public speakers to keep notes on their speech and let them know if they're speaking too fast, too slow, or articulating correctly live! Similarly, such a feature would be integrated into the VR version, so a presenter could have notes on their podium and media to present behind them (powerpoint, video, etc.), simulating an even more realistic presenting experience.
Another idea is adding a multiplayer version that would exponentially expand the uses for ArticuLab. Our program could allow debate teams to practice live in front of a mix of AI and real crowds, similarly, ArticuLab could host online live debates between public figures and politicians in the VR environment. | ## Inspiration
Our team has a background in education, and we understand that students learn in different ways. From our own personal experience as students, and our background as educators, we understand how frustrating it can be to iterate on your own learning process. The trial and error leads to countless wasted hours, and oftentimes, students will have to repeat this process with different subjects and topics. Every student must go through this, but what if there was a better way that didn't require extra outside study or for one to dilute their focus? What if we could learn from similar students' processes, while using the best underutilized data requisition tool, the learning management system.
## What it does
We built a learning management system that builds assignments directly on the platform and has teachers label the dimensions that make questions difficult. Using LLM tagging and a deep recommender system, we are able to find previous students whose skills and challenges line up with our user, and recommend the questions that were most helpful to them. Assuming that similar students will benefit from similar questions allows us to mimic the pathway these previous students took, minimizing time wasted on questions that are suboptimally helpful.
## How we built it
We plugged our LLM tagging (Llama 70B) and recommendation system (TensorFlow deep retrieval) procedures into a Next.js backend with a React frontend. We generated synthetic data using NumPy by designing distributions to model student and question profiles, then stored the data using pandas, finally exporting it to Supabase. Our team split tasks based on the areas we were most comfortable with, and clearly iterated as a group.
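On the Next.js side, reading and writing that Supabase data comes down to a few `supabase-js` calls; a rough sketch, where the table and column names are invented for illustration rather than the team's actual schema:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Store a batch of synthetic student profiles exported from the pandas step.
async function saveStudentProfiles(profiles: { skill: number; challenge: string }[]) {
  const { error } = await supabase.from("student_profiles").insert(profiles);
  if (error) throw error;
}

// Fetch the questions the recommender ranked for a given student, best first.
async function recommendedQuestions(studentId: string) {
  const { data, error } = await supabase
    .from("recommendations")
    .select("question_id, score")
    .eq("student_id", studentId)
    .order("score", { ascending: false });
  if (error) throw error;
  return data;
}
```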
## Challenges we ran into
We ran into trouble with some of the new technologies and strategies we tried to implement into our stack, but we will keep it to 3 main points:
1. At first, we had problems integrating the database connections with Supabase, hitting unexplained errors.
**Solution**: We kept hammering away at the documentation, and eventually resolved these errors, by finding out that supabase does not auto-enable a feature we assumed was enabled.
2. We also ran into trouble with our data generation, and how to properly create the student profiles we would feed into our model.
**Solution**: We decided on designing different probability distributions to sample from in order to model different student groups, which effectively creates a clustering pattern for the recommender system to learn from.
3. We had problems supplying our data to the Merlin DLRM, as it had a structure that was a bit different than other models/recommendation systems we had worked with before.
**Solution**: We realized our recommendation task was a bit more complex than we had initially realized. We switched libraries and architectures, ending up with a TensorFlow deep retrieval architecture that was different than our original Merlin pick.
## Accomplishments that we're proud of
None of us knew each other before TreeHacks, so the fact that we were able to build a product together, while only knowing each other for a few days, feels amazing. We all worked with at least one technology we weren't comfortable with, but we were able to understand and build with them, by the end of the weekend. We grew tremendously as devs just over these 36 hours, and it was all because we were willing to push ourselves to pick up new skills.
## What we learned
We learned a ton about the intricacies of the libraries and frameworks we used in our project. In particular, we learned more about database socket connections, saving and exporting models with Tensorflow/Tensorflowjs, and data generation.
## What's next for Ladder - The AI-Enabled Learning Management System
Integrating other core learning management system features! Our platform is limited compared to competitors' in terms of core feature integrations, so our next step is making it perform similarly to other LMS tools with respect to feeds, integrations with other platforms, and roster management. | winning
## Inspiration
Moody was inspired by current events, and how we've been connecting with others during these difficult times. While self-isolating due to the COVID-19 pandemic, we notice that like ourselves, many of our friends were struggling with their mental health. Feelings of loneliness, anxiety, depression, sadness and hopelessness seemed to be common. In an attempt to better connect with our friends and to help support each other, we wanted to create a simple way to check in with our loved ones and their feelings, to help to elevate our own moods as well as the moods of others.
## What it does
Moody is an app designed for simple connectivity from one friend to another. Once you sign up for moody, you can actively track your moods, emotions and feelings in the Mood Calendar by entering a quick and easy mood log. If moody notices that you've been having several awful feelings or days, or that you haven't made a mood log in a while, moody will notify your friends so that they can reach out to you directly, connecting you to their support seamlessly.
Added features include a quote of the day feature and an average mood calculation as you continue to enter mood logs.
## How we built it
We built this app in Android Studio using the Java language and tested it using a virtual Android emulator. All graphics, logos, icons, and styles were created by hand by one of our very talented teammates in Adobe Photoshop!
## Challenges we ran into
We had a hard time ensuring that all of our dependency, package, and software versions were the same as we worked remotely. None of our teammates had ever used Android Studio before, much less built an entire app, so it was a challenging feat with many ups and downs. It was especially difficult to debug portions of the code when the change was only being made on one teammate's screen and the rest of us had to explain what to do verbally over our virtual call, rather than helping in person.
## Accomplishments that we're proud of
Since this was our first time building an app, and our first time working with Android Studio, we're quite proud of the final product! We're especially impressed by the amazing artwork that went into this project, and of the dedication our teammates had to learning new techniques, tools and tricks to keep the development going. We employed a lot of teamwork and were dedicated not just to building together, but to learning together as well.
## What we learned
We learned about real industry app development and software engineering principles and methodologies as we worked together in a group and tried our best to simulate a real software development cycle. We learned about nuances in Java with regards to integration into frontend coding, and we learned how to utilize software like Android Studio to our advantage. We also learned how to correctly implement and utilize virtual device emulators as a way of testing and of viewing the final product.
## What's next for Moody
After this hackathon, we plan on finishing up the full development of this app and deploying it, if possible.
Software Improvements:
* Built-in customizer so that users can change the colour scheme and accents of the app.
* More in-depth mood categories and ratings to get more data about how our users are feeling | ## Inspiration
This generation of technological innovation and human factors design focuses heavily on designing for individuals with disabilities. As such, the inspiration for our project was an application of object detection (Darknet YOLOv3) for visually impaired individuals. This user group in particular has limited access to the visual modality, which our project aims to supplement.
## What it does
Our project aims to provide the visually impaired with sufficient awareness of their surroundings to maneuver. We created a head-mounted prototype that provides the user group real-time awareness of their surroundings through haptic feedback. Our smart wearable technology uses a computer-vision ML algorithm (convolutional neural network) to help scan the user’s environment to provide haptic feedback as a warning of a nearby obstacle. These obstacles are further categorized by our algorithm to be dynamic (moving, live objects), as well as static (stationary) objects. For our prototype, we filtered through all the objects detected to focus on the nearest object to provide immediate feedback to the user, as well as providing a stronger or weaker haptic feedback if said object is near or far respectively.
## Process
While our idea is relatively simple in nature, we had no idea going in just how difficult the implementation was.
Our main goal was to build a minimum deliverable product capable of vibrating based on the position, type, and distance of an object. From there, we had extra goals like distance calibration, optimization/performance improvements, and a more complex human interface.
Originally, the processing chip (the one with the neural network preloaded) was intended to be the Huawei Atlas. With the additional design optimizations unique to neural networks, it was perfect for our application. After 5 or so hours of tinkering with no progress however, we realized this would be far too difficult for our project.
We turned to a Raspberry Pi and uploaded Google’s pre-trained image analysis network. To get the necessary IO for the haptic feedback, we also had this hooked up to an Arduino which was connected to a series of haptic motors. This showed much more promise than the Huawei board and we had a functional object identifier in no time. The rest of the night was spent figuring out data transmission to the Arduino board and the corresponding decoding/output.
With only 3 hours to go, we still had to finish debugging and assemble the entire hardware rig.
## Key Takeaways
Above all, we learned just how important having a minimum deliverable product (MDP) is. Our solution could be executed with varying levels of complexity, and we wasted a significant amount of time on unachievable pipe dreams instead of focusing on the important base implementation.
The other key takeaway of this event was to be careful with new technology. Since the Huawei boards were so new and relatively complicated to set up, they were incredibly difficult to use. We did not even use the Huawei Atlas in our final implementation meaning that all our work was not useful to our MDP.
## Possible Improvements
If we had more time, there are a few things we would seek to improve.
First, the biggest improvement would be to get a better processor. Either a Raspberry Pi 4 or a suitable replacement would significantly improve the processing framerate. This would make it possible to provide more robust real-time tracking instead of tracking with significant delays.
Second, we would expand the recognition capabilities of our system. Our current system only filters for a very specific set of objects, particular to an office/workplace environment. Our ideal implementation would be a system applicable to all aspects of daily life. This means more objects that are recognized with higher confidence.
Third, we would add a robust distance measurement tool. The current project uses object width to estimate the distance to an object. This is not always accurate unfortunately and could be improved with minimal effort. | ## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kind of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call. | losing |
## Inspiration
As a team, we've all witnessed the devastation of muscular-degenerative diseases, such as Parkinson's, on the family members of the afflicted. Because we didn't have the money, resources, or time to research and develop a new drug or other treatment for the disease, we wanted to make the medicine already available as effective as possible. So, we decided to focus on detection; the earlier the victim can recognize the disease and report it to his/her physician, the more effective the treatments we already have become.
## What it does
HandyTrack uses three tests: a Flex Test, which tests the ability of the user to bend their fingers into a fist, a Release Test, which tests the user's speed in releasing the fist, and a Tremor Test, which measures the user's hand stability. All three of these tests are stored and used to, over time, look for trends that may indicate symptoms of Parkinson's: a decrease in muscle strength and endurance (ability to make a fist), an increase in time spent releasing the fist (muscle stiffness), and an increase in hand tremors.
## How we built it
For the software, we built the entirety of the application in the Arduino IDE using C++. As for the hardware, we used 4 continuous rotation servo motors, an Arduino Uno, an accelerometer, a microSD card, a flex sensor, and an absolute abundance of wires. We also used a 3D printer to make some rings for the users to put their individual fingers in.
The 4 continuous rotation servos were used to provide resistance against the user's hand. The flex sensor, which is attached to the user's palm, is used to control the servos; the more bent the sensor is, the faster the servo rotation. The flex sensor is also used to measure the time it takes for the user to release the fist, i.e., the time it takes for the sensor to return to its original position. The accelerometer is used to detect changes in the position of the user's hand, and those changes represent the user's hand tremors. All of this data is sent to the SD card, which in turn allows us to review trends over time.
## Challenges we ran into
Calibration was a real pain in the butt. Every time we changed the circuit, the flex sensor values would change. Also, developing accurate algorithms for the functions we wanted to write was kind of difficult. Time was a challenge as well; we had to stay up all night to put out a finished product. Also, because the hack is so hardware intensive, we only had one person working on the code for most of the time, which really limited our options for front-end development. If we had an extra team member, we probably could have made a much more user-friendly application that looks quite a bit cleaner.
## Accomplishments that we're proud of
Honestly, we're happy that we got all of our functions running. It's kind of difficult only having one person code for most of the time. Also, we think our hardware is on-point. We mostly used cheap products and Arduino parts, yet we were able to make a device that can help users detect symptoms of muscular-degenerative diseases.
## What we learned
We learned that we should always have a person dedicated to front-end development, because no matter how functional a program is, it also needs to be easily navigable.
## What's next for HandyTrack
Well, we obviously need to make a much more user-friendly app. We would also want to create a database to store the values of multiple users, so that we can not only track individual users, but also to store data of our own and use the trends of different users to compare to the individuals, in order to create more accurate diagnostics. | ## Inspiration
Physiotherapy is expensive for what it provides you with: a therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution, and the technology exists to help people in need of medical aid who don't have the time and money to see a real therapist every week.
## What it does
IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they are performing simple arm exercises for recovery. A 3d interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application.
## How WE built it
On the embedded-system side of things, we used a single Raspberry Pi for all the sensor processing. The Pi is in charge of interfacing with one IMU, while an Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device, which displays the 3D interactive GUI and processes the ML data. All the data in the backend is relayed and managed using ROS. This data is then uploaded to Firebase, where the information is saved on the cloud and can be accessed anytime by a smartphone. Firebase also handles plotting the data to give accurate numerical feedback on orientation, trajectory, and improvement over time.
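A rough sketch of the ROS-to-Firebase relay described above might look like this in Python; the topic name and Firebase URL are placeholders, not the ones we actually used.

```python
import json
import requests
import rospy
from sensor_msgs.msg import Imu

FIREBASE_URL = "https://example-project.firebaseio.com/arm_session.json"  # hypothetical

def imu_callback(msg):
    sample = {
        "stamp": msg.header.stamp.to_sec(),
        "orientation": [msg.orientation.x, msg.orientation.y,
                        msg.orientation.z, msg.orientation.w],
        "linear_acceleration": [msg.linear_acceleration.x,
                                msg.linear_acceleration.y,
                                msg.linear_acceleration.z],
    }
    # POST appends the sample under an auto-generated key via Firebase's REST API
    requests.post(FIREBASE_URL, data=json.dumps(sample), timeout=2)

if __name__ == "__main__":
    rospy.init_node("imu_cloud_relay")
    rospy.Subscriber("/arm/imu", Imu, imu_callback)  # topic name is an assumption
    rospy.spin()
```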
## Challenges WE ran into
Hooking up 2 IMUs to the same Raspberry Pi is very difficult. We attempted to create a multiplexer system with little luck.
To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult.
Another challenge we ran into was creating training data that was general enough and creating a preprocessing script that was able to overcome the variable size input data issue.
The last one was setting up a firebase connection with the app that supported the high data volume that we were able to send over and to create a graphing mechanism that is meaningful. | ## Inspiration
I was inspired to make this device while sitting in physics class. I really felt compelled to make something that I learned inside the classroom and apply my education to something practical. Growing up I always remembered playing with magnetic kits and loved the feeling of repulsion between magnets.
## What it does
There is a base layer of small magnets all taped together so the North pole is facing up. There are hall effect devices to measure the variances in magnetic field that is created by the user's magnet attached to their finger. This allows the device to track the user's finger and determine how they are interacting with the magnetic field pointing up.
## How I built it
It is built using the Intel Edison. Each hall effect device is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate. This determines where the user's finger is. From there, the analog data is sent via serial port to a program on the computer that demonstrates that it works: it simply takes the data and maps the motion of the object.
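To give an idea of the receiving side, here is a minimal Python sketch of reading the serial data and mapping a triggered sensor to a screen position; the grid size, port name, and message format are assumptions for illustration.

```python
import serial  # pip install pyserial

port = serial.Serial("COM3", 9600, timeout=1)  # port name is an assumption

GRID_COLUMNS, GRID_ROWS = 4, 4        # assumed layout of the hall-effect sensors
SCREEN_W, SCREEN_H = 800, 600

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    # Assume the board prints the index of the triggered sensor, e.g. "10"
    try:
        index = int(line)
    except ValueError:
        continue
    col, row = index % GRID_COLUMNS, index // GRID_COLUMNS
    # Map the sensor's position on the plate to a position on screen
    x = int((col + 0.5) / GRID_COLUMNS * SCREEN_W)
    y = int((row + 0.5) / GRID_ROWS * SCREEN_H)
    print("cursor ->", x, y)
```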
## Challenges I ran into
There were many challenges I faced, two of which dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which means they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would have allowed the device to be more accurate. The other issue was dealing with a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together because they are in an unstable configuration to create an almost uniform magnetic field on the base. Another problem I ran into was dealing with the Edison: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up dealing with serial ports, and I initially tried mapping it into a Unity game.
## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device as a way to prove the concept of creating a more interactive environment for the user with a sense of touch, rather than devices like the Kinect and Leap Motion that track your motion in thin air without any real interaction. One area where this concept can be useful is in learning environments, or in helping people in physical therapy learn to do things again after a tragedy, since it is always better to learn with a sense of touch.
## What I learned
I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works on a machine level.
## What's next for Tactile Leap Motion
Creating a better prototype with better hardware(stronger magnets and more accurate sensors) | winning |
## Inspiration
Oftentimes health information is presented in a very clinical manner that is unfriendly for kids; there's too much medical jargon, and children have difficulty engaging with the information. We wanted to turn learning about things like diabetes, first aid, and other medical ailments into a fun and interactive experience by utilizing superheroes! Our goal is to help spread awareness and educate students about various health topics in a fun and exciting manner.
## What it does
Marvel Medical Dictionary (MMD) is a mobile augmented reality experience that allows users to learn about different health topics from their favorite Marvel superheroes. After the user searches a topic like diabetes or spider bites, MMD utilizes natural language processing on data from the Marvel API to select a superhero that is closely related to the search query. For example, if our bite-sized hero/heroine searched for spider bites, Spider-Man would be there to provide easily understandable information. Users are able to "watch" Spider-Man and other Avengers on their mobile device and learn about different types of health issues.
## How we built it
We built MMD with the Unity game engine and C#, and imported 3D models found online. We also utilized the Flask Python framework to retrieve health information and Marvel API data, as well as to run our natural language processing code.
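As a rough illustration of the matching step, the Flask side could look something like the sketch below; the topic-to-hero table is made up for the example, since the real associations come from the Marvel API data.

```python
from difflib import get_close_matches
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative mapping from health-topic keywords to Marvel heroes;
# in practice these associations would come from the Marvel API data.
TOPIC_TO_HERO = {
    "spider bite": "Spider-Man",
    "radiation": "Hulk",
    "frostbite": "Iceman",
    "heart condition": "Iron Man",
}

@app.route("/hero")
def hero_for_topic():
    query = request.args.get("topic", "").lower()
    # Fuzzy-match the query against known topics and fall back to a default hero
    match = get_close_matches(query, TOPIC_TO_HERO.keys(), n=1, cutoff=0.4)
    hero = TOPIC_TO_HERO[match[0]] if match else "Captain America"
    return jsonify({"topic": query, "hero": hero})

if __name__ == "__main__":
    app.run(port=5000)
```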
## Challenges we ran into
A little bit of everything. We had limited experience with Unity and had never developed in AR before. This was also our first time working with web development, and we had a lot of \_fun\_ trying to find bugs in our Python code, as well as learning the beauty of JavaScript and its relationship with HTML.
## What's next for Marvel Medical Dictionary
Moving forward we would love to integrate more Marvel character models and add more dynamic movements and animations to the AR experience. We also want to more closely integrate engagement principles to increase information retention in regards to health knowledge. Although, MMD was just an idea created on a whim, in the future it could bring awareness and health education to young children all across the world. | ## Inspiration
Every year, our school does a Grand Challenges Research project where they focus on an important topic in the world. This year, the focus is on mental health, with an emphasis on providing cost-effective treatment and making it accessible to everyone. We all may know someone who has a phobia, came back from a tour in the military, or is living with another mental illness, 1 in 5 Americans to be exact. As mental health awareness rises, the availability of appointments with counselors and treatments lessens. Additionally, we believe this could be used to provide at-home inquiries for people who are hesitant to get help. With Alexa M.D. we hope to use IoT (internet of things) to bring the necessary treatment to the patient for better access and lower cost, and also to reduce the stigma of mental illness.
## What it does
The user can receive information and various treatment options through Alexa M.D. First, the user speaks to Alexa through the Amazon echo, the central interface, and they can either inquire about various medical information or pick at-home treatment options. Through web-scraping of Web M.D. and other sites, Alexa M.D. provides information simply by asking. Next, Alexa M.D. will prompt the user with various treatment options which are a version of exposure therapy for many of the symptoms. The user will engage in virtual reality treatment by re-enacting various situations that may usually cause them anxiety or distress, but instead in a controlled environment through the Oculus Rift. Treatments will incrementally lessen the user's anxieties; they can use the Leap Motion to engage in another dimension of treatment when they are ready to move to the next step. This virtualizes an interaction with many of the stimuli that they are trying to overcome. When the treatment session has concluded, Alexa M.D. will dispense the user's prescribed medication through the automated medicine dispenser, powered by the Intel Edison. This ensures users take appropriate dosages while also encouraging them to go through their treatment session before taking their medication.
## How we built it
We used the Alexa Skills Kit to teach the Amazon Echo to recognize new commands. This enables communication with both the Oculus and our automated medicine dispenser through our backend on Firebase. We generated various virtual environments in Unity; the Leap Motion is connected to the Oculus, which enables the user to interact with their virtual environment. When prompted with medical questions, Alexa M.D. uses web scraping of various medical websites, including Web M.D., to produce accurate responses. To make the automated medicine dispenser, we 3D printed the dispensing mechanism and laser cut acrylic to provide the structural support. The dispenser is controlled by a servo motor via the Intel Edison and controls the output of the medication as prescribed by Alexa M.D.
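A stripped-down sketch of a generic Alexa Skills Kit request handler for this flow is shown below; the intent and slot names are assumptions, and `lookup_answer` stands in for the web-scraping step.

```python
def lookup_answer(question):
    """Stand-in for the web-scraping step against Web M.D. and similar sites."""
    return "Drink water and rest."  # placeholder response

def handle_request(event):
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "MedicalQuestionIntent":
        question = request["intent"]["slots"]["Question"]["value"]
        speech = lookup_answer(question)
    else:
        speech = "Welcome to Alexa M.D. Ask me a medical question."
    # Standard Alexa Skills Kit response envelope
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```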
## Challenges we ran into
We found it difficult to sync the various components (Oculus, Intel Edison, Amazon Alexa) and to communicate between all three pieces.
## Accomplishments that we're proud of
The Internet of Things is the frontier of technology, and we are proud of integrating the 3 very distinct components together. Additionally, the pill dispenser was sketched and created all within the span of the hackathon, and we were able to utilize various new methods such as laser cutting.
## What we learned
Through the weekend, we learned a great deal about working with Amazon Web Services, as well as Amazon Alexa and how to integrate these technologies. Additionally, we learned about using modeling software for both 3D printing and laser cutting. Furthermore, we learned how to set up the Arduino shield for the Intel Edison and how to integrate the Leap Motion with the Oculus Rift.
## What's next for Alexa M.D.
We hope that this can become available for all households, and that it can reduce the cost necessary for treatments, as well as improve access to such treatments. Costs for regular treatment include transportation, doctors and nurses, pharmacy visits, and more. It can be a first step for people are are hesitant to consult a specialist, or a main component of long-term treatment. Some mental illnesses, such as PTSD, even prevent patients from being able to interact with the outside world, which present difficulties when going to seek treatment. Additionally, we hope that this can reduce the stigma of treatment of mental illnesses by integrating such treatments easily into the daily lives of users. Patients can continue their treatments in the privacy of their own home where they won't feel any pressures. | ## Inspiration:
Our inspiration stems from the identification of two critical problems that patients face in the health industry: information overload, and inadequate support for patients post-diagnosis that results in isolation. We saw an opportunity to leverage computer vision, machine learning, and user-friendly interfaces to simplify the way diabetes patients interact with their health information and to connect individuals with similar health conditions and severity.
## What it does:
Our project is a web app that fosters personalized diabetes communities while alleviating information overload to enhance the well-being of at-risk individuals. Users can scan health documents, receive health predictions, and find communities that resonate with their health experiences. It streamlines the entire process, making it accessible and impactful.
## How we built it:
We built this project collaboratively, combining our expertise in various domains. Frontend development was done using Next.js, React, and Tailwind CSS. We leveraged components from <https://www.hyperui.dev> to ensure scalability and flexibility in our project. Our backend relied on Firebase for authentication and user management, PineconeDB for the creation of curated communities, and TensorFlow for the predictive model. For the image recognition, we used React-webcam for capture and Tesseract for optical character recognition and data parsing. We also used tools like Figma, Canva, and Google Slides for design, prototyping, and presentation. Finally, we used the Discord.py API to automatically generate the user communication channels.
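For the document-scanning step, a minimal sketch of the OCR-and-parse flow could look like the following; the field names, units, and regular expressions are illustrative assumptions rather than the exact ones we shipped.

```python
import re
import pytesseract          # pip install pytesseract (requires the tesseract binary)
from PIL import Image

# Illustrative patterns for a few common blood-report fields (units assumed)
FIELD_PATTERNS = {
    "glucose_mg_dl": r"glucose\D*(\d+\.?\d*)",
    "hba1c_percent": r"hba1c\D*(\d+\.?\d*)",
    "cholesterol_mg_dl": r"cholesterol\D*(\d+\.?\d*)",
}

def parse_blood_report(image_path):
    """Run OCR on a scanned report and pull out any recognizable numeric fields."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    values = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            values[field] = float(match.group(1))
    return values

if __name__ == "__main__":
    print(parse_blood_report("report.jpg"))
```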
## Challenges we ran into:
We encountered several challenges throughout the development process. These included integrating computer vision models effectively, managing the flow of data between the frontend and backend, and ensuring the accuracy of health predictions. Additionally, coordinating a diverse team with different responsibilities was another challenge.
## Accomplishments that we're proud of:
We're immensely proud of successfully integrating computer vision into our project, enabling efficient document scanning and data extraction. Additionally, building a cohesive frontend and backend infrastructure, despite the complexity, was a significant accomplishment. Finally, we take pride in successfully completing our project goal, effectively processing user blood report data, generating health predictions, and automatically placing our product users into personalized Discord channels based on common groupings.
## What we learned:
Throughout this project, we learned the value of teamwork and collaboration. We also deepened our understanding of computer vision, machine learning, and front-end development. Furthermore, we honed our skills in project management, time allocation, and presentation.
## What's next for One Health | Your Health, One Community.:
In the future, we plan to expand the platform's capabilities. This includes refining predictive models, adding more health conditions, enhancing community features, and further streamlining document scanning. We also aim to integrate more advanced machine-learning techniques and improve the user experience. Our goal is to make health data management and community connection even more accessible and effective. | partial |
## Inspiration
Badminton boosts your overall health and offers mental health benefits. Doing sports makes you [happier](https://www.webmd.com/fitness-exercise/features/runners-high-is-it-for-real#1) or less stressed.
Badminton is the fastest racket sport.
Of all the sports equipment that a person accelerates with a push or a hit, the badminton shuttlecock reaches the highest speed.
Badminton is the second most popular sport in the world after football
Badminton is an intense sport and one of the three most physically demanding team sports.
Over the course of a game, a badminton player will "run" up to 10 kilometers horizontally, and about a kilometer vertically.
Benefits of playing badminton
1. Strengthens heart health.
Badminton is useful in that it increases the level of "good" cholesterol and reduces the level of "bad" cholesterol.
2. Reduces weight.
3. Improves the speed of reaction.
4. Increases muscle endurance and strength.
5. Development of flexibility.
6. Reduces the risk of developing diabetes.
Active people are 30-50% less likely to develop type 2 diabetes, according to a 2005 Swedish study.
7. Strengthens bones.
Badminton potentially reduces subsequent bone loss and helps prevent the development of various diseases. In any case, moderate play will help develop joint mobility and strengthen the joints.
![](https://i.imgur.com/Fre5CiD.png)
However, statistics show that increased screen time leads to obesity, sleep problems, chronic neck and back problems, depression, anxiety and lower test scores in children.
![](https://www.nami.org/NAMI/media/NAMI-Media/Infographics/NAMI_MentalHealthCareMatters_2020_th-734.png)
With the decentralized storage provider IPFS and blockchain technology, we created a decentralized platform for you to learn about playing badminton.
We all know that sports are great for your physical health. Badminton also has many psychological benefits.
## What it does
Web Badminton Dapp introduces users to the sport of badminton and also contains an item store to track and ledger the delivery of badminton equipment.
Each real equipment item is ledgered via a digital one, with a smart contract logic system in place to determine demand and track items. When delivery is completed, the DApp's ERC1155 NFTs should be exchanged for the physical items.
A great win for the producers is to save on costs with improved inventory tracking and demand management.
Web Badminton DApp succeeds where off-chain software ledgering products fail, because those may go out of service, need updates, or crash with data losses. Web Badminton DApp is a very low-cost business systems management tool.
While competing software-based ledgering products carry monthly and/or annual base fees, the only new costs accrued by a business utilizing the DApp are for new contract deployments. A new contract for a new batch of items is only needed every few months, based on demand and the delivery schedule.
In addition, we created a decentralized newsletter subscription list that we connected to web3.storage.
## How we built it
We built the application using JavaScript, Next.js, React, Tailwind CSS, and the wagmi library to connect to the MetaMask wallet. The application is hosted on Vercel. The newsletter list data is stored on IPFS with web3.storage.
The contract is built with Solidity and Hardhat. The Polygon Mumbai testnet and the LUKSO L14 testnet host the smart contract.
Meanwhile the Ipfs data is stored using nft.storage. | ## Inspiration
Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact in people's everyday lives. We also wanted to make an affordable solution, as most solutions to paralysis cost thousands of dollars and are inaccessible. We wanted something that was modular, that we could 3D print, and that we could make open source for others to use.
## What it does and how we built it
The main component is a bionic hand assistant called The PulseGrip. We used an ECG sensor in order to detect electrical signals. When it detects your muscles are trying to close your hand, it uses a servo motor to close your hand around an object (a foam baseball, for example). If it stops detecting a signal (you're no longer trying to close), it will loosen your hand back to a natural resting position. Along with this, at all times it sends a signal through websockets to our Amazon EC2 server and game. This is stored in a MongoDB database, and using API requests we can communicate between our games, the server, and the PulseGrip. We can track live motor speed, angles, and whether it's open or closed. Our website is a full-stack application (React styled with Tailwind on the front end, Node.js on the backend). Our website also has games that communicate with the device to test the project and provide entertainment. We have one to test continuous holding and another for rapid inputs; these could be used in recovery as well.
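The core control logic is simple enough to sketch; the snippet below is a Python-flavored illustration of the threshold loop (the real version runs on the microcontroller, and the threshold, angles, and helper functions are made up for the example).

```python
import time

CLOSE_THRESHOLD = 550   # assumed ADC reading indicating a "close hand" effort
OPEN_ANGLE, CLOSED_ANGLE = 10, 150  # assumed servo angles

def read_ecg():
    """Stand-in for reading the ECG sensor's analog output."""
    return 0

def set_servo_angle(angle):
    """Stand-in for driving the grip servo."""
    pass

def report_state(is_closed, angle):
    """Stand-in for the websocket message streamed to the EC2 server."""
    pass

while True:
    signal = read_ecg()
    closed = signal > CLOSE_THRESHOLD
    angle = CLOSED_ANGLE if closed else OPEN_ANGLE
    set_servo_angle(angle)        # close around the object or relax to rest
    report_state(closed, angle)   # stream live state for the games/dashboard
    time.sleep(0.05)
```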
## Challenges we ran into
This project forced us to consider different avenues and work through difficulties. Our main problem was when we fried our EMG sensor, twice! This was a major setback since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead, this is designed for heartbeats but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our websocket didn't work. We troubleshooted for an hour looking at the Wifi, the device itself and more. Without this, we couldn't send data from the PulseGrip to our server and games. We decided to ask for some mentor's help and reset the device completely, after using different libraries we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem.
## Accomplishments that we're proud of
Firstly, just getting the device working was a huge achievement, as we had so many setbacks and times we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project: we have a physical side with 3D-printed materials, sensors, and complicated algorithms, but we also have a game side, with two (questionably original) games that can be used. They are not just random games, but ones that test the user in two different ways that are critical to using the device: short bursts and longer-term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats on the device.
## What's next for PulseGrip
* working to improve sensors, adding more games, seeing how we can help people
We think this project had a ton of potential and we can't wait to see what we can do with the ideas learned here.
## Check it out
<https://hacks.pulsegrip.design>
<https://github.com/PulseGrip> | ## Inspiration
We read online that quizzes help people retain the information they learn, and we figured if we could make it fun then more people would want to study. Oftentimes it's difficult to get study groups together, but everyone is constantly on their phones so a quick mobile quiz wouldn't be hard to do.
## What it does
Cram is a live social quiz app for students to study for courses with their classmates.
## How we built it
We created an iOS app in Swift and built the backend in Python.
## Challenges we ran into
It's very difficult to generate questions and answers from a given piece of text, so that part of the app is still something we hope to improve on.
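For a sense of what a first-pass generator looks like, here is a naive cloze-style sketch in Python; the keyword heuristic is an illustrative assumption, not our production logic.

```python
import re

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "that", "it"}

def generate_questions(text):
    """Turn each sentence into a fill-in-the-blank question by hiding its longest keyword."""
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z']+", sentence)
        keywords = [w for w in words if w.lower() not in STOPWORDS]
        if not keywords:
            continue
        answer = max(keywords, key=len)
        prompt = re.sub(rf"\b{re.escape(answer)}\b", "_____", sentence, count=1)
        questions.append({"prompt": prompt, "answer": answer})
    return questions

print(generate_questions("Mitochondria are the powerhouse of the cell."))
```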
## What's next for Cram
Next, we plan on improving our automatic question generation algorithm to incorporate machine learning to check for question quality. | winning |
## Inspiration
Old technology has always had a certain place in our hearts. It is fascinating to see such old and simple machines produce such complex results. That's why we wanted to create our own emulator of an 8-bit computer: to learn about and explore this topic, and also to make it accessible to others through this learning software.
## What it does
It simulates the core features of an 8-bit computer. We can write low-level programs in assembly and have them executed on the emulator. It also displays a terminal output to show the results of the program, as well as a window showing the actual memory state throughout the program's execution.
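The emulator itself is written in Java, but the heart of any design like this is the same fetch-decode-execute loop; the toy Python sketch below illustrates the idea with a made-up two-instruction ISA (not our actual instruction set).

```python
# Toy 8-bit machine: opcode 0x01 = LDA imm (load A), 0x02 = ADD imm, 0xFF = HLT
memory = [0x01, 0x05, 0x02, 0x03, 0xFF] + [0] * 251   # 256 bytes of RAM
acc, pc, running = 0, 0, True

while running:
    opcode = memory[pc]              # fetch
    operand = memory[pc + 1]
    if opcode == 0x01:               # decode + execute
        acc = operand & 0xFF
        pc += 2
    elif opcode == 0x02:
        acc = (acc + operand) & 0xFF  # 8-bit wrap-around
        pc += 2
    elif opcode == 0xFF:
        running = False
    else:
        raise ValueError(f"Unknown opcode {opcode:#04x} at {pc}")

print(f"A = {acc}")  # prints A = 8
```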
## How we built it
Using Java, Imgui and LWJGL.
## Challenges we ran into
The custom design of the computer was quite challenging to get right, as we were trying to keep the project reasonable yet engaging. Getting information on how 8-bit computers work, and understanding it in less than a day, also proved to be hard.
## Accomplishments that we're proud of
We are proud to present an actual working 8-bit computer emulator that can run custom code written for it.
## What we learned
We learned how to design a computer from scratch, as well as how assembly works and can be used to produce a wide variety of outputs.
## What's next for Axolotl
Axolotl can be improved by adding more modules to it, like audio and a more complex gpu. Ultimately, it could become a full-fledged computer in itself, capable of any prowess anynormal computer can accomplish | ## Inspiration
As a team, we found that there was no portable mouse on the market that could fit us all comfortably. So we figured, why not make a portable mouse that perfectly conforms to our hand? This was the inspiration for the glove mouse; a mouse that seamlessly integrates into our daily life while also providing functionality and comfort.
## What it does
Our project integrates a mouse into a glove, using planar movement of the hand to control the position of the cursor. The project features two push buttons on the fingertips which can control left and right click.
## How we built it
At the core of our project, we utilized an Arduino Uno to transmit data from our push buttons and 6-axis accelerometer module to the computer. Each module sends analog signals to the Arduino, which we then collect with a C program running on the computer. This raw acceleration data is then processed in Python using integration to get the velocity of the cursor, which is then used to output a corresponding cursor movement on the host computer.
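A simplified sketch of that Python stage is shown below; `read_acceleration` stands in for the hand-off from the C collector, and pyautogui is just one possible way to emit the cursor movement, with made-up scaling constants.

```python
import time
import pyautogui  # pip install pyautogui; one possible way to move the OS cursor

PIXELS_PER_METER = 4000   # assumed scaling factor, tuned by hand
DT = 0.02                 # assumed sample period in seconds

def read_acceleration():
    """Stand-in for receiving one (ax, ay) sample in m/s^2 from the C collector."""
    return 0.0, 0.0

vx = vy = 0.0
while True:
    ax, ay = read_acceleration()
    # Integrate acceleration once to get velocity, then use velocity as cursor speed
    vx += ax * DT
    vy += ay * DT
    vx *= 0.95  # crude damping to limit drift from sensor noise
    vy *= 0.95
    pyautogui.moveRel(int(vx * PIXELS_PER_METER * DT), int(vy * PIXELS_PER_METER * DT))
    time.sleep(DT)
```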
## Challenges we ran into
One major challenge the team faced was that our board, the Arduino Uno, didn't have native support for Arduino's mouse library, meaning we needed to find a different way to interface our sensors with a computer input. Our solution, based on forums and recommendations online, was to output our data to Python using C, where we could then manipulate the data and control the mouse using a Python script. However, since Python is higher level than C, we found that the C program collected data faster than the Python code could receive it. To solve this, we implemented a software enable from Python to C to synchronize the collection of the data.
## Accomplishments that we're proud of
Despite using a board that was incompatible with Arduino's built-in mouse library, we were able to figure out a workaround to implement mouse capabilities on our Arduino board.
## What we learned
Through this project, the team learned a lot about interfacing between different programming languages with Arduinos. Additionally, the team gained experience with scripts for data collection and controlling timings so programs can interact at normal intervals.
## What's next for Glove Mouse
In the future, we want to make our cursor movement smoother on the host PC by spending more time to calibrate the polling rate, response time, and sensitivity. Additionally, we would look to reduce the size of the device by creating an IC to replace our Arduino, add a Bluetooth transceiver, and add a small battery. | ## Inspiration
In online documentaries, we saw visually impaired individuals whose vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue.
## What it does
When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to.
## How we built it
We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API.
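A pared-down Python sketch of the capture-to-speech flow is shown below; the Azure region, key, and the `speak` helper are placeholders for illustration.

```python
import requests

AZURE_KEY = "YOUR_KEY"  # placeholder credentials
OCR_URL = "https://westus.api.cognitive.microsoft.com/vision/v3.2/ocr"  # region is an assumption

def extract_text(image_path):
    """Send the captured photo to the Azure Computer Vision OCR endpoint."""
    with open(image_path, "rb") as f:
        response = requests.post(
            OCR_URL,
            headers={"Ocp-Apim-Subscription-Key": AZURE_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    response.raise_for_status()
    words = []
    for region in response.json().get("regions", []):
        for line in region["lines"]:
            words.extend(w["text"] for w in line["words"])
    return " ".join(words)

def speak(text):
    """Stand-in for the call to the Azure Speech API that reads the text aloud."""
    print("Speaking:", text)

if __name__ == "__main__":
    # In the real device this runs when the GPIO button press is detected
    speak(extract_text("capture.jpg"))
```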
## Challenges we ran into
Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity in favour of both the user and us, the developers.
## Accomplishments that we're proud of
This is our very first Makeathon, and we were proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project.
## What we learned
We learned how to operate and program a DragonBoard, as well as connect various APIs together.
## What's next for Aperture
We want to implement hot-key detection instead of the push button to eliminate the need of tactile input altogether. | partial |
## Inspiration
Coming from life science-related backgrounds, we see a lot of unreliable information circulating on our social media and in news articles. Often, the information that could help clear up misinformation is not very accessible.
## What it does
We want to create a convenient application to guide people to reputable information typically shared between health professionals and researchers.
## How we built it
We developed a Google Chrome extension to obtain information from an article, which we would pass through a Python program to search in the PMC/MEDLINE database for relevant articles.
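The lookup itself can be done against NCBI's public E-utilities; below is a pared-down sketch of the kind of query our Python program performs (our actual query building was more involved).

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def find_articles(query, max_results=5):
    """Search PubMed/MEDLINE for articles related to terms pulled from the page."""
    ids = requests.get(ESEARCH, params={
        "db": "pubmed", "term": query, "retmax": max_results, "retmode": "json",
    }).json()["esearchresult"]["idlist"]
    if not ids:
        return []
    summaries = requests.get(ESUMMARY, params={
        "db": "pubmed", "id": ",".join(ids), "retmode": "json",
    }).json()["result"]
    return [summaries[pmid]["title"] for pmid in ids]

if __name__ == "__main__":
    for title in find_articles("covid-19 vaccine efficacy"):
        print(title)
```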
## Challenges we ran into
We definitely had a lot of trouble in the early stages of figuring out how to create and integrate a functional Chrome extension with a python script. Furthermore, we had some problems extracting information from the relevant databases. The wonderful mentors at this hackathon helped us a lot to overcome some of the problems that arose during the course of our development.
## Accomplishments that we're proud of
We've come a long way, learning how to build a Chrome extension with JavaScript, a language which half of us had never used before. We managed to put together a Python script and two prototype Chrome extensions in less than 30 hours, which is especially impressive given that most of us did not have a strong computer science background.
## What we learned
We really learned how to work together as a team and coordinate across different time zones. We learned a lot of new languages and tools for software development. The workshops also helped a lot!
## What's next for healthINFO
We plan to expand the application to run more smoothly on any website, and include more databases. In addition, we would like to deliver the information to our audience in a more accessible manner in the future. | ## Inspiration
We were looking for ways to use some of the most wild, inaccurate, out-there claims about the Covid-19 Vaccine to instead produce good outcomes.
## What it does
Aggregates misinformation regarding the COVID-19 vaccine in one location to empower public health officials and leaders to quickly address these false claims.
## How we built it
We built it with React front-end, Node.js back-end, and various Python libraries (pandas, matplotlib, snscrape, fuzzywuzzy, and wordcloud) to fuel our data visualizations.
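For instance, scoring scraped tweets against a list of known false claims with fuzzywuzzy and pandas looks roughly like the sketch below; the example claims and threshold are made up.

```python
import pandas as pd
from fuzzywuzzy import fuzz  # pip install fuzzywuzzy[speedup]

KNOWN_CLAIMS = [
    "the vaccine contains a microchip",
    "the vaccine alters your dna",
]

tweets = pd.DataFrame({"text": [
    "heard the covid vaccine has a microchip in it??",
    "got my second dose today, feeling fine",
]})

def best_match_score(text):
    """Return the highest fuzzy-match score of a tweet against the known claims."""
    return max(fuzz.partial_ratio(text.lower(), claim) for claim in KNOWN_CLAIMS)

tweets["misinformation_score"] = tweets["text"].apply(best_match_score)
flagged = tweets[tweets["misinformation_score"] > 70]
print(flagged)
```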
## Challenges we ran into
* Fuzzy matching, figuring out how to apply method to each column without having to declare a new method each time
* Figuring out how to use Python with Node.js; when child\_process.spawn didn't work, I had to work around it by using the PythonShell module in Node, which just ran the Python scripts based on my local Python environment instead of in-project.
* Figuring out how to get python code to run in both VS code and with nodejs
* We found a workaround that allowed the Python script to be executed by the Node.js server, but it requires the local machine to have all the Python dependencies installed D:
* THIS IS EVERYBODY’S FIRST HACKATHON!!!
## Accomplishments that we're proud of
We're proud to have worked with a tech stack we are not very familiar with and completed our first hackathon with a working(ish) product!
## What we learned
* How to web scrape from Twitter using Python and snscrape, how to perform fuzzy matching on pandas data frames, and how to become more comfortable and proficient in data analysis and visualization
* How to work with a web stack structure and communicate between front-end and back-end
* How to integrate multiple languages even despite incompatibilities
## What's next for COVAX (mis)Info
Addressing other types of false claims (for instance, false claims about election interference or fraud), and expanding to other social media platforms. Hopefully finding a way to more smoothly integrate Python scripts and libraries too! | ## Inspiration
Video is a great form of media, but it's often too slow and too long. We want a way to find the most important parts of a video and only watch those parts.
## What it does
Users can submit a video to Scribr, and Scribr will automatically find the most important parts of the video and provide a brand new video, trimmed to just the important moments.
## How we built it
Scribr first transcribes the video using the RevSpeech API. Then it does a form of sentiment analysis on the transcribed videos, using NLP, to find the most important sentences. It then finds the timestamps of all of the important videos, and then re-cuts the video to trim down to only those sentences.
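The final re-cut step can be sketched with moviepy as shown below; here the important-sentence timestamps are assumed to have already been computed upstream.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# (start_seconds, end_seconds) spans of the sentences scored as most important;
# in the real pipeline these come from the transcript's word-level timestamps.
important_spans = [(12.0, 19.5), (47.2, 58.0), (90.1, 101.3)]

source = VideoFileClip("lecture.mp4")
highlights = [source.subclip(start, end) for start, end in important_spans]
summary = concatenate_videoclips(highlights)
summary.write_videofile("lecture_trimmed.mp4")
```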
## Challenges we ran into
So many challenges. From installing Python on a windows computer (for a brand new coder!) to automatically trimming videos on a server... we were genuinely shocked the first time the entire process worked, and we played a trimmed video that actually worked.
It was so seamless that we actually weren't sure if it had worked the first time.
## Accomplishments that we're proud of
We are proud that we came together with a diverse skillset, including someone who just learned how to code, and were able to create a real, working system.
## What we learned
One of us learned to code. The others learned Flask, RevSpeech, how a computationally intense backend system worked, how to work quickly, and how to power through until the end.
## What's next for Team Two
I say dance, they say 'How high?' We're got many more ideas that we want to add to this project. And as soon as we sleep for a week or two, we'll get back to adding them. | losing |
>
> `2023-10-10 Update`
>
> We've moved all of our project information to our GitHub repo so that it's up to date.
> Our project is completely open source, so please feel free to contribute if you want!
> <https://github.com/soobinrho/BeeMovr>
>
>
> | ## Inspiration
We realized how visually-impaired people find it difficult to perceive the objects coming near to them, or when they are either out on road, or when they are inside a building. They encounter potholes and stairs and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make work and public places completely accessible!
## What it does
This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection and integrates Google Assistant for outdoor navigation and all the other "smart activities" that the Assistant can do. The Assistant provides voice directions (which work well with Bluetooth devices), and the sensors help in avoiding obstacles, which increases self-awareness. Another beta feature is identifying moving obstacles and playing sounds so the person can recognize those moving objects (e.g., barking sounds for a dog).
## How we built it
It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and a microphone.
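A minimal sketch of the obstacle-alert loop on the Pi might look like this, assuming an HC-SR04-style ultrasonic sensor for depth; the pin numbers and alert threshold are made up for illustration.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18   # BCM pin numbers are assumptions
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm():
    """Measure distance with an HC-SR04-style ultrasonic sensor."""
    GPIO.output(TRIG, True)
    time.sleep(0.00001)               # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2  # speed of sound, round trip

try:
    while True:
        if distance_cm() < 80:          # assumed alert threshold in cm
            GPIO.output(BUZZER, True)   # buzz when an obstacle is close
        else:
            GPIO.output(BUZZER, False)
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```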
## Challenges we ran into
It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two of our members are high school students. Multi-threading in an embedded architecture was also a challenge for us.
## Accomplishments that we're proud of
After hours of grinding, we were able to get the Raspberry Pi working, as well as implement depth perception, location tracking using Google Assistant, and object recognition.
## What we learned
Working with hardware is tough: even though you could see what was happening, it was hard to interface the software and the hardware.
## What's next for i4Noi
We want to explore more ways where i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could integrate our another feature where we play sounds of living obstacles so special care can be done, for example when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference to the lives of the people. | ## Inspiration
There is head tracking for advanced headsets like the HTC Vive, but not for WebVR, or Google Daydream.
## What it does
Converts an RGB camera into a tracker for the Google Daydream. Users can even choose to build their own classifiers for different headsets.
## How we built it
On the front end, there is React VR, which calls through Express.js to our backend. The server is a Python OpenCV application that uses our classifier to determine the location of the headset in space.
To generate the classifier, we used OpenCV and C++ to automate the building of a training set. Then the collection, selection, and training of the samples was automated using a bash script.
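Once the cascade is trained, the Python server's detection loop is conceptually simple; a minimal sketch is below, with the cascade filename as a placeholder.

```python
import cv2

# The XML file is the cascade trained from our generated samples (name is illustrative)
cascade = cv2.CascadeClassifier("daydream_cascade.xml")
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in detections:
        # The bounding-box centre and size give a rough position estimate:
        # x/y locate the headset in the image plane, w approximates depth.
        cx, cy = x + w // 2, y + h // 2
        print({"x": cx, "y": cy, "apparent_width": w})
```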
## Challenges we ran into
* Debugging Chrome for Android
* WebVR blocks XMLHttpRequests
* Mac times out ports
* Mac does not communicate well with Android
* Little documentation for WebVR APIs
* Cross origin request denial
* Automating the sample generation
* Poor suppression of false positives
* Bad OpenCV documentation
* Failure of markerless tracking
* Data plumbing
* Request limit on ngrok
* Sensor drift on Android accelerometers
* Bracket wrangling in Python3 and Javascript
* Debugging Chrome for Android
* Damaged USB-C port on the only *vr ready* phone for Google Daydream
* Staying awake
* Downloading and installing the same version of opencv
* Tabs versus spaces
* Debugging Chrome for Android
* libcurl problems in C++
* scrapping C++ and starting over in python
* proxy time outs
## Accomplishments that we're proud of
It works on every platform, except in VR!
## What we learned
* Commit and push often
## What's next for Daydream Lighthouse
Waiting for more stable WebVR! | winning |
## Inspiration
While observing the different hardware options, we found the dust sensor especially outstanding in its versatility, and it struck us as exotic. Dust particulates in the air we breathe are an ever-present threat that is too often overlooked, and the importance of raising awareness for this issue became apparent. But retaining interest in an elusive topic would require an innovative form of expression, which left us stumped. After much deliberation, we realized that many of us had a subconscious affection for pets and their demanding needs. Applying this concept, Pollute-A-Pet approaches a difficult topic with care and concern.
## What it does
Pollute-A-Pet tracks the particulates in a person's breaths and records them in the behavior of adorable online pets. With a variety of pets, your concern may grow seeing the suffering that polluted air causes them, no matter your taste in companions.
## How we built it
Beginning in two groups, a portion of us focused on connecting the dust sensor to an Arduino, using Python to relay the Arduino's readings over Bluetooth to Firebase, and then reading and updating Firebase from our website using JavaScript. Our other group first created GIFs of our companions in Blender and Adobe before creating the website with HTML and JavaScript, with data-controlled behaviors that dictated the pets' actions.
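A minimal sketch of the Python bridge between the board and Firebase might look like this; the serial port and Firebase URL are placeholders, and we assume the board prints one reading per line.

```python
import requests
import serial  # pip install pyserial

FIREBASE_URL = "https://example-dustables.firebaseio.com/readings.json"  # hypothetical
port = serial.Serial("/dev/rfcomm0", 9600, timeout=2)  # Bluetooth serial port is an assumption

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    try:
        dust_ug_m3 = float(line)   # assume the board prints one reading per line
    except ValueError:
        continue
    # Append the sample so the pet website can react to it in real time
    requests.post(FIREBASE_URL, json={"dust": dust_ug_m3})
```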
## Challenges we ran into
The Dust-Sensor was a novel experience for us, and the specifications for it were being researched before any work began. Firebase communication also became stubborn throughout development, as javascript was counterintuitive to object-oriented languages most of us were used to. Not only was animating more tedious than expected, transparent gifs are also incredibly difficult to make through Blender. In the final moments, our team also ran into problems uploading our videos, narrowly avoiding disaster.
## Accomplishments that we're proud of
All the animations of the virtual pets we made were hand-drawn over the course of the competition. This was also our first time working with the feather esp32 v2, and we are proud of overcoming the initial difficulties we had with the hardware.
## What we learned
While we had previous experience with Arduino, we had not previously known how to use a feather esp32 v2. We also used skills we had only learned in beginner courses with detailed instructions, so while we may not have “learned” these things during the hackathon, this was the first time we had to do these things in a practical setting.
## What's next for Dustables
When it comes to convincing people to use a product such as this, it must be designed to be both visually appealing and not physically cumbersome. This cannot be said for our prototype for the hardware element of our project, which focused completely on functionality. Making this more user-friendly would be a top priority for team Dustables. We also have improvements to functionality that we could make, such as using Wi-Fi instead of Bluetooth for the sensors, which would allow the user greater freedom in using the device. Finally, more pets and different types of sensors would allow for more comprehensive readings and an enhanced user experience. | ## Inspiration
We were inspired by the large and growing problem of stray, homeless, and missing pets, and the ways in which technology could be leveraged to solve it: by raising awareness, adding incentive, and exploiting data.
## What it does
Pet Detective is first and foremost a chat bot, integrated into a Facebook page via Messenger. The chatbot serves two user groups: pet owners that have recently lost their pets, and good Samaritans that would like to help by reporting sightings. Moreover, Pet Detective provides a monetary incentive for such people by collecting donations from happily served users. Pet Detective provides the most convenient and hassle-free user experience to both user bases. A simple virtual button generated by the chatbot allows the reporter to let the bot collect location data. In addition, the bot asks for a photo of the pet and runs computer vision algorithms in order to determine several attributes and match factors. The bot then places a track on the reported pet and continues to alert the owner about potential matches by sending images. In the case of a match, the service sets up a rendezvous with a trusted animal care partner. Finally, Pet Detective collects data on these transactions and reports, and provides a data analytics platform to pet care partners.
## How we built it
We used messenger developer integration to build the chatbot. We incorporated OpenCV to provide image segmentation in order to separate the dog from the background photo, and then used Google Cloud Vision service in order to extract features from the image. Our backends were built using Flask and Node.js, hosted on Google App Engine and Heroku, configured as microservices. For the data visualization, we used D3.js.
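For the feature-extraction step, a minimal Cloud Vision sketch looks roughly like the following; the confidence cutoff is an arbitrary example, and the exact client calls can vary by library version.

```python
from google.cloud import vision  # pip install google-cloud-vision

def describe_pet(image_path):
    """Extract descriptive labels for a reported pet photo using Cloud Vision."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Keep reasonably confident labels; these become the match attributes
    return {label.description.lower(): label.score
            for label in response.label_annotations if label.score > 0.6}

if __name__ == "__main__":
    print(describe_pet("reported_dog.jpg"))
```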
## Challenges we ran into
Finding the right DB for our uses was challenging, as well as setting up and employing the cloud platform. Getting the chatbot to be reliable was also challenging.
## Accomplishments that we're proud of
We are proud of a product that has real potential to do positive change, as well as the look and feel of the analytics platform (although we still need to add much more there). We are proud of balancing 4 services efficiently, and like our clever name/logo.
## What we learned
We learned a few new technologies and algorithms, including image segmentation, and some Google cloud platform instances. We also learned that NoSQL databases are the way to go for hackathons and speed prototyping.
## What's next for Pet Detective
We want to expand the capabilities of our analytics platform and partner with pet and animal businesses and providers in order to integrate the bot service into many different Facebook pages and websites. | ## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images: the readings become integer arrays, and value ranges are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that will automatically be played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, and each will produce different images for the same audio input.
## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began this process by applying Firmata to our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours to certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value recorded by the Arduino.
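A condensed sketch of that pipeline, from Firmata analog reads to pygame drawing, is shown below; the port name, sample count, and the color/size mapping are simplified assumptions rather than our exact theme logic.

```python
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyACM0")   # port name is machine-specific
it = util.Iterator(board)
it.start()
pin = board.analog[2]
pin.enable_reporting()

pygame.init()
screen = pygame.display.set_mode((800, 600))

readings = []
for _ in range(200):              # collect a short burst of samples
    value = pin.read()            # returns a float in [0, 1] or None
    if value is not None:
        readings.append(value)
    pygame.time.wait(10)

for i, value in enumerate(readings):
    pygame.event.pump()
    # Louder readings get bigger, warmer circles; position follows the sample index
    radius = int(5 + value * 40)
    color = (min(255, int(value * 255)), 50, 200 - min(200, int(value * 200)))
    pygame.draw.circle(screen, color, (i * 4 % 800, int(value * 600)), radius)
    pygame.display.flip()
```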
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece. We would also want to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces. | partial |
## Inspiration
After witnessing countless victims of home disasters like robberies and hurricanes, we decided there must be a way to preemptively protect against such events. It's far too easy to wait until something bad happens to your home before doing anything to prevent it from happening again. That's why we came up with a way to incentivize people to take the steps to protect their homes against likely threats.
## What it does
Insura revolutionizes the ways to keep your home safe. Based on your location and historical data on incidents that typically fall under home insurance (burglary, flooding, etc.), Insura will suggest fixes around the house, calculating the potential premium savings if they are done properly. With the click of a button, you can see what needs to be done around the house to collect big savings and protect your home from future damage. Insura also connects with a user's insurance provider to allow users to send emails to insurance providers detailing the work that was done, backed by pictures of the work. Based on this, insurance providers can adjust premium prices as they see fit.
To incentivize taking active steps to make changes, Insura "gamifies" home repair by allowing people to set goals for task completion and compete with friends based on the savings they are achieving. The return on investment is therefore crowdsourced; by seeing what your friends are saving on certain fixes around the house, you can determine whether a fix is worth doing.
## How I built it
To build the application we mainly used Swift for the UI and the logic for displaying tasks and goals. We also created a server using Node to handle the mail to insurance providers. We used Heroku to deploy the application.
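To make the mail step concrete, here is a minimal sketch of sending a fix report with an attached photo. It is illustrative only: the actual server is written in Node, and the SMTP host, credentials, and file names below are placeholders.

```python
# Illustrative sketch only: the real Insura backend is a Node server; the SMTP
# settings, credentials, and file paths here are placeholders.
import smtplib
from email.message import EmailMessage

def send_fix_report(user_email, insurer_email, description, photo_path):
    """Email an insurance provider a description of a completed fix plus a photo."""
    msg = EmailMessage()
    msg["Subject"] = "Completed home fix - premium review request"
    msg["From"] = user_email
    msg["To"] = insurer_email
    msg.set_content(description)

    with open(photo_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="fix-evidence.jpg")

    with smtplib.SMTP("smtp.example.com", 587) as server:  # placeholder SMTP host
        server.starttls()
        server.login("username", "password")
        server.send_message(msg)
```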
## Challenges I ran into
We had a hard time finding free APIs for national crime and disaster data and integrating them into the application. In addition, we had a tough time authenticating users to send emails from their accounts.
## Accomplishments that I'm proud of
We are really proud of the way the UI looks. We took the time to design everything beforehand, and the outcome was great.
## What I learned
We learned a lot about iOS development, how to integrate the backend and frontend on the iOS application, and more about the complicated world of insurance.
## What's next for Insura
Next we plan on introducing heatmaps and map views to make full use of our API, so that users can see what is going on locally.
## Inspiration
Between 1994 and 2013 there were 6,873 natural disasters worldwide, which claimed 1.35 million lives, or almost 68,000 lives on average each year. In many of these disasters, people don't know where to go or which areas are safe. For example, during Hurricane Harvey, many people learned about dangers such as alligators or flooding from other people and through social media.
## What it does
* Post a natural disaster hazard in your area
* Crowd-sourced hazards
* Pulls government severe weather data
* IoT sensor system to take atmospheric measurements and display on map
* Twitter social media feed of trending natural disasters in the area
* Machine learning image processing to analyze posted images of natural disaster hazards
Our app **Eye in the Sky** allows users to post about various natural disaster hazards at their location, upload a photo, and share a description of the hazard. These points are crowd-sourced and displayed on a map, which also pulls live data from the web on natural disasters. It also allows users to view a Twitter feed of trending information on the natural disaster in their location. The last feature is the IoT sensor system, which takes measurements of the environment in real time and displays them on the map.
## How I built it
We built this app using Android Studio. Our user-created posts are hosted on a Parse database (backed by MongoDB). We pull data from the National Oceanic and Atmospheric Administration's severe weather data inventory. We used a Particle Electron to collect atmospheric sensor data, and used AWS to store this data as JSON.
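As a rough illustration of the sensor side, the sketch below pulls the stored readings and turns them into map markers. The feed URL and JSON field names are assumptions, not the team's actual AWS schema, and the real app does this from Android rather than Python.

```python
# Sketch only: the endpoint URL and JSON field names are placeholders.
import requests

SENSOR_FEED_URL = "https://example-bucket.s3.amazonaws.com/sensor-readings.json"

def fetch_sensor_markers():
    """Download the stored sensor readings and convert them into map markers."""
    readings = requests.get(SENSOR_FEED_URL, timeout=10).json()
    markers = []
    for r in readings:
        markers.append({
            "lat": r["latitude"],
            "lng": r["longitude"],
            "label": f'{r["temperature_c"]} C, {r["humidity_pct"]}% RH',
        })
    return markers

if __name__ == "__main__":
    for marker in fetch_sensor_markers():
        print(marker)
```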
## Challenges I ran into
We had some issues setting up an AWS instance to run our image processing on the backend, but other than that, decently smooth sailing.
## Accomplishments that I'm proud of
We were able to work really well together, using everyone's strengths as well as combining the pieces into a cohesive app that we're really proud of. Everyone's piece was integrated well into the app, and I think we demonstrated good software collaboration.
## What I learned
We learned how to effectively combine our code across various languages and databases, which is a big challenge in software as a whole. We learned Android Development, how to set up AWS, and how to create databases to store custom data. Most importantly, we learned to have fun!
## What's next for Eye in the Sky
In the future, we would like to add verification of people's self-reported hazards (through machine learning and through up-voting and down-voting)
We would also like to improve the efficiency of our app and reduce reliance on network because there might not be network, or very poor network, in a natural disaster
We would like to add more datasets from online, and to display storms with bigger bubbles, rather than just points
We would also like to attach our sensor to drones or automobiles to gather more weather data points to display on our map
## Inspiration
We were inspired by the issues related to coughing especially with regards to sickness.
## What it does
We have two programs, one that analyzes sounds to determine types of coughing/wheezing. The other uses an AI to detect movement, which acts as a controller to a maze game.
## How we built it
We utilized HTML, CSS, and Javascript to create these two programs.
## Challenges we ran into
We had some trouble finalizing our ideas and researching methods to accomplish our ideas.
## Accomplishments that we're proud of
The system appears to work to a certain extent.
## What we learned
We learned about the design process and how it is both iterative and useful. We also learned more about making machine learning models.
## What's next for AI Health Mini-Programs
The goal is to develop the system further so it can function more seamlessly. Then, for the maze idea, we hope to be able to apply it to VR headset systems. For the cough project, we hope to consult medical specialists to see how it can be improved and initially applied at scale.
## Inspiration
3/4 of us are out-of-state or international students, and we have had the shared experience of making trips to SFO via Uber, whether because of excess luggage or for safety. Trips to SFO can be excessively expensive, and to circumvent this, people often think to share the ride. However, current ride share options on Uber or Lyft do not actually split the cost evenly, and each individual is not effectively paying a price proportional to the distance of the trip and the space they occupy in the car. The only way to actually save money and pay a fair share of the price is to split the ride with someone you know. We've often seen club members, friends, or floormates reach out in public forums about interest in sharing an Uber. After all, people would feel more comfortable splitting the ride with a friend, an acquaintance, or just someone from the same college.
## What it does
The idea behind Cober is to provide a centralized platform for students from the same college to make or join ride share requests. Upon making an account, members can view active ride share requests and pick from them based on pick-up location, date, time, and a preference for the gender of other riders. Members can also post requests with the same information to be displayed on the website.
## How we built it
## Challenges we ran into
We ran into many problems with the backend. We started off with the sample code from CockroachDB, but soon realized our efforts were futile since we were trying to integrate a persistent database connection with serverless functions on Vercel, which threw us off track since we could not get it to work. We then completely switched gears and started using Prisma, but to our surprise there were no instructions on deployment, so we had to settle for implementing everything locally. In the end, we had all of our data and tables set up in the backend along with modular components made with Chakra UI, but we couldn't manage a full integration, which is why we have a short simulation of the table.
## Accomplishments that we're proud of
We're proud of overcoming obstacles! We're also proud of the front-end design of our website! One of us pulled an all-nighter for the first time in their life, and they are glad they did not hallucinate, and now all is well with a red bull (thanks sonr).
After spending 14 hours on the backend, it was a bit disappointing to not have everything completely functional, but we are proud of the fact that we stuck with it and completely changed our tracks several times to restart from scratch with a resilient attitude.
## What we learned
All of us were unfamiliar with the technologies that we used for this project, but we all learned quickly and adapted to eventually come out with a product that we are proud of. We split the project up into manageable chunks, each teammate focusing on a specific area, frequently checking back in to keep everyone on the same page.
## What's next for Cober
What we have at the moment is our MVP (minimum viable product), and after sharing just this with fellow hackers, we know the sky is the limit. Our plans include implementing more features such as a flagging system and a full profile page, expanding our resources to handle a large user base, and adding UI/UX features that will make for a smoother user experience.
## Inspiration
Our inspiration came from the fact that we are all relatively new drivers and terrified of busy intersections. Although speed is important when traveling from one spot to another, safety should always be highlighted when it comes to the road, because car accidents are among the leading causes of death worldwide.
## What it does
When the website is first opened, the user sees a map with many markers indicating where fatal collisions have happened. As noted in the legend at the top, the colours represent different collision frequencies. When the user specifies an address for the starting and ending locations, our algorithm will find the safest route in order to avoid all potentially dangerous/busy intersections. However, if the route must pass a dangerous intersection, our algorithm will still return it.
## How we built it
For the backend, we used JavaScript functions that took in the latitude and longitude of collisions in order to mark them on the map through the Google Maps API. We also had several functions to not only check whether the user's path would come across a collision, but also to check alternatives in which the user would avoid that intersection.
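The core of that path check is a simple proximity test between route points and known collision locations. Below is an illustrative Python version of the idea (the real implementation is in JavaScript); the 100 m threshold is an assumption.

```python
# Illustrative sketch of the route-safety check; the threshold is an assumption.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def route_is_safe(route_points, collision_points, threshold_m=100):
    """True if no point along the route comes within threshold_m of a fatal collision."""
    return all(
        haversine_m(rlat, rlon, clat, clon) > threshold_m
        for rlat, rlon in route_points
        for clat, clon in collision_points
    )

# Example: a two-point route checked against one known collision location.
print(route_is_safe([(43.6532, -79.3832), (43.6544, -79.3807)], [(43.66, -79.38)]))
```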
We were able to find an Excel spreadsheet listing all of Toronto's fatal collisions in the past 5 years and copied that into a SQL database. That database was then connected to Google Cloud SQL to serve as a public host, and, using Node.js, data was pulled from it to mark the specified collisions.
For the frontend, we used a mix of HTML, CSS, JavaScript, and Node.js to serve the web app to the user. Once a request is made for the two specified locations, Express reads the .JSON file and sends information back to other JavaScript files in order to display the optimal and safest path using the Google Maps API.
To host the website, a domain was registered on Domain.com and the site was launched by creating a simple virtual machine on Compute Engine. After creating a Linux machine, a basic Node.js server was set up and the domain was then connected to Google Cloud DNS. After verifying that we owned our domain via a DNS record, a bucket containing all the files was created on Google Cloud Storage and set to be publicly accessible.
## Challenges we ran into
None of us had used JavaScript or Google Cloud services before, so a challenge that kept arising was our unfamiliarity with new concepts (e.g., callbacks). In addition, it was difficult to set up and host the domain from Domain.com since we were new to web hosting. Lastly, Google Cloud was challenging since we were mainly using it to tie all aspects of the project together.
## Accomplishments that we're proud of
We're very proud of our final product. Although we were very new to JavaScript, Google Cloud services, and APIs, our team is extremely proud of utilizing all the resources provided at the hackathon. We searched the web as well as asked mentors for assistance. It was our determination and great time management that pushed us to ultimately finish the project.
## What we learned
We learned about JavaScript, Google APIs, and Google Cloud services. We were also introduced to many helpful tutorials (both videos and written tutorials online). We also learned how to deploy the app to a domain so that users worldwide can access it.
## What's next for SafeLane
Currently, our algorithm will return the optimal path avoiding all dangerous intersections. However, there may be cases where the travel time needed is tremendously more than the quickest path. We hope to only show paths that take at most 20-30% more travel time than the fastest path. The user will be given multiple options for paths they may take. If the user chooses a path with a potentially dangerous intersection, we will issue a warning stating all areas of danger.
We also believe that SafeLane can be expanded first to all of Ontario, and then eventually to a national/international scale. SafeLane can also be used by government/police departments to observe common collision areas and investigate how to make the roads safer.
## Inspiration
We wanted to watch videos together as a social activity during the pandemic but were unable to, as many platforms were hard to use or unstable. Therefore our team decided to create a platform that met our needs in terms of usability and functionality.
## What it does
Bubbles allows people to create viewing rooms and invite others to their rooms to watch synchronized videos. Viewers no longer have to conduct countdowns on their play buttons or keep a separate chat open while watching!
## How I built it
Our web app uses React for the frontend and interfaces with our Node.js REST API on the backend. The core of our app uses Solace PubSub+ and the MQTT protocol to coordinate events such as video play/pause, a new member joining your bubble, and instant text messaging. Users broadcast events with the topic being their bubble's room code, so only other people in the same room receive these events.
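The sketch below shows the room-code topic idea using the paho-mqtt Python client. It is only an illustration: the actual app talks to Solace PubSub+ from JavaScript, and the broker hostname here is a placeholder.

```python
# Illustrative Python sketch; the real app publishes from JavaScript, and the
# broker hostname below is a placeholder PubSub+ MQTT endpoint.
import json
import paho.mqtt.client as mqtt

ROOM_CODE = "ABCD42"              # every event in a bubble is published under its room code
TOPIC = f"bubbles/{ROOM_CODE}"

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    print(f"[{msg.topic}] {event['type']} at {event.get('position')}")

client = mqtt.Client()            # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
client.on_message = on_message
client.connect("mqtt.broker.example.com", 1883)
client.subscribe(TOPIC)           # only members of this room receive these events

# Broadcast a pause event to everyone watching in the same bubble.
client.publish(TOPIC, json.dumps({"type": "pause", "position": 123.4}))
client.loop_forever()
```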
## Challenges I ran into
* Collaborating in a remote environment: we were in a discord call for the whole weekend, using Miro for whiteboarding, etc.
* Our teammates came from various technical backgrounds, and we all had a chance to learn a lot about the technologies we used and how to build a web app
* The project was so much fun we forgot to sleep and hacking was more difficult the next day
## Accomplishments that I'm proud of
The majority of our team was inexperienced in the technologies used. Our team is very proud that such a challenging product was completed by the end of the hackathon.
## What I learned
We learned how to build a web app from the ground up, beginning with the design in Figma all the way to the final deployment to GitHub and Heroku. We learned about the React component lifecycle, asynchronous operations, React routers with a Node.js backend, and how to interface all of this with a Solace PubSub+ event broker. We also learned how to collaborate and be productive together while still having a blast.
## What's next for Bubbles
We will develop additional features for Bubbles, including seek syncing, user join messages, and custom privileges for individuals.
## Inspiration
We're often at our desks, which is unhealthy. I wanted to make a browser extension that reminds users to do certain things to mitigate the harmful effects of prolonged sitting.
## What it does
It reminds the user to rest their eyes every 20 minutes, to drink water every 30, to check their posture every 45 minutes, and to stand and move around every hour
## How I built it
It was built with HTML, CSS, JavaScript, and the Chrome Extensions API.
## Challenges I ran into
Storing the state of the popup's checkboxes in local storage was a challenge because it involved working closely with the API.
## Accomplishments that I'm proud of
I'm proud that I was able to use my newly learned JavaScript, HTML, and CSS skills on a project.
## What I learned
This project demystified APIs for me. Trying to use other APIs will likely feel less intimidating since I now have experience trying to use one I did not know how to use before.
## What's next for Cheerio
Adding sliders that let users specify the time between notifications is at the top of the to-do list.
## Inspiration
Realizing how often our friends forget to take care of their personal well-being while hard at work, we decided to create a Google Chrome extension that reminds them to keep up healthy habits. All students could use a bit of reminding that their physical health is just as important, if not more important, than the work on the screen in front of them.
## What it does
Take Care sends periodic notifications reminding the user to keep healthy habits, such as drinking water, eating food, giving your eyes a break, and more. Additionally, there is a progress bar for each habit so you can keep track of your progress in the different categories.
## How we built it
We used JavaScript, HTML, and CSS to code our habit helper, utilizing various resources to learn how each component fits together. For fun/challenge, part of the project was coded directly off of GitHub without the use of external IDEs.
## Challenges we ran into
We had to learn many new languages in a short timeframe and had trouble understanding how some aspects worked. Due to inexperience, some features could not be completed in time.
## Accomplishments that we're proud of
* dabbling in new languages
* completing our first in-person hackathon :)
## What we learned
As stated previously, we increased the number of languages we have coded projects in. We also learned how Chrome extensions are created and how many people desperately are in need of a program like ours.
## What's next for Take Care
* making sure the notification system works
* improving progress bar
* adding more customizable features
* adding machine learning to make better predictions on when to send notifications
## Inspiration
With everything being done virtually these days, including this hackathon, we spend a lot of time at our desks and behind screens. It's more important now than ever before to take breaks from time to time, but it's easy to get lost in our activities. Studies show that breaks increase overall energy and productivity, and decrease exhaustion and fatigue. If only we had something to keep us from forgetting...
## What it does
The screen connected to the microcontroller tells you when it's time to give your eyes a break or to move around a bit to get some exercise. Currently, it tells you to take a 20-second break for your eyes for every 20 minutes of sitting, and to take a few minutes of exercise for every hour of sitting.
## How we built it
The hardware includes an RPi 3B+, aluminum foil contacts underneath the chair cushion, a screen, and wires to connect all these components. The software includes the RPi.GPIO library for reading the signal from the contacts and the tkinter library for the GUI displayed on the screen.
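A simplified sketch of the reminder loop is shown below. The GPIO pin number and polling interval are assumptions, and the real build displays the messages on the tkinter GUI rather than printing them.

```python
# Simplified sketch; the pin number is a placeholder and messages would really
# be shown on the tkinter screen instead of printed.
import time
import RPi.GPIO as GPIO

CONTACT_PIN = 17  # placeholder BCM pin wired to the foil contacts under the cushion

GPIO.setmode(GPIO.BCM)
GPIO.setup(CONTACT_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

seated_since = None
last_alert_min = 0

try:
    while True:
        seated = GPIO.input(CONTACT_PIN) == GPIO.LOW   # contacts close when someone sits down
        if not seated:
            seated_since, last_alert_min = None, 0     # standing up resets the timers
        else:
            if seated_since is None:
                seated_since = time.time()
            elapsed_min = int((time.time() - seated_since) // 60)
            if elapsed_min > last_alert_min:
                last_alert_min = elapsed_min
                if elapsed_min % 60 == 0:
                    print("An hour of sitting - take a few minutes to move around!")
                elif elapsed_min % 20 == 0:
                    print("20 minutes of sitting - rest your eyes for 20 seconds.")
        time.sleep(10)
finally:
    GPIO.cleanup()
```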
## Challenges we ran into
Some Python libraries were written for Python 2 and others for Python 3, so we took some time to resolve these dependency issues. The compliant structure underneath the cushion had to be a specific size and rigidity to allow the contacts to move appropriately when someone gets up from or sits down on the chair. Finally, the contacts were sometimes inconsistent in the signals they sent to the microcontroller.
## Accomplishments that we're proud of
We built this system in a few hours and were successful in not spending all night or all day working on the project!
## What we learned
Tkinter takes some time to learn to properly utilize its features, and hardware debugging needs to be a very thorough process!
## What's next for iBreak
Other kinds of reminders could be implemented later, like a reminder to drink water, or custom exercises that involve repeatedly sitting down and standing up.
## Inspiration
We were inspired by JetBlue's challenge to utilize their data in a new way, and we realized that, while there are plenty of websites and phone applications that allow you to find the best flight deal, there are none that provide a way to easily plan the trip, and the items you will need, with your friends and family.
## What it does
GrouPlane allows users to create "Rooms" tied to their user account, with each room representing a unique event, such as a flight from Toronto to Boston for a week. Within the room, users can select flight times, see the best flight deal, and plan out what they'll need to bring with them. Users can also share the room's unique ID with their friends, who can then use this ID to join the created room, see the flight plan, and modify the needed items.
## How we built it
GrouPlane was built using Android Studio with Firebase, the Google Cloud Platform Authentication API, and JetBlue flight information. Within Android Studio, Java code and XML were used.
## Challenges we ran into
The challenges we ran into were learning how to use Android Studio/GCP/Firebase and having to overcome the slow Internet speed at the event. In terms of Android Studio/GCP/Firebase, we were all either entirely new or very new to the environment and so had to learn how to access and utilize all the features available. The slow Internet speed was a challenge because it not only made it difficult to learn these tools, but also, due to the online nature of the database, left us with long periods of time where we could not test our code because we had no way to connect to the database.
## Accomplishments that we're proud of
We are proud of being able to finish the application despite the challenges. Not only were we able to overcome these challenges, but we were able to build an application that functions to the full extent we intended while having an easy-to-use interface.
## What we learned
We learned a lot about how to program Android applications and how to utilize the Google Cloud Platform, specifically Firebase and Google Authentication.
## What's next for GrouPlane
GrouPlane has many possible avenues for expansion; in particular, we would like to integrate GrouPlane with Airbnb, hotel chains, and Amazon Alexa. In terms of Airbnb and hotel chains, we would utilize their APIs to pull information about hotel deals for the chosen flight destinations so users can plan out their entire trip within GrouPlane. With this integration, we would also expand GrouPlane to be able to inform everyone within the "event room" about how much the event will cost each person. We would also integrate Amazon Alexa with GrouPlane to provide users the ability to plan out their vacation entirely through the speech interface provided by Alexa rather than having to type on their phone.
## Inspiration
We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do.
## What it does
Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams.
## How we built it
We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application.
## Challenges we ran into
This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but I believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application!
## What we learned
We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers.
## What's next for Discotheque
If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would also try to replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music), similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure a steady supply of good-quality music.
## Inspiration
Living in the big city, we're often conflicted between the desire to get more involved in our communities and the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights at a glance. This application enables the user to take a photo of a noticeboard filled with posters and, after specifying their preferences, select the events that are predicted to be of highest relevance to them.
## What it does
Our application uses computer vision and natural language processing to filter notice board information, delivering pertinent and relevant information to our users based on their selected preferences. This mobile application lets users first choose the different categories they are interested in knowing about; they can then either take or upload photos, which are processed using Google Cloud APIs. The labels generated from the APIs are compared with the chosen user preferences to display only applicable postings.
## How we built it
The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then, they are prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision Text Detection to obtain blocks of text, which are then labelled appropriately with the Google Natural Language Processing API. The categories this returns are compared to the user's preferences, and matches are returned to the user.
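For illustration, here is the same Vision-then-Language pipeline sketched with the Google Cloud Python clients (the app itself calls these APIs from React Native). It assumes Google Cloud credentials are configured and that the photo contains enough text for classification.

```python
# Python sketch of the OCR + classification pipeline the app runs from React Native.
from google.cloud import vision, language_v1

def relevant_categories(photo_path, user_preferences):
    # 1) OCR the notice board photo into raw text.
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    ocr = vision.ImageAnnotatorClient().text_detection(image=image)
    board_text = ocr.full_text_annotation.text

    # 2) Classify the extracted text into topic categories.
    document = language_v1.Document(
        content=board_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = language_v1.LanguageServiceClient().classify_text(
        request={"document": document}
    )
    labels = {c.name.split("/")[-1].lower() for c in response.categories}

    # 3) Keep only the categories the user said they care about.
    return labels & {p.lower() for p in user_preferences}

print(relevant_categories("board.jpg", {"Music", "Jobs", "Volunteering"}))
```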
## Challenges we ran into
One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken as being part of the same paragraph. The JSON object had many subfields, which took a while to make sense of from the terminal in order to parse it properly.
We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, and in finding the proper method to first compare categories to labels before the final component is rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation.
## Accomplishments that we're proud of
We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user.
## What we learned
We were at most familiar with ReactJS; all other technologies were new experiences for us. Most notable were the opportunities to learn how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances between them as we passed user data back and forth. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols.
## What's next for notethisboard
Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity. The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board and return more reliable results. The app can also be extended to identify logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input.
## Inspiration
For the last few years, I’ve had back problems, most likely caused by poor posture when using the computer.
A few weeks ago, I had a back spasm, and for a few days, I was unable to move most of my body without sharp pain in my back. Using the computer was painful, but I still needed to get work done, especially during midterm season. Both sitting in a chair and slouching in bed cause me to crane my neck, which is not good for my posture.
As I was lying there in pain, I wished I had a way to just have the computer screen right above my face, while keeping the keyboard on my lap, so I could keep my spine fully straight when using the computer. I knew I could do this with a VR headset like the Oculus Rift, but those are expensive — around $400.
But what if I could use my phone as the VR headset? Since I already own it, the only cost is the VR headset holder. Google Cardboard costs around $5, and a more comfortable one that I purchased off Amazon was only $20. Simply slide the phone in, and you’re ready to display content.
I tried searching for an app that would allow me to mirror my screen in a stereoscopic view on my phone, but I couldn’t find one in existence. So I made one!
## What it does
First, launch the app on your computer. It waits for the companion app to be launched on a phone connected by USB. Once it’s connected, you can start streaming your desktop to your phone, and there is a stereoscopic view displayed on the companion app. Although it looks small, modern phones are pretty high-resolution, so it’s really easy to read the text on the screen. Now simply slide it into the holder and put it on your head. It's just like an extra monitor, but you get a full view and you can look in any direction without hurting your neck!
## How we built it
There were a lot of technical challenges implementing this. In order to have a low-latency stream to the phone, I had to use a wired connection over USB, but Apple doesn't support this natively. I used an external framework called PeerTalk, but it only allows raw TCP packets to be sent over USB from the computer to the phone, so I had to serialize each frame, deserialize it on the phone, and then display it stereoscopically, all in real time.
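The framing idea is straightforward: prefix each encoded frame with its length so the receiver knows where one frame ends and the next begins. The real app does this in Objective-C over PeerTalk's USB channel; the Python sketch below only illustrates that length-prefixed protocol over a plain socket.

```python
# Illustrative sketch of length-prefixed frame serialization; the actual app is
# Objective-C over PeerTalk, not Python sockets.
import socket
import struct

def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    """Prefix the encoded frame with its 4-byte big-endian length, then send it."""
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

def recv_frame(sock: socket.socket) -> bytes:
    """Read the 4-byte length header, then exactly that many payload bytes."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-frame")
        buf += chunk
    return buf
```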
## Challenges we ran into
I had a lot of trouble with Objective-C memory safety, especially during deserialization when the data was received by the phone. I was obtaining the data on the phone but was unable to turn it into an actual image for around 24 hours due to numerous bugs. Shoutout to Bhushan, a mentor who helped me debug some annoying memory issues!
## Accomplishments that we're proud of
I'm super proud that it actually works! In fact, I made one of the final commits using the headset instead of my actual computer screen. It's practical and it works well!
## What we learned
Learned a ton about graphics serialization and Objective-C memory safety.
## What's next for LaptopVR
I want to add the optional ability to track movements using the gyroscope, so you can have a slightly more zoomed-in version of the screen, and look up/down/left/right to see the extremes of the screen. This would make for a more immersive experience!
## Inspiration
We spend roughly [1.5-2 hours](http://www.huffingtonpost.com/entry/smartphone-usage-estimates_us_5637687de4b063179912dc96) staring at our **phone screens** every day. It's not uncommon to hear someone complain about neck pain, or to notice progressive slouching and hunching developing among adults and teenagers today. The common way of holding our phones leads to an unattractive "cellphone slump", and it's no wonder, seeing as there's as much as 60 lbs of pressure on a person's neck when they're staring down at their phone.
As a big believer in correct anatomical positioning and in the effect posture can have on one's confidence and personality, I wanted to develop an app to help me remember to keep my phone at eye level and steer clear of the cellphone slump!
![Pressure on Neck](https://cdn2.omidoo.com/sites/default/files/imagecache/full_width/images/bydate/201508/headtiltcopy2.png)
## What it does
The Posture app is pretty simple. Just like the blue light filters such as f.lux and Twilight that we have on our phones, Posture runs in the background and gives you a small indicator if you've been holding your phone in a risky position for too long.
## How I built it
With love, Android and caffeine!
## Challenges I ran into
Understanding the Sensor API was definitely a challenge, but my biggest difficulty was learning how to run Services, background threads and processes without multiple Activities to rely on.
## Accomplishments that I'm proud of
It's a fairly simple app, but I'm hopeful (and proud?) that it will help people who want to correct or improve their posture!
## What I learned
I still have a lot to learn :)
## What's next for Posture
Soon to be released in Beta on the Playstore :)
## Inspiration
Frustrated university students in need of a safe space to scream and vent.
## What it does
Let a little guy represent you in a scream world fearlessly, along with other fellow screamers.
## How we built it
We used Unity, C# scripting, Blender and Adobe Photoshop.
## Challenges we ran into
Learning everything from scratch, getting different technologies to work together.
## Accomplishments that we're proud of
End product
## What we learned
Everything.
## What's next for AAAAAH
Building community, scream chat, leaderboard.
## Inspiration:
Right now, tutoring is needed more than ever. The pandemic and initial school closures have increased inequality in educational outcomes across racial and socioeconomic lines, creating an urgency to identify programs and policies—such as tutoring—to mitigate COVID learning loss.
At the same time, the pandemic further widened socioeconomic gaps.
In fact, not just tutoring but accessible and equitable education in general is needed more than ever. Several non-profit organizations are trying to make that a reality but are usually constrained by funding or manpower. Our app, EQT, solves this problem.
## What it does:
It creates personalized tutor and tutee matches and gives need-based aid to tutees who need it, with the help of tutors who are incentivized to volunteer while still earning. Each tutor can sign up to entirely volunteer or to subsidize the cost for students, earning kind points while still receiving base pay. They can then redeem kindness points for money proportional to their kindness in the form of Pi coins, a blockchain technology allowing for more decentralized, secure, and transparent transactions. Moreover, tutors can advance to different levels of volunteering (KIND, KINDER, KINDEST, and SUPER-KINDEST) to get a permanent base pay raise!
The app also shows ratings, reviews, and resumes of matched tutors so all tutees regardless of financial need can choose from a handful of their matches. Moreover, group tutoring sessions for up to 3 people (the optimal number of tutees according to research for a better learning environment) can also be scheduled. This allows tutees from diverse cultures and socio-economic backgrounds to connect and form meaningful friendships. Our platform allows tutees to connect with tutors/mentors who are leaders in their fields and gain valuable skills/mentorship.
## How we built it:
Using SQL, Node.js, the Pi Network, HTML, Figma, and GitHub.
## Challenges we ran into
Thinking about the sustainable financial model was an involved task.
## What's next
Right now we primarily focus on high school, but we can scale to middle school and elementary school for the most impact. This is essential because classroom learning has decreased relatively proportionally across all levels of the educational system. We plan to translate the Figma designs into the front end to improve our existing front end and add extended functionality to the app.
## If you want to read more about the app and our research/inspiration
The COVID-19 pandemic and initial school closures have increased inequality in educational outcomes across racial and socioeconomic lines, creating an urgency to identify programs and policies—such as tutoring—to mitigate COVID learning loss. The COVID educational loss impacts students from disadvantaged backgrounds more than their counterparts with high socioeconomic ranking. In fact, according to a Pew survey, 20% of upper-class households have hired someone to help with their children's educational needs, while only 7% of lower- and middle-class households have done so. According to the OECD, the COVID-19 learning loss is projected to decrease GDP by 1.5% per year, on average, and that assumes the learning loss can be recovered. However, recovery is extremely tricky if the learning loss has occurred proportionally among all age groups: teachers will need to modify their instruction to fill in those gaps at every level of education, effectively lowering the level of study for those already in school in order to compensate for the loss, so it is difficult to see how it can be fully recovered. Therefore, the potential GDP loss is likely to be even higher. A loss in GDP is not only detrimental to quality of life, it also slows growth and innovation. Possessing fewer skills as a result of learning loss means workers will have decreased wages over their lifetime; in fact, the OECD estimates the loss in income to be 3% over the lifetime of those who experienced learning loss due to COVID.
The benefits of this will be felt everywhere. The cut in the projected loss of GDP means all workers have higher skill levels. With the GDP as good as it can possibly be, the economy of the US--as a whole--will be at its best level. This means, in general, the general well-being of the country as a whole would be higher than what it would be with the effects of the COVID-19 learning loss.
So, in fact, not just tutoring but, accessible and equitable education is needed more than ever. Several non-profit organizations are trying to make that a reality but are usually constrained by funding or manpower. Our app, EQT, solves this problem. We create personalized tutor and tutee matches and give need-based aid to tutees who need them with the help of the tutors who are incentivized to volunteer-- while earning. Each tutor can sign up to entirely volunteer or subsidize the cost for students and earn kind points while still receiving base pay. They can then redeem kindness points and get money proportional to kindness in form of PI-coins, which is a blockchain technology allowing for more decentralized, secure, and transparent transactions. Moreover, tutors can advance to different levels of volunteering-- KIND, KINDER, KINDEST, and SUPER-KINDEST to get a permanent base pay raise!
The app also shows ratings, reviews, and resumes of matched tutors so all tutees regardless of financial need can choose from a handful of their matches. Moreover, group tutoring sessions for up to 3 people (the optimal number of tutees according to research for a better learning environment) can also be scheduled. This allows tutees from diverse cultures and socio-economic backgrounds to connect and form meaningful friendships. Our platform allows tutees to connect with tutors/mentors who are leaders in their fields and gain valuable skills/mentorship.
Now, you may ask, what makes our tutoring service different from others? We pride ourselves in being driven by the belief that anyone can learn and we just need to provide the right resources for people to learn. We strongly believe in our mission, and we hire people who do the same. We envision our volunteers to be strongly passionate about bridging the educational gap, and we believe that passion will drive the volunteers to do great things that money may not necessarily lead them to.
Now, another question is how do we provide need-based aid? We attract students from a wide range of socioeconomic backgrounds due to the qualifications of our tutors. Students who can afford to pay for their tutoring sessions are welcomed with as much zeal and enthusiasm as students who are not able to, and, in fact, we anticipate many more students from predominantly advantaged socioeconomic backgrounds. This is due to the nature of private tutoring; it is more predominant and easily accessible to students with high economic standing. We also provide high-quality training to tutors through workshops and Q&A events hosted by students from top universities around the world. We've seen–across many hackathons, campus clubs, and social events–that people are willing to share the knowledge they have and do not shy away from teaching other people about their work. We see in a lot of university students the desire to teach others about their difficult, complicated work; this expands the tutor's horizons and establishes a suitable environment to transfer the tutor's knowledge to their tutees. Experiences like this will attract students from predominantly advantaged backgrounds. Furthermore, there is an incentive for economically privileged students to use EQT, since their simple use of EQT could transform the experience for students disadvantaged by their economic situation. In addition, in an ever more diverse, globalized world, it is more important than ever to be able to get along with people from diverse backgrounds. EQT–with its group tutoring services–provides a perfect avenue to build these skills for people of all backgrounds. Diversity, we believe, makes people strong problem solvers and will only contribute to a greater understanding of themselves and the world around them.
This approach is also highly scalable, not just with regard to the school year of the student. Right now we primarily focus on high school, but we can scale to middle school and elementary school for the most impact. This is essential because classroom learning has decreased relatively proportionally across all levels of the educational system. In addition, we have the ability to scale to different countries and increase the well-being of their residents as they benefit from recovering from that GDP loss.
## Inspiration
During our brainstorming sessions on sustainability, our team identified a significant issue: companies generate massive amounts of data daily, leading to waste as older, unused data accumulates. We recognized that this challenge also affects individual developers, who often leave databases untouched as projects stall or evolve over time. Our research revealed that 1 TB of data corresponds to 2 metric tonnes of carbon emissions, which inspired our project, TinderDB. Our goal is to empower users to declutter their databases, ultimately fostering a greener working environment.
## What it does
TinderDB empowers developers to manage their data actively and sustainably. Users first log in and then connect their databases via MongoDB Atlas. Once connected, they’ll see their least active database displayed as a card, complete with statistics on its environmental impact, such as “This database emits 2 tons of carbon/year.” From there, users can choose to delete, retain, or migrate the data to archival storage which is a more sustainable storage option. We guide users through a seamless decluttering process, presenting one database at a time to simplify their cleaning efforts with just a swipe. Additionally, our dashboard tracks their contributions to sustainability, featuring a pie chart that illustrates how their efforts contribute to the community’s overall impact.
## How we built it
On the backend, we used Python and Flask to ship simple APIs that enable our core database manipulation (safe deletion and migration/archival). Our data migration API provides a pipeline for users to smoothly migrate their database to MongoDB Online Archive with just a swipe. For our consumer-facing web application, we used Next.js and Tailwind for the UI, and Clerk for authentication.
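A simplified sketch of the swipe-handling API is shown below. The collection names, request shape, and "archive" destination are assumptions (Online Archive itself is configured through Atlas rather than driven purely from the driver), so treat this as an outline of the flow rather than the production code.

```python
# Simplified sketch: the connection URI, route, and request fields are placeholders,
# and "archive" here is approximated by copying documents into a separate collection.
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")  # placeholder URI

@app.post("/swipe")
def swipe():
    body = request.get_json()
    db_name, coll_name, action = body["db"], body["collection"], body["action"]
    source = client[db_name][coll_name]

    if action == "delete":        # swipe to drop the stale collection entirely
        source.drop()
    elif action == "archive":     # copy documents to cheaper storage, then drop the original
        archive = client["tinderdb_archive"][f"{db_name}__{coll_name}"]
        docs = list(source.find({}))
        if docs:
            archive.insert_many(docs)
        source.drop()
    # action == "keep" leaves the collection untouched

    return jsonify({"status": "ok", "action": action})

if __name__ == "__main__":
    app.run(debug=True)
```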
## Challenges we ran into
We faced a challenge with MongoDB’s robust security measures, which are great for user protection but complicated our development process. Authenticating our team members for app testing proved difficult due to MongoDB's IP address connection restrictions. Additionally, because everyone had a different role, we had difficulties connecting all of the components together into a complete project.
## Accomplishments that we're proud of
We’re proud of our team’s smooth and efficient collaboration. By dividing into two groups—one focused on the frontend and the other on the backend—we were able to work in tandem effectively. As a result, we were able to build out a full stack web application with a seamless user interface and a neat backend integration.
## What we learned
This was everyone's first time working deeply with MongoDB. While this provided numerous challenges, we learned how to navigate the complex layers of MongoDB's security to integrate it into an application.
## What's next for TinderDB
In the next update for TinderDB, we aim to enhance user experience by allowing individuals to view their personal contributions to the overall reduction in carbon emissions from all users, showcasing their impact on building a greener environment. We also plan to integrate an AI component that provides comprehensive summaries of users’ projects, helping them recall the context for which their databases were created. Additionally, we will introduce a history page that enables users to track the databases they’ve removed, fostering a sense of accomplishment and awareness of their sustainability efforts.
Outside of just databases, there are many other areas of compute that developers and enterprises may overlook. We see TinderDB being not only a tool for databases, but a universal tool to make any form of computation more sustainable, whether it's training an ML model or deploying a web app to production.
## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
- The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
- The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
- The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
- The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
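To make the client-server idea concrete, here is a small sketch of what the central server might look like for the pop-quiz app. The team's real backend is Node.js; this Python version is only an illustration, and the routes and field names are assumptions.

```python
# Illustrative Python sketch of the centralized server the tablets talk to; the
# real backend is Node.js, and these routes/fields are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
quiz = {"answer": None, "responses": {}}   # in-memory state for one active pop-quiz

@app.post("/quiz/answer")
def set_answer():                          # called by the teacher device
    quiz["answer"] = request.get_json()["answer"]
    quiz["responses"].clear()
    return jsonify(ok=True)

@app.post("/quiz/submit")
def submit():                              # called by each student device
    body = request.get_json()
    correct = body["answer"] == quiz["answer"]
    quiz["responses"][body["student_id"]] = correct
    return jsonify(correct=correct)

@app.get("/quiz/results")
def results():                             # teacher polls for the tally
    return jsonify(quiz["responses"])

if __name__ == "__main__":
    app.run(port=8000)
```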
## Challenges we ran into
We initially planned on using a mesh-networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this, we ended up switching to a centralized server that hosts the apps' data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold mock pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps such as: a visual-programming tool that supports simple block programming, a text editor, a messaging system, and a more in-depth UI for everything.
# OpenSpeedCoach
### Stroke WhaAAAAaat?
To track their rowing pace and speed, most rowers use a Stroke Coach, a product produced by [NK Sports](https://nksports.com/category-strokecoach-and-speedcoach). This device monitors and displays live data to the athlete and coach, such as:
* Strokes per minute
* Distance traveled
* Time
* Speed
This company has a complete monopoly on electronic rowing hardware around the world. Cost is a major factor in the recent decrease in rowers. Moreover, the lack of equipment at more rural rowing clubs results in a lack of ability to send data to post-secondary recruiters. OpenSpeedCoach is a cheap, open source alternative to the proprietary Stroke Coach. OpenSpeedCoach empowers amateur rowers with lower budgets to train and compete on the same level as their more well-funded private-school peers.
# Index
* [Goals](#goals)
* [Part list](#part-list)
* [State of Project](#state-of-project)
* [Running the Project Locally](#running-the-project-locally)
* [Questions?](#questions)
* [License](#license)
## Goals
* Empower youth by tackling monopolies in the rowing world and give junior students an alternative to the $200-$600 proprietary Stroke Coach hardware.
* Support rural rowing communities by giving them the same tool private schools have, at a low cost.
* Measure stroke rate, speed, and distance traveled.
### Objective
* Destroy outdated hardware with open source hardware and software at a low cost, available to people of all financial situations.
## Part list
* [Arduino Nano](https://www.ebay.com/itm/MINI-USB-Nano-V3-0-ATmega328P-CH340G-5V-16M-Micro-controller-board-for-Arduino/381374550571?hash=item58cbb1d22b:g:ci0AAOSwNSxVAB3c)
* [IIC/I2C/TWI/SPI Serial Interface2004 20X4 Character LCD Module](https://www.ebay.ca/itm/IIC-I2C-TWI-SP-I-Serial-Interface2004-20X4-Character-LCD-Module-Display-Blue-/402030637583?oid=142276252781)
* [MPU-6050 6DOF 3 Axis Gyroscope+Accelerometer](https://www.ebay.com/itm/171907295226?ViewItem=&item=171907295226)
* [GY-NEO6MV2 NEO-6M GPS Module NEO6MV2](https://www.elektor.com/gy-neo6mv2-neo-6m-gps-module-neo6mv2-with-flight)
## State of Project
* 2020/01/12:
+ Started and finished a working demo of the concept; next step is to refine and squash bugs.
## Running the Project Locally
* TBA after CNC circuit design... Stay tuned.
## Questions?
Contact me [mrryanbarclay@gmail.com](mailto:mrryanbarclay@gmail.com)
## License
[MIT](https://choosealicense.com/licenses/mit/)
## Inspiration
After reading a paper called NEAT: Evolving Increasingly Complex Neural Network Topologies published out of the University of Texas in 2000, we were inspired to test the limits of the algorithm by creating an implementation to learn one of our favourite classic games, Super Mario Bros. We've always found the concepts of machine learning and artificial intelligence incredibly interesting, so when we realized we could combine that passion with our love for games, we jumped at the opportunity.
## What it does
crAIg starts out as an effectively blank canvas. He is dropped into level 1-1 of Super Mario Bros. with no knowledge of the game itself, what the game's goal is, or how to play. He has but one evolutionary instinct - to survive. crAIg 'breeds' himself using the algorithms and techniques explained in NEAT. Thousands of different crAIg iterations are created, each identified by their complex web of virtual neurons and synapses. These neurons are mapped to the tile locations on the game screen which are read through an emulator, and are connected through other neurons and synapses to the potential outputs for crAIg, the buttons on the controller. The life of the crAIgs is short and exciting, if they are not able to continually progress through the level, their genome is deemed inferior and its chance of reproduction slims. Through this process, stronger and more intelligent crAIgs continue to breed with each other, as happens in natural evolution, and eventually, an alpha species can prevail.
## How I built it
We built crAIg largely in the Lua scripting language. Lua is an ideal language for interacting with a classic game like Super Mario Bros., as many emulators support interfacing with it. We used an open source emulator called FCEUX to build and test crAIg. The front end for the app was built using a Node.js server with Socket.io integration. crAIg periodically saves his status as JSON files, which are read by the server, passed to the client through websockets, and displayed to the user through d3. This allows us to better understand crAIg's brain, as it gives a visual representation of what he is 'thinking'.
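As a rough illustration of what evaluating one of those genomes involves: an initial NEAT genome has no hidden nodes, so mapping screen tiles to button presses is just a weighted sum per output. crAIg itself does this in Lua; the sketch below is a heavily simplified Python rendering of the idea, and the tile encoding is an assumption.

```python
# Heavily simplified sketch (crAIg is Lua); illustrates how a hidden-node-free
# NEAT genome maps tile inputs to controller buttons.
import math
import random

BUTTONS = ["A", "B", "left", "right", "up", "down"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-4.9 * x))  # steepened sigmoid used in the NEAT paper

def make_genome(num_tiles):
    """One connection gene per (tile, button) pair with a random weight."""
    return [
        {"in": t, "out": b, "weight": random.uniform(-1, 1), "enabled": True}
        for t in range(num_tiles)
        for b in BUTTONS
    ]

def evaluate(genome, tiles):
    """Map the current screen tiles (assumed 1 = block, -1 = enemy, 0 = empty) to button presses."""
    totals = {b: 0.0 for b in BUTTONS}
    for gene in genome:
        if gene["enabled"]:
            totals[gene["out"]] += gene["weight"] * tiles[gene["in"]]
    return {b: sigmoid(s) > 0.5 for b, s in totals.items()}

tiles = [random.choice([-1, 0, 1]) for _ in range(13 * 13)]  # a 13x13 window around Mario
print(evaluate(make_genome(len(tiles)), tiles))
```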
## Challenges I ran into
Challenges in this project were frequent and occasionally frustrating. One of the largest issues we ran into was various quirks with the Lua language and the environment which we had to test it through. When we wanted to test crAIg in action, the only way to do so is by running the script through the emulator. Unfortunately, the emulator lacks any script debugging tools outside of a console, making bugfixing a tiresome process. Additionally, FCEUX is very old software, originally published in 2000, causing us to run into memory issues which were difficult to diagnose.
## Accomplishments that I'm proud of
While crAIg is unable to clear a full level yet, we are confident he soon will be able to. His progress through the level is evident: with each successive iteration he appears smarter and tends to make it further.
## What I learned
We learned a significant amount through crAIg's development. Having never had any experience with Lua, both of us are proud to say we have a new language under our belts. We also learned a considerable amount about machine learning and neural networks, knowledge we hope to be able to use in the future.
## What's next for crAIg
## Inspiration
Self-motivation is hard. It’s time for a social media platform that is meaningful and brings a sense of achievement instead of frustration.
While various pro-exercise campaigns and apps have tried to inspire people, it is difficult to stay motivated with so many other, more comfortable distractions around us. Surge is a social media platform that helps solve this problem by empowering people to exercise. Users compete against themselves or new friends to unlock content that is important to them through physical activity.
True friends are formed through adversity, and we believe that users will form more authentic, lasting relationships as they compete side-by-side in fitness challenges tailored to their ability levels.
## What it does
When you register for Surge, you take an initial survey about your overall fitness, preferred exercises, and the websites you are most addicted to. This survey will serve as the starting point from which Surge creates your own personalized challenges: Run 1 mile to watch Netflix for example. Surge links to your phone or IOT wrist device (Fitbit, Apple Watch, etc...) and, using its own Chrome browser extension, 'releases' content that is important to the user when they complete the challenges.
The platform is a 'mixed bag'. Sometimes users will unlock rewards such as vouchers or coupons, and sometimes they will need to complete the challenge to unlock their favorite streaming or gaming platforms.
## How we built it
Back-end:
We used Python Flask to run our webserver locally as we were familiar with it and it was easy to use it to communicate with our Chrome extension's Ajax. Our Chrome extension will check the URL of whatever webpage you are on against the URLs of sites for a given user. If the user has a URL locked, the Chrome extension will display their challenge instead of the original site at that URL. We used an ESP8266 (onboard Arduino) with an accelerometer in lieu of an IOT wrist device, as none of our team members own those devices. We don’t want an expensive wearable to be a barrier to our platform, so we might explore providing a low cost fitness tracker to our users as well.
We chose to use Google's Firebase as our database for this project as it supports calls from many different endpoints. We integrated it with our Python and Arduino code and intended to integrate it with our Chrome extension, however we ran into trouble doing that, so we used AJAX to send a request to our Flask server which then acts as a middleman between the Firebase database and our Chrome extension.
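A minimal sketch of the URL-lock check the Flask server performs for the Chrome extension (the endpoint name, field names, and the in-memory lookup standing in for Firebase are all assumptions for illustration):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In the real app this lookup is backed by Firebase; a dict stands in for it here.
LOCKED_SITES = {
    "user123": {"netflix.com": "Run 1 mile to unlock Netflix"},
}

@app.route("/check", methods=["POST"])
def check_url():
    data = request.get_json()
    user_id, url = data["userId"], data["url"]
    for domain, challenge in LOCKED_SITES.get(user_id, {}).items():
        if domain in url:
            # The Chrome extension swaps the page for this challenge text.
            return jsonify({"locked": True, "challenge": challenge})
    return jsonify({"locked": False})

if __name__ == "__main__":
    app.run(debug=True)
```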
Front-end:
We used Figma to prototype our layout, and then converted to a mix of HTML/CSS and React.js.
## Challenges we ran into
Connecting all the moving parts: the IOT device to the database to the flask server to both the chrome extension and the app front end.
## Accomplishments that we're proud of
Please see above :)
## What we learned
Working with firebase and chrome extensions.
## What's next for SURGE
Continue to improve our front end. Incorporate analytics to accurately identify the type of physical activity the user is doing. We would also eventually like to include analytics that gauge how easily a person is completing a task, to ensure the fitness level that they have been assigned is accurate. | losing |
We created this app in light of the recent wildfires that have raged across the west coast. As California Natives ourselves, we have witnessed the devastating effects of these fires first-hand. Not only do these wildfires pose a danger to those living around the evacuation area, but even for those residing tens to hundreds of miles away, the after-effects are lingering.
For many with sensitive respiratory systems, the wildfire smoke has caused difficulty breathing and dizziness as well. One of the reasons we like technology is its ability to impact our lives in novel and meaningful ways. A tool like this is especially helpful for people who are highly sensitive to airborne pollutants, such as some of our family members who suffer from asthma, and for pet owners looking for healthy outdoor spaces. Our app greatly simplifies the process of finding a location with healthier air quality amidst the wildfires and ensures that those who need essential exercise are able to get it.
We wanted to develop a web app that could help those who are particularly sensitive to smoke and ash to find a temporary respite from the harmful air quality in their area. With our app air.ly, users can navigate across North America to identify areas where the air quality is substantially better. Each dot color indicates a different air quality level ranging from healthy to hazardous. By clicking on a dot, users will be shown a list of outdoor recreation areas, parks, and landmarks they can visit to take a breather at.
We utilized a few different APIs in order to build our web app. The first step was to implement the Google Maps API using JavaScript. Next, we scraped location and air quality index data for each city within North America. After we were able to source real-time data from the World Air Quality Index API, we used the location information to connect to our Google Maps API implementation. Our code took in longitude and latitude data to place a dot on the location of each city within our map. This dot was color-coded based on its city AQI value.
At the same time, the longitude and latitude data was passed into our Yelp Fusion API implementation to find parks, hiking areas, and outdoor recreation local to the city. We processed the Yelp city and location data using Python and Flask integrations. The city-specific AQI value, as well as our local Yelp recommendations, were coded in HTML and CSS to display an info box upon clicking on a dot to help a user act on the real-time data. As a final touch, we also included a legend that indicated the AQI values with their corresponding dot colors to make the map easier to use.
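A minimal sketch of the color-coding step in Python (the thresholds follow the standard EPA AQI bands; air.ly's exact cutoffs and colors may differ):

```python
def aqi_to_color(aqi):
    """Map a city's AQI value to a marker color, following the standard EPA bands."""
    bands = [
        (50, "green"),      # Good
        (100, "yellow"),    # Moderate
        (150, "orange"),    # Unhealthy for sensitive groups
        (200, "red"),       # Unhealthy
        (300, "purple"),    # Very unhealthy
    ]
    for upper, color in bands:
        if aqi <= upper:
            return color
    return "maroon"         # Hazardous

# Example marker payload for the map front end
marker = {"lat": 37.77, "lng": -122.42, "color": aqi_to_color(135)}
```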
We really embraced the hacker resilience mindset to create a user-focused product that values itself on providing safe and healthy exploration during the current wildfire season. Thank you :) | ## Inspiration
How many clicks does it take to upload a file to Google Drive? TEN CLICKS. How many clicks does it take for PUT? **TWO** **(that's 1/5th the amount of clicks)**.
## What it does
Like the name, PUT is just as clean and concise. PUT is a storage universe designed for maximum upload efficiency, reliability, and security. Users can simply open our Chrome sidebar extension and drag files into it, or just click on any image and tap "upload". Our AI algorithm analyzes the file content and organizes files into appropriate folders. Users can easily access, share, and manage their files through our dashboard, chrome extension or CLI.
## How we built it
We used the TUS protocol for secure and reliable file uploads, Cloudflare Workers for AI content analysis and sorting, React and Next.js for the dashboard and Chrome extension, Python for the back-end, and Terraform to allow anyone to deploy the workers and S3 bucket used by the app to their own account.
## Challenges we ran into
TUS. Let's preface this by saying that one of us spent the first 18 hours of the hackathon on a Golang backend, then had to throw the code away due to a TUS protocol incompatibility. TUS, Cloudflare's AI suite, and Chrome extension development were completely new to us, and we ran into many difficulties implementing and combining these technologies.
## Accomplishments that we're proud of
We managed to take 36 hours and craft them into a product that each and every one of us would genuinely use.
We actually received 30 downloads of the CLI from people interested in it.
## What's next for PUT
If given more time, we would make our platforms more interactive by utilizing AI and faster client-server communications. | ## Inspiration
With the effects of climate change becoming more and more apparent, we wanted to make a tool that allows users to stay informed on current climate events and stay safe by being warned of nearby climate warnings.
## What it does
Our web app has two functions. One of the functions is to show a map of the entire world that displays markers on locations of current climate events like hurricanes, wildfires, etc. The other function allows users to submit their phone numbers to us, which subscribes the user to regular SMS updates through Twilio if there are any dangerous climate events in their vicinity. This SMS update is sent regardless of whether the user has the app open or not, allowing users to be sure that they will get the latest updates in case of any severe or dangerous weather patterns.
## How we built it
We used Angular to build our frontend. With that, we used the Google Maps API to show the world map along with markers, with information we got from our server. The server gets this climate data from the NASA EONET API. The server also uses Twilio along with Google Firebase to allow users to sign up and receive text message updates about severe climate events in their vicinity (within 50km).
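The 50 km vicinity check boils down to a great-circle distance between two coordinate pairs; here is a minimal sketch of that math in Python (the app itself leaned on a helper library, so this is illustrative only, and the field names are assumptions):

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def event_nearby(user, event, radius_km=50):
    # True if the climate event falls within the SMS alert radius of the user.
    return haversine_km(user["lat"], user["lng"], event["lat"], event["lng"]) <= radius_km
```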
## Challenges we ran into
For the front end, one of the biggest challenges was the markers on the map. Not only did we need to place markers on many different climate event locations, but we wanted the markers to have different icons based on the weather events. We also wanted to be able to filter the marker types for a better user experience. For the back end, we had challenges figuring out Twilio for texting users, Google Firebase for user sign-in, and MongoDB for database operations, since this was our first time using these tools. We also ran into problems trying to accurately calculate a user's vicinity to current events due to the complex nature of geographical math, but after a lot of number crunching and the use of a helpful library, we were able to accurately determine whether any given event is within 50km of a user's position based solely on the coordinates.
## Accomplishments that we're proud of
We are really proud to make an app that not only informs users but can also help them in dangerous situations. We are also proud of ourselves for finding solutions to the tough technical challenges we ran into.
## What we learned
We learned how to use all the different tools that we used for the first time while making this project. We also refined our front-end and back-end experience and knowledge.
## What's next for Natural Event Tracker
We want to perhaps make the map run faster and have more features for the user, like more information, etc. We also are interested in finding more ways to help our users stay safer during future climate events that they may experience. | winning |
## Inspiration
In the current media landscape, control over distribution has become almost as important as the actual creation of content, and that has given Facebook a huge amount of power. The impact that the Facebook newsfeed has on the formation of opinions in the real world is so huge that it potentially affected the 2016 election decisions; however, these newsfeeds were not completely accurate. Our solution? FiB, because With 1.5 Billion Users, Every Single Tweak in an Algorithm Can Make a Change, and we don't stop at just one.
## What it does
Our algorithm is two fold, as follows:
**Content-consumption**: Our Chrome extension goes through your Facebook feed in real time as you browse it and verifies the authenticity of posts. These posts can be status updates, images, or links. Our backend AI checks the facts within these posts and verifies them using image recognition, keyword extraction, source verification, and a Twitter search that checks whether a posted screenshot of a tweet is authentic. The posts are then visually tagged in the top right corner in accordance with their trust score. If a post is found to be false, the AI tries to find the truth and shows it to you.
**Content-creation**: Each time a user posts/shares content, our chat bot uses a webhook to get a call. This chat bot then uses the same backend AI as content consumption to determine if the new post by the user contains any unverified information. If so, the user is notified and can choose to either take it down or let it exist.
## How we built it
Our chrome-extension is built using javascript that uses advanced web scraping techniques to extract links, posts, and images. This is then sent to an AI. The AI is a collection of API calls that we collectively process to produce a single "trust" factor. The APIs include Microsoft's cognitive services such as image analysis, text analysis, bing web search, Twitter's search API and Google's Safe Browsing API. The backend is written in Python and hosted on Heroku. The chatbot was built using Facebook's wit.ai
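A rough sketch of how the individual API verdicts could be folded into one trust factor (the weights and signal names here are illustrative assumptions, not FiB's actual tuning):

```python
def trust_score(signals):
    """Combine the individual API verdicts into a single trust factor (0-100).

    `signals` holds normalized scores in [0, 1] from each check; missing
    checks default to a neutral 0.5.
    """
    weights = {
        "text_analysis": 0.30,   # Microsoft Cognitive Services text/keyword checks
        "image_analysis": 0.20,  # image recognition on embedded pictures
        "web_search": 0.25,      # Bing search corroboration of extracted claims
        "twitter_match": 0.15,   # does a screenshotted tweet actually exist?
        "safe_browsing": 0.10,   # Google Safe Browsing verdict on linked URLs
    }
    score = sum(weights[name] * signals.get(name, 0.5) for name in weights)
    return round(100 * score)

print(trust_score({"text_analysis": 0.9, "web_search": 0.8, "twitter_match": 1.0}))
```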
## Challenges we ran into
Web scraping Facebook was one of the earliest challenges we faced. Most DOM elements in Facebook have div ids that constantly change, making them difficult to keep track of. Another challenge was building an AI that knows the difference between a fact and an opinion so that we do not flag opinions as false, since only facts can be false. Lastly, integrating all these different services, in different languages together using a single web server was a huge challenge.
## Accomplishments that we're proud of
All of us were new to Javascript so we all picked up a new language this weekend. We are proud that we could successfully web scrape Facebook which uses a lot of techniques to prevent people from doing so. Finally, the flawless integration we were able to create between these different services really made us feel accomplished.
## What we learned
All concepts used here were new to us. Two people on our team are first-time hackathon-ers and learned completely new technologies in the span of 36hrs. We learned JavaScript, Python, Flask servers, and AI services.
## What's next for FiB
Hopefully this can be better integrated with Facebook and then be adopted by other social media platforms to make sure we stop believing in lies. | ## Inspiration
**Read something, do something.** We constantly encounter articles about social and political problems affecting communities all over the world – mass incarceration, the climate emergency, attacks on women's reproductive rights, and countless others. Many people are concerned or outraged by reading about these problems, but don't know how to directly take action to reduce harm and fight for systemic change. **We want to connect users to events, organizations, and communities so they may take action on the issues they care about, based on the articles they are viewing**
## What it does
The Act Now Chrome extension analyzes articles the user is reading. If the article is relevant to a social or political issue or cause, it will display a banner linking the user to an opportunity to directly take action or connect with an organization working to address the issue. For example, someone reading an article about the climate crisis might be prompted with a link to information about the Sunrise Movement's efforts to fight for political action to address the emergency. Someone reading about laws restricting women's reproductive rights might be linked to opportunities to volunteer for Planned Parenthood.
## How we built it
We built the Chrome extension by using a background.js and content.js file to dynamically render a ReactJS app onto any webpage the API identified to contain topics of interest. We built the REST API back end in Python using Django and Django REST Framework. Our API is hosted on Heroku, the chrome app is published in "developer mode" on the chrome app store and consumes this API. We used bitbucket to collaborate with one another and held meetings every 2 - 3 hours to reconvene and engage in discourse about progress or challenges we encountered to keep our team productive.
## Challenges we ran into
Our initial attempts to use sophisticated NLP methods to measure the relevance of an article to a given organization or opportunity for action were not very successful. A simpler method based on keywords turned out to be much more accurate.
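A minimal Python sketch of the keyword-based relevance check described above (the keyword sets, threshold, and action titles are illustrative assumptions):

```python
ACTIONS = [
    {
        "title": "Sunrise Movement - join a local hub",
        "keywords": {"climate", "emissions", "wildfire", "carbon", "warming"},
    },
    {
        "title": "Planned Parenthood - volunteer",
        "keywords": {"abortion", "reproductive", "roe", "contraception"},
    },
]

def match_actions(article_text, min_hits=2):
    """Return action titles whose keywords appear at least `min_hits` times in the article."""
    words = article_text.lower().split()
    results = []
    for action in ACTIONS:
        hits = sum(1 for w in words if w.strip(".,!?\"'()") in action["keywords"])
        if hits >= min_hits:
            results.append((hits, action["title"]))
    return [title for _, title in sorted(results, reverse=True)]
```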
Passing messages to the back-end REST API from the Chrome extension was somewhat tedious as well, especially because the API had to be consumed before the react app was initialized. This resulted in the use of chrome's messaging system and numerous javascript promises.
## Accomplishments that we're proud of
In just one weekend, **we've prototyped a versatile platform that could help motivate and connect thousands of people to take action toward positive social change**. We hope that by connecting people to relevant communities and organizations, based on the social topics they are reading about, the anxiety, outrage, or even mere preoccupation cultivated by such readings may turn into productive action and encourage people to be better allies and advocates for communities experiencing harm and oppression.
## What we learned
Although some of us had basic experience with Django and building simple Chrome extensions, this project provided new challenges and technologies to learn for all of us. Integrating the Django backend with Heroku and the ReactJS frontend was challenging, along with writing a versatile web scraper to extract article content from any site.
## What's next for Act Now
We plan to create a web interface where organizations and communities can post events, meetups, and actions to our database so that they may be suggested to Act Now users. This update will not only make our applicaiton more dynamic but will further stimulate connection by introducing a completely new group of people to the application: the event hosters. This update would also include spatial and temporal information thus making it easier for users to connect with local organizations and communities. | ## Inspiration
Studies have shown that social media thrives on emotional and moral content, particularly content that is angry in nature. Similar studies have shown that these types of posts affect people's well-being, mental health, and view of the world. We wanted to let people take control of their feed and gain insight into the potentially toxic accounts in it, so they can ultimately decide what to keep and what to remove while putting their mental well-being first. We want to make social media a place for knowledge and positivity, without the anger and hate that it can fuel.
## What it does
The app performs an analysis on all the Twitter accounts the user follows and reads the tweets, checking for negative language and tone. Using machine learning algorithms, the app can detect negative and potentially toxic tweets and accounts to warn users of the potential impact, while giving them the option to act how they see fit with this new information. In creating this app and its purpose, the goal is to **put the user first** and empower them with data.
## How We Built It
We wanted to make this application as accessible as possible, and in doing so, we made it with React Native so both iOS and Android users can use it. We used Twitter OAuth to be able to access who they follow and their tweets while **never storing their token** for privacy and security reasons.
The app sends the request to our web server, written in Kotlin and hosted on Google App Engine, where it uses Twitter's API and Google's Machine Learning APIs to perform the analysis and send the data back to the client. By using a multi-threaded approach for the tasks, we streamlined the workflow and cut response time roughly **seven-fold**, now being able to manage an order of magnitude more data. On top of that, we integrated GitHub Actions into our project, and, for a hackathon mind you, we have a *full continuous deployment* setup from our IDE to Google Cloud Platform.
## Challenges we ran into
* While library and API integration was no problem in Kotlin, we had to find workarounds for issues regarding GCP deployment and local testing with Google's APIs
* Since being cross-platform was our priority, we had issues integrating OAuth with its requirement for platform access (specifically for callbacks).
* If each tweet was sent individually to Google's ML API, each user could have easily required over 1000+ requests, overreaching our limit. Using our technique to package the tweets together, even though it is unsupported, we were able to reduce those requests to a maximum of 200, well below our limits.
## What's next for pHeed
pHeed has a long journey ahead: from integrating with more social media platforms to new features such as account toxicity tracking and account suggestions. The social media space is one that is rapidly growing and needs a user-first approach to keep it sustainable; ultimately, pHeed can play a strong role in user empowerment and social good. | partial |
## Inspiration
We wanted to create something fun that would be a good use of Snapchat's SnapKit! We did not get to it, but the idea was that sharing quotes and good reads between friends could be pretty neat - recommending novels becomes as easy as using the app, which interfaces directly with Snapchat! It could also become a kind of Yelp for reading aficionados, with a whole sharing community, and could even grow into a book/e-book commerce market!
## What it does
Allows you to save your personal favourite reads and write down your thoughts about them, like a diary.
## How I built it
Android studio!
## Challenges I ran into
We wanted to use React Native, but it was not possible on the LAN here, and iteration times were slow with Android Studio.
## Accomplishments that I'm proud of
Learned android dev!
## What I learned
React Native Dev environment, Android Studio Development, Snapkit
## What's next for ReadR | ## Inspiration
On social media, most of the things that come up are success stories. We've seen a lot of our friends complain that there are platforms where people keep bragging about what they've been achieving in life, but not a single one showing their failures.
We realized that there's a need for a platform where people can share their failure episodes for open and free discussion. So we have now decided to take matters into our own hands and are creating Failed-In to break the taboo around failures! On Failed-in, you realize - "You're NOT alone!"
## What it does
* It is a no-judgment platform to learn to celebrate failure tales.
* Enabled User to add failure episodes (anonymously/non-anonymously), allowing others to react and comment.
* Each episode on the platform has #tags associated with it, which helps filter out the episodes easily. A user's recommendation is based on the #tags with which they usually interact
* Implemented sentiment analysis to predict the sentiment score of a user from the episodes and comments posted.
* We have a motivational bot to lighten the user's mood.
* Allowed the users to report the episodes and comments for
+ NSFW images (integrated ML check to detect nudity)
+ Abusive language (integrated ML check to classify texts)
+ Spam (Checking the previous activity and finding similarities)
+ Flaunting success (Manual checks)
## How we built it
* We used Node for building the REST API and MongoDB as the database.
* For the client side we used Flutter.
* We also used the TensorFlow.js library and its built-in models for NSFW checks, abusive text checks, and sentiment analysis.
## Challenges we ran into
* While brainstorming on this particular idea, we weren't sure how to present it so that it wouldn't be misunderstood. Mental health issues stemming from failure are serious, and with Failed-In we wanted to break the taboo around discussing failures.
* It was the first time we tried using Flutter-beta instead of React with MongoDB and node. It took a little longer than usual to integrate the server-side with the client-side.
* Finding the versions of tensorflow and other libraries which could integrate with the remaining code.
## Accomplishments that we're proud of
* During the 36 hour time we were able to ideate and build a prototype for the same.
* From fixing bugs to resolving merge conflicts the whole experience is worth remembering.
## What we learned
* Team collaboration
* how to remain calm and patient during the 36 hours
* Remain up on caffeine.
## What's next for Failed-In
* Improve the model of sentiment analysis to get more accurate results so we can understand the users and recommend them famous failure to success stories using web scraping.
* Create separate discussion rooms for each #tag, facilitating users to communicate and discuss their failures.
* Also provide the option to follow/unfollow a user. | ## Inspiration
In today's age, people have become more and more divided in their opinions. We've found that discussions nowadays can just result in people shouting instead of trying to understand each other.
## What it does
**Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics. They can range from silly scenarios to more serious situations. (Eg. Is a Hot Dog a sandwich? Is mass surveillance needed?)
Once you've picked a topic and your opinion of it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 mins to chat with this person and hopefully discover your similarities and differences in perspective.
After the chat is over, we ask you to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions, as both matched users will have reputations for maturity.
## How we built it
**Tech Stack**
* Front-end/UI
+ Flutter and dart
+ Adobe XD
* Backend
+ Firebase
- Cloud Firestore
- Cloud Storage
- Firebase Authentication
**Details**
* Front end was built after developing UI mockups/designs
* Heavy use of advanced widgets and animations throughout the app
* Creation of multiple widgets that are reused around the app
* Backend uses gmail authentication with firebase.
* Topics for debate are uploaded using node.js to cloud firestore and are displayed in the app using specific firebase packages.
* Images are stored in firebase storage to keep the source files together.
## Challenges we ran into
* Initially connecting Firebase to the front-end
* Managing state while implementing multiple complicated animations
* Designing backend and mapping users with each other and allowing them to chat.
## Accomplishments that we're proud of
* The user interface we made and animations on the screens
* Sign up and login using Firebase Authentication
* Saving user info into Firestore and storing images in Firebase storage
* Creation of beautiful widgets.
## What we learned
* Deeper dive into State Management in flutter
* How to make UI/UX with fonts and colour palates
* Learned how to use Cloud functions in Google Cloud Platform
* Built on top of our knowledge of Firestore.
## What's next for Change My Mind
* More topics and User settings
* Implementing ML to match users based on maturity and other metrics
* Potential Monetization of the app, premium analysis on user conversations
* Clean up the Coooooode! Better implementation of state management, specifically using Provider or BLoC.
## Inspiration
We wanted to make language-learning faster, more omnipresent, and a less burdensome process, so we built lexi to automate the web for you so you can focus on your everyday internet use instead of the barriers to language learning.
The English lexicon is different for everyone. We aimed to provide a novel and efficient personalized language-learning for anyone browsing the Internet. That meant providing an application for both experienced and new English speakers alike, and not getting in the way of your normal browsing experience.
## What it does
lexi knows the words you don’t. It instantly turns your daily Internet browsing into a personalized language-learning experience. It doesn’t matter if you already speak English or not: lexi will calibrate to the words you need.
lexi works by highlighting the words on a webpage that it thinks you don’t know and then providing synonyms on the page in unobtrusive parentheses. It provides a sliding scale to instantly adjust both the words it predicts and your proficiency level.
## How we built it
The back-end of lexi runs on a blazing-fast Microsoft Azure server, with Node.js set up to provide optimal performance. Word difficulties, synonyms, and definitions are retrieved from a third-party service using an API we built and are then cached both locally and on our server to ensure high performance and availability, even in the face of a potential third-party outage. This API is then utilised by a Google Chrome extension which compares the words on the user’s web page to the user’s personal difficulty level to determine which words will require clarification and inserts helpful synonyms in parentheses. We further boost performance by filtering each request to avoid re-fetching words that have already been cached in Chrome local storage.
## Challenges we ran into
Scraping all the plaintext from the DOM of an arbitrary website proved to be quite difficult. Every website is unique in its design, but we still managed to build a robust parser from the ground up to handle this challenge.
Another significant challenge was improving lexi’s speed to the point where users could seamlessly integrate lexi into their reading experience.
Finally, we found that there was a lack of APIs for analyzing word difficulty level. To compensate for this gap, we ended up building our own system to retrieve difficulty ratings from a live website and provide them in an easy-to-parse format.
## Accomplishments that we're proud of
* Creating our own API for our lexical queries.
* Building a fast backend that takes advantage of parallelism and caching.
* Optimizing our algorithm to extract plaintext from an arbitrary website by applying techniques like binary search.
* Scalable and modular software design.
## What we learned
* Parsing complex website structures to allow seamlessly inserting and removing content without breaking the page.
* Caching both locally and server-side to improve performance.
* Building Chrome extensions.
## What's next for lexi
In the next version, lexi will seamlessly handle translations between multiple languages, to help people who are learning a foreign language other than English. We will also continue working to make lexi more accurate, more responsive, and more predictive. This may be done by computing our own difficulty ratings for words by analyzing word frequencies on Wikipedia and other common domains, as well as aggregate reading patterns for lexi users. Furthermore, we will give users the ability to provide feedback to lexi, training lexi to be even more personalized and accurate. | ## Inspiration
When we got together for the first time, we instantly gravitated towards project ideas that would appeal to a broad audience, so music as a theme for our project was a very natural choice. Originally, our ideas around a music-based project were much more abstract, incorporating some notes of music perception. We eventually realized, as we were starting to brainstorm ideas for social media apps, that there were too many logistical hurdles surrounding each of them to be feasible during the course of the hackathon. We started thinking of ideas for music-based social media, and that's when we came up with the idea of making an app where people would judge others' music tastes in a lighthearted fashion.
## What it does
The concept of Rate-ify is simple; users post their Spotify playlists and write a little bit about them for context. Users can also view playlists that other people have posted, have a listen to them, and then either upvote or downvote the playlist based on their enjoyment. Finally, users can stay up to date on the most popular playlists through the website's leaderboard, which ranks all playlists that have been posted to the site.
## How we built it and what we learned
Our team learned more about tooling surrounding web dev. We had a great opportunity to practice frontend development using React and Figma, learning practices that we will likely be using in future projects. Some members were additionally introduced to tools that they had never used before this hackathon, such as databases.
## Challenges we ran into
Probably the biggest challenge of the hackathon was debugging the frontend. Our team came from a limited background, so being able to figure out how to successfully send data from the backend to the frontend could sometimes be a hassle. The quintessential example of this was when we were working on the leaderboard feature. Though the server was correctly returning ranking data, we had lots of trouble getting the frontend to successfully receive the data so that we could display it, and part of this was because of the server returning ranking data as a promise. After figuring out how to correctly return the ranking data without promises, we then had trouble storing that data as part of a React component, which was fixed by using effect hooks.
## Accomplishments that we're proud of
For having done limited work on frontend for past projects, we ended up very happy with how the UI came out. It's a very simple and charming looking UI.
## What's next for Rate-ify
There were certainly some features that we wanted to include but didn't end up working on, such as a mode where you would see two playlists and say which one you prefer, and a way of letting users identify their preferred genres so that we could categorize playlist upvotes and downvotes based on the favorite genres of the users who rated them. If we do continue working on Rate-ify, there are definitely many ways we could refine and expand upon the basic premise we've developed over the last two days.
We are team #27 on discord, team members are: anna.m#8841, PawpatrollN#9367, FrozenTea#9601, bromainelettuce#8008.
Domain.com challenge: [beatparty.tech]
## Inspiration
Due to the current pandemic, we decided to create a way for people to exercise in the safety of their home, in a fun and competitive manner.
## What it does
BeatParty is an augmented reality mobile app designed to display targets for the user to hit in a specific pattern to the beat of a song. The app has a leaderboard to promote healthy competition.
## How we built it
We built BeatParty in Unity, using plug-ins from OpenPose, and echoAR's API for some models.
## Challenges we ran into
Without native support for the front camera from Apple ARKit and Google ARCore, we had to use OpenPose instead, a plug-in that is not able to take full advantage of the phone's processor, resulting in a lower quality image.
## What we learned
**Unity:**
* We learned how to implement libraries onto unity and how to manipulate elements within such folder.
* We learned how to do the basics in Unity, such as making and creating hitboxes
* We learned how to use music and create and destroy gameobjects.
**UI:**
* We learned how to implement various UI components such as making an animated logo alongside simpler things such as using buttons in Unity.
## What's next for BeatParty
The tracking software can be further developed to be more accurate and respond faster to user movements. We plan to add an online multiplayer mode through our website ([beatparty.tech]). We also plan to use echoAR to make better objects for the user to interact with (e.g. the hitboxes or cosmetics). BeatParty is currently an Android application, and we intend to expand BeatParty to both iOS and Windows in the near future.
## Inspiration
Sign language is already difficult to learn; adding on the difficulty of learning movements from static online pictures makes it next to impossible to do without help. We came up with an elegant robotic solution to remedy this problem.
## What it does
Handy Signbot is a tool that translates voice to sign language, displayed using a set of prosthetic arms. It is a multipurpose sign language device including uses such as: a teaching model for new students, a voice to sign translator for live events, or simply a communication device between voice and sign.
## How we built it
**Physical**: The hand is built from 3D printed parts and is controlled by several servos and pulleys. Those are in turn controlled by Arduinos, housing all the calculations that allow for finger control and semi-spherical XYZ movement in the arm. The entire setup is enclosed and protected by a wooden frame.
**Software**: The bulk of the movement control is written in NodeJS, using the Johnny-Five library for servo control. Voice to text is processed using the Nuance API, and text to sign is created with our own database of sign movements.
## Challenges we ran into
The Nuance library was not something we have worked with before, and took plenty of trial and error before we could eventually implement it. Other difficulties included successfully developing a database, and learning to recycle movements to create more with higher efficiency.
## Accomplishments that we're proud of
From calculating inverse trigonometry to processing audio, several areas had to work together for anything to work at all. We are proud that we were able to successfully combine so many different parts into one big project.
## What we learned
We learned about the importance of teamwork and friendship :)
## What's next for Handy Signbot
* Creating a smaller-scale model that is more realistic for a home environment, significantly reducing cost at the same time.
* Reimplementing the LeapMotion to train the model for an increased vocabulary and different accents (did you know you can have an accent in sign language too?).
Over 70 million people around the world use sign language as their native form of communication - 70 million voices that are not fully recognized in today's society. This disparity inspired our team to develop a program that allows real-time translation of sign language into text on a display monitor. By breaking down language barriers, it lets those who do not know sign language communicate with a new community and makes for a more inclusive world.
## What it does
It translates sign language into text in real-time processing.
## How we built it
We set up the environment by installing the required packages (OpenCV, MediaPipe, scikit-learn) and set up a webcam.
* Data preparation: We collected data for our ML model by capturing a few sign language letters through the webcam, saving whole image frames into separate categories, one per letter.
* Data processing: We used MediaPipe computer vision inference to capture hand gestures and localize the landmarks of your fingers (see the sketch after this list).
* Train/test model: We trained our model to detect matches between the training images and the hand landmarks captured in real time.
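A minimal sketch of the landmark-extraction step with MediaPipe and OpenCV (the feature layout and the downstream classifier choice are assumptions, not the project's exact code):

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmarks_from_frame(frame_bgr):
    """Return a flat list of 21 (x, y) hand landmarks, or None if no hand is found."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return [coord for point in lm for coord in (point.x, point.y)]

# These landmark features can then be fed to a scikit-learn classifier
# (e.g. a random forest) trained on the per-letter image folders collected above.
```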
## Challenges we ran into
The challenges we ran into began with our team struggling to come up with a topic to develop. Then we ran into issues integrating our sign language detection code with the hardware, because our laptop lacked the power to effectively process a program of that scale.
## Accomplishments that we're proud of
The accomplishment that we are most proud of is that we were able to implement hardware in our project as well as Machine Learning with a focus on computer vision.
## What we learned
At the beginning of our project, our team was inexperienced with developing machine learning coding. However, through our extensive research on machine learning, we were able to expand our knowledge in under 36 hrs to develop a fully working program. | ## What it does
What our project does is introduce ASL letters, words, and numbers to you in a flashcard manner.
## How we built it
We built our project with React, Vite, and TensorFlowJS.
## Challenges we ran into
Some challenges we ran into included issues with git commits and merging. Over the course of our project we made mistakes while resolving merge conflicts which resulted in a large part of our project being almost discarded. Luckily we were able to git revert back to the correct version but time was misused regardless. With our TensorFlow model we had trouble reading the input/output and getting webcam working.
## Accomplishments that we're proud of
We are proud of the work we got done in the time frame of this hackathon with our skill level. Out of the workshops we attended I think we learned a lot and can't wait to implement them in future projects!
## What we learned
Over the course of this hackathon we learned that it is important to clearly define our project scope ahead of time. We spent a lot of our time on day 1 thinking about what we could do with the sponsor technologies and should have looked into them more in depth before the hackathon.
## What's next for Vision Talks
We would like to train our own ASL image detection model so that people can practice at home in real time. Additionally we would like to transcribe their signs into plaintext and voice so that they can confirm what they are signing. Expanding our project scope beyond ASL to other languages is also something we wish to do. | winning |
## Inspiration
A friend of ours works in a medical research office, where she found that inputting patient data for clinical trials is extremely time-consuming. This inspired us to create an iOS app with a "third-party confirmation system" as its key feature, letting clinical trial participants record their lab results, drug usage, consultation results, and so on, which improves the accuracy and effectiveness of clinical trials.
## What it does
It allows clinical trial participants to record everyday activities and also monitors drug intake times. Along with the third-party confirmation system, iCare greatly improves the accuracy of clinical trial data.
## How we built it
## Challenges we ran into
We had a hard time setting up the environment for an existing app where we need to build upon.
## Accomplishments that we're proud of
We are proud of our teamwork and efficiency from where we learned a lot.
## What we learned
## What's next for iCare | ## Inspiration
Earlier this week, following the devastation of Hurricane Florence, my newsfeed surged with friends offering their excess food and water to displaced community members. Through technology, the world had grown smaller. Resources had been shared.
Our team had a question: what if we could redistribute something else just as valuable? Something just as critical in both our every day lives and in moments of crisis: server space. The fact of the matter is that everything else we depend on, from emergency services apps to messenger systems, relies on server performance as a given. But the reality is that during storms, data centers go down all the time. This problem is exacerbated in remote areas of the world, where redirecting requests to regional data centers isn't an option. When a child is stranded in a natural disaster, mere minutes of navigation mean the difference between a miracle and a tragedy. Those are the moments when we have to be able to trust our technology. We weren't willing to leave that to chance, so Nimbus was born.
## What it does
Nimbus iOS harnesses the processing power of idle mobile phones in order to serve compute tasks. So imagine charging your phone, enabling Nimbus, and allowing your locked phone to act as the server for a schoolchild in Indonesia during typhoon season. Where other distributed computation engines have failed, Nimbus excels. Rather than treating each node as equally suitable for a compute task, our scheduler algorithm takes into account all sorts of factors before assigning a task to a best node, like CPU and the time the user intends to spend idle (how long the user will be asleep, how long the user will be at an offline Facebook event). Users could get paid marginal compensation for each compute task, or Nimbus could come bundled into a larger app, like Facebook.
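To illustrate the kind of node scoring the scheduler performs (the real scheduler is written in Node.js and pull-based; the factors, weights, and field names below are illustrative assumptions, shown in Python for brevity):

```python
def node_score(node):
    """Rank a candidate phone for a compute task; higher is better.

    The weighting is illustrative only - the real scheduler tunes these factors.
    """
    if not node["charging"] or node["expected_idle_minutes"] < 10:
        return 0.0                       # never assign work to a phone about to be used
    return (
        0.5 * node["cpu_benchmark"]                              # normalized CPU capability
        + 0.3 * min(node["expected_idle_minutes"] / 480, 1.0)    # time asleep / at an event
        + 0.2 * node["network_quality"]                          # normalized bandwidth score
    )

def best_node(nodes):
    return max(nodes, key=node_score)
```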
Nimbus Desktop, which we've proof-of-concepted in the Desktop branch of our GitHub repo, uses a central server to assign tasks to each computer-node via Vagrant Docker provisioning. We haven't completed this platform option, but it serves another important product case: enterprise clients. We did the math for you: a medium-sized company running the equivalent of 22,000 EC2 instances on Nimbus Desktop on its idle computers for 14 hours a day could save $6 million / year in AWS fees. In this case, the number of possible attack vectors is minimized because all the requests would originate from within the organization. This is the future of computing because it's far more efficient and environmentally friendly than solely running centralized servers. Data centers are having an increasingly detrimental effect on global warming; Iceland is already feeling its effects. Nimbus Desktop offers a scalable and efficient future. We don't have a resource issue. We have a distribution one.
## How we built it
The client-facing web app is built with React and Node.js. The backend is built with Node.js. The iOS app is built with React Native, Express, and Node.js. The Desktop script is built on Docker and Vagrant.
## Challenges we ran into
npm was consistently finicky when we integrated Node.js with React Native and built all of that in Xcode with Metro Bundler. We also had to switch the scheduler-node interaction to a pull model rather than a push model to guarantee certain security and downtime-minimization parameters. We didn't have time to complete Nimbus Desktop, save stepwise compute progress in a hashed database for large multi-hour computes (this would enable us to reassign the compute to the next best node in the case of disruption and optimize for memory usage), or get to the web compute version (diagrammed in the photo carousel, which would enable the nodes to act as true load balancers for more complex hosting).
## Accomplishments that we're proud of
Ideating Nimbus Desktop happened in the middle of the night. That was pretty cool.
## What we learned
Asking too many questions leads to way better product decisions.
## What's next for nimbus
In addition to the incomplete items in the challenges section, we ultimately would want the scheduler to be able to predict disruption using ML time series data. | ## Inspiration
In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs. This adds up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry using machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back to the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. The process aims to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
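A minimal sketch of the hashing step (the combined record fields here are hypothetical; the app hashes its own choice of user data):

```python
import hashlib

def evaluation_token(patient_record):
    """Derive the lookup hash a parent uses to check their infant's evaluation status."""
    combined = "|".join(
        str(patient_record[field])
        for field in ("name", "date_of_birth", "account_id", "visit_timestamp")
    )
    # sha256 gives a fixed-length, practically irreversible identifier
    return hashlib.sha256(combined.encode("utf-8")).hexdigest()

token = evaluation_token({
    "name": "Jane Doe", "date_of_birth": "2024-01-01",
    "account_id": 42, "visit_timestamp": "2024-02-17T08:30:00",
})
```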
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using sha256, we securely passed each user a unique, near-impossible-to-reverse hash that lets them check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set-up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application in IOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status. | partial |
## Inspiration
We use notetaking apps everyday to take lecture notes, work on problem sets, and share idea sketches. Unfortunately, notetaking apps today are little more than digital paper notes, making our tablets essentially thousand-dollar notebooks. We sought to empower our notes with a digital assistant that could help us save time doing our homework, like inline definition look-up, LaTeX formatting, algebra correction, and live note-sharing. (We accomplished the first 2.) The goal was to build our own notetaking app, and use its processing power and its internet connection to provide us, and students like ourselves, a 21st century notetaking experience.
## What it does
Our app lets you draw and take notes like other normal apps, but its core difference is that it uses the powerful MyScript handwriting recognition API to recognize all your handwriting as text or LaTeX. This allows Notelet to compile a LaTeX project from your notes by converting blocks of your handwriting to text and math, as well as more advanced features. Users can select words and look them up in the native Apple dictionary, and they can additionally select math expressions to evaluate or plot inline from Wolfram Alpha.
## How we built it
We built it by experimenting with the MyScript API and the Wolfram Alpha API, and by listening to feedback from students who share the same issues.
## Challenges we ran into
* Using the Rev.ai api for voice translation
This was more difficult than anticipated, and we couldn't figure out how to encode voice recordings into HTTP POST requests. With another few hours or so, we probably could've added it. This would have been awesome to add, as it would be used to automatically transcribe lectures, and provide searchable audio.
## Accomplishments that we're proud of
* Fully renderable LaTeX document produced from handwritten stroke document
* Dictionary word look-up
* Math plotting and expression using Wolfram Alpha API
## What we learned
Notetaking is a delicate, but not too difficult, service to build. The key difficulty seems to be in what constraints we put on user interaction. Since we rely on the user to select blocks of their handwriting for us to detect, we had to trade off asynchronous, automagical detection for something that was much easier to implement.
## What's next for Notelet
Big picture, we want to build out a fully functional beta with a smooth note-taking experience that we can put in the hands of students to get feedback from them as to what works, what doesn’t, and other features they would like to have to help them in their educational experience. We envision this as creating an IDE of sorts for doing problem sets and notetaking, and hope to see what response we get from students with this vision in the form of a concrete, tablet-based experience.
Additionally, we hope to build out the drawing portion of the app a bit more, and provide more organized way of saving important lecture definitions. It would be awesome to be able to create definitions that have associated drawings with them (e.g. associating "mitochondria" with a picture of the organelle).
In terms of accessibility, we'd love to expand upon the Voice-as-Input portion, so that more students have access to the functionality we are bringing to this app. With some more thought, we might be able to make mini games out of lecture audio to make sitting in class more engaging! | ## Inspiration
We wanted to built a fun, lighthearted project that is still useful for ML researchers, and settled on a fun little game that allows a martian rover to navigate its terrain! One of our members, Michael, has worked on training RL agents using OpenAI's gym framework during his research and past internships, and wanted to create another open-source environment. The other member, Michelle, just thought it was cool!
## What it does
Our environment allows a human or an AI to control a rover on the surface of Mars whose objective is to navigate toward its goal location. The environment also allows the user to specify different "forces" that act on the rover, making the environment dynamic (i.e. not stationary), which is a challenge in modern RL research. Because of this challenge, we hope that it will serve as a useful benchmark for future research.
## How we built it
We used python for the whole project. The game itself was built using pygame, and the framework used by the AI to control the game is OpenAI Gym.
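A minimal skeleton of a custom Gym environment of this kind (the action set, wind force, and reward shaping below are illustrative assumptions, not the project's exact implementation; the real version also renders with pygame):

```python
import gym
import numpy as np
from gym import spaces

class MartianRoverEnv(gym.Env):
    """Skeleton rover environment: reach the goal while a dynamic wind force pushes the rover."""

    def __init__(self, goal=np.array([9.0, 9.0]), wind=np.array([0.05, 0.0])):
        super().__init__()
        self.action_space = spaces.Discrete(4)                       # left/right/up/down
        self.observation_space = spaces.Box(-10.0, 10.0, shape=(2,), dtype=np.float32)
        self.goal, self.wind = goal, wind                            # wind = the dynamic "force"
        self.pos = np.zeros(2, dtype=np.float32)

    def reset(self):
        self.pos = np.zeros(2, dtype=np.float32)
        return self.pos.copy()

    def step(self, action):
        moves = [(-1, 0), (1, 0), (0, 1), (0, -1)]
        self.pos += np.array(moves[action], dtype=np.float32) + self.wind
        done = bool(np.linalg.norm(self.pos - self.goal) < 0.5)
        reward = 10.0 if done else -0.01 * np.linalg.norm(self.pos - self.goal)
        return self.pos.copy(), reward, done, {}
```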
## Challenges we ran into
Figuring out how each of the environment's components worked together was challenging. We also had a lot of difficulty figuring out the kinematics of the rover and integrating that with pygame's framework. We ran into issues of timing as well; we thought we would be able to finish more tasks (like benchmarking the environment using a modern RL algorithm like PPO), but we ended up not having the time.
## Accomplishments that we're proud of
We are happy to have made a fully-functional game that integrated well with Gym! Michelle enjoyed learning more about reinforcement learning and the frameworks that surround it. Michael enjoyed learning about different features of pygame.
## What's next for OpenAI Gym: Martian Rover Dynamic Environment
We want to randomize the landscape and make it more challenging for the rover to navigate. We also want to add more actions to the rover to allow it to navigate a more complex environment. For example, the rover might be able to lift its front wheels in the air to crawl over a boulder.
We want to test the game using modern RL algorithms as a proof of concept, and eventually submit a PR to the Gym GitHub repo. We also want to create a web-based demo of our game.
Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn.
## **What it does:**
Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations.
## **How we built it**:
We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we've been able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders.
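A minimal sketch of the base64 audio step in a Flask route (the route name and the pre-generated clip file are assumptions; in the real app the text comes from GPT-4 and the audio from text-to-speech):

```python
import base64
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/feedback/<session_id>")
def feedback(session_id):
    """Return the generated explanation plus its spoken audio in one JSON payload."""
    # Placeholders keep the route self-contained; the real text/audio are generated upstream.
    text = f"Feedback for session {session_id}"
    with open("feedback.mp3", "rb") as f:          # hypothetical pre-generated clip
        audio_b64 = base64.b64encode(f.read()).decode("ascii")
    # base64 lets the frontend decode and play the clip without a second request or re-render
    return jsonify({"text": text, "audio_b64": audio_b64})
```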
## **Challenges we ran into**:
Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning.
## **Accomplishments that we're proud of**:
Successfully building a digital platform that not only deciphers image and audio content but also produces high utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real-time. One of the aspects of our implementation that separates us from other approaches is that we allow the user to perform ICL (In Context Learning), a feature that not many large language models don't allow the user to do seamlessly.
## **What we learned**:
We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input. | losing |
## Inspiration
We were inspired by the Instagram app, which set out to connect people using photo media.
We believe that the next evolution of connectivity is augmented reality, which allows people to share and bring creations into the world around them. This revolutionary technology has immense potential to help restore the financial security of small businesses, which can no longer offer the same in-person shopping experiences they once did before the pandemic.
## What It Does
Metagram is a social network that aims to restore the connection between people and small businesses. Metagram allows users to scan creative works (food, models, furniture), which are then converted to models that can be experienced by others using AR technology.
## How we built it
We built our front-end UI using React.js, Express/Node.js and used MongoDB to store user data. We used Echo3D to host our models and AR capabilities on the mobile phone. In order to create personalized AR models, we hosted COLMAP and OpenCV scripts on Google Cloud to process images and then turn them into 3D models ready for AR.
## Challenges we ran into
One of the challenges we ran into was hosting software on Google Cloud, as it needed CUDA to run COLMAP. Since this was our first time using AR technology, we faced some hurdles getting to know Echo3D. However, the documentation was very well written, and the API integrated very nicely with our custom models and web app!
## Accomplishments that we're proud of
We are proud of being able to find a method in which we can host COLMAP on Google Cloud and also connect it to the rest of our application. The application is fully functional, and can be accessed by [clicking here](https://meta-match.herokuapp.com/).
## What We Learned
We learned a great deal about hosting COLMAP on Google Cloud. We were also able to learn how to create an AR and how to use Echo3D as we have never previously used it before, and how to integrate it all into a functional social networking web app!
## Next Steps for Metagram
* [ ] Improving the web interface and overall user experience
* [ ] Scan and upload 3D models in a more efficient manner
## Research
Small businesses are the backbone of our economy. They create jobs, improve our communities, fuel innovation, and ultimately help grow our economy! For context, small businesses made up 98% of all Canadian businesses in 2020 and provided nearly 70% of all jobs in Canada [[1]](https://www150.statcan.gc.ca/n1/pub/45-28-0001/2021001/article/00034-eng.htm).
However, the COVID-19 pandemic has devastated small businesses across the country. The Canadian Federation of Independent Business estimates that one in six businesses in Canada will close their doors permanently before the pandemic is over. This would be an economic catastrophe for employers, workers, and Canadians everywhere.
Why is the pandemic affecting these businesses so severely? We live in the age of the internet after all, right? Many retailers believe customers shop similarly online as they do in-store, but the research says otherwise.
The data is clear. According to a 2019 survey of over 1000 respondents, consumers spend significantly more per visit in-store than online [[2]](https://www.forbes.com/sites/gregpetro/2019/03/29/consumers-are-spending-more-per-visit-in-store-than-online-what-does-this-man-for-retailers/?sh=624bafe27543). Furthermore, a 2020 survey of over 16,000 shoppers found that 82% of consumers are more inclined to purchase after seeing, holding, or demoing products in-store [[3]](https://www.businesswire.com/news/home/20200102005030/en/2020-Shopping-Outlook-82-Percent-of-Consumers-More-Inclined-to-Purchase-After-Seeing-Holding-or-Demoing-Products-In-Store).
It seems that our senses and emotions play an integral role in the shopping experience. This fact is what inspired us to create Metagram, an AR app to help restore small businesses.
## References
* [1] <https://www150.statcan.gc.ca/n1/pub/45-28-0001/2021001/article/00034-eng.htm>
* [2] <https://www.forbes.com/sites/gregpetro/2019/03/29/consumers-are-spending-more-per-visit-in-store-than-online-what-does-this-man-for-retailers/?sh=624bafe27543>
* [3] <https://www.businesswire.com/news/home/20200102005030/en/2020-Shopping-Outlook-82-Percent-of-Consumers-More-Inclined-to-Purchase-After-Seeing-Holding-or-Demoing-Products-In-Store> | ## 💡 Inspiration💡
Our team is saddened by the fact that so many people think that COVID-19 is obsolete when the virus is still very much relevant and impactful to us. We recognize that there are still a lot of people around the world that are quarantining—which can be a very depressing situation to be in. We wanted to create some way for people in quarantine, now or in the future, to help them stay healthy both physically and mentally; and to do so in a fun way!
## ⚙️ What it does ⚙️
We have a full-range of features. Users are welcomed by our virtual avatar, Pompy! Pompy is meant to be a virtual friend for users during quarantine. Users can view Pompy in 3D to see it with them in real-time and interact with Pompy. Users can also view a live recent data map that shows the relevance of COVID-19 even at this time. Users can also take a photo of their food to see the number of calories they eat to stay healthy during quarantine. Users can also escape their reality by entering a different landscape in 3D. Lastly, users can view a roadmap of next steps in their journey to get through their quarantine, and to speak to Pompy.
## 🏗️ How we built it 🏗️
### 🟣 Echo3D 🟣
We used Echo3D to store the 3D models we render. Each rendering of Pompy in 3D and each landscape is a different animation that our team created in a 3D rendering software, Cinema 4D. We realized that, as the app progresses, we can find difficulty in storing all the 3D models locally. By using Echo3D, we download only the 3D models that we need, thus optimizing memory and smooth runtime. We can see Echo3D being much more useful as the animations that we create increase.
### 🔴 An Augmented Metaverse in Swift 🔴
We used Swift as the main component of our app, and used it to power our Augmented Reality views (ARViewControllers), our photo views (UIPickerControllers), and our speech recognition models (AVFoundation). To bring our 3D models to Augmented Reality, we used ARKit and RealityKit in code to create entities in the 3D space, as well as listeners that allow us to interact with 3D models, like with Pompy.
### ⚫ Data, ML, and Visualizations ⚫
There are two main components of our app that use data in a meaningful way. The first and most important is using data to train ML algorithms that are able to identify a type of food from an image and to predict the number of calories of that food. We used OpenCV and TensorFlow to create the algorithms, which are called in a Python Flask server. We also used data to show a choropleth map that shows the active COVID-19 cases by region, which helps people in quarantine to see how relevant COVID-19 still is (which it is still very much so)!
## 🚩 Challenges we ran into
We wanted a way for users to communicate with Pompy through words and not just tap gestures. We planned to use voice recognition in AssemblyAI to receive the main point of the user and create a response to the user, but found a challenge when dabbling in audio files with the AssemblyAI API in Swift. Instead, we overcame this challenge by using a Swift-native Speech library, namely AVFoundation and AVAudioPlayer, to get responses to the user!
## 🥇 Accomplishments that we're proud of
We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for while interacting with it, virtually traveling places, talking with it, and getting through quarantine happily and healthily.
## 📚 What we learned
For the last 36 hours, we learned a lot of new things from each other and how to collaborate to make a project.
## ⏳ What's next for ?
We can use Pompy to help diagnose the user’s conditions in the future; asking users questions about their symptoms and their inner thoughts which they would otherwise be uncomfortable sharing can be more easily shared with a character like Pompy. While our team has set out for Pompy to be used in a Quarantine situation, we envision many other relevant use cases where Pompy will be able to better support one's companionship in hard times for factors such as anxiety and loneliness. Furthermore, we envisage the Pompy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene, exercise tips and even lifestyle advice, Pompy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
\*\*we had to use separate github workspaces due to conflicts. | ## Inspiration
It's an infamous phenomena that riddles modern society - camera galleries filled with near-similar group and self portraits. With burst shutter shots and the general tendency to take multiple photos of a gathered group out of fear that one image may be cursed with a blink, misdirected gaze, or perhaps even an ill-conceived countenance, our team saw a potential tool to save people some time, and offer new ways of thinking about using their camera.
## What it does
This app can either take a series of image urls or a Facebook album's id, and parse the images with Azure's Face Cognitive service to determine the strength of the smile and general photo quality. The app then returns the same series of images, sorted from "best" to "least," in accordance to Microsoft's algorithms regarding blurriness, happiness, and size of smile.
## How we built it
We built the app on a NodeJS server and immediately began working on learning about how to prepare data for the Azure cognitive surfaces. This web server runs express to quickly deploy the app, and we used Postman repeatedly to troubleshoot API calls. Then, we hosted the web server on Google's cloud platform to deploy the dynamic site, and with that site we used Facebook's graph API to collect user images upon entering an album ID. The front end itself takes its design from Materialize.
## Challenges we ran into
One of the main sources of troubleshooting was working with very particular image urls. For Azure's cognitive services to take the image files, they must be urls of images already hosted on the internet. We spent a while thinking about how to overcome this, as Google Photos images were not reliably returning data from the Azure service, so instead we used Facebook albums. Additionally, we never really got to figure out which features are best correlated with picture quality, and instead arbitrarily chose blurriness and happiness as a stand-in for picture quality.
## Accomplishments that we're proud of
Getting the album to display user information was amazing, and connecting our pipes between our server infrastructure and Microsoft's cognitive service was extremely awarding. We were also proud of being allowed to bulk compare photos with Facebook's API.
## What we learned
How to handle tricky AJAX calls, and send more tricky header calls to retain information. We also learned about the variety of web hosting platforms in the area, and took our first foray into the world of Computer Vision!
## What's next for FotoFinder
Integration with Google Photos, customized ML models for image quality, and an open source tool for a project like this so other companies can simply use the idea with a public API. | winning |
## Inspiration
During my application to NWHacks, I was pretty stumped on what to answer for the question on what I was planning on building here. One of my team members suggested I think of something that you \_ can’t \_ do yet with software, and see if you can implement it. After some thinking, I realized that there weren’t any good budgeting apps that allowed you to take pictures of your receipts every time you bought something. BudgetBunny was our solution; a way to integrate receipts that you get after purchases with a budgeting app (with a cute bunny!)
## What it does
The app is like a simple personal budgeting tool. You can input your monthly budget on the status page. As you make purchases, you can scan your receipts to subtract from your monthly allowance. Every time you add a receipt, you can “feed” it to your bunny, which makes it happy. Careful though: as you spend more, your budget shrinks, as does you bunny’s health bar. By making sure that your bunny is kept constantly happy, we hope to incentivize users to save more, and spend less.
## How we built it
Originally, we planned to use Python to parse the output from our camera vision API. We learnt that, in fact, it’s much easier to code everything in Java. Therefore, the entire project was implemented entirely in Java.
## Challenges we ran into
Like any other software project (especially those written in 24 hours), this project did not come without a heck of a lot of issues. For starters, setting up Android Studio was challenging at the beginning, due to issues with Gradle not recognizing the build properly. Furthermore, the Google camera vision api we were using kept trying to read the text sideways, so we had to quickly implement a workaround to rotate the image in the backend.
## Accomplishments that we're proud of
Well, primarily we’re proud that we finished the product, and got the minimum features required to make a product. It wasn’t easy, but it was definitely worth it. We’re also really proud of our cute bunny!
## What we learned
Well, the first thing we learnt is to have our environment set up \_ before \_ the competition. It would also help if everybody on our team knew how to use git, as a lot of time was spent doing that as well. I think if we also had more experience with Android development, the whole process would’ve been more streamlined, and we could’ve had time to implement more features.
## What's next for BudgetBunny
We’re definitely thinking of working more on this project in the future (preferably after a good night’s rest). There were a lot of features that we really wanted implemented, but simply didn’t have the time to create. For instance, we wanted features to export reciept data, show the items in the receipt itself, and implement a fancier UI. If we can get enough traction, we may even plan to release this app on the Play Store! | # Budget Buddy
An Android mobile app built for Hack the 6ix. We designed this app to assist users with budgeting by using AI/OCR to analyze receipts. The user takes a picture of the receipt, and the image gets processed by Textract which will parse out the prices. The user can then categorize purchases, and Budget Buddy will provide a chart showcasing the user's expenditures.
## Tools Used:
* Android Studio
* AWS Textract
* AWS Amplify
* AWS DynamoDB
## Potential Long-term Add-ons:
* Integrate AWS Comprehend to leverage NLP for categorization
* Implement robust budget indicators (approaching targets/meeting goals) | ## Inspiration
The expense behavior of the user, especially in the age group of 15-29, is towards spending unreasonably amount in unnecessary stuff. So we want them to have a better financial life, and help them understand their expenses better, and guide them towards investing that money into stocks instead.
## What it does
It points out the unnecessary expenses of the user, and suggests what if you invest this in the stocks what amount of income you could gather around in time.
So, basically the app shows you two kinds of investment grounds:
1. what if you invested somewhere around 6 months back then what amount of money you could have earned now.
2. The app also shows what the most favorable companies to invest at the moment based on the Warren Buffet Model.
## How we built it
We basically had a python script that scrapes the web and analyzes the Stock market and suggests the user the most potential companies to invest based on the Warren Buffet model.
## Challenges we ran into
Initially the web scraping was hard, we tried multiple ways and different automation software to get the details, but some how we are not able to incorporate fully. So we had to write the web scrapper code completely by ourselves and set various parameters to short list the companies for the Investment.
## Accomplishments that we're proud of
We are able to come up with an good idea of helping people to have a financially better life.
We have learnt so many things on spot and somehow made them work for satisfactory results. but i think there is many more ways to make this effective.
## What we learned
We learnt firebase, also we learnt how to scrape data from a complex structural sites.
Since, we are just a team of three new members who just formed at the hackathon, we had to learn and co-operate with each other.
## What's next for Revenue Now
We can study our user and his behavior towards spending money, and have customized profiles that suits him and guides him for the best use of financial income and suggests the various saving patterns and investment patterns to make even the user comfortable. | losing |
# Randflix: a Netflix Randomizer
## Motivation
A handy chrome extension for those who have seen their favourite Netflix shows way too many times. When choosing an episode becomes arduous, just turn to Randflix to make the choice for you!
[Get the extension on the chrome web store!](https://chrome.google.com/webstore/detail/randflix/enjakkkpkgpcnjbmagjgkccljfimgbhh) | ## Inspiration
Access to course resources is often fragmented and lacks personalization, making it difficult for students to optimize their learning experiences. When students use Large Language Models (LLMs) for academic insights, they often encounter limitations due to LLMs’ inability to interpret various data formats—like lecture audio or photos of notes. Additionally, context often gets lost between sessions, leading to fragmented study experiences.
We created OpenContext to offer a comprehensive solution that enables students to manage, customize, and retain context across multiple learning resources.
## What it does
OpenContext is an all-in-one platform designed to provide students with personalized, contextually-aware resources using LLMs. It aims to make course data open source by allowing students to share their course materials.
### Features:
* Upload and Process Documents: Users can upload or record audio, PDF, and image files related to their classes.
* Chat Assistant: Users can chat with an assistant which will have the context of all the uploaded documents, and will be able to refer to course materials on any given questions.
* Real-Time Audio Transcription: Record lectures directly in the browser, and the audio is transcribed and processed in real time.
* Document Merging and Quiz Generation: Users can combine documents from different formats and generate quizzes that mimic Quizlet-style flashcards.
* Progress Tracking: After completing quizzes, users receive a detailed summary of their performance.
**Full User Flow**: The user faces a landing page where they are prompted with three options: Create a new chat, Upload documents, Generate quizzes. As the user navigates to Upload documents, they have the option to record their current lecture real time from their browser. Or, they can upload documents ranging from audio, pdf and image. We are using Tesseract for Optical Character Recognition on image files, OpenAI’s SpeechtoText API on audio files, and our own PDF parser for other class documents. The user can also record the lecture in real time which will be transcribed and processed in real time. After the transcription of a lecture or another class document is finished, it is displayed to the user. They will be able to create a new chat and ask our AI assistant anything related to their course materials. The assistant will have the full context of uploaded documents and will be able to answer with references to those documents. The user will also have the option to generate a quiz based on the transcription of the lecture that just got recorded. They will also be able to merge multiple class documents and generate a custom quiz out of all. The quizzes will have the format of quizlet flashcards, where a question is asked, 4 answers are provided as options and after an option is chosen, the website will prompt with either the chosen answer is correct or incorrect. The score for each question is calculated and at the end of the quiz a summary of the performance is written for the student user to track their progress.
## How We Built It
* Frontend: Built with React for a responsive and dynamic user interface.
* Backend: Developed with FastAPI, handling various tasks from file processing to vector database interactions.
* AI Integration: Utilized OpenAI's Whisper for real-time speech-to-text transcription and embedding functionalities.
* OCR: Tesseract is used for Optical Character Recognition on uploaded images, allowing us to convert handwritten or printed text into machine-readable text.
Infrastructure: Hosted with Defang for production-grade API management, alongside Cloudflare for data operations and performance optimization.
### Tech Stack:
* Figma: We used Figma to design our warm color palette and simplistic theme for pages and logo.
* Python: Python scripts are used ubiquitously in OpenContext whether it’s for our deployment scripts for Defang or pipelines to our Vector Databases, our API servers, OpenAI wrappers or other miscellaneous tasks, we utilize Python.
* React: React framework is used to build the entire components, pages, and routes on the frontend.
* Tesseract OCR: For converting images to text.
* FastAPI: We have multiple FastAPI apps that we use for multiple purposes. Having endpoints to respond to different types of file requests from the user, making connections to our Vector DB, and other scripting tasks are all handled by the FastAPI endpoints that we have built.
* OpenAI API: We are using multiple services of OpenAI such as the Whisper for ASR or Text Embeddings for later function calling and vector storing.
### Sponsor Products:
* Defang: We used Defang in order to test and host both of our API systems in a production environment. Here is the production PDF API: <https://alpnix-pdf-to-searchable--8080.prod1b.defang.dev/docs>
* Terraform: We used Terraform, a main.tf script, to test and validate our configuration for our deployment services such as our API hosting with Defang and nginx settings.
* Midnight: For open-source data sharing, Midnight provides us the perfect tool to encrypt the shared information. We created our own wallet as the host server and each user is able to create their own private wallets to share files securely.
* Cloudflare: We are using multiple services of Cloudflare…
+ Vectorize: In addition to using Pinecone, we have fully utilized Vectorize, Cloudflare’s Vector DB, at a high level.
+ Cloudflare Registrar: We are using Cloudflare’s domain registration to buy our domain.
+ Proxy Traffic: We are using Cloudflare’s proxy traffic service to handle requests in a secure and efficient manner.
+ Site Analytics: Cloudflare’s data analytics tool to help us analyze the traffic as the site is launched.
* Databricks: We have fully utilized Databricks Starter Application to familiarize ourselves with efficient open source data sharing feature of our product. After running some tests, we also decided to integrate LangChain in the future to enhance the context-aware nature of our system.
## Challenges we ran into
One significant challenge was efficiently extracting text from images. This required converting images to PDFs, running OCR to overlay text onto the original document, and accurately placing the text for quiz generation. Ensuring real-time transcription accuracy and managing the processing load on our servers were also challenging.
## Accomplishments that we're proud of
* Tool Mastery: In a short time, we learned and successfully implemented production environment tools like NGINX, Pinecone, and Terraform.
* API Integration: Seamlessly integrated OpenAI’s Whisper and Tesseract for multi-format document processing, enhancing the utility of LLMs for students.
* Quiz Generation Pipeline: Developed an efficient pipeline for custom quiz generation from multiple class resources.
## What we learned
* Infrastructure Management: Gained experience using Defang, Terraform, and Midnight to host and manage a robust data application.
* Prompt Engineering: Through David Malan’s session, we enhanced our ability to prompt engineer, configuring ChatGPT’s API to fulfill specific roles and restrictions effectively.
## What's next for Open Context
We aim to develop a secure information-sharing system within the platform, enabling students to share their study materials safely and privately with their peers. Additionally, we plan to introduce collaborative study sessions where students can work together on quizzes and share real-time notes. This could involve shared document editing and group quiz sessions to enhance the sense of open source. | ## Inspiration
Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year, due to nutrition and obesity-related diseases, such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track every thing they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a big mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS.
## What it does
macroS integrates seamlessly with the Google Assistant on your smartphone and let's you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating everyday conveniently and without hassle with the powerful built-in natural language processing model. They can view their account on a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve.
## How we built it
DialogFlow and the Google Action Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food eaten entry and simply a request for a nutritional breakdown. We deployed our functions written in node.js to the Firebase Cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to nutrionix that provides nlp for querying from a database of over 900k grocery and restaurant foods. A mongo database is to be used to store user accounts and pass data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/Javascript.
## Challenges we ran into
Learning how to use the different APIs and the Google Action Console to create intents, contexts, and fulfillment was challenging on it's own, but the challenges amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, actually finding the data we needed to make the queries to nutrionix were often nested deep within various JSON objects that were being thrown all over the place between the voice assistant and cloud functions. The team was finally able to find what they were looking for after spending a lot of time in the firebase logs.In addition, the entire team lacked any experience using Natural Language Processing and voice enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all.
## Accomplishments that we're proud of
We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, self-monitoring of your health and nutrition, much more convenient and even more accessible, we're confident that we can help large amounts of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds**, that would otherwise take upwards of 30 minutes of tedious google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering on a product in the short amount of time that we had with the levels of experience we came into this hackathon with.
## What we learned
We made and deployed the cloud functions that integrated with our Google Action Console and trained the nlp model to differentiate between a food log and nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation to the power of voice enabled technologies. Team members who were interested in honing their front end skills also got the opportunity to do that by working on the actual web application. This was also most team members first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities.
## What's next for macroS
We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable and once a database is complete, it can be made so valuable to really anybody who would like to monitor their health and nutrition more closely. Being able to, as a user, identify my own age, gender, weight, height, and possible dietary diseases could help us as macroS give users suggestions on what their goals should be, and in addition, we could build custom queries for certain profiles of individuals; for example, if a diabetic person asks macroS if they can eat a chocolate bar for lunch, macroS would tell them no because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this! | partial |
## Inspiration
Our inspiration for building this app came from our personal experiences as library users. We realized that it can be frustrating to walk to the library only to find out that it is crowded and difficult to find a quiet spot to study. Some of our team members prefer to study in the same spot, so the idea of being able to check the occupancy levels of the library before leaving home was appealing. We wanted to create a tool that would make it easier for library users to plan their visits and find an available seat quickly, without having to walk around the library searching for an open spot. That's why we decided to develop an app that uses NFC tags to track the occupancy levels of the library in real-time, providing library users with the information they need to make informed decisions about when to visit the library and where to study.
## What it does
Our software is a library occupancy tracker that uses NFC (Near Field Communication) technology to provide real-time updates on the busyness of the library. The software is comprised of two main components: NFC tags placed throughout the library and a smartphone with a built in NFC scanner to scan these tags and update the live occupancy.
The NFC tags are placed in every seat throughout the library. These tags can be scanned using the your smartphone, which then displays the occupancy levels of that specific area in real-time. The app also includes a map feature that shows the location of available seats and study spaces.
The app also allows library staff to monitor the library's resources more efficiently.
In summary, our software aims to provide students with real-time information on the busyness of the library, allowing them to plan their visit and make the most of their time. It also helps library staff to manage resources and provide a better service to students.
## How we built it
We used XAMPP to host a local Apache server and store an online MySQL database on this server to store live “occupancy” results. The website we made allows users to modify seat entries from a live database. In order to make the local server public, we utilized ngrok to create a secure tunnel to our localhost and assigned it a specific domain name. On the front-end, we used HTML, CSS, and Javascript to create the user interface. We also used PHP to make requests to the online MySQL database to retrieve data and update the specific occupancy in the designated location on the map. To change the "occupied" entry in the database, we used data munging skills to change the request after the "?" in the URL. Each NFC tag has a specific URL encoding, which corresponds to a specific seat in the library. Each time an HTTP request is made with this URL, this toggles the “occupancy” result from on/off and vice-versa. Integrating these technologies allowed us to create a dynamic and functional website that can retrieve data from the database in real-time.
## Challenges we ran into
One of the main challenges we encountered while developing our project was hosting a server. Initially, we attempted to use Google Cloud, but we were unable to get it to work properly. We then decided to try forwarding our local host to ngrok, which allowed us to redirect HTTP requests to our local host. This proved to be a successful solution and allowed us to continue with our project development.
## Accomplishments that we're proud of
One of the accomplishments we are most proud of is how well our team worked together to develop our app. We effectively split up tasks among team members and then reconvened to ensure everything was working seamlessly together. Additionally, we are proud of how much we have learned through this process. None of us had any prior web development experience on the front or back end, but through this project, we have acquired a solid foundation of knowledge. We are leaving this project with a newfound appreciation for the hard work and dedication that goes into creating a functional and user-friendly app.
## What we learned
What we learned from this project is the potential of using NFC tags to program live updates on a website. None of us had any previous experience in website development, but through this project, we were able to learn and apply new skills. We discovered the complexity and effort required to build a project like this, but also the power of teamwork and passion for learning. We all came into the project with limited knowledge, but through the process of collaboration and problem-solving, we were able to achieve our goals and create a functional software. It was a valuable learning experience that has allowed us to grow both technically and as a team.
## What's next for OpenSeat
What's next for OpenSeat is to continue scaling our solution to accurately track every seat in every library at Queens University and all levels. We hope to pitch this idea to the university and see if they would be open to subsidizing the project as it is relatively inexpensive to develop, and NFC tags are inexpensive. We plan to expand the number of NFC tags placed throughout the library and improving the accuracy of the occupancy levels reported by the tags. Additionally, we hope to integrate the software with existing library systems such as the library's website or catalog. This would allow students to access the occupancy information directly from the library's website.
Furthermore, we plan to add more features to our software, such as past statistics on library busyness. This would allow library staff to better understand and manage the usage of the library's resources. We also would like to explore the possibility of pitching this solution to other universities and even other areas such as hospitals, to track and manage the wait times and available spaces. We believe that this software has the potential to greatly benefit any organization that needs to manage the usage of its resources. We hope that by expanding and enhancing our solution, we can make it more widely available to other organizations. | ## Inspiration
During exam season, people get stressed trying to find open study locations.
## What it does
Directs users to nearby study locations, showing different attributes about these spots, such as their level of crowdedness, if food is allowed, etc.
## How we built it
We utilized HTML and MapBox's Maps API.
## Challenges we ran into
We were unable to connect the backend server with the frontend interface or add all of the features we wanted to.
## Accomplishments that we're proud of
We're proud of the beautiful UI and design, especially the integration of the map. We are proud of learning so many aspects of web development and sparking our individual interests about these technologies.
## What we learned
We began our exploration of HTML, CSS, Javascript, Node.js, and more through this project. Although we didn't fully complete our dream application, all of the pieces are there for us to pick up in the future (which we plan to do so). We started self-learning backend development.
## What's next for Alcove
Possibly, in the future, our team will get back together and further study web design principles. We look forward to improving our development skills. One day, we would love to implement real time synchronization of location data and add more study locations (not just libraries!) with greater detail. | ## Inspiration
We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you.
Also, it would also enable greater interconnectivity between people at an event without needing to subscribe to anything.
## What it does
Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view it two ways.
**1)** On a map, with markers indicating the location of the post that can be tapped on for more detail.
**2)** As a live feed, with the details of all the posts that are in your current location.
The posts don't last long, however, and only posts within a certain radius are visible to you.
## How we built it
Individual pages were built using HTML, CSS and JS, which would then interact with a server built using Node.js and Express.js. The database, which used cockroachDB, was hosted with Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to be runnable on the phone. Heroku allowed us to upload the Node.js portion on the cloud.
## Challenges we ran into
Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once. | losing |
## Inspiration
Each of us uses speed reading tools on the daily for online news articles and such, so it's always been a pain transitioning to printed media. We wanted a tool that would enable us to read as efficiently as we do online, in real life.
## What it does
This program recognizes words on a physical page using nothing but a phone's camera. It displays it back to you rapidly, word by word, in order to prevent you from reading the word out loud in your head. There's a lot of research behind this speed reading technique, and it can result in crazy fast comprehension speeds.
## How I built it
It's all client side, including "uploading" and all of the algorithms we used to segment words and process the image. All in vanilla Javascript, using no external libraries.
## Challenges I ran into
We wanted to submit this to Datto's 4K, and keeping the project small turned out to be the hardest part of this project. Coming up with algorithms that were short and space efficient (no, not memory, but rather characters of source code!) was an interesting creative challenge. It was also hard figuring out how to iterate the words in the proper order -- left to right, top to bottom. In the end, we managed to figure it out.
## Accomplishments that I'm proud of
The algorithms we implemented worked! We didn't use any external resources to figure out how to solve the problems we needed to. Building this project under the space constraints we were under was a tremendous learning experience and a fascinating puzzle. We wanted a challenge, and we conquered it.
## What I learned
We taught ourselves a lot about the intricacies of processing textual images. We saw firsthand how frustrating it can be to teach computers to perform the simplest of tasks. Once we figured out how to locate words, it literally took us six plus hours to figure out how to list them in order. Real world images are noisy and bendy and not at all nice to work with... but we did it.
## What's next for Reedr
We had plenty of ideas that we didn't have the time or space to implement: ways to improve the speed reading process, other algorithms to segment words, etc. Also, there are a bunch of general usability improvements that would make this much better for users. | ## Inspiration
Today, anything can be learned on the internet with just a few clicks. Information is accessible anywhere and everywhere- one great resource being Youtube videos. However accessibility doesn't mean that our busy lives don't get in the way of our quest for learning.
TLDR: Some videos are too long, and so we didn't watch them.
## What it does
TLDW - Too Long; Didn't Watch is a simple and convenient web application that turns Youtube and user-uploaded videos into condensed notes categorized by definition, core concept, example and points. It saves you time by turning long-form educational content into organized and digestible text so you can learn smarter, not harder.
## How we built it
First, our program either takes in a youtube link and converts it into an MP3 file or prompts the user to upload their own MP3 file. Next, the audio file is transcribed with Assembly AI's transcription API. The text transcription is then fed into Co:here's Generate, then Classify, then Generate again to summarize the text, organize by type of point (main concept, point, example, definition), and extract key terms. The processed notes are then displayed on the website and coded onto a PDF file downloadable by user. The Python backend built with Django is connected to a ReactJS frontend for an optimal user experience.
## Challenges we ran into
Manipulating Co:here's NLP APIs to generate good responses was certainly our biggest challenge. With a lot of experimentation *(and exploration)* and finding patterns in our countless test runs, we were able to develop an effective note generator. We also had trouble integrating the many parts as it was our first time working with so many different APIs, languages, and frameworks.
## Accomplishments that we're proud of
Our greatest accomplishment and challenge. The TLDW team is proud of the smooth integration of the different APIs, languages and frameworks that ultimately permitted us to run our MP3 file through many different processes and coding languages Javascript and Python to our final PDF product.
## What we learned
Being the 1st or 2nd Hackathon of our First-year University student team, the TLDW team learned a fortune of technical knowledge, and what it means to work in a team. While every member tackled an unfamiliar API, language or framework, we also learned the importance of communication. Helping your team members understand your own work is how the bigger picture of TLDW comes to fruition.
## What's next for TLDW - Too Long; Didn't Watch
Currently TLDW generates a useful PDF of condensed notes in the same order as the video. For future growth, TLDW hopes to grow to become a platform that provides students with more tools to work smarter, not harder. Providing a flashcard option to test the user on generated definitions, and ultimately using the Co-here Api to also read out questions based on generated provided examples and points. | ## Inspiration
What inspired us was we wanted to make an innovative solution which can have a big impact on people's lives. Most accessibility devices for the visually impaired are text to speech based which is not ideal for people who may be both visually and auditorily impaired (such as the elderly). To put yourself in someone else's shoes is important, and we feel that if we can give the visually impaired a helping hand, it would be an honor.
## What it does
The proof of concept we built is separated in two components. The first is an image processing solution which uses OpenCV and Tesseract to act as an OCR by having an image input and creating a text output. This text would then be used as an input to the second part, which is a working 2 by 3 that converts any text into a braille output, and then vibrate specific servo motors to represent the braille, with a half second delay between letters. The outputs were then modified for servo motors which provide tactile feedback.
## How we built it
We built this project using an Arduino Uno, six LEDs, six servo motors, and a python file that does the image processing using OpenCV and Tesseract.
## Challenges we ran into
Besides syntax errors, on the LED side of things there were challenges in converting the text to braille. Once that was overcome, and after some simple troubleshooting for menial errors, like type comparisons, this part of the project was completed. In terms of the image processing, getting the algorithm to properly process the text was the main challenge.
## Accomplishments that we're proud of
We are proud of having completed a proof of concept, which we have isolated in two components. Consolidating these two parts is only a matter of more simple work, but these two working components are the fundamental core of the project we consider it be a start of something revolutionary.
## What we learned
We learned to iterate quickly and implement lateral thinking. Instead of being stuck in a small paradigm of thought, we learned to be more creative and find alternative solutions that we might have not initially considered.
## What's next for Helping Hand
* Arrange everything in one android app, so the product is cable of mobile use.
* Develop neural network so that it will throw out false text recognitions (usually look like a few characters without any meaning).
* Provide API that will be able to connect our glove to other apps, where the user for example may read messages.
* Consolidate the completed project components, which is to implement Bluetooth communication between a laptop processing the images, using OpenCV & Tesseract, and the Arduino Uno which actuates the servos.
* Furthermore, we must design the actual glove product, implement wire management, an armband holder for the uno with a battery pack, and position the servos. | partial |
## Inspiration
“Emergency” + “Need” = “EmergeNeed”
Imagine a pleasant, warm Autumn evening: you are all set to have Thanksgiving dinner with your family. You are having a lovely time, but suddenly you notice a batch of red welts, swollen lips, and an itchy throat. Worried and scared, you rush to the hospital, only to realize that you will have to wait another 3 hours to see a doctor because of the crowd.
Now imagine that you could quickly talk to a medical professional who could recommend going to urgent care instead to treat your allergic reaction. Or, if you were recommended to seek emergency hospital care, you could see the estimated wait times at different hospitals before you left. Such a system would allow you to get advice from a medical professional quickly, save time waiting for treatment, and decrease your risk of COVID exposure by allowing you to avoid large crowds.
## What it does
Our project aims to address three main areas of healthcare improvement. First, there is no easy way for an individual to know how crowded a hospital will be at a given time. Especially in the current pandemic environment, users would benefit from information such as **crowd level and estimated travel times to different hospitals** near them. Knowing this information would help them avoid unnecessary crowds and the risk of COVID-19 exposure, and receive faster medical attention and an enhanced treatment experience. Additionally, such a system allows hospital staff to operate more effectively and begin triaging earlier, since they will receive a heads-up about incoming (non-ambulance) patients before they arrive.
Second, online information is often unreliable, and some demographics may not have access to a primary care provider to ask for advice during an emergency. Our interface allows users to easily access **on-call tele-network services specific to their symptoms** and therefore receive advice about options such as monitoring at home, visiting urgent care, or going to an emergency hospital.
Third, not knowing what to expect contributes to the elevated stress levels surrounding an emergency. An app service that encourages users to **actively engage in health monitoring** and provides **tips about what to expect** and how to prepare in an emergency will make users better equipped to handle these situations when they occur. Our dashboard offers tools such as a check-in journal where users can log their mood, note gratitudes, and vent about frustrations. The entries are sent for sentiment analysis to help monitor mental states and offer support. Additionally, the dashboard allows providers to assign goals to patients and monitor progress (for example, taking antibiotics every day for 1 week or not smoking). Furthermore, the user can track upcoming medical appointments and access key medical data quickly (COVID-19 vaccination card, immunization forms, health insurance).
## How we built it
Our application consists of a main front end and a backend.
The front end was built using the Bubble.io interface. Within the Bubble service, we set up a database to store user profile information, create emergency events, and accumulate user inputs and goals. The Bubble Design tab and connections to various APIs allowed us to develop different pages to represent the functionalities and tools we needed. For example, we had a user login page, a voice recording and symptom input page, an emergency event trigger with a dynamic map page, and a dashboard with journaling and calendar schedule pages. The Bubble Workflow tab allowed us to easily connect these pages and communicate information between the front and back end.
The back end was built using Python Flask. We also used Dialogflow to map a user's symptoms to the specialty of doctor they should visit. The backend server handles calls to the InterSystems API and processes the data coming from the front end. We created synthetic data to test on.
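To make the symptom-to-specialty step concrete, here is a minimal sketch of how a Python backend could pass the user's symptom text to a Dialogflow agent and read back the matched intent. The project ID, the session handling, and the idea of naming intents after medical specialties are placeholder assumptions for illustration, not our exact agent setup.

```python
# Hypothetical sketch: route free-text symptoms through Dialogflow to pick a specialty.
from google.cloud import dialogflow

PROJECT_ID = "emergeneed-demo"  # placeholder project ID, not our real agent


def symptoms_to_specialty(session_id, symptom_text):
    """Send the user's symptom description to Dialogflow and return the matched intent,
    e.g. "Dermatology" or "Allergy and Immunology" (assumes intents are named after specialties)."""
    client = dialogflow.SessionsClient()
    session = client.session_path(PROJECT_ID, session_id)

    text_input = dialogflow.TextInput(text=symptom_text, language_code="en-US")
    query_input = dialogflow.QueryInput(text=text_input)

    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.intent.display_name


if __name__ == "__main__":
    print(symptoms_to_specialty("demo-session", "itchy red welts and swollen lips"))
```

In the deployed app, a Flask route would wrap a function like this and return the specialty as JSON for the Bubble workflow to display.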
## Challenges we ran into
This project was a great learning experience, and we had a lot of fun (and frustration) working through many challenges. First, we needed to spend time coming up with a project idea and then refining its scope. To do this, we talked with various sponsors and mentors to get feedback on our proposal and learn about the industry and the actual needs of patients. Once we had a good roadmap for the features we wanted, we had to find data that we could use. Currently, hospitals are not required to provide any information about estimated wait times, so we had to find an alternative way to assess this. We decided to address this by developing our own heuristic that considers hospital distance, number of beds, and a historic traffic estimate. This is a core functionality of our project, but also the most difficult, and we are still working on optimizing this metric. Another significant challenge we ran into was learning how to use the Bubble service, specifically setting up the Google Maps functionality we wanted and connecting the back end with the front end through Bubble's API. We sought mentor help, and are still trying to debug this step. Another ongoing challenge is implementing the call-a-doc feature with the Twilio API. Finally, our team consists of members from drastically different time zones, so we needed to be proactive about scheduling meetings and communicating progress and tasks.
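To give a concrete sense of the heuristic mentioned above, here is a toy version of the kind of scoring we have in mind; the weights, the crowding input, and the hospital entries are illustrative placeholders rather than the tuned values we are still working on.

```python
# Illustrative only: rank nearby hospitals by a rough "expected wait" score.
def wait_score(travel_minutes, num_beds, historic_crowding):
    """
    travel_minutes    - Google Maps travel time estimate to the hospital
    num_beds          - hospital capacity (more beds -> shorter expected queue)
    historic_crowding - 0..1 estimate of how busy the ER usually is at this hour
    Lower score = better choice. The weights below are placeholders, not tuned values.
    """
    capacity_penalty = 100.0 / max(num_beds, 1)   # small hospitals tend to back up faster
    crowd_penalty = 60.0 * historic_crowding      # scale crowding roughly into "minutes"
    return travel_minutes + 0.5 * crowd_penalty + capacity_penalty


hospitals = [
    {"name": "General Hospital", "travel": 12, "beds": 450, "crowding": 0.8},
    {"name": "Community Clinic ER", "travel": 25, "beds": 120, "crowding": 0.3},
]

ranked = sorted(hospitals, key=lambda h: wait_score(h["travel"], h["beds"], h["crowding"]))
for h in ranked:
    print(h["name"], round(wait_score(h["travel"], h["beds"], h["crowding"]), 1))
```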
## Accomplishments that we're proud of
We are proud of our idea - indeed the amount of passion put into developing this toolkit to solve a meaningful problem is something very special (Thank you TreeHacks!).
We are proud of the technical complexity we accomplished in this short time frame. Our project idea seemed very complex, with lots of features we wanted to add.
We are also proud of collaborating with teammates from different parts of the world and of integrating several different APIs (Bubble, Google Maps, InterSystems).
## What we learned
We learned a lot about integrating multiple frameworks. As newbies in web development, building an impactful application is something we are proud of. Most importantly, the research and problem identification were the most exciting parts of the whole project. We got to know the possible shortcomings of our present-day healthcare systems and how we can improve them. On the technical side, we learned Bubble, web scraping, NLP, Dialogflow, Flask, and how to integrate with the InterSystems API.
## What's next for EmergeNeed
We could not fully integrate our back end with our front-end web application built on Bubble, as we faced some unexpected technical difficulties at the end. The calling feature also needs to be implemented fully (currently it just records user audio). We aim to make EmergeNeed a full-fledged, customer-friendly application, and we plan to implement our whole algorithm (ranging from finding hospitals with the proper machines and less commute time to integrating real-time speech-to-text recognition) for large datasets.
What would you do with 22 hours of your time? I could explore all of Ottawa - from sunrise at parliament, to lunch at Shawarma palace, and end the night at our favourite pub, Heart and Crown!
But imagine you hurt your ankle and go to the ER. You're gonna spend that entire 22 hours in the waiting room, before you even get to see a doctor for this. This is a critical problem in our health care system.
We're first-year medical students, and we've seen how much patients struggle to get the care they need. From overwhelming ER wait times, to travelling over 2 hours to talk to a family doctor (not to mention that only 1 in 5 Canadians have a family doctor), Canada's health care system is currently in a crisis. Using our domain knowledge, we wanted to take a step towards solving this problem.
## What is PocketDoc?
PocketDoc is your own personal physician available on demand. You can talk to it like you would to any other person, explaining what you're feeling, and PocketDoc will tell you what you may be experiencing at the moment. But can't WebMD do that? No! Our app actually uses your personalized portfolio - consisting of user-inputted vaccinations, current medications, allergies, and more - so PocketDoc can use that information to figure out the best diagnosis for your body. It tells you what your next steps are: go to your pharmacist, who in Ontario can now prescribe the appropriate medication, or maybe use your puffer for an acute allergic reaction, or maybe you do need to go to the ER. But wait, it doesn't stop there! PocketDoc uses your location to find the closest walk-in clinics, pharmacies, and hospitals - and it's all in one app!
## How we built it
We've all dealt with the healthcare system in Canada, and with all the pros it offers, there are also many cons. From the perspective of a healthcare provider, we recognized that a more efficient solution is feasible. We used a dataset from Kaggle that provided long text data on symptoms and the associated diagnoses. After trying various ML systems for classification, we decided to use Cohere to implement a natural language processing model that classifies any user input into one of 21 possible diagnoses. We used Xcode to implement login and Auth0 to provide an authenticated login experience, ensuring users feel safe inputting and storing their data in the app. We fully prototyped our app in Figma to show the range of functionalities we wish to implement beyond this hackathon.
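A rough sketch of that classification step is shown below; the example labels are invented, and exact Cohere SDK imports and response shapes differ between versions, so treat the call pattern as an assumption rather than the project's working code.

```python
# Sketch of symptom classification with Cohere's Classify endpoint.
# Labels/examples are invented and SDK call shapes vary by version; treat as an assumption.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

examples = [
    cohere.ClassifyExample(text="sharp chest pain and shortness of breath", label="cardiac"),
    cohere.ClassifyExample(text="itchy rash and swelling after eating peanuts", label="allergic reaction"),
    cohere.ClassifyExample(text="swollen ankle after twisting it", label="sprain"),
    # ...more labeled examples, covering all 21 diagnosis classes
]

def classify_symptoms(user_text):
    response = co.classify(inputs=[user_text], examples=examples)
    return response.classifications[0].prediction  # most likely diagnosis label

print(classify_symptoms("my ankle is swollen and hurts when I walk"))
```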
## Challenges we ran into
We faced challenges at every step of the design and implementation process. As computer science beginners, we took on an ML-based classification task that required a lot of new learning. The first step was the most difficult: choosing a dataset. There were many ML systems we were considering, such as TensorFlow, PyTorch, Keras, and scikit-learn, and each one worked best with a certain type of dataset. The dataset we chose also had to give us verified diagnoses for a set of symptoms, and we narrowed it down to 3 different sets. Choosing one of these sets took up a lot of time and effort.
The next challenge we faced was due to cross-platform incompatibility: Xcode was used for app development, but the ML algorithm was built in Python 3. A huge struggle was getting this model to run on the app directly. We found our only solution was to build a Python API that could be accessed from Xcode, a task that we had no time to learn and implement.
Hardware was also a bottleneck for our productivity. With limited storage and computing power on our devices, we were compelled to use smaller datasets and simpler algorithms. This used up lots of time and resources as well.
The final and most important challenge was the massive learning curve under the short time constraints. For the majority of our team, this was our first hackathon and there is a lot to learn about the hackathon expectations/requirements while also learning new skills on the fly. The lack of prior knowledge made it difficult for us to manage resources efficiently throughout the 36 hours. This brought on more unexpected challenges throughout the entire process.
## Accomplishments that we're proud of
As medical students, we're proud to have been introduced to the field of computer science and the intersection between computer science and medicine as this will help us become well-versed and equipped physicians.
**Project Planning and Ideation**: Our team spent the initial hours of the hackathon discussing various ideas using the creative design process and finally settled on the healthcare app concept. Together, we outlined the features and functionalities the app would offer, considering user experience and technical feasibility.
**Learning and Skill Development**: Since this was our first time coding, we embraced the opportunity to learn new programming languages and technologies. We used our time carefully to learn from tutorials, online resources, and guidance from hackathon mentors.
**Prototype Development**: Despite the time constraints, we worked hard to develop a functional prototype of the app. We divided and conquered -- some team members focused on front-end development including designing the user interface and implementing navigation elements while others tackled back-end tasks like cleaning up the dataset and building our machine learning model.
**Iterative Development and Feedback**: We worked tirelessly on the prototype based on feedback from mentors and participants. We remained open to suggestions for improvement to enhance the app's functionality.
**Presentation Preparation**: As the deadline rapidly approached, we prepared a compelling presentation to showcase our project to the judges using the skills we learned from the public speaking workshop with Ivan Wanis Ruiz.
**Final Demo and Pitch**: In the final moments of the hackathon, we confidently presented our prototype to the judges and fellow participants. We demonstrated the key functionalities of the app, emphasizing its user-friendly design and its potential to improve the lives of individuals managing chronic illnesses.
**Reflection**: The hackathon experience itself has been incredibly rewarding. We gained valuable coding skills, forged strong bonds with our teammates, and contributed to a meaningful project with real-world applications.
Specific tasks:
1. Selected a high quality medical-based dataset that was representative of the Canadian patient population to ensure generalizability
2. Learned Cohere AI through YouTube tutorials
3. Learned Figma through trial and error and YouTube tutorials
4. Independently used Xcode
5. Learned a variety of ML systems - TensorFlow, PyTorch, Keras, scikit-learn
6. Acquired skills in public speaking to captivate an audience with our unique solution to enhance individual quality of life, improve population health, and streamline the use of scarce healthcare resources.
## What we learned
1. Technical skills in coding, problem-solving, and utilizing development tools.
2. Effective time management under tight deadlines.
3. Improved communication and collaboration within a team setting.
4. Creative thinking and innovation in problem-solving.
5. Presentation skills for effectively showcasing our project.
6. Resilience and adaptability in overcoming challenges.
7. Ethical considerations in technology, considering the broader implications of our solutions on society and individuals.
8. Experimental learning by fearlessly trying new approaches and learning from both successes and failures.
Most importantly, we developed a passion for computer science and we’re incredibly eager to build off our skills through future independent projects, hackathons, and internships. Now more than ever, with rapid advancements in technology and the growing complexity of healthcare systems, as future physicians and researchers we must embrace computational tools and techniques to enhance patient care and optimize clinical outcomes. This could be through Electronic Health Records (EHR) management, data analysis and interpretation, diagnosing complex medical conditions using machine learning algorithms, and creating clinician decision support systems with evidence-based recommendations to improve patient care.
## What's next for PocketDoc
Main goal: connecting our back end with our front end through an API
NEXT STEPS
**Enhancing Accuracy and Reliability**: by integrating more comprehensive medical databases, and refining the diagnostic process based on user feedback and real-world data.
**Expanding Medical Conditions**: to include a wider range of specialties and rare diseases.
**Integrating Telemedicine**: to facilitate seamless connections between users and healthcare providers. This involves implementing features such as real-time video consultations, secure messaging, and virtual follow-up appointments.
**Personalizing Health Recommendations**: along with preventive care advice based on users' medical history, lifestyle factors, and health goals to empower users to take control of their health and prevent health issues before they arise. This can decrease morbidity and mortality.
**Health Monitoring and Tracking**: this would enable users to monitor their health metrics, track progress towards health goals, and receive actionable insights to improve their well-being.
**Global Expansion and Localization**: having PocketDoc available to new regions and markets along with tailoring the app to different languages, cultural norms, and healthcare systems.
**Partnerships and Collaborations**: with healthcare organizations, insurers, pharmaceutical companies, and other stakeholders to enhance the app's capabilities and promote its adoption. | ## Inspiration
We looked at lots of popular memes recently, and one that stood out was the recent price crash of the cryptocurrency market, which inspired us to make a crypto game.
## What it does
<https://youtu.be/hhcNKm9l5Hc>
## How we built it
We built the bot using Python 3 and the discord.py library. The bot has a script for each command, all of which are loaded by the main startup script.
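The one-script-per-command structure is commonly done with discord.py extensions; a minimal sketch is shown below. The file layout and command names are assumptions, and the code is written against discord.py 2.x, where each extension file registers its commands via a `commands.Cog` and an async `setup(bot)` function.

```python
# main.py: minimal discord.py 2.x startup script; the cogs/ file layout is an assumption.
import discord
from discord.ext import commands

INITIAL_EXTENSIONS = ["cogs.balance", "cogs.trade", "cogs.leaderboard"]

class CryptoTycoonBot(commands.Bot):
    async def setup_hook(self):
        # Each extension module defines `async def setup(bot)` that registers its commands.
        for ext in INITIAL_EXTENSIONS:
            await self.load_extension(ext)

intents = discord.Intents.default()
intents.message_content = True  # needed for prefix commands that read message content
bot = CryptoTycoonBot(command_prefix="!", intents=intents)
bot.run("YOUR_BOT_TOKEN")  # placeholder token
```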
## Challenges we ran into
We ran into issues with SQLAlchemy, which forced us to write queries in its own format instead of standard SQL. We also had challenges with the leaderboard system, which we did not have enough time to fix.
## Accomplishments that we're proud of
We're proud of our ability to work together in sync, and the features we managed to develop in the short timespan we had while working in different timezones.
## What we learned
We learned a lot about using cloud hosting providers for hosting databases, the internal workings of Discord bots, and lots about working with SQL in general.
## What's next for Crypto Tycoon Bot
We hope to expand the amount of features and polish the user experience. | partial |
## Inspiration
Our game stems from the current global pandemic we are grappling with and the importance of getting vaccinated. As many of our loved ones are getting sick, we believe it is important to stress the effectiveness of vaccines and staying protected from Covid in a fun and engaging game.
## What it does
An avatar runs through a school terrain while trying to avoid obstacles and falling Covid viruses. The player wins the game by collecting vaccines and accumulating points, successfully dodging Covid, and delivering the vaccines to the hospital.
Try out our game by following the link to github!
## How we built it
After brainstorming our game, we split the game components into 4 parts for each team member to work on. Emily created the educational terrain using various assets, Matt created the character and its movements, Veronica created the falling Covid virus spikes, and Ivy created the vaccines and point counter. After each of the components were made, we brought it all together, added music, and our game was completed.
## Challenges we ran into
As all our team members had never used Unity before, there was a big learning curve and we faced some difficulties while navigating the new platform.
As every team member worked on a different scene on our Unity project, we faced some tricky merge conflicts at the end when we were bringing our project together.
## Accomplishments that we're proud of
We're proud of creating a fun and educational game that teaches the importance of getting vaccinated and avoiding Covid.
## What we learned
For this project, it was all our first time using the Unity platform to create a game. We learned a lot about programming in C# and the game development process. Additionally, we learned a lot about git management through debugging and resolving merge conflicts.
## What's next for CovidRun
We especially want to educate youth on the importance of vaccination, so we plan on introducing the game into K-12 schools and releasing it on Steam. We would like to add more levels and potentially an infinite, procedurally generated level. | ## Inspiration
Making learning fun for children is harder than ever. Mobile Phones have desensitized them to videos and simple app games that intend to teach a concept.
We wanted to use Projection Mapping and Computer Vision to create an extremely engaging game that utilizes both the physical world, and the virtual. This basic game intends to prep them for natural disasters through an engaging manner.
We think a slightly more developed version would be effective in engaging class participation in places like school, or even museums and exhibitions, where projection-mapping tech is widely used.
## What it does
The camera scans for markers in the camera image, and then uses the markers' position and rotation to create shapes on the canvas. This canvas then undergoes an affine transformation and is output by the projector as if it were an overlay on top of any object situated next to the markers. This means that moving the markers results in these shapes following the markers' position.
## How the game works
When the game starts, Melvin the Martian needs to prepare for an earthquake. In order to do so, you need to build him a path to his first aid kit with your blocks (which you can physically move around, as they are attached to markers). After he gets his first aid kit, you need to build him a table to hide under before the earthquake approaches (again, using any physical objects attached to markers). After he hides, You Win!
## How I built it
I began by trying to identify the markers - for which there was an already-implemented library that required extensive tuning to get working right. I then made the calibration process, which took three points from the initial, untransformed camera image and the actual locations of these three points on the projector screen. This automatically created a transformation matrix that I then applied to every graphic I rendered (e.g. the physical blocks). After this, I made the game, and used the positions of the markers to determine if certain events were satisfied, which decided whether the game would progress or wait until it received the correct input.
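A minimal sketch of that three-point calibration using OpenCV's affine helpers is shown below; the point coordinates and projector resolution are made up for illustration.

```python
# Three-point calibration: map camera coordinates onto the projector canvas.
import cv2
import numpy as np

# Three reference points picked in the raw camera image (made-up values)...
camera_pts = np.float32([[112, 84], [508, 96], [131, 392]])
# ...and where those same points should land on the projector screen.
projector_pts = np.float32([[0, 0], [1280, 0], [0, 720]])

M = cv2.getAffineTransform(camera_pts, projector_pts)  # 2x3 transformation matrix

def to_projector(canvas):
    """Warp a canvas drawn in camera coordinates into projector coordinates."""
    return cv2.warpAffine(canvas, M, (1280, 720))

def project_point(x, y):
    """Transform a single marker position, e.g. to test whether a game event is satisfied."""
    px, py = M @ np.array([x, y, 1.0])
    return px, py
```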
## Challenges I ran into
It was very difficult to transform the camera's perspective (which was at a different frame of reference from the projector's) to the projector's perspective. Every camera image had undergone some varying scale, rotation and translation, which required me to create a calibration program that ran at the start of the program's launch.
## Accomplishments that I'm proud of
Instead of relying wholly on any library, I tried my best to directly manipulate the NumPy matrices in order to achieve the transformation effects referred to previously. I'm also happy that I was able to greatly speed up camera-projector frame calibration, which began taking around 5 minutes and now takes about 15-20 seconds.
## What I learned
I learnt a great deal about affine transformations and how to decompose a transformation matrix into its scale, rotation and translation values. I also learnt the drawbacks of using more precise markers (e.g. AprilTags or ArUco tags) as opposed to something much simpler, like an HSV color & shape detector.
## What's next for Earthquake Education With Projection Mapping and CV
I want to automate the calibration process, so it requires no user input (which is technically possible, but is prone to error and requires knowledge about the camera being used). I also want to get rid of the ARUCO tags entirely, and instead use the edges of physical objects to somehow manipulate the virtual world. | ## Inspiration
Tax Simulator 2019 takes inspiration from a variety of different games, such as the multitude of simulator games that have gained popularity in recent years, the board game Life, and the video game Pokemon.
## What it does
Tax Simulator is a video game designed to introduce students to taxes and benefits.
## How we built it
Tax Simulator was built in Unity using C#.
## Challenges we ran into
The biggest challenge that we had to overcome was time. Creating the tax calculation system, designing and building the game's levels, implementing the narrative text elements, and debugging every single area of the program were all tedious and demanding tasks, and as a result, there are several features of the game that have not yet been fully implemented, such as the home-purchasing system.
Learning how to use the Unity game engine also proved to be a challenge as not all of us had past experience with the software, so picking up the skills to implement our ideas into our creation and develop a fleshed-out product was an essential yet difficult task.
## Accomplishments that we're proud of
Although simple, Tax Simulator incorporates concepts such as common tax deductions and two savings vehicles in a fun and interactive game. The game makes use of a charming visual aesthetic, simple mechanics, and an engaging narrative that makes it fun to play through, and we're very proud of our ability to portray learning and education in an appealing way.
## What we learned
We learned that although it is tempting to try and incorporate as many features as possible in our project, a simple game that is easy to understand and fun to play will keep players engaged better than a game with many complex features and options that ultimately contribute to confusion and clutter.
## What's next for Tax Simulator 2019
Although it is a great start for learning about taxes, Tax Simulator could benefit from incorporating more life events, such as purchasing a house with the First-Time Home Buyer Incentive, having kids, and saving for college with RESPs. The game could also suggest ways for players to improve their gameplay based on the decisions they made regarding their taxes. | winning
## Inspiration
**Read something, do something.** We constantly encounter articles about social and political problems affecting communities all over the world – mass incarceration, the climate emergency, attacks on women's reproductive rights, and countless others. Many people are concerned or outraged by reading about these problems, but don't know how to directly take action to reduce harm and fight for systemic change. **We want to connect users to events, organizations, and communities so they may take action on the issues they care about, based on the articles they are viewing**
## What it does
The Act Now Chrome extension analyzes articles the user is reading. If the article is relevant to a social or political issue or cause, it will display a banner linking the user to an opportunity to directly take action or connect with an organization working to address the issue. For example, someone reading an article about the climate crisis might be prompted with a link to information about the Sunrise Movement's efforts to fight for political action to address the emergency. Someone reading about laws restricting women's reproductive rights might be linked to opportunities to volunteer for Planned Parenthood.
## How we built it
We built the Chrome extension using a background.js and content.js file to dynamically render a ReactJS app onto any webpage the API identified as containing topics of interest. We built the REST API back end in Python using Django and Django REST Framework. Our API is hosted on Heroku; the Chrome app is published in "developer mode" on the Chrome Web Store and consumes this API. We used Bitbucket to collaborate with one another and held meetings every 2-3 hours to discuss progress and any challenges we encountered, keeping our team productive.
## Challenges we ran into
Our initial attempts to use sophisticated NLP methods to measure the relevance of an article to a given organization or opportunity for action were not very successful. A simpler method based on keywords turned out to be much more accurate.
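For illustration, the keyword method can be as simple as the sketch below; the cause-to-keyword mapping is invented, not the list actually used.

```python
# Simple keyword-overlap relevance scoring (keyword sets are illustrative only).
import re

CAUSE_KEYWORDS = {
    "climate emergency": {"climate", "emissions", "wildfire", "carbon", "warming"},
    "reproductive rights": {"abortion", "roe", "reproductive", "contraception"},
    "mass incarceration": {"prison", "incarceration", "sentencing", "parole"},
}

def relevance_scores(article_text):
    words = set(re.findall(r"[a-z']+", article_text.lower()))
    return {cause: len(words & kws) for cause, kws in CAUSE_KEYWORDS.items()}

def best_match(article_text, threshold=2):
    """Return the most relevant cause, or None if nothing clears the threshold."""
    cause, score = max(relevance_scores(article_text).items(), key=lambda kv: kv[1])
    return cause if score >= threshold else None
```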
Passing messages to the back-end REST API from the Chrome extension was somewhat tedious as well, especially because the API had to be consumed before the React app was initialized. This resulted in the use of Chrome's messaging system and numerous JavaScript promises.
## Accomplishments that we're proud of
In just one weekend, **we've prototyped a versatile platform that could help motivate and connect thousands of people to take action toward positive social change**. We hope that by connecting people to relevant communities and organizations based on the social topics they are reading about, the anxiety, outrage, or even mere preoccupation cultivated by such readings may manifest into productive action and encourage people to be better allies and advocates for communities experiencing harm and oppression.
## What we learned
Although some of us had basic experience with Django and building simple Chrome extensions, this project provided new challenges and technologies to learn for all of us. Integrating the Django backend with Heroku and the ReactJS frontend was challenging, along with writing a versatile web scraper to extract article content from any site.
## What's next for Act Now
We plan to create a web interface where organizations and communities can post events, meetups, and actions to our database so that they may be suggested to Act Now users. This update will not only make our application more dynamic but will further stimulate connection by introducing a completely new group of people to the application: the event hosts. This update would also include spatial and temporal information, making it easier for users to connect with local organizations and communities. | ## Inspiration
In the future we will all be driving electric cars. They will have various automatic features, including electronic locks. This system may be vulnerable to hackers who want to unlock cars in public parking lots. So we would like to present **CARSNIC**, a solution to this problem.
## What it does
The device implements a continuous loop in which the camera is checked in order to detect theft or car-unlocking activity. If something looks suspect, the program iterates through the list of frequencies used for the unlocking signal (*315 MHz* in the US and *433.92 MHz* in the rest of the world). If the signal is detected, the antenna starts to transmit a mirrored signal in order to neutralize the hacker's signal.
We used the property that the signals from car keys are sinusoidal and respect the identity sin(-x) = -sin(x).
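A small NumPy illustration of that identity is shown below; it is idealized, since real RF cancellation also has to match timing and amplitude.

```python
# Idealized illustration: a sinusoid plus its mirrored copy cancels out.
import numpy as np

f = 433.92e6                      # key-fob carrier frequency in Hz (EU band)
t = np.linspace(0, 5 / f, 1000)   # five carrier periods

attack = np.sin(2 * np.pi * f * t)     # hacker's unlock signal
counter = np.sin(2 * np.pi * f * -t)   # mirrored signal, since sin(-x) = -sin(x)

print(np.max(np.abs(attack + counter)))  # ~0: destructive interference
```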
## How I built it
We used a **Raspberry Pi 3** as the SBC, an RPi camera, and an **RTL-SDR** antenna for RX/TX operations. In order to detect the malicious activity and to analyze the plots of the signals, I used Python and the **Custom Vision** API from Azure. The admin platform was created using **PowerApps** and **Azure SQL** databases.
## Challenges I ran into
The main challenge was that I was not experienced in electronics, so it took longer to learn how to work with the components.
## Accomplishments that I'm proud of
The main accomplishment was that the MVP was ready in time for the competition to demonstrate the proposed idea.
## What I learned
In this project I mostly learned how to work with embedded hardware systems. This was my first project with a Raspberry Pi and an RTL-SDR antenna.
## What's next for CARSNIC
In the next couple of months, I would like to finish the MVP with all the features in the plan: 3D scanning of the structure, acclimatization and automatic parking from an intelligent service sent directly to your car. Then I think the project should be ready to be presented to investors and accelerators. | ## Inspiration
Reading the news helps people expand their knowledge and broaden their horizons. However, it can be time-consuming and troublesome to find quality news articles and read lengthy, boring chunks of text. Our goal is to make news **accessible** to everyone. We provide **concise**, **digestible** news **summaries** in a **conversational** manner to make it as easy as possible for anyone to educate themselves by reading the news.
## What it does
News.ai provides a concise and digestible summary of a quality article related to the topic you care about. You can easily ask **follow-up questions** to learn more information from the article or learn about any related concepts mentioned in the article.
## How we built it
1. We used *React.js* and *Flask* for our web app.
2. We used *NewsAPI* to recommend the most updated news based on preferences.
3. We used *Monster API's OpenAI-Whisper API* for speech-to-text transcription.
4. We used *Monster API's SunoAI Bark API* for text-to-speech generation.
5. We used *OpenAI's GPT 4 API* large language model (LLM) to provide summaries of news articles.
## Challenges we ran into
We ran into the challenge of connecting the many parts of the project. Because of its inherent complexity and interconnectivity, making the different APIs, the frontend, and the backend work together was our most difficult task.
## Accomplishments that we're proud of
We're happy that we established a strong pipeline of API calls using AI models. For example, we converted the user's audio input to text using the Whisper API, generated text in response to the user's request using the GPT API, and finally converted the generated text to audio output using the Bark API. We are also proud to have integrated NewsAPI into our recommendation system so we can display the latest news for each user, tailored to their preferences.
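The shape of that pipeline is sketched below; the three helper functions are placeholders standing in for the Whisper, GPT-4, and Bark calls, whose real request formats are not reproduced here.

```python
# Pipeline skeleton: speech in -> answer -> speech out.
# Each helper is a placeholder for the real Whisper / GPT-4 / Bark API call.

def transcribe(audio_path):
    # Placeholder for the Monster API Whisper speech-to-text call.
    return "what does this article mean for renters?"

def answer(question, article_text):
    # Placeholder for the GPT-4 call that summarizes or answers follow-ups.
    return "In short: " + article_text[:80] + "..."

def synthesize(text):
    # Placeholder for the Bark text-to-speech call; would return audio bytes.
    return text.encode("utf-8")

def handle_user_turn(audio_path, article_text):
    question = transcribe(audio_path)        # user's spoken follow-up
    reply = answer(question, article_text)   # concise, conversational answer
    return synthesize(reply)                 # audio played back in the browser
```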
## What we learned
Each of our team members had a deep understanding of a specific part of our tech stack; whether that be the frontend, backend, or usage of AI/LLM models and APIs. We learned a lot about how these tools can be integrated and applied to solve real-world problems.
Furthermore, by spending the first day going booth to booth and speaking individually to every sponsor, we learned about the intricacies of each platform and API. This allowed us to build a platform that synthesized the strengths of various tools and technologies. For example, we were able to take advantage of the ease and scalability of Monster API's Whisper and Bark APIs.
## What's next for News.ai
Moving forward, we hope to allow for more personalized search of news articles beyond generic topics. Furthermore, we hope to collect additional personalized characteristics that improve the podcast content and understanding for users. | partial |
## Inspiration
Only a small percentage of Americans use ASL as their main form of daily communication. Hence, no one notices when ASL-first speakers are left out of using FaceTime, Zoom, or even iMessage voice memos. This is a terrible inconvenience for ASL-first speakers attempting to communicate with their loved ones, colleagues, and friends.
There is a clear barrier to communication between those who are deaf or hard of hearing and those who are fully-abled.
We created Hello as a solution to this problem for those experiencing similar situations and to lay the ground work for future seamless communication.
On a personal level, Brandon's grandma is hard of hearing, which makes it very difficult to communicate. In the future this tool may be their only chance at clear communication.
## What it does
Expectedly, there are two sides to the video call: a fully-abled person and a deaf or hard of hearing person.
For the fully-abled person:
* Their speech gets automatically transcribed in real-time and displayed to the end user
* Their facial expressions and speech get analyzed for sentiment detection
For the deaf/hard of hearing person:
* Their hand signs are detected and translated into English in real-time
* The translations are then cleaned up by an LLM and displayed to the end user in text and audio
* Their facial expressions are analyzed for emotion detection
## How we built it
Our frontend is a simple React and Vite project. On the backend, websockets are used for real-time inferencing. For the fully-abled person, their speech is first transcribed via Deepgram, then their emotion is detected using HumeAI. For the deaf/hard of hearing person, their hand signs are first translated using a custom ML model powered via Hyperbolic, then these translations are cleaned using both Google Gemini and Hyperbolic. Hume AI is used similarly on this end as well. Additionally, the translations are communicated back via text-to-speech using Cartesia/Deepgram.
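As a rough sketch of the real-time loop, a Python websocket server like the one below could receive frames and return translations; the use of the `websockets` package and the `translate_signs` placeholder are assumptions (older versions of the library also pass a `path` argument to the handler).

```python
# Minimal websocket inference loop: receive video frames, return raw sign translations.
# translate_signs() is a placeholder for the custom fingerspelling model.
import asyncio
import websockets

def translate_signs(frame_bytes):
    return "H-E-L-L-O"  # placeholder prediction

async def handler(websocket):
    async for frame_bytes in websocket:   # each message is one encoded frame
        raw = translate_signs(frame_bytes)
        await websocket.send(raw)         # LLM cleanup and TTS happen downstream

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()            # run forever

if __name__ == "__main__":
    asyncio.run(main())
```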
## Challenges we ran into
* Custom ML models are very hard to deploy (Credits to <https://github.com/hoyso48/Google---American-Sign-Language-Fingerspelling-Recognition-2nd-place-solution>)
* Websockets are easier said than done
* Spotty wifi
## Accomplishments that we're proud of
* Learned websockets from scratch
* Implemented custom ML model inferencing and workflows
* More experience in systems design
## What's next for Hello
Faster, more accurate ASL model. More scalability and maintainability for the codebase. | ## Background
Half **a million people** in the United States identify as deaf or hard of hearing. Accessibility has always been a huge problem, especially nowadays as the number of people who know ASL decreases each year. The best time to learn a language is as a child, so we aim to target teaching ASL at the root. Our app, **HelloSign**, is a gamified e-learning web app focused on teaching small children ASL.
## What is HelloSign?
**HelloSign** uses artificial intelligence to provide instant feedback on hand sign technique and augmented reality for an interactive and kinesthetic learning experience. Our main features include:
### Lessons & Quizzes
The student can learn through our lessons and test their skills with our quizzes. The lessons are easy to understand and the learner can learn kinesthetically with augmented reality. The quizzes use machine learning to detect the signs and provide real-time feedback.
### Badges & Prizes
We took into account that kids don't like traditional e-learning because they find it boring, so we gamified it. You can earn badges and prizes by completing lessons and quizzes.
### Friends & Leaderboard
We wanted to make sure you can interact and practice with your friends through video calls. We believe that socializing with friends is part of what makes learning fun. The leaderboard adds a competitive aspect to encourage children to do their very best.
### Donate Cryptocurrency
To further develop and maintain our free app, we give the option for users to donate cryptocurrency, a digital payment system that doesn't rely on banks to verify transactions, using a technology called blockchain.
## How HelloSign was built
**HelloSign** was built by a team consisting of both beginner and advanced hackers, including both designers and developers.
**HelloSign**’s design was created based on our audience, need-finding, user personas and user-flow.
**HelloSign**’s tech was made with React, Redux, Material-UI, Framer Motion and many other technologies. The TensorFlow Object Detection API and Python were used to create our own machine learning model; we then converted the pretrained model to TensorFlow.js and hosted it on a cloud object store so that we could use it with our frontend. The backend was made with Node and Express, with MongoDB as our database to store user data. Cryptocurrency transactions are made possible with Ethers and MetaMask. Finally, EchoAR was used to view 3D hand-sign models in augmented reality.
## Engineering
![link](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/776/667/datas/original.png)
Our tech stack.
![link](https://media.discordapp.net/attachments/920321339708031006/932092746834002010/yes.PNG?)
## UX/UI
![img](https://github.com/impedaka/codetivate/blob/main/assets/persona.PNG?raw=true)
User personas helped direct and guide us with the design of our app
### Hi-fi Prototypes
![image](https://github.com/impedaka/codetivate/blob/main/assets/Capture.PNG?raw=true)
We selected a red, green, and blue color palette, font, and developed the art style from there.
## Challenges
We used a wide variety of technologies for our frontend, our main ones being: React, Redux, CSS, JavaScript, and HTML. We also used Framer Motion for animations and Material-UI as a component library for icons and modals.
The real-time object detection machine learning model was trained through transfer learning, using SSD MobileNet and the TensorFlow Object Detection API on our own dataset, which we labeled with LabelImg. As a result, it can detect hand signs in real time from your webcam with OpenCV. After making our own model, we converted it to TensorFlow.js and hosted it on a cloud object store. From there, we could use our machine learning model with our frontend React app.
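A condensed sketch of that real-time detection loop is shown below; the model path, label map, and threshold are placeholders, and depending on the training pipeline the frames may also need BGR-to-RGB conversion.

```python
# Real-time hand-sign detection with an exported TF Object Detection model and OpenCV.
import cv2
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("exported_model/saved_model")  # placeholder path
labels = {1: "hello", 2: "thanks", 3: "yes", 4: "no"}          # assumed label map

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    input_tensor = tf.convert_to_tensor(np.expand_dims(frame, 0), dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    score = float(detections["detection_scores"][0][0].numpy())
    cls = int(detections["detection_classes"][0][0].numpy())
    if score > 0.7:
        cv2.putText(frame, f"{labels.get(cls, cls)} ({score:.2f})",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("HelloSign", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```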
The most frustrating part was creating our own dataset from scratch, as it was very large and time-consuming. Another challenge we faced was implementing the bounding boxes for the image recognition. The feature itself was not strictly necessary, but it greatly improves the user interface. We also struggled to provide real-time feedback and scoring, to figure out Base64 encoding, and to integrate all of these components within a short period of time.
## Accomplishments we are proud of
We are proud that we managed to polish and finish our app! We finished slightly earlier than the hackathon deadline, so we decided to add extra details with Material-UI icons and animations with Framer-Motion to make our user interface look more professional and organized.
## What we learned
We learned to work remotely from each other, since this was an online hackathon. We relied on Discord for communication, Google Docs for brainstorming, Figma for our design, and more. The technologies and APIs we used were a fun and worthwhile challenge as well.
## What’s next?
If there were more time to work on our project, we would have added a lot more features. This includes making our own 3D models to view in AR and providing more lessons and quizzes. We would also train our machine learning model to recognize more hand signs, including hand signs that aren't part of ASL, such as Chinese Sign Language and French Sign Language. Ultimately, we hope to see the app launch one day; we want to encourage more children to learn ASL and socialize! | CalHacks 2022 Project - Fokus
⚡ Welcome to Fokus! ⚡
Fokus is a Chrome extension for studying, focusing, and productivity. As studious Berkeley students in the midst of midterm season, we wanted to create a tool that can help other students study and prepare for their exams!
In the late 1980s Francesco Cirillo developed a time management method known as the Pomodoro Technique in which a kitchen timer is used to break work into intervals of around 20-25 minutes in length, separated by short breaks. Therefore, one of the main features of our extension is the Pomodoro timer.
Another feature we have included is an editable to-do list - students can add, change, or delete any homework assignments, quizzes, and/or exams as they please.
There is also a new inspirational quote featured everyday!
Meet the Team:
Created by:
Rishi Khare
Mukhamediyar Kudaikulov
Grace Qian | partial |
## Inspiration
Shashank Ojha, Andreas Joannou, Abdellah Ghassel, Cameron Smith
![](https://drive.google.com/uc?export=view&id=1griTlDOUhpmhqq7CLNtwrQnRGaBXGn72)
Clarity is an interactive smart glass that uses a convolutional neural network to notify the user of the emotions of those in front of them. This wearable gadget has other smart glass abilities, such as showing the weather and time and viewing daily reminders and weekly schedules, to ensure that users get a well-rounded experience.
## Problem:
As mental health raises barriers inhibiting people's social skills, innovative technologies must accommodate everyone. Studies have found that individuals with developmental disorders such as Autism and Asperger’s Syndrome have trouble recognizing emotions, thus hindering social experiences. For these reasons, we would like to introduce Clarity. Clarity creates a sleek augmented reality experience that allows the user to detect the emotion of individuals in proximity. In addition, Clarity is integrated with unique and powerful features of smart glasses including weather and viewing daily routines and schedules. With further funding and development, the glasses can incorporate more inclusive features straight from your fingertips and to your eyes.
![](https://drive.google.com/uc?export=view&id=1eVZFYgQIm7vu5UOjp5tvgFOxvf3kv4Oj)
![](https://drive.google.com/uc?export=view&id=1L-5w9jzwKG0dLdwe-OCMUa6S2HnZeaFo)
![](https://drive.google.com/uc?export=view&id=1LP7bI9jAupQDQcfbQIszs9igVEFSuqDb)
## Mission Statement:
At Clarity, we are determined to make everyone’s lives easier, specifically to help facilitate social interactions for individuals with developmental disorders. Everyone knows someone impacted by mental health or cognitive disabilities and how meaningful those precious interactions are. Clarity wants to leap forward to make those interactions more memorable, so they can be cherished for a lifetime.
![](https://drive.google.com/uc?export=view&id=1qJgJIAwDI0jxhs1Q59WyaGAvFg5fysTt)
![](https://drive.google.com/uc?export=view&id=1AY5zbgfUB4c_4feWVVrQcuOGtn_yGc99)
We are first-time Makeathon participants who are determined to learn what it takes to make this project come to life and to impact as many lives as possible. Throughout this Makeathon, we have challenged ourselves to deliver a well-polished product with the purpose of doing social good. We are second-year students from Queen's University who are very passionate about designing innovative solutions to better the lives of everyone. We share a mindset of giving any task our all and obtaining the best results. We have a diverse skill set, and throughout the hackathon we utilized everyone's strengths to work efficiently. This has been a great learning experience for our first makeathon, and even though we have some respective experiences, this was a new journey that proved to be intellectually stimulating for all of us.
## About:
### Market Scope:
![](https://drive.google.com/uc?export=view&id=10LWCDhgfDPp1scpVI1GSAGIWrjprQtOY)
Although the main purpose of this device is to help individuals with mental disorders, the applications of Clarity are limitless. Other integral market audiences to our device include:
• Educational Institutions can use Clarity to help train children to learn about emotions and feelings at a young age. Through exposure to such a powerful technology, students can be taught fundamental skills such as sharing, and truly caring by putting themselves in someone else's shoes, or lenses in this case.
• The interview process for social workers can benefit from our device to create a dynamic and thorough experience to determine the ideal person for a task. It can also be used by social workers and emotional intelligence researchers to have better studies and results.
• With further development, this device can be used as a quick tool for psychiatrists to analyze and understand their patients at a deeper level. By assessing individuals in need of help at a faster level, more lives can be saved and improved.
### Whats In It For You:
![](https://drive.google.com/uc?export=view&id=1XbrcnIEc3eAYDmkopmwGbSew11GQv91v)
The first stakeholder to benefit from Clarity is our users. This product provides accessibility right to the eye for almost 75 million potential users (the number of individuals in the world with developmental disorders). The emotion detection system is available at the user's disposal and makes it easy to recognize anyone's emotions. Whether one is watching a Netflix show or having a casual live conversation, Clarity has you covered.
Next, Qualcomm could be a significant partner in the future of Clarity, as it would be an excellent distributor and partner. With professional machining and Qualcomm's Snapdragon processor, the model is guaranteed to have high performance in a small package.
Due to the various applications of this product mentioned above, it has exponential growth potential in the educational, research, and counselling industries, and can therefore offer significant profit potential and possibilities for investors and researchers.
## Technological Specifications
## Hardware:
At first, the body of the device was a simple prism with an angled triangle to reflect the light at 90° from the user. The initial intention was to glue the glass reflector to the outer edge of the triangle to complete the 180° reflection. This plan was then scrapped in favour of a more robust mounting system, including a frontal clip for the reflector and a modular cage for the LCD screen. After feeling confident in the primary design, a CAD prototype was printed via a 3D printer. During the construction of the initial prototype, a number of challenges surfaced including dealing with printer errors, component measurement, and manufacturing mistakes. One problem with the prototype was the lack of adhesion to the printing bed. This resulted in raised corners which negatively affected component cooperation. This issue was overcome by introducing a ring of material around the main body. Component measurements and manufacturing mistakes further led to improper fitting between pieces. This was ultimately solved by simplifying the initial design, which had fewer points of failure. The evolution of the CAD files can be seen below.
![](https://drive.google.com/uc?export=view&id=1vDT1gGyfM7FgioSRr71yBSysGntOfiFC)
The material chosen for the prototypes was PLA plastic for its strength to weight ratio and its low price. This material is very lightweight and strong, allowing for a more comfortable experience for the user. Furthermore, inexpensive plastic allows for inexpensive manufacturing.
Clarity runs on a Raspberry Pi Model 4B. The RPi communicates with the OLED screen using the I2C protocol. It additionally powers and communicates with the camera module and reads input from a button used to control the glasses. The RPi handles all the image processing, preparing each photo for emotion recognition and creating the images that are output to the OLED screen.
### Optics:
Clarity uses two reflections to project the image from the screen to the eye of the wearer. The process can be seen in the figure below. First, the light from the LCD screen bounces off the mirror which has a normal line oriented at 45° relative to the viewer. Due to the law of reflection, which states that the angle of incidence is equal to the angle of reflection relative to the normal line, the light rays first make a 90° turn. This results in a horizontal flip in the projected image. Then, similarly, this ray is reflected another 90° against a transparent piece of polycarbonate plexiglass with an anti-reflective coating. This flips the image horizontally once again, resulting in a correctly oriented image. The total length that the light waves must travel should be equivalent to the straight-line distance required for an image to be discernible. This minimum distance is roughly 25 cm for the average person. This led to shifting the screen back within the shell to create a clearer image in the final product.
![](https://drive.google.com/uc?export=view&id=1dOHIXN2L045LHh7rCoD0iTrW_IVKf7dz)
## Software:
![](https://drive.google.com/uc?export=view&id=1DzqhM4p5y729deKQQkTw5isccUeZRCP8)
The emotion detection capabilities of Clarity smart glasses are powered by Google Cloud Vision API. The glasses capture a photo of the people in front of the user, runs the photo through the Cloud Vision model using an API key, and outputs a discrete probability distribution of the emotions. This probability distribution is analyzed by Clarity’s code to determine the emotion of the people in the image. The output of the model is sent to the user through the OLED screen using the Pillow library.
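A condensed sketch of that lookup is shown below; credential setup is assumed, and the winner-takes-all mapping from likelihoods to a single emotion is a simplification of the analysis described above.

```python
# Emotion lookup with Google Cloud Vision face detection.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is configured; mapping to one emotion is simplified.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def detect_emotion(image_bytes):
    response = client.face_detection(image=vision.Image(content=image_bytes))
    if not response.face_annotations:
        return "no face detected"
    face = response.face_annotations[0]
    likelihoods = {
        "happy": face.joy_likelihood,
        "sad": face.sorrow_likelihood,
        "angry": face.anger_likelihood,
        "surprised": face.surprise_likelihood,
    }
    # Likelihood enums range from UNKNOWN up to VERY_LIKELY, so max() picks the strongest.
    return max(likelihoods, key=likelihoods.get)
```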
The additional features of the smart glasses include displaying the current time, weather, and the user’s daily schedule. These features are implemented using various Python libraries and a text file-based storage system. Clarity allows all the features of the smart glasses to be run concurrently through the implementation of asynchronous programming. Using the asyncio library, the user can iterate through the various functionalities seamlessly.
The glasses are interfaced through a button and the use of Siri. Using an iPhone, Siri can remotely power on the glasses and start the software. From there, users can switch between the various features of Clarity by pressing the button on the side of the glasses.
The software is implemented using a multi-file program that calls functions based on the current state of the glasses, acting as a finite state machine. The program looks for the rising edge of a button impulse to receive inputs from the user, resulting in a change of state and calling the respective function.
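A stripped-down, polling version of that state machine is sketched below; the GPIO pin and mode names are assumptions, and the actual implementation runs the features concurrently with asyncio rather than in a simple loop.

```python
# Minimal finite-state machine: cycle display modes on the button's rising edge.
# Pin number and mode names are assumptions for illustration.
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17
MODES = ["emotion", "time_weather", "reminders", "schedule"]

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

state = 0
last_level = GPIO.input(BUTTON_PIN)
while True:
    level = GPIO.input(BUTTON_PIN)
    if level and not last_level:        # rising edge: button just pressed
        state = (state + 1) % len(MODES)
        print("Switched to mode:", MODES[state])
    last_level = level
    time.sleep(0.02)                    # crude debounce / polling interval
```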
## Next Steps:
The next steps include integrating a processor/computer inside the glasses rather than using the Raspberry Pi. This would allow the device to take the next step from the prototype stage to a mock model. The model would also need Bluetooth and Wi-Fi integrated so that the glasses are modular and easily customizable. We may also use magnifying lenses to make the images on the display bigger, with the potential of creating a more dynamic UI.
## Timelines:
As we believe that our device can make a drastic impact on people's lives, the following diagram shows how we will pursue Clarity after this Makeathon:
![](https://drive.google.com/uc?export=view&id=1m85rTMVAqIIK5VRbjqESn1Df-H0Pilx8)
## References:
• <https://cloud.google.com/vision>
• Python Libraries
### Hardware:
All CADs were fully created from scratch. However, inspiration was taken from conventional DIY smartglasses out there.
### Software:
### Research:
• <https://www.vectorstock.com/royalty-free-vector/smart-glasses-vector-3794640>
• <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781897/>
• <https://www.google.com/search?q=how+many+people+have+autism&rlz=1C1CHZN_enCA993CA993&oq=how+many+people+have+autism+&aqs=chrome..69i57j0i512l2j0i390l5.8901j0j9&sourceid=chrome&ie=UTF-8>
• (<http://labman.phys.utk.edu/phys222core/modules/m8/human_eye.html>)
• <https://mammothmemory.net/physics/mirrors/flat-mirrors/normal-line-and-two-flat-mirrors-at-right-angles.html> | ## Inspiration
Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need.
## What it does
It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when someone within a 300-meter radius is having a medical emergency. This way, patients can receive first aid before an ambulance or healthcare professional arrives, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.
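As an illustration of the 300-meter check, a great-circle distance test like the sketch below could decide which respondents to notify; the coordinates and field names are made up.

```python
# Haversine check for the 300 m notification radius (coordinates are made up).
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance between two lat/lon points in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def should_notify(responder, emergency, radius_m=300):
    return haversine_m(responder["lat"], responder["lng"],
                       emergency["lat"], emergency["lng"]) <= radius_m

print(should_notify({"lat": 45.4215, "lng": -75.6972},
                    {"lat": 45.4231, "lng": -75.6980}))  # True: roughly 190 m apart
```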
## How we built it
The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Authentication. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was used to provide text recognition for the app's registration page: users can take a picture of their ID and have their information extracted.
## Challenges we ran into
There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.
## Accomplishments that we're proud of
We were able to build a functioning prototype! Additionally we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before. | ## Inspiration
Integration of patients into society: why can't it be achieved? This is due to the lack of attempts to combine medical solutions with the perspectives of patients in daily use. More specifically, we notice that aids for visual disabilities lack efficiency: the most common option for patients with blindness is to use a cane and tap as they move forward, which can be slow, dangerous, and limited. Canes are also clunky and draw attention in a crowd, leading to possible stigma and inconvenience in use. We attempt to solve this by combining effective healthcare and fashion.
## What it does
* At Signifeye, we have created a pair of shades with I/O sensors that provides audio feedback to the wearer on how far they are from the object they are looking at.
* We help patients build a 3D map of their surroundings and they can move around much quicker, as opposed to slowly tapping the guide cane forward
* Signifeye comes with a companion app that serves both the blind user and their caretakers. The UI is easy for the blind user to navigate and allows for easier manipulation of the haptic feedback. Through the app, caretakers can also monitor the blind user and render assistance, being there for them 24/7 through tracking of data and movement without having to be there physically.
## How we built it
* The frame of the sunglasses is inspired by high-street fashion, and was modeled via Rhinoceros 3D to balance aesthetics and functionality. The frame is manufactured using acrylic sheets on a laser cutter for rapid prototyping
* The sensor arrays consist of an ultrasonic sensor, a piezo speaker, a 5V regulator and a 9V battery, and are powered by the Arduino MKR WiFi 1010
* The app was created using React Native, with Figma used for more comprehensive user-facing details, and Expo Go and VS Code as a development environment that could produce testable outputs.
## Challenges we ran into
Difficulty of iterative hardware prototyping under time and resource constraints
* Limited design iterations,
* Shortage of micro-USB cables that transfer power and data, and
* For the frame design, coordinating the hardware with the design for dimensioning.
Feeding hardware data into software
* Collecting Arduino data into a file and accommodating that with the function of the application, and
* Altering user and haptic feedback on different mobile operating systems, where different programs had different dependencies that had to be followed.
## What we learned
As most of us were beginner hackers, we learned about multiple aspects that went into creating a viable product.
* Fully integrating hardware and software functionality, including Arduino programming and streamlining.
* The ability to connect cross-platform software, where we had to incorporate features or data pulled from hardware or data platforms.
* Dealing with the transfer of data and the use of computer languages to process different formats, such as audio files or sensor-induced wavelengths.
* Became more proficient in running and debugging code. I was able to adjust to a more independent and local setting, where an emulator or external source was required aside from just an IDE terminal. | winning |
## Inspiration
There is head tracking for advanced headsets like the HTC Vive, but not for WebVR, or Google Daydream.
## What it does
Converts an RGB camera into a tracker for the Google Daydream. Users can even choose to build their own classifiers for different headsets.
## How we built it
On the front end, there is React VR, which calls through Express.js to our backend. The server is a Python OpenCV application that uses our classifier to determine the location of the headset in space.
To generate the classifier, we used OpenCV and C++ to automate building a training set. The collection, selection, and training of the samples was then automated using a bash script.
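At runtime, a trained cascade is typically applied to each frame roughly as in the sketch below; the cascade filename is a placeholder for the classifier produced by that pipeline.

```python
# Locate the headset in each frame with a trained Haar/LBP cascade.
import cv2

cascade = cv2.CascadeClassifier("daydream_cascade.xml")  # placeholder classifier file
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in hits:
        cx, cy = x + w // 2, y + h // 2   # head position fed back to the WebVR scene
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == 27:       # Esc to quit
        break
cap.release()
```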
## Challenges we ran into
* Debugging Chrome for Android
* WebVR blocks XMLHttpRequests
* Mac times out ports
* Mac does not communicate well with Android
* Little documentation for WebVR APIs
* Cross origin request denial
* Automating the sample generation
* Poor suppression of false positives
* Bad OpenCV documentation
* Failure of markerless tracking
* Data plumbing
* Request limit on ngrok
* Sensor drift on Android accelerometers
* Bracket wrangling in Python3 and Javascript
* Debugging Chrome for Android
* Damaged USB-C port on the only *vr ready* phone for Google Daydream
* Staying awake
* Downloading and installing the same version of opencv
* Tabs versus spaces
* Debugging Chrome for Android
* libcurl problems in C++
* scrapping C++ and starting over in python
* proxy time outs
## Accomplishments that we're proud of
It works on every platform, except in VR!
## What we learned
* Commit and push often
## What's next for Daydream Lighthouse
Waiting for more stable WebVR! | ## Inspiration
Many visually impaired people have difficulties on a day-to-day basis reading text and other items, and other comparable technologies and software are either expensive or bulky and impractical to use. We sought to create a cheap, light and reliable system for text reading.
## What it does
A wearable that reads aloud text to its user through a mobile app. It was made for the visually impaired who have trouble reading text that doesn't have a braille translation.
## How we built it
We used an ESP32-CAM mounted in a 3D-printed enclosure atop a hat. The camera hosts a Wi-Fi local access point to which the user's smartphone connects. The smartphone processes the image with OCR and sends the resulting text to a text-to-speech API, whose output is played to the user.
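On the phone this runs through Tesseract and a TTS API; a desktop Python stand-in for the read-and-speak step might look like the sketch below, which swaps in pyttsx3 for speech and assumes the ESP32-CAM's usual soft-AP capture URL.

```python
# Fetch a frame from the ESP32-CAM access point, OCR it, and speak the result.
# The capture URL and the use of pyttsx3 (instead of the phone's TTS API) are assumptions.
import cv2
import numpy as np
import pytesseract
import pyttsx3
import requests

FRAME_URL = "http://192.168.4.1/capture"  # typical ESP32-CAM soft-AP address (assumed)

resp = requests.get(FRAME_URL, timeout=5)
frame = cv2.imdecode(np.frombuffer(resp.content, np.uint8), cv2.IMREAD_GRAYSCALE)

text = pytesseract.image_to_string(frame).strip()
if text:
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
```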
## Challenges we ran into
* We forgot to bring a charger and HDMI cable for our raspberry PI and were unable to rely on its processing power.
* We tried to run ML models on the ESP32-CAM but were only able to run small and simple models due to the hardware limitations.
* We were unable to send images over Bluetooth LE because of the low data-transfer limit, and we were unable to connect the ESP32-CAM directly, so we opted to create a soft access point on the ESP32 so other devices could connect to it to retrieve images.
* Getting a 3D profile of the hat (we borrowed **stole** from xero) was challenging because we couldn't measure the spherical shape of the hat.
* We had a lot of trouble retrieving images on our React Native app, making us switch to React. After a lot of trial and error, we finally got the ESP32-CAM to stream images on a ReactJS website, but we didn't have enough time to combine the text recognition with it.
## Accomplishments that we're proud of
* 3d Printed Shroud
* Building a website and implementing a machine learning model
* Streaming ESP32-CAM to our website
## What we learned
* Always allow more overhead when possible to allow for changes while creating your project
* Create a schedule to predict what you will have time for and when it should be done by
+ ML training is very time-consuming and it is very likely that you will have bugs to work out
+ Start with a smaller project and work up towards a larger and more complete one
* New things our team members have experienced: React (JS & Native), ESP32-CAM, Tesseract (js & py), NodeJS, 3d modeling/printing under a time constraint, getting free food
## What's next for Assisted Reader
* Fix issues / merge the ESP32-CAM with the ML models
* Better form factor (smaller battery, case, lower power usage)
* Use automatic spelling correction to ensure Text-to-Speech always reads proper English words
* More ML training for an improved OCR model
* Translation to other languages for a larger customer base
* Cleaner and more modern User Interface
* Add support for Bluetooth and connecting to other Wifi networks | ## Inspiration
We wanted to pioneer the use of computationally intensive image processing and machine learning algorithms for use in low resource robotic or embedded devices by leveraging cloud computing.
## What it does
CloudChaser (or "Chase" for short) allows the user to input custom objects for Chase to track. To do this, Chase will first rotate counter-clockwise until the object comes into the field of view of the front-facing camera, then it will continue in the direction of the object, continually updating its orientation.
## How we built it
"Chase" was built with four continuous rotation servo motors mounted onto our custom modeled 3D-printed chassis. Chase's front facing camera was built using a raspberry pi 3 camera mounted onto a custom 3D-printed camera mount. The four motors and the camera are controlled by the raspberry pi 3B which streams video to and receives driving instructions from our cloud GPU server through TCP sockets. We interpret this cloud data using YOLO (our object recognition library) which is connected through another TCP socket to our cloud-based parser script, which interprets the data and tells the robot which direction to move.
## Challenges we ran into
The first challenge we ran into was designing the layout and model for the robot chassis. Because the print for the chassis was going to take 12 hours, we had to make sure we had the perfect dimensions on the very first try, so we took calipers to the motors, dug through the data sheets, and made test mounts to ensure we nailed the print.
The next challenge was setting up the TCP socket connections and developing our software such that it could accept data from multiple different sources in real time. We ended up solving the connection timing issue by using a service called cam2web to stream the webcam to a URL instead of through TCP, allowing us to not have to queue up the data on our server.
The biggest challenge by far, however, was dealing with the camera latency. We wanted the camera to be as close to live as possible, so we moved all possible processing to the cloud and none onto the Pi, but since the Raspbian operating system would frequently context-switch away from our video stream, we still got frequent lag spikes. We ended up solving this problem by decreasing the priority of our driving script relative to the video stream on the Pi.
## Accomplishments that we're proud of
We're proud of the fact that we were able to model and design a robot that is relatively sturdy in such a short time. We're also really proud of the fact that we were able to interface the Amazon Alexa skill with the cloud server, as nobody on our team had done an Alexa skill before. However, by far, the accomplishment that we are the most proud of is the fact that our video stream latency from the raspberry pi to the cloud is low enough that we can reliably navigate the robot with that data.
## What we learned
Through working on the project, our team learned how to write a skill for Amazon Alexa, how to design and model a robot to fit specific hardware, and how to program and optimize a socket application for multiple incoming connections in real time with minimal latency.
## What's next for CloudChaser
In the future we would ideally like Chase to be able to compress a higher-quality video stream and have separate PWM drivers for the servo motors to enable higher-precision turning. We also want to make Chase aware of his position in a 3D environment and track his distance from objects, allowing him to "tail" objects instead of just chasing them.
## CloudChaser in the news!
<https://medium.com/penn-engineering/object-seeking-robot-wins-pennapps-xvii-469adb756fad>
<https://penntechreview.com/read/cloudchaser> | losing |